The Contest Between Open-Source AI and Proprietary AI

    Partners and collaborators of IBM and Meta in the AI Alliance include AMD, Intel, NASA, CERN, Hugging Face, Oracle, the Linux Foundation, Red Hat, Harvard University, and other companies, universities, and R&D institutions.

    By definition, software development projects that are open to the public for use, modification, or distribution, allowing engineers, developers, and others to collaborate, are referred to as open-source. Ravi Narayanan, Global Practice Head of Insights and Analytics at technology consulting firm Nisum, explains that open-source "fosters community collaboration and transparency, accelerates innovation, and reduces development costs."

    Garry M. Paxinos, CTO of netTalk Connect and NOOZ.AI, elaborated to Spiceworks News & Insights that open-source AI encompasses training corpora, the cleaning and preparation of training data, the code used in training, trained models, inference code, and the guardrail code applied to model outputs. It also includes platforms, tools, datasets, and APIs.

    AI models and the underlying hardware are perhaps the most sought-after AI assets today. Given that open-source models generally lag their proprietary counterparts in capability, the absence of notable entities such as OpenAI, Microsoft, NVIDIA, Google DeepMind, Amazon, Anthropic, and Tesla from the AI Alliance's roster highlights the divide between the open-source and proprietary camps.

    In June 2023, during a discussion at Tel Aviv University, an audience member asked OpenAI CEO Sam Altman and Ilya Sutskever, then OpenAI's Chief Scientist, whether open-source large language models could rival GPT-4 without additional technological advancements.

    "Am I wasting my time installing Stable Vicuna, which cost over $13 billion? Tell me, am I wasting my time?" questioned open-source AI researcher Ishay Green, leaving Altman speechless and Sutskever silent for 12 seconds. Here's Sutskever's response:

    "When it comes to the issue of open-source versus proprietary models, you shouldn't think in binary black-and-white terms—like there's some secret sauce that will never be rediscovered. The question is whether GPT-4 will be replicated by open-source models—perhaps one day it will, but by then, companies will have even more advanced models, so there will always be a gap between open-source and proprietary models. This time, the gap might even widen. The effort, engineering, and research required to build such neural networks are increasing exponentially, so even if open-source models exist, they will increasingly be produced by large corporations rather than small groups of dedicated researchers and engineers," said an industry insider.

    Nate MacLeitch, founder and CEO of QuickBlox, believes that strong financial backing helps companies gain technological leadership and competitive advantages. Sandeep Reddy Maru, Senior VP at Gramener, estimates that "there is currently at least a 3x gap between open-source and proprietary AI models. AI modeling benefits from powerful computing, massive amounts of granular data, and minimal barriers to application."

    Nevertheless, Narayanan noted, "Open-source models still have their strengths. Both open-source and proprietary AI models have advantages—they excel in different domains due to their inherent characteristics. The technical gap varies: open-source models often lead in innovation and community-driven improvements, while proprietary models may offer specialized capabilities and robust support."

    Meta and IBM are spearheading the AI Alliance, leveraging their expertise to promote standardization and ethical frameworks in AI. This aligns with their goals of shaping AI's future, ensuring influence in the evolving landscape, and fostering trust in AI technology. Narayanan added, "For Meta, it's about deeper integration of AI into social platforms and digital interactions, while IBM focuses on enhancing its enterprise AI solutions and services."

    Meta, often criticized for disregarding user privacy, is now at the forefront of open-source AI development. However, it is odd that a company championing openness requires developers and users to submit download requests and provide personal details, such as birthdates, to access its Llama 2 model.

    To its credit, the download link arrived in my inbox within minutes of registration. Perhaps Meta's past is its biggest enemy, casting doubt on its intentions. And while Meta has streamlined its licensing process, Llama 2, with its restrictions on public modification, can't truly be called open-source. This raises questions about why Meta is leading the AI Alliance.
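
    For illustration, the sketch below shows roughly what that gated access looks like in code once Meta approves a request. It assumes the Hugging Face transformers library and a personal access token tied to the approved account; the meta-llama/Llama-2-7b-hf repository ID is used purely as an example, and this is not an official Meta workflow.

    # Minimal sketch (assumptions: transformers installed, access request already approved)
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "meta-llama/Llama-2-7b-hf"   # gated repository; access must be granted first
    HF_TOKEN = "hf_..."                     # placeholder token for the approved account

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, token=HF_TOKEN)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, token=HF_TOKEN)

    prompt = "Open-source AI matters because"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))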

    Meta's positive role in open-source AI development might be considered accidental. Paxinos added: "By observing what happened after the Meta Llama model was leaked and then officially released as Llama 2, one can see the usefulness of open-sourcing trained models. Once a trained model is released, numerous open-source projects and models emerge using Llama and/or fine-tuning the model."
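
    As a rough illustration of why released weights spawn so many derivatives, the sketch below shows a parameter-efficient (LoRA-style) fine-tuning setup of the kind many community Llama variants rely on. It assumes the Hugging Face transformers and peft libraries; the model ID and hyperparameters are illustrative, not a description of any specific project.

    # Rough sketch (assumptions: transformers + peft installed, base weights already downloaded)
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

    lora_cfg = LoraConfig(
        r=8,                                  # low-rank adapter dimension
        lora_alpha=16,                        # adapter scaling factor
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base, lora_cfg)
    model.print_trainable_parameters()        # only a small fraction of weights are trained
    # ... a training loop over a domain-specific dataset would follow here ...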

    MacLeitch noted that Meta and IBM's embrace of and contributions to open source might be part of their goal to "challenge the biggest players in the GenAI field and create an alternative ecosystem of AI-related companies and tools."

    Reddy Maru agreed. He believes the AI Alliance's enterprises have two objectives:

    Personally, I am skeptical of alliances. Although they may be useful and beneficial, I have worked on several technical committees where very large companies paid senior employees to participate, primarily to slow down the committee's work. I have chaired some subcommittees where this happened.

    Proprietary AI development and closed models may hinder innovation. Jennifer Chayes, Dean of the College of Computing, Data Science, and Society at UC Berkeley, pointed out: "Pursuing open innovation creates a level playing field, allowing everyone to share in the benefits of GenAI."

    MacLeitch told reporters that the most notable advantages of open-source AI are its flexibility, the ability to customize and modify it as needed, and the peer review it undergoes, which makes for stronger security.

    Narayanan added: "Open-source AI is a catalyst for innovation and accessibility, breaking barriers for smaller entities and creating a collaborative environment for rapid technological advancement. It offers significant cost advantages, reducing development and operational expenses, and promotes transparency, which is crucial for ethical AI development and building trust in AI systems."

    The benefits of GenAI, and of AI in general, are key considerations for businesses striving to improve productivity, gain competitive advantages, and design innovative new products and services for end users. However, deep-seated concerns about the dangers of the technology persist, including its impact on consumer privacy, its tendency to produce bias and discrimination, its cybersecurity implications, and its opaque interactions with humans.

    The White House's executive order on AI has taken note of open-source models, referring to them as dual-use foundation models with widely available weights. The order states: "When the weights of dual-use foundation models are widely accessible, such as when they are publicly released on the internet, they can bring tremendous benefits to innovation but also pose significant security risks, such as the removal of safeguards within the model."

    Commerce Secretary Gina Raimondo is expected to submit a report to the President by July 2024 on policy and regulatory recommendations, following consultations with the private sector, academia, civil society, and other stakeholders regarding the potential benefits, risks, and impacts of open models.

    "The potential for misuse is significant, including ethical issues and societal harm. Open-source AI projects often face inconsistent quality and maintenance challenges, affecting their reliability. Additionally, they present serious security vulnerabilities and complex compliance issues, particularly in intellectual property and licensing."

    Specifically, MacLeitch explained: "Beyond spreading misinformation, open-source AI algorithms can be used to create deepfakes and other online scam tools. In extreme cases, open-source AI could be used to develop autonomous weapons."

    Paxinos further pointed out why the dangers of AI are inherent to the technology.

    "These dangers raise a deeper philosophical issue—many risks are actually psychological. Our concern is that models may exhibit numerous biases in their outputs. While these biases are indeed worrisome, in many ways, they reflect our history. Are we losing the ability to understand biases and learn from mistakes? At the same time, depending on the field, recognizing these biases may help us make better decisions—especially when working in adversarial environments."

    "While altruism is a worthy goal, we must also be realistic about human nature and address it appropriately, while ensuring our 'guardrails' do not create hidden conflicts within our AI systems."

    In the U.S. and other parts of the world, the slow development of guardrails or legal provisions for AI, along with the associated liabilities, has introduced uncertainty into this emerging field. AI developers and companies are calling for regulation and expressing a willingness to participate in the process.

    This raises another question—will their involvement influence this process and tilt regulations in their favor?

    In any case, AI legislation is bound to happen. Companies are ensuring they can steer the ship toward their own interests.

    It can be expected that the AI Alliance will play a significant role in shaping AI legislation. As a consortium of billion-dollar companies collaborating with prestigious universities, the alliance certainly possesses the financial resources and political clout to influence policy.

    Narayanan adds: "With its collective expertise and industry influence, the AI Alliance can significantly impact AI legislation. By providing informed insights and recommendations, they can shape policy frameworks to ensure regulations are technically informed and aligned with industry capabilities and needs. Their involvement could lead to more balanced, effective, and innovation-friendly AI regulations."

    On the other hand, Paxinos anticipates that regulating AI through legislation will stifle innovation. He also questions how broadly such legislation could apply, whether to businesses engaged in open-source or proprietary AI development.

    "The question is, which 'actors' will comply with the legislation, and which won't? Will it leave countries following the guidelines lagging behind those that don't?"

    "When dealing with guardrails, who decides what content is safe and what isn't? Is it as arbitrary and capricious as the definitions of misinformation and disinformation? How is the concept of free speech affected? Looking at newspapers and publications before and after the founding of nations, it's clear that better information, not censorship, combats misinformation. When does an 'opinion' become misinformation? Is it possible to commit thought crimes?"

    "At a deeper level, when does AI gain the right to free speech and expression? Interesting times..." — Garry Paxinos, CTO of netTalk Connect and NOOZ.AI.

    Although the opaque nature of AI development has been the norm so far, proprietary AI development does offer some benefits, including:
