Global AI Giants Gather: What Was Discussed at the AI Era's 'Salt and Iron Conference'?

No technology has ever achieved such rapid consensus across the tech industry and society as large AI models have in just a few months. Today, major tech giants and governments alike firmly believe AI will transform the world and everything in it.

Yet behind this broad consensus lie numerous unresolved issues. On one hand, there are practical concerns: job displacement, the proliferation of fake content, and unequal access. On the other, there is the long-standing fear, often depicted in fiction, of a 'threat to civilization.' According to Wired, OpenAI has even outlined an exit strategy in a financial document for scenarios where AI disrupts the global economic system.

    In July, at an event hosted at IBM headquarters, U.S. Senate Majority Leader Chuck Schumer announced he would convene a series of AI meetings to 'bring the best minds to the table, let them exchange ideas, answer questions, and work toward consensus and solutions, while senators, our staff, and others simply listen.' Ultimately, Schumer aims to 'lay the groundwork for AI policy legislation' based on these discussions.

    Two months later, the meetings began. Twenty-two attendees from top tech companies participated in closed-door sessions, including OpenAI CEO Sam Altman, NVIDIA CEO Jensen Huang, Google CEO Sundar Pichai, Meta CEO Mark Zuckerberg, as well as veterans like Microsoft founder Bill Gates and former Google CEO Eric Schmidt.

    No one denies AI's immense potential, but significant gaps remain in consensus around its risks, safeguards, regulation, and future. One certainty is that humanity cannot afford to let AI develop unchecked.


    After the first closed-door meeting, Elon Musk, CEO of Tesla, SpaceX, and Neuralink, publicly stated that the gathering was 'historic.' He endorsed the idea of establishing a new regulatory body for artificial intelligence and reiterated the significant risks posed by AI.

    'The consequences of AI going wrong are severe, so we must be proactive rather than reactive,' Musk said. 'This is fundamentally a civilization risk issue, posing potential threats to all humanity worldwide.'

    Musk did not elaborate on the specific dangers AI poses to human civilization. However, as AI increasingly manages power grids, water supply systems, and vehicle control systems, any errors could lead to widespread problems. Computer scientist and AI researcher Deb Raji also highlighted how AI's biased decisions could impact all aspects of society.

    Moreover, long-standing human concerns about AI persist.

    So, how can we mitigate and prevent AI-related risks? Microsoft President Brad Smith proposed the need for 'emergency brakes' to address the significant risks AI might create, particularly in critical infrastructure systems like power grids and water supply networks. 'This way, we can ensure that the threats many fear remain confined to science fiction and do not become reality.'

Brad Smith, Image Credit: [Source]

    Smith argues that just as every building and household has circuit breakers that can immediately shut down electrical systems when needed, AI systems require similar 'emergency brakes' to ensure humans are protected from large-scale harm caused by such systems.

    Similarly, William Dally, Chief Scientist and Senior Vice President at NVIDIA, highlighted the potential for AI programs to malfunction and cause serious damage. The key lies in 'human involvement'—keeping humans in critical decision-making loops to constrain certain AI powers.

William Dally, Image/NVIDIA

    "I believe that as long as we deploy AI cautiously—with humans in key positions—we can ensure AI won't take over and shut down power grids or make planes fall from the sky," Dally said.

    When ChatGPT first gained traction in tech circles late last year, concerns were raised about AI models being weaponized for scams and malicious acts. Open-source models appear to amplify this 'misuse risk.'

A central debate at the forum revolved around 'open-source' AI models that anyone can download and modify. These models let businesses and researchers access ChatGPT-like technology without massive investment, but they also let bad actors exploit the systems. OpenAI, which no longer open-sources its flagship GPT models, warns through co-founder Ilya Sutskever: "In a few years, everyone will realize open-source AI is unwise."

Ilya Sutskever and Jensen Huang at GTC, Image/NVIDIA

    When asked about the reasons, he stated, "These models are very powerful and will become increasingly so. At some point, it would be easy for someone to cause significant harm if they wanted to."

    Additionally, DeepMind co-founder and Inflection AI CEO Mustafa Suleyman recently highlighted the risks of open-source AI. He pointed out that the key issue with open-sourcing AI is "the rapid diffusion of power," which could enable a single individual to inflict unprecedented harm and influence on the world. "In the next 20 years, naive open-sourcing will almost certainly lead to disaster," he warned.

    Mustafa Suleyman, Image/DeepMind

Meta, however, clearly disagrees. CEO Mark Zuckerberg countered that open-sourcing AI "democratizes AI, helps level the playing field, and fosters innovation by individuals and businesses." He acknowledged that open-source models might pose risks, but said Meta is working to develop the technology as safely as possible.

How AI and open source should coexist remains an unresolved question.

In 81 BC, six years after the death of Emperor Wu of Han, the Han court convened an unprecedented policy consultation. Chief among the issues was whether to abolish or modify the state monopolies on salt, iron, and alcohol instituted during Emperor Wu's reign. More than sixty invited scholars and literati debated government officials led by Sang Hongyang, the Imperial Censor. The event is known to history as the 'Salt and Iron Conference,' and at its core it was about setting the economic policy direction of the Han Dynasty.

Though the themes differ and the eras are worlds apart, the Salt and Iron Conference of two millennia ago and today's AI Insight Forums share a shape: both confront enormous questions, and both aim to use exchange and debate among 'wise minds' as a foundation for legislation.

Image: Google

    In a recent article commemorating Google's 25th anniversary, current CEO Sundar Pichai wrote:

    (Google) participates in important debates about how these technologies will shape our society and then works collectively to find answers. AI is a key part of this. While we are excited about AI's potential to benefit humanity and society, we recognize that AI, like any early-stage technology, brings complexities and risks. Our development and use of AI must address these risks and help advance the technology responsibly.

A few days later, eight more companies (Adobe, Cohere, IBM, NVIDIA, Palantir, Salesforce, Scale AI, and Stability AI) became the second group to sign voluntary commitments on the responsible development of AI technology. Often, when faced with problems that no single individual, company, or government can solve alone, we must recognize the value of 'collective effort.'
