OpenAI Seriously Concerned About AI Launching Nuclear Attacks on Humans

    baoshi.rao wrote:

    In the upcoming Hollywood sci-fi film The Creator, an AI originally designed to serve humanity detonates a nuclear bomb in Los Angeles.

    Stranger than fiction, AI companies have already begun to worry that such a scenario could unfold in the real world.

    Recently, OpenAI stated that, out of concern for the safety of AI systems, the company is forming a dedicated team to address potential 'catastrophic risks' posed by frontier AI, including nuclear threats.

    In fact, its CEO Sam Altman has long warned that AI could pose an 'existential' threat to humanity, having previously called for stronger AI regulation in multiple forums, including U.S. Congressional hearings. However, some scientists, including Meta's Yann LeCun, hold differing views on AI regulation, arguing that current AI capabilities remain limited and that premature regulation would only benefit large corporations while stifling innovation.

    This highlights the ongoing industry divide over regulating frontier AI. Premature regulation could hinder technological progress, yet a lack of oversight may leave risks unaddressed. Striking a balance between fostering innovation and implementing safeguards—ensuring AI develops both efficiently and safely—remains a significant challenge for the industry.

    In a recent update, OpenAI said that, out of concern for AI system safety, it is forming a new team called 'Preparedness' to monitor, evaluate, and forecast the development of 'frontier models,' with the aim of preventing so-called 'catastrophic risks,' including cybersecurity issues as well as chemical, nuclear, and biological threats.

    Image source: OpenAI official website

    The team will be led by Aleksander Madry, director of MIT's Center for Deployable Machine Learning, who is currently on leave from that role.

    Additionally, the team's responsibilities include developing and maintaining a "Risk-Informed Development Policy," which will detail OpenAI's methods for evaluating and monitoring AI models, the company's risk mitigation actions, and the governance structure overseeing the entire model development process. This policy aims to complement OpenAI's work in AI safety and ensure consistency in safety measures before and after deployment.

    OpenAI states that managing potential catastrophic risks from cutting-edge AI models requires addressing the following key questions:

    How dangerous is the potential misuse of advanced AI models?

    How can a robust framework be built for monitoring, assessing, predicting, and preventing the dangerous capabilities of advanced AI models?

    If advanced AI models fall into the wrong hands, how might malicious actors exploit them?

    OpenAI wrote in an update: "We believe... advanced AI models that surpass the current state-of-the-art have the potential to benefit all of humanity... but they also bring increasingly severe risks."

    Recently, OpenAI has repeatedly emphasized AI safety and taken a series of actions at the corporate, public-opinion, and even political levels.

    Previously, on July 7, OpenAI announced the formation of a new team aimed at exploring methods to guide and control "super AI." The team is co-led by OpenAI co-founder and Chief Scientist Ilya Sutskever and Alignment lead Jan Leike.

    Sutskever and Leike predicted that artificial intelligence surpassing human intelligence could emerge within 10 years. They stated that such AI is not necessarily benevolent, making it necessary to research methods to control and limit it.

    According to reports, the team was granted the highest priority and received 20% of the company's computing resources, with the goal of solving the core technical challenges of controlling 'super AI' within the next four years.

    To coincide with the launch of the 'Preparedness' team, OpenAI also organized a challenge inviting external participants to propose ways AI could be misused to cause real-world harm. The top 10 submissions will receive $25,000 in API credits and may be considered for roles on the 'Preparedness' team.

    OpenAI CEO Sam Altman has long expressed concerns that AI could lead to human extinction.

    During a U.S. Congressional hearing on AI in May, Altman stated that AI needs to be regulated, warning that without strict regulatory standards for super AI, more dangers would emerge within the next 20 years.

    At the end of May, Altman, along with the CEOs of Google DeepMind and Anthropic, as well as other prominent AI researchers, signed a brief statement declaring that 'mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.'

    At the San Francisco Tech Summit in June, Sam Altman remarked that in the development of AI technology, 'you shouldn’t trust one company, and certainly not one person.' He believes the technology itself, its benefits, access, and governance should belong to all of humanity.

    However, some critics, most prominently Elon Musk, have accused Altman of 'calling for regulation' merely to protect OpenAI's leading position. Sam Altman responded at the time, stating, 'We believe that large companies and proprietary models exceeding a certain capability threshold should face more regulation, while smaller startups and open-source models should be subject to less. We've seen the issues in countries that attempt to over-regulate technology; this is not what we want.'

    He further added, 'People are training models far beyond the scale of anything we have today. But if they surpass certain capability thresholds, I believe there should be a certification process, along with external audits and safety testing. Moreover, such models should be reported to and supervised by governments.'

    In contrast to Altman's stance, on October 19th, Meta's chief AI scientist Yann LeCun voiced his opposition to premature AI regulation in an interview with the UK's Financial Times.

    Yann LeCun is a member of the US National Academy of Sciences, the US National Academy of Engineering, and the French Academy of Sciences. He is renowned for his pioneering work on convolutional neural networks (CNNs) and for applying them to optical character recognition and computer vision.

    In 2018, Yann LeCun, along with Yoshua Bengio and Geoffrey Hinton, received the Turing Award (often referred to as the 'Nobel Prize of Computing'). The trio is commonly known as the 'Godfathers of AI' and the 'Godfathers of Deep Learning.'

    During the interview, LeCun expressed a generally negative attitude toward AI regulation, arguing that regulating AI models now is akin to regulating jet aircraft in 1925 (when such technology hadn't even been invented). He warned that premature regulation would only reinforce the dominance of large tech companies and stifle competition.

    "Regulating AI research and development could have incredibly counterproductive effects," said Yann LeCun, suggesting that calls for AI regulation stem from the "arrogance" or "sense of superiority" of leading tech companies. These companies believe only they can be trusted to develop AI safely, "and they want regulation under the guise of AI safety."

    "But in reality, debates about the potential risks of AI are premature until we can design a system that matches a cat's learning capabilities," LeCun stated. He emphasized that the current generation of AI models is far from being as powerful as some researchers claim. "They fundamentally don't understand how the world works. They lack planning abilities and cannot perform true reasoning."

    In his view, OpenAI and Google DeepMind have been "overly optimistic" about the complexity of the issue. In reality, achieving human-level AI will require several "conceptual breakthroughs." Even then, AI could be controlled by encoding "ethical qualities" into systems, much like how laws regulate human behavior today.
