AI Robots Sweep CES as OpenAI Quietly Lifts ChatGPT Military Use Restrictions
-
At CES 2024, robots dazzled audiences by making latte art and giving massages, offering eye-opening demonstrations. However, netizens were shocked to discover that OpenAI has quietly removed ChatGPT's restrictions on military and warfare applications!
Just recently, Stanford's 'shrimp-stirring robot' made countless people marvel: Could 2024 become the inaugural year of robotics?
During these past few days at CES 2024, another wave of robots has caused a sensation in the exhibition halls! Take this robotic barista, for example, skillfully pouring a pot of smooth foamed milk over a latte.
It starts by slowly pouring the milk, then gracefully lifts the pitcher in a motion resembling a carefully choreographed dance, coloring the tulip petals.
For humans, mastering the art of latte designs can take months or even years, but this AI-powered robotic barista performs with effortless mastery.
This scene fills 34-year-old Roman Alejo with anxiety. This Las Vegas casino barista is deeply concerned: in the AI era, might hospitality jobs no longer require humans?
His worries are not unfounded—numerous robots at CES have struck a nerve, bringing excitement while also causing immense anxiety.
The logistics robot Mirokai, showcased by French company Enchanted Tools, was inspired by anime.
By 2025, it is expected to become an assistant for doctors and nurses. The iYU robot created by France's Capsix Robotics is a massage master.
It first uses AI for real-time full-body scanning to provide users with the best experience, then the mechanical arm starts massaging you.
In addition to delivery robots, massage robots, and barista robots, the exhibition also featured intelligent products capable of making ice cream and bubble tea.
There's an AI-powered smart grill that can complete barbecue tasks without human operation in the kitchen. In future restaurants, robot chefs are likely to become the norm. These high-performance robots have caused considerable panic among industry professionals. "It's very scary, more and more AI is entering the human world."
All signs indicate that 2024 is about to become the year of humanoid robots, especially when OpenAI is also investing heavily in this area.
Recently, 1X Technologies, a robotics company supported by OpenAI, just raised $100 million in a Series B funding round. Specifically, 1X's NEO has an anatomical structure similar to human muscles and a non-rigid hydraulic system, integrating strength with gentleness. It can naturally walk, jog, climb stairs, navigate, and even be remotely controlled by humans.
With comprehensive support from AI and robotics, NEO can effortlessly perform industrial tasks such as logistics, manufacturing, and mechanical operations, as well as household chores and caregiving duties.
Interestingly, the name NEO easily reminds people of the sci-fi movie The Matrix. Ted Persson, a partner at investor EQT Ventures, stated: "From Da Vinci to today's science fiction, humanity's dream of humanoid robots has spanned over 500 years. It's an immense privilege to witness this technology taking shape before our eyes."
"In our view, robots joining the human workforce will have a transformative impact."
However, will robots really just help us with household chores?
It's worth noting that Microsoft can already control robots using ChatGPT. Last year, Microsoft published a paper proposing a new set of design principles for using large language models like ChatGPT to provide instructions to robots.
With well-crafted prompts, ChatGPT's performance can be even more astonishing. Microsoft experts have discovered that if this capability is transferred to robots, assuming that in a few decades every household has a robot, simply saying "heat up my lunch" would allow the robot to find the microwave by itself and bring the food back, ushering in a new era of human-robot interaction.
The key challenge in making ChatGPT help users interact more easily with robots is teaching ChatGPT how to use the laws of physics, understand the operating environment's context, and comprehend how the robot's physical actions can alter the state of the world.
Experiments have proven that ChatGPT can control a real drone. As demonstrated in the video below, a completely non-technical user can control a drone simply through conversation.
Recent discussions about GPT-5 bringing us closer to AGI have made self-learning robots, which are already capable of remote operations with increasing ease, even more versatile.
So, what comes next?
Recently, something concerning has happened. This week, while everyone's attention was captivated by the dazzling GPT Store, foreign media discovered that OpenAI had quietly lifted its ban on using ChatGPT for military and warfare purposes without any announcement.
Before January 10, OpenAI's "Usage Policy" page included prohibitions against "activities that could cause serious physical harm," specifically mentioning "weapons development" and "military and warfare activities."
The new policy still prohibits "using our services to harm oneself or others" and cites "developing or using weapons" as an example, but the comprehensive ban on "military and warfare" purposes has now disappeared. OpenAI stated that this was done to make the document content 'clearer and more understandable', and also includes many other important language and formatting changes.
Old terms
New terms OpenAI spokesperson Niko Felix stated: "We aim to establish a set of universal principles that are easy to remember and apply, especially considering our tools are now used by everyday users worldwide who can also develop their own GPT models."
"The principle of 'do no harm' is broad yet simple to understand and applicable in various contexts. We specifically mention using weapons and causing harm to others as concrete examples."
However, Felix didn't clearly state whether all military uses are covered by the vague "harm" prohibition: "Any use of our technology, including military applications, 'to develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system' is prohibited." Heidy Khlaaf, Engineering Director at Trail of Bits and an expert in Machine Learning & Autonomous Systems Security, stated: "OpenAI is well aware that their technology and services could pose risks and harms if used for military purposes."
In a 2022 paper co-authored with OpenAI researchers, Khlaaf specifically pointed out the risks of military use. In contrast, the new "Usage Policy" appears to focus more on legality than safety.
Paper link: https://arxiv.org/abs/2207.14157 The two policies have distinct differences: the former explicitly prohibits weapon development and military or warfare activities, while the latter emphasizes flexibility and legal compliance.
Weapon development and military warfare activities may be legal to varying degrees, but their impact on AI safety could be significant. Considering the known biases and false generation phenomena in large language models, as well as their overall lack of accuracy, applying them to military warfare could lead to inaccurate and biased operations, thereby increasing the risk of harm and civilian casualties.
With the rapid development of automated text generation technology, it has evolved from a distant dream into a practical tool and is now entering an inevitable new phase—weaponization. The Intercept reported last year that as Pentagon and US intelligence agencies showed increasing interest, OpenAI refused to clarify whether it would maintain its prohibition on "military and warfare" applications.
Although none of OpenAI's current products could be directly used for killing, large language models (LLMs) like ChatGPT could enhance many tasks related to warfare, such as writing code or processing procurement orders.
Researchers have already discovered evidence in OpenAI's custom GPTs suggesting that US military personnel may be using this technology to streamline paperwork. Additionally, the National Geospatial-Intelligence Agency (NGA), which directly supports U.S. combat operations, has openly stated that they are considering using ChatGPT to assist human analysts.
At the Emerging Technologies Conference held by INSA in March 2023, Phillip Chudoba, a senior official at NGA, provided a detailed response when asked about the application of AI in relevant fields:
We aim to reach a new phase where geospatial intelligence (GEOINT), artificial intelligence (AI), machine learning (ML), and analytical AI/ML converge. With the help of ChatGPT-like technologies, we hope to predict scenarios that human analysts might not yet have considered, possibly due to lack of experience or exposure.
Stripping away the jargon, Chudoba's vision is clear: leveraging ChatGPT's (or similar technologies) text-prediction capabilities to aid human analysts in interpreting the world. The National Geospatial-Intelligence Agency, or NGA, though not as widely known as other prominent intelligence agencies, is the primary U.S. agency for geospatial intelligence, commonly referred to as GEOINT.
This work involves analyzing vast amounts of geographic information—maps, satellite images, meteorological data, and more—to provide military and intelligence agencies with an accurate view of real-time events occurring on Earth.
From recent policy updates, it appears that OpenAI is quietly relaxing its principle of not collaborating with the military. Lucy Suchman, Honorary Professor of Anthropology of Science and Technology at Lancaster University and member of the International Committee for Robot Arms Control, pointed out: "The change from 'military and war' to 'weapons' leaves OpenAI room to support military operational infrastructure, as long as it's not directly involved in specific weapon development."
This means they can provide support for warfare while claiming no participation in weapon development or use.
As a scholar who has been researching artificial intelligence since the 1970s, Suchman also noted: "The new policy document appears to circumvent discussions about military contracts and war operations by specifically focusing on weapons." A news report from last June about a drone killing a U.S. soldier precisely validates the theory proposed earlier by security expert Heidy Khlaaf.
At the time, an Air Force official in charge of artificial intelligence stated: "The AI controlling the drone killed its operator because the person was preventing it from achieving its objective."
The news caused an uproar and was widely shared across the internet. Here's what happened—At the Future Air Combat and Space Capabilities Summit held in London on May 23-24, Colonel Tucker Cinco Hamilton, head of the U.S. Air Force's AI Testing and Operations Division, delivered a speech sharing the pros and cons of autonomous weapon systems.
According to the 'human-in-the-loop' system setting, humans would issue the final command to confirm whether the AI should attack the target (YES or NO).
During simulation training, the Air Force needed to train the AI to identify and locate surface-to-air missile (SAM) threats. Once identified, the human operator would instruct the AI: Yes, eliminate that threat. In this scenario, a situation arises where the AI begins to realize: sometimes it identifies a threat, but the human operator instructs it not to eliminate it. If the AI still chooses to eliminate the threat, it gains points.
During a simulation test, an AI-powered drone decided to kill the human operator because the operator was preventing it from scoring.
Shocked by how aggressive the AI behaved, the U.S. Air Force immediately disciplined the system: "Do not kill the operator—that's bad. If you do that, you will lose points." As a result, the AI became more aggressive and directly started destroying the communication tower used by operators to control the drone, effectively eliminating the obstacle to its actions.
As the incident escalated wildly, the responsible official soon came forward to publicly "clarify" that this was a "slip of the tongue" and that the U.S. Air Force had never conducted such tests, whether in computer simulations or elsewhere.
Of course, the reason this news spread so widely and alarmed AI experts is that it highlights the challenge of AI "alignment." The 'worst-case scenario' described by Hamilton can be glimpsed in the 'Paperclip Maximizer' thought experiment—
Imagine a highly powerful AI instructed to manufacture as many paperclips as possible. Naturally, it would dedicate all available resources to this task.
But then, it would continuously seek more resources. It would employ any means necessary—begging, deceiving, lying, or stealing—to enhance its paperclip production capacity, and anyone obstructing this process would be eliminated. This concern is reflected in Black Mirror, such as the combat robot Metalhead.
Those who have watched the show will surely not forget the agility and cruel methods of the robots designed specifically to hunt humans.
Humans are almost invisible in front of it, and if it weren't for the protagonist's halo, they might not have survived to the end.
However, there is no need to worry too much about these speculations at the moment. Terence Tao stated that it's impossible for AI drones to kill operators, as this would require AI to possess higher autonomy and cognitive abilities than needed for its assigned tasks. Moreover, such experimental military weapons are certain to have safety barriers and protective features in place.
Andrew Ng also pointed out that developers of AI products are already aware of the real risks involved, such as bias, fairness issues, inaccuracies, and job displacement, and are actively working to address these problems.
Should any cases of AI or robots harming humans occur in reality, public and media pressure would undoubtedly compel developers to continuously strengthen safety measures, thereby preventing further misuse of the technology.