"Godfather of AI" Admits Concerns: AI May Take Over Humanity
Renowned AI researcher Geoffrey Hinton, widely hailed as the "Godfather of AI," acknowledged in an interview his concern that artificial intelligence could eventually take over the world. His remarks have sparked widespread discussion and cast fresh uncertainty over the future of AI.
In an interview on CBS's 60 Minutes, Hinton said he fears that the technology he has spent his life developing might eventually dominate the world. He went so far as to say, "For the first time in history, humans must face the reality that there may be something on Earth smarter than us." The remark came after CBS host Scott Pelley asked whether superintelligent AI might "take over humanity."
Hinton responded, "I'm not saying it will happen. It would be great if we could prevent them from having that desire." He added, however, "But we can't be certain whether we can stop them from wanting to do so."
As for how AI might take over, Hinton speculated that autonomous agents could begin "modifying themselves," a worrying prospect made worse by the "black box" nature of the technology: researchers still lack a detailed understanding of how machine-learning systems actually operate.
He pointed out that while researchers have a rough idea of how AI systems learn, when it comes to complex problems we understand how AI arrives at its answers about as poorly as we understand what is happening in our own brains. That is a valid concern because, even when a model's output is correct, the path it took to get there still matters: if we keep rewarding correct answers reached through questionable means, we may unintentionally train AI tools in ways that lead to serious failures.
Hinton's concerns are not limited to a machine takeover, however. He also worries about how humans might misuse AI, from autonomous weapons and war robots to the large-scale displacement of human jobs and the mass spread of AI-generated misinformation. The outlook may seem bleak, but as this uncharted territory continues to evolve, these potential threats, especially those stemming from human misuse, remain worth taking seriously.