How Can Ordinary People Find a Survival Strategy in the Age of Intelligence?
With the arrival of the artificial intelligence era, represented by ChatGPT, our discussions have moved beyond marveling at technological progress to a more pointed question: will humans ultimately be replaced by artificial intelligence, and where will the rapid advance of technology take our world? Such concerns are not unfounded. In recent years we have watched artificial intelligence surpass humans in certain fields and increasingly permeate, even dictate, our lives: it determines what content we see online, recommends the music we listen to, and answers our many questions.
So, will artificial intelligence truly perform better than humans? Will we really be replaced by it?
Gerd Gigerenzer, director of the Harding Center for Risk Literacy at the University of Potsdam in Germany, introduces an interesting concept called the "stable world principle" in his book How to Stay Smart in a Smart World. Artificial intelligence, he argues, can surpass humans when the environment is stable: in games like chess and Go, which have fixed and clear rules, AI has defeated humans. If the future resembles the past, the vast amounts of data AI analyzes are highly useful. But the future is full of uncertainty, and under unpredictable conditions even complex algorithms may not succeed very often.
Gerd Gigerenzer's "stable world principle" seems to provide a new perspective for understanding artificial intelligence. By recognizing this characteristic of AI, we might better grasp what it can and cannot do in the current era of human-AI coexistence. Moreover, in an increasingly intelligent yet potentially uncontrollable world, how can we maintain self-control and make the right decisions?
The Potential of Digital Technology: What Is AI Good At?
As the stable world principle states, complex algorithms work best when large amounts of data are available and the environment is stable and well defined.
For example, in rule-based games like chess and Go, AI genuinely outperforms humans. In chess, each position can be represented by a complete description specifying the location of every piece, from pawns to kings. The engine doesn't need to infer where the actual pieces are, because the description captures the position exactly, and it remains valid now and in the future. The rules guarantee there is no uncertainty and that nothing unexpected can occur. This was demonstrated in 1997, when IBM's Deep Blue program defeated world chess champion Garry Kasparov, and again in May 2017, when AlphaGo beat Ke Jie, then the world's top Go player.
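To make the idea concrete, here is a minimal sketch (my own illustration, not taken from the book) of how a chess position can be captured completely in a small data structure. Nothing about the state is hidden or uncertain, which is exactly the stability the principle refers to:

```python
# Illustrative sketch (not from the book): a chess position is fully
# specified by a small, unambiguous data structure. An engine never has
# to guess at hidden state -- the representation IS the position.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Position:
    board: tuple               # 64 squares: 'P' = white pawn, 'k' = black king, '.' = empty
    white_to_move: bool
    castling_rights: str       # e.g. "KQkq" when both sides can still castle either way
    en_passant: Optional[str]  # en passant target square such as "e3", or None

# The standard starting position, encoded rank by rank (rank 8 first):
START = Position(
    board=tuple("rnbqkbnr" "pppppppp" + "." * 32 + "PPPPPPPP" "RNBQKBNR"),
    white_to_move=True,
    castling_rights="KQkq",
    en_passant=None,
)
```

Because the rules guarantee this description stays complete after every move, an engine can search the game tree without ever facing uncertainty about the state itself.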
Since algorithms can play to their strengths under stable conditions, the stable world principle also applies to predicting the future: to predict successfully, one needs good theories, reliable data, and a stable environment.
In August 2004, NASA launched the MESSENGER probe, which entered Mercury's orbit in March 2011, precisely at the location NASA had predicted more than six years earlier. This feat was made possible by well-established theories of planetary motion, highly reliable astronomical data, and the fact that Mercury's movement remains stable over time, largely unaffected by human activity.
Artificial intelligence excels at handling stable scenarios like these: unlocking a phone with facial recognition, selecting the optimal route to a destination, or classifying and analyzing big data in accounting tasks.
The stable world principle therefore carries an important implication, the adapt-to-AI principle: to improve AI performance, we need to make the physical environment more stable and human behavior more predictable. If we delegate decision-making to algorithms, we must alter our environment and our behavior. This may mean making humans more transparent to algorithms, regulating human behavior, or even excluding humans from competitive environments. In other words, if we want to achieve certain goals through artificial intelligence, we must adapt to its requirements. The technology is not just a support system; it also demands that we adjust our behavior.
Through the stable world principle, then, we can see that as computing power increases, machines will soon outperform humans at solving problems under stable conditions. For unstable situations, the same cannot be said.
The Limitations and Risks of Digital Technology
From industrial robots tirelessly repeating precise movements to search engines that find words and phrases in vast amounts of text, examples of artificial intelligence surpassing human capabilities abound. To date, however, AI has only achieved victories in games with fixed rules and clearly defined parameters. In other words, the more clearly defined and stable the environment, the more likely machine learning is to outperform humans.
Take chess and Go, for instance, or facial and voice recognition under relatively unchanging conditions. When the environment is stable, AI can surpass human performance. If the future resembles the past, massive datasets prove immensely useful. Yet when unexpected events occur, big data (which is always historical data) may mislead our understanding of the future.
In reality, many challenges we face aren't well-defined games but are fraught with uncertainty—such as finding true love, predicting criminal behavior, or responding to unforeseen emergencies. In these scenarios, even the most powerful computing capabilities and largest datasets offer limited assistance. Humans are the primary source of uncertainty. The moment human behavior enters the equation, unpredictability emerges, and predictions become correspondingly difficult. Without clear definitions, or in unstable situations—or both—artificial intelligence may find itself at a loss.
This applies not only to finding the right partner but also to predicting the next major financial crisis, just as we failed to foresee the 2008 financial crisis. When humans are involved, trusting complex algorithms can create an illusion of certainty, which can become the root of disaster.
Today, our confidence in artificial intelligence keeps growing. We believe machines can perform tasks more accurately, quickly, and cheaply, and that software knows us better than we know ourselves. Yet AI also carries risks in many areas and may even "manipulate" us for specific purposes. Gerd Gigerenzer's research indicates that artificial intelligence lacks common sense. Common sense, typically derived from genetic predispositions and from personal and social learning, requires experience. For those engaged in AI development, common sense presents a significant challenge: no one has yet managed to encode it into computer programs through rules, or to build deep neural networks capable of learning it. This limits AI's performance not only in translation but in natural language understanding generally.
Moreover, in the era of intelligent algorithms, big data analytics has been applied widely across fields and woven into our daily lives. With it comes the issue of privacy and security.
It can be said that people are sleepwalking into surveillance under the sway of digital technology. Two characteristics of digital technology, convenience and surveillance, both conflict with privacy. Many feel helpless but see no alternative; others lean toward immediate convenience, overlooking the privacy they may lose in the long run. Beyond that, social networking platforms make full use of digital technology to "control" people's attention.
'Likes' serve as an addictive adhesive, but controlling attention requires more than just 'likes.' Social media sites conduct repeated experiments to identify methods that keep users staring at their screens longer.
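As a rough illustration (hypothetical, not drawn from any platform's actual code), the kind of experiment described above can be as simple as an A/B test: randomly assign users to two interface variants, measure how long each group stays, and keep whichever variant holds attention longer.

```python
# Hypothetical sketch of an engagement A/B test: assign each user to one
# of two interface variants at random, record session length, and compare.
import random
from statistics import mean

def run_ab_test(users, variant_a, variant_b):
    """variant_a / variant_b simulate a user session and return minutes on screen."""
    sessions = {"A": [], "B": []}
    for user in users:
        arm = random.choice(["A", "B"])
        variant = variant_a if arm == "A" else variant_b
        sessions[arm].append(variant(user))
    # The platform ships whichever variant yields the longer average session.
    return {arm: mean(times) if times else 0.0 for arm, times in sessions.items()}
```

Run at the scale of millions of users, even tiny differences in average session length become detectable, which is why this kind of optimization is so effective at holding attention.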
Such techniques aim to captivate users, making it hard to leave a platform and compelling them to return. For most people they provide mood enhancement and enjoyable distraction, but the consequence can be addiction. A runaway phenomenon from the digital world is gradually eroding our physical reality.
How to Stay Smart in an Intelligent World?
Since ancient times, humans have created many astonishing new technologies, but we haven't always used them wisely. Once, the dream of the internet was freedom; now, for many, freedom means an unconstrained internet.
Despite continuous technological innovation, we need to use our brains more than ever before. To reap the numerous benefits of digital technology, we require insight and courage to remain intelligent in this smart world. So, how should we proceed?
Gerd Gigerenzer points out in the book that staying wise means neither trusting technology blindly nor distrusting it with anxiety. Staying wise means understanding the potential and the risks of digital technology, and remaining in control in a world filled with algorithms.
First, self-control is paramount. But self-control doesn't mean staying away from technology; it means that when someone feels a strong urge to switch to another activity, they can restrain themselves, because they know they'll regret it or because the behavior could threaten their own and others' health. Second, it is crucial to recognize how "manipulable" algorithmic recommendations are, much like what we often call information cocoons.
It's commonly believed that people who grew up with unrestricted access to social media naturally know how to avoid the internet's tricks and traps. This is not the case.
Researchers at Stanford University once ran an experiment in which 900 students from 12 U.S. states browsed information on social media and evaluated its credibility from angles such as "what is the evidence" and "who is behind the information". One item, an article titled "Do Millennials Have Good Money Habits?", was written by a bank executive, sponsored by Bank of America, and argued that many millennials need help with financial planning. Participants were asked to judge the credibility of this claim. Surprisingly, most students failed to notice that the publisher's identity, a bank with an obvious interest in the conclusion, was the real force behind the viewpoint.
In this experiment, few students, whether in middle school or college, paid attention to who was backing the online resources. They didn't consider the basis of the information or consult independent sources to verify it. Instead, they took surface-level statements at face value and were drawn in by vivid photos and graphic design. Even when encouraged to search online, most didn't consult other websites. Being a digital native, then, doesn't necessarily mean truly understanding the digital age.
The digital world makes it easier than ever for misinformation to spread and thrive. At the same time, it gives us multiple ways to assess the credibility of people and information sources. We can use this to understand what AI can and cannot easily do, and to reflect on how data-driven business models sell users' time and attention. Understanding the potential and risks of digital technology, and recognizing the "reality" it constructs, is crucial. Humans have always interacted with technology, but no previous technology has participated in, let alone determined, our lives to the degree artificial intelligence now does.
To remain intelligent in a smart world, as Gerd Gigerenzer said, 'Staying smart means understanding the potential and risks of digital technology, allowing us to maintain dominance in an algorithm-filled world and not be defeated by artificial intelligence.'
Therefore, to stay wise in an intelligent world, we should view digital technology with calm respect rather than unfounded awe or suspicion, shaping the digital world into one we truly want to live in.
【Book Resources】
In the era of artificial intelligence, will humans be completely replaced by algorithms? Gerd Gigerenzer's answer is no. Why can human intelligence still surpass algorithms in the AI era? What are the differences between algorithms and humans?
Gerd Gigerenzer proposes a "stable world principle" in his book: in stable, well-defined situations, complex algorithms such as deep neural networks undoubtedly outperform humans. Chess and Go, for example, are relatively stable scenarios. But for unstable problems, such as predicting the spread of a viral outbreak, algorithms offer limited help. Dealing with uncertainty is something the human brain excels at: we can identify one or two critical clues and disregard the rest of the information. In the present era of human-AI coexistence, it is vitally important to understand clearly what artificial intelligence can and cannot do. How do algorithms and artificial intelligence affect our lives? What risks and challenges do these technologies bring, and how should we respond? This book was born in that context: it teaches us how to view digital technologies properly, shows what our advantages are in the AI era, and provides strategies and methods for keeping control over our own lives rather than being controlled by artificial intelligence.