Artificial Intelligence or Artificial Stupidity?
-
Most readers are familiar with the term "artificial intelligence," and its literal meaning is easy enough to grasp. Yet views on what it actually amounts to are sharply divided.
To some, artificial intelligence is already a mature computer technology capable of weighty tasks, such as forecasting tomorrow's weather for a given region or analyzing stock market fluctuations, as well as everyday ones, like automatically tracking faces in photos or recommending news and products you might be interested in.
However, others view artificial intelligence as a nascent technology still confined to laboratories, far removed from everyday life. To them, the AI we interact with today is little more than "artificial stupidity," and the current hype around AI is merely a gimmick in a capital-driven game that has yet to bring about any real change.
Artificial intelligence is not a universal solution, nor is it an illusory new technology. It has already permeated various aspects of daily life.
In my daily work, I’ve noticed that many friends and professionals in the internet industry hold misconceptions about AI. Drawing from my past experiences, I’d like to share my perspective. First, let’s explore why the concept of AI has suddenly gained so much traction.
Many mistakenly believe that artificial intelligence is a new concept invented only in recent years.
In reality, the term "artificial intelligence" was first proposed at an academic conference at Dartmouth College in the U.S. in 1956. Although the conference lasted only a month and yielded no substantial progress, it marked the first formal introduction of the term "artificial intelligence," which has been used ever since.
Despite slow progress in AI research at the time, the classic sci-fi film 2001: A Space Odyssey reflected people’s optimistic fantasies about AI. After nearly 50 years of development, AI has evolved from a cinematic fantasy into a practical tool in daily life, becoming an invaluable assistant across various fields.
This journey, however, has not been smooth.
In the 1980s, expert systems, computer programs that simulate the decision-making of human specialists, rose to prominence, with Japan's Fifth Generation Computer project as the most ambitious push to build them. An expert system is essentially a vast knowledge base paired with inference rules that derive answers from the queries it receives.
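In spirit, an expert system is a knowledge base plus an inference loop. The toy sketch below, with an invented two-rule medical knowledge base, shows the idea; real systems of the era held thousands of rules.

```python
# A toy expert system: a knowledge base of if-then rules plus a
# simple forward-chaining loop that derives conclusions from facts.
# The rules below are invented purely for illustration.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def infer(facts: set) -> set:
    # Keep applying rules until no new conclusions appear.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived - facts

print(infer({"fever", "cough", "short_of_breath"}))
# -> {'flu_suspected', 'see_doctor'} (set order may vary)
```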
Such expert systems, which could respond to input questions, represented the pinnacle of AI technology at the time and were seen as a manifestation of computer "intelligence." As a result, the project received significant attention from the Japanese government, which invested heavily in developing faster and more knowledgeable expert systems. Inspired by Japan’s efforts, the U.S. and many European countries also entered the race.
Predictably, the success of expert systems proved short-lived. They could neither learn nor update their knowledge bases on their own, which made maintenance extremely costly. Like early offline GPS navigators that needed annual map updates to stay usable, a system left without constant upkeep soon went out of date and could no longer give accurate answers.
The failure of expert systems triggered a crisis of confidence in AI. The collapse of the specialized AI hardware market for Lisp machines, coupled with theoretical stagnation and the withdrawal of funding by governments and institutions, plunged the field into a prolonged downturn now remembered as an AI winter.
Fortunately, even as capital retreated, theoretical research in AI continued quietly. In 1988, American scientist Judea Pearl introduced probabilistic statistical methods into AI reasoning, a breakthrough that profoundly influenced later developments. In 1989, Yann LeCun and his team at AT&T Bell Labs used convolutional neural networks to enable AI to recognize handwritten ZIP code digits.
Over the next two decades, AI technology gradually merged with computing and the internet. Advances in four key catalysts—massive parallel computing, big data, deep learning algorithms, and brain-inspired chips—along with declining computational costs, propelled AI forward at an unprecedented pace.
Capitalizing on the growth of computers and the internet, AI rebranded itself as business intelligence, data analytics, digitalization, and automation, infiltrating every corner of societal development.
In 2011, IBM's Watson defeated human champions in the natural-language quiz show Jeopardy!, and by 2015 image recognition algorithms had surpassed human accuracy on the ImageNet challenge. In 2016, AlphaGo defeated Lee Sedol, becoming the first AI to beat a world champion at Go.
In recent years, the most common criticism of AI has been its perceived lack of intelligence.
Public perception of AI is often polarized. On one hand, media reports highlight groundbreaking AI achievements, while prominent figures warn of its risks, and governments, including China’s, incorporate AI into national development plans.
On the other hand, news of self-driving car accidents, malfunctioning smart home devices, and repetitive recommendations from content platforms raises a doubt: where is the intelligence in artificial intelligence?
Before answering this question, it’s essential to distinguish between strong AI and weak AI.
Initially, the Dartmouth conference introduced the term "artificial intelligence" without such distinctions. The prevailing belief was that AI meant endowing machines with human-like thought and decision-making capabilities. Early research aimed to simulate human cognition to achieve true machine intelligence.
But it soon became clear that this approach produced only simulations of intelligence, not genuine understanding. American philosopher John Searle proposed the "Chinese Room" thought experiment:
Imagine an English speaker sealed in a room with only a small window, equipped with a rulebook for manipulating Chinese symbols, paper, and pencils. Slips bearing Chinese characters are passed through the window, and the person follows the rulebook to assemble responses in Chinese. Despite knowing no Chinese, the person can convincingly pass for a fluent speaker.
Crucially, the book only provides syntactic rules, not semantic understanding. The person doesn’t comprehend the questions or answers—they merely assemble characters based on the rules.
Searle argued that AI operates similarly. Computers don’t truly understand information; they run programs that process data to create the illusion of intelligence.
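The argument is easy to make concrete. The toy sketch below, with an invented two-entry rulebook, "answers" Chinese questions by pure pattern lookup, the kind of syntactic shuffling Searle describes; nothing in it models meaning.

```python
# A toy "Chinese Room": the program answers by looking up symbol
# patterns in a rulebook. It has no notion of what the symbols mean.
RULEBOOK = {
    "你好吗?": "我很好,谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样?": "今天天气很好。",  # "How's the weather?" -> "It's nice."
}

def respond(message: str) -> str:
    # Pure syntactic matching; nothing here "understands" Chinese.
    return RULEBOOK.get(message, "请再说一遍。")  # default: "Please repeat that."

print(respond("你好吗?"))  # -> 我很好,谢谢。
```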
For example, image recognition works by converting colors into numerical codes, identifying patterns, and matching them to labels like "airplane" or "rabbit." The computer doesn’t "know" what it’s seeing—it just calculates probabilities based on training data.
Most recognition algorithms are probabilistic at heart; they differ mainly in the data they require and in how they settle on a verdict such as "airplane."
Today’s widely used models rely on matrix operations to derive probability distributions from training data. Complex models involve high-dimensional distributions and advanced math, but the core idea remains: using probabilities to describe data features. This enables "recognition" or "prediction" for similar inputs.
In truth, models don’t "understand" concepts like humans do. They simply identify patterns—for instance, recognizing images that statistically resemble airplanes.
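As a rough illustration of the "matrix operations yield a probability distribution" idea described above, here is a minimal sketch; the weights are random stand-ins where a trained model would have learned values, and the three labels are invented.

```python
import numpy as np

# A minimal sketch: an image becomes a vector of numbers, a matrix
# multiply produces a raw score per label, and a softmax turns the
# scores into a probability distribution over labels.
rng = np.random.default_rng(0)

labels = ["airplane", "rabbit", "cat"]
image = rng.random(64)            # a flattened 8x8 "image" as numbers
W = rng.standard_normal((3, 64))  # one row of weights per label (untrained stand-ins)
b = np.zeros(3)

scores = W @ image + b                          # matrix operation: raw score per label
probs = np.exp(scores) / np.exp(scores).sum()   # softmax -> probabilities

# The model doesn't "know" what an airplane is; it just reports which
# label's weight pattern the input numbers resemble most.
print(dict(zip(labels, probs.round(3))))
```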
The industry eventually came to recognize this as well, and the concept of artificial intelligence was divided into strong AI and weak AI.
The strong AI school still pursues computers with human-like minds and consciousness, capable of choosing their own actions, like Maeve in Westworld, who gradually evolves self-awareness out of her fixed programming. But strong AI is very hard to research, and no mature applications exist on the market.
Weak AI, by contrast, is more like a tool for solving specific problems: problems whose solutions can be generalized from experience through statistics. The method that makes this possible is called "machine learning."
At its most basic, machine learning uses algorithms to parse data, learn the regularities in it, and then make decisions about events in the real world. Unlike traditional programming, machine learning trains on large volumes of data, using various algorithms to learn from the data "how to perform the task."
Quantitative trading, facial recognition, and AlphaGo are all machine learning models that excel at a single thing. When training AlphaGo, we taught it only the skill of playing Go, so playing Go is all it can do. Hand AlphaGo a math problem and it obviously has no idea where to start.
Every machine learning model can only perform a specific task, so we often combine models to cover more scenarios. A smart speaker, for example, is essentially a speech recognition model coupled with an NLP (natural language processing) model. It does not truly understand what our words mean; it merely converts the audio it receives into model input and finds the corresponding output in its dictionary.
As these characteristics make clear, if you want to generalize experience through statistics, the quantity and quality of data are the decisive conditions. No data, no artificial intelligence.
In other words, when you have not performed a given kind of behavior before, or when few people behave much like you, AI has no basis for judgment. This is a major reason artificial intelligence turns into "artificial stupidity." As behavior accumulates, data grows, and data quality improves, you will find the predictions becoming more and more accurate; with enough data, AI really can "think what you think."
We said earlier that weak AI is like a tool for solving a specific problem. But is every problem suited to machine learning? Clearly not.
Problems suited to machine learning share three basic conditions.
(1) There is a pattern to learn. The problem must have commonality, an underlying regularity waiting to be discovered.
(2) It is hard to solve by programming. The relationships within the data are so complex that the rules can hardly be enumerated exhaustively.
(3) There is enough data to learn the pattern from. Without data to build on, machine learning is like a house with a frame but no bricks.
Let's look at an example.
Spam detection is a classic scenario solved with machine learning. The most common spam is marketing email of various kinds, usually sent by websites people have registered with using their email address. Marketing email necessarily contains product or promotional information, so this kind of mail follows certain patterns.
But because products vary so widely, it is hard to write out all the rules in code, and even if we could, senders would devise ways to evade detection. At the same time, it is easy to collect large numbers of spam and legitimate messages as sample data. The scenario is therefore ideally suited to machine learning; a minimal sketch of such a filter appears at the end of this article.
If instead we wanted to predict how many characters a new email will contain, machine learning would be a poor fit. The problem is likewise hard to solve by programming, and plenty of historical email is available, but the character count is too random; there is no pattern to learn.
Machine learning, then, is not a cure-all, and not every problem yields to it. What machine learning excels at is finding patterns in known experience and using them to solve problems. If the problem at hand has no pattern at all and is a purely random event, then no machine learning algorithm, however sophisticated, will help.
Notably, many problems only look patternless because humans cannot cope with data at that scale; seemingly chaotic data hides the structure beneath it. Such problems are not truly intractable; they simply call for the right method.
Through machine learning we can analyze large volumes of data to extract rules and use those rules to make predictions about unseen data. It not only finds the patterns humans can see but, more importantly, finds patterns humans cannot see, and in far less time. That, I think, is machine learning's greatest practical value.
In medicine, image recognition already lets computers identify tumor cells automatically, helping doctors reach diagnoses faster. In manufacturing, reinforcement learning detects product defects automatically, raising yields and helping companies shorten production cycles and cut costs. In finance, neural networks help avoid the losses that traditional program trading risks when it cannot adjust its algorithm to market moves as they happen. In retail, security, aviation, the internet, and many other fields, machine learning is widely applied and has already changed our lives profoundly.
Finally, we must recognize that today's artificial intelligence is not true intelligence, only an intelligence that imitates human behavior. True intelligence remains very far from our daily lives. Fortunately, even an intelligence that merely imitates human behavior has already brought enormous convenience, and I believe that as the technology develops we will create scenarios beyond imagination.
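As promised in the spam-filter discussion above, here is a minimal sketch of that idea, assuming scikit-learn is available; the four sample messages are invented for illustration, and a real filter would train on thousands of labeled emails.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# A minimal spam-filter sketch: the model learns word-frequency
# patterns from labeled examples instead of hand-written rules.
emails = [
    "Huge discount on new products, buy now",      # spam
    "Limited promotion, click to claim your prize",  # spam
    "Meeting moved to 3pm, see agenda attached",   # legitimate
    "Here are the notes from yesterday's review",  # legitimate
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

vectorizer = CountVectorizer()   # turn each email into word counts
X = vectorizer.fit_transform(emails)

model = MultinomialNB()          # a simple probabilistic classifier
model.fit(X, labels)

test = vectorizer.transform(["Special promotion: discount prize inside"])
print(model.predict(test))      # [1] -> flagged as spam
```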