The Vast Economic Potential of Artificial Intelligence: Global AI Investment Prospects
Global AI Investment to Reach $200 Billion by 2025
By 2025, U.S. investment in artificial intelligence (AI) is expected to reach $100 billion, while global AI investment could hit $200 billion, a scale of spending that forecasters expect to lift overall economic output.
This surge in AI investment is driven by the vast economic potential of generative AI, the branch of AI that creates new content and is today typified by large language models, with ChatGPT being the prime example.
Since OpenAI launched ChatGPT in November 2022 and ignited a global AI frenzy, hundreds of large models have been released worldwide, including over 80 in China alone as of July 2023. Major tech giants such as Baidu, Alibaba, Huawei, Microsoft, Google, and Meta have all entered the fray, grappling with the same pressing question: "How do we monetize these massive investments?"
Despite the hype, recent Q2 earnings reports from Microsoft, Google, and Meta revealed continued heavy AI spending to maintain their competitive edge, yet these investments have not produced immediate returns. Microsoft, for instance, saw its stock decline for two consecutive days after earnings, signaling investor unease.
Many investors believe the first wave of competition and investment in large AI models has concluded. The next phase hinges on solving commercialization challenges to ease funding difficulties. Second- and third-tier players are now the focus for new investments.
In the first half of 2023, AI-related funding rounds were dominated by angel rounds, Series A, and strategic investments, totaling 154 deals (59, 57, and 38, respectively). The key challenge for investors lies in identifying viable application scenarios for commercialization—a hurdle many companies have yet to overcome.
A telling anecdote about how hard it is to find the right direction: OpenAI operated without a clear research goal for its first 15 months, and in May 2016 a visiting Google AI researcher expressed confusion over its approach.
Pre-trained large models have significantly advanced AI's general capabilities. Models with billions of parameters can process vast data, understand natural language, perform complex reasoning, and generate high-quality content. AI is transitioning from task-specific solutions to broader applications, poised to create value at scale.
A productivity revolution is brewing. McKinsey's report, The Economic Potential of Generative AI: The Next Productivity Frontier, estimates generative AI could add $2.6–4.4 trillion annually to the global economy.
At the recent AWS New York Summit, "generative AI" was the most frequently mentioned keyword. Swami Sivasubramanian, AWS VP of Databases, Analytics, and Machine Learning, noted, "Today, large models can be pre-trained on unlabeled data for out-of-the-box use in general tasks. With minimal fine-tuning, they adapt to specialized applications—a game-changer."
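As a rough sketch of the pattern Sivasubramanian describes, the snippet below adapts a generic pre-trained checkpoint to a downstream task using the Hugging Face transformers library; the library, model name, and dataset are illustrative choices, not anything named in his remarks.

```python
# A minimal sketch of adapting a pre-trained model to a specialized task
# via fine-tuning, using the Hugging Face `transformers` library.
# The model name and toy dataset are illustrative placeholders.
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "distilbert-base-uncased"  # any pre-trained checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2)

# Out of the box, the pre-trained encoder already handles general
# language; fine-tuning only adjusts it to the downstream label space.
dataset = load_dataset("imdb", split="train[:1000]")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
)
trainer.train()  # "minimal fine-tuning" on a small labeled sample
```

The point of the sketch is the division of labor: the expensive pre-training on unlabeled data has already happened, and adaptation amounts to a short, cheap training run on task-specific labels.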
The battle over large models has intensified. While OpenAI and Google lead, open-source alternatives are rising. The future will likely see no single dominant model.
Just two months after ChatGPT's release, Anthropic unveiled Claude, billed as its 'strongest competitor'; Claude was upgraded to Claude 2 in early July. LLaMA, hailed as the 'most powerful open-source large model in the AI community,' was recently upgraded to Llama 2, continuously raising the bar for open-source large models.
As some industry insiders have noted, no closed-source large model provider has a real moat: challengers like Claude have matched the leaders within months, while open-source models like Llama offer faster iteration, greater customizability, and stronger privacy.
These open-source models are increasingly being integrated into Amazon Web Services (AWS). In April, AWS launched Amazon Bedrock, a fully managed foundation model service, joining the large model battle as a 'key infrastructure provider.'
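As an illustration of what a fully managed service means for developers, the sketch below calls a hosted foundation model through the AWS SDK for Python; the bedrock-runtime client, region, model identifier, and prompt format are assumptions that depend on account access and may have changed since Bedrock's preview.

```python
# A hedged sketch of calling a hosted foundation model through Amazon
# Bedrock with boto3. The model ID and Claude-style prompt format are
# assumptions; available models depend on account access and region.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": ("\n\nHuman: Summarize vector databases in one sentence."
               "\n\nAssistant:"),
    "max_tokens_to_sample": 200,
})

response = client.invoke_model(
    modelId="anthropic.claude-v2",  # assumed model identifier
    body=body,
)
# The response body is a stream; Claude-style models return a
# "completion" field in the decoded JSON.
print(json.loads(response["body"].read())["completion"])
```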
Today, even though generative AI models are incredibly powerful, they still cannot replace humans in performing certain critical, personalized tasks. For example, an AI customer service agent on an e-commerce platform can quickly inform a customer about the availability of a desired product's style, size, or color but cannot handle subsequent order updates or transaction management.
Closing this gap is a crucial step in turning 'generative AI' into 'productivity.' The issue isn't unsolvable: models can often be augmented with APIs, plugins, or databases that extend their functionality and automate specific tasks for users. ChatGPT, for instance, previously introduced a plugin mechanism and an open platform for developers, allowing users to extend its capabilities based on their needs, ideas, and expertise, as sketched below.
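To make the augmentation idea concrete, here is a minimal sketch in the style of OpenAI's mid-2023 function-calling API (the openai Python SDK 0.x interface); the update_order function and its schema are hypothetical stand-ins for a real e-commerce backend.

```python
# A minimal sketch of extending a chat model with a callable tool, in the
# style of OpenAI function calling. `update_order` and its schema are
# hypothetical; a real system would validate inputs and call a backend.
# Assumes OPENAI_API_KEY is set in the environment.
import json
import openai

def update_order(order_id: str, new_size: str) -> dict:
    # Placeholder for a real e-commerce backend call.
    return {"order_id": order_id, "status": "updated", "size": new_size}

functions = [{
    "name": "update_order",
    "description": "Change the size on an existing order",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string"},
            "new_size": {"type": "string"},
        },
        "required": ["order_id", "new_size"],
    },
}]

messages = [{"role": "user",
             "content": "Please change order 12345 to size L."}]
resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613", messages=messages, functions=functions)

call = resp["choices"][0]["message"].get("function_call")
if call and call["name"] == "update_order":
    # The model decided the request needs the tool; the app runs it.
    result = update_order(**json.loads(call["arguments"]))
    print(result)
```

Note that the model never executes anything itself: it only proposes a structured call, and the surrounding application decides whether to run it, which is what keeps order updates and transactions under the platform's control.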
The Transformation of Search Technology in the Generative AI Era
Amid the heated discussions on addressing the challenges of deploying large models, the concepts of 'vector search' and 'vector databases' have gained prominence. These represent the evolving retrieval technologies in the generative AI era.
First, as data volumes grow, keyword-based retrieval alone is insufficient, and vector retrieval can complement traditional search techniques. By representing data as vectors (embeddings), similarity can be measured directly in the vector space, so semantically related items are identified and matched even when their keywords differ, as the sketch below illustrates.
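A toy sketch of the idea follows, with a deliberately naive bag-of-words encoder standing in for a learned embedding model; production systems use neural embeddings and an index such as FAISS or a vector database.

```python
# A toy sketch of vector retrieval: represent items as vectors and rank
# them by cosine similarity. The bag-of-words embedding is only a
# stand-in to keep the example self-contained and runnable.
import numpy as np

VOCAB = ["red", "blue", "dress", "shoe", "large", "small"]

def embed(text: str) -> np.ndarray:
    """Map text to a vector: here, word counts over a tiny vocabulary."""
    words = text.lower().split()
    return np.array([words.count(w) for w in VOCAB], dtype=float)

catalog = ["red dress large", "blue dress small", "red shoe large"]
vectors = np.stack([embed(item) for item in catalog])

query = embed("large red dress")
# Cosine similarity: matching by direction in vector space,
# not by exact keyword order.
scores = vectors @ query / (
    np.linalg.norm(vectors, axis=1) * np.linalg.norm(query) + 1e-9)
print(catalog[int(np.argmax(scores))])  # -> "red dress large"
```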
Second, while pre-trained large models are highly capable, they have limitations, such as lacking domain-specific knowledge, long-term memory, and factual consistency. In the current landscape of expanding data and scarce computing resources, vector databases can serve as a 'super brain' for large models, providing dynamic knowledge at a relatively low cost to meet users' growing demands.
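This 'super brain' pattern is commonly known as retrieval-augmented generation: relevant facts are fetched from the vector store at query time and injected into the prompt, so knowledge can be updated without retraining the model. Below is a minimal, self-contained sketch, with a stand-in encoder and the final model call left as a placeholder.

```python
# A minimal sketch of retrieval-augmented generation: a vector store
# supplies domain knowledge the model was never trained on. The
# character-trigram encoder is a stand-in for a real embedding model,
# and the final large-model call is left as a placeholder.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in encoder: character trigrams hashed into a fixed-size vector.
    vec = np.zeros(64)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % 64] += 1
    return vec

# "Dynamic knowledge": updating these documents needs no retraining.
docs = [
    "Order 12345 shipped on July 20 and arrives within 5 days.",
    "Returns are accepted within 30 days of delivery.",
]
index = np.stack([embed(d) for d in docs])

def retrieve(question: str, k: int = 1) -> list:
    q = embed(question)
    scores = index @ q / (
        np.linalg.norm(index, axis=1) * np.linalg.norm(q) + 1e-9)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "When will order 12345 arrive?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this augmented prompt would then be sent to the large model
```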