The First Round of AI Model Competition and Investment Concludes, Global AI Investment Expected to Reach $200 Billion by 2025

Posted by baoshi.rao in AI Insights
    Generative AI holds immense economic potential: it could boost global labor productivity by more than 1 percentage point annually in the decade following widespread adoption. The current surge in AI investment is attributed to that promise. Generative AI, the branch of artificial intelligence focused on creating new content with large language models, is best known through ChatGPT.

    Several overseas cloud providers have released their latest financial reports. Microsoft's Q4 FY2023 revenue reached $56.19 billion, up 8% year over year, with net profit of $20.08 billion, up 20%. Alphabet's Q2 2023 revenue was $74.604 billion, up 7%, with net profit of $18.368 billion, up 15%. Meta's Q2 2023 revenue was $31.999 billion, up 11%, with net profit of $7.788 billion, up 16%. Google Cloud's Q2 revenue was $8.031 billion, up 28%, and Microsoft's Intelligent Cloud segment posted Q4 FY2023 revenue of $24 billion, up 15% year over year.

    Global AI Investment to Reach $200 Billion by 2025

    By 2025, AI investment in the U.S. is projected to reach $100 billion, while global AI investment could hit $200 billion, potentially stimulating the overall economy.

    Since OpenAI launched ChatGPT in November 2022 and ignited a global AI frenzy, hundreds of large models have been released worldwide; by July 2023, more than 80 had come from China alone. Major tech giants have all entered the arena.

    Even leading AI companies such as Baidu, Alibaba, Huawei, Microsoft, Google, and Meta face a pressing question in this battle of "ultimate players": after massive investments, how do they monetize?

    The storm has raged for over half a year. Recently, Microsoft, Google, and Meta released their Q2 earnings reports, announcing continued heavy investments in AI to maintain their competitive edge. However, previous billions in investments have yet to yield immediate returns. Microsoft, whose stock surged due to ChatGPT, saw its shares decline for two consecutive days post-earnings. Capital markets show signs of restlessness.

    In fact, many investors believe the first round of competition and investment in large AI models has ended. In the next phase, only those who solve commercialization challenges will ease funding difficulties. Second and third-tier players are now the focus for investors who missed the initial wave.

    In the first half of 2023, AI-related funding rounds were dominated by angel rounds, Series A, and strategic investments, totaling 154 deals (59, 57, and 38, respectively).

    The key challenge for most investors today lies in finding suitable application scenarios in which AI technologies can be commercialized. Many companies have not yet even found the stepping stones for crossing this river.

    A noteworthy fact is that OpenAI didn't have a clear research direction for its first 15 months. In May 2016, when Google's chief AI researcher visited OpenAI, he was quite puzzled by their working methods.

    Pre-trained large models have significantly advanced AI's general capabilities. Models with billions or even hundreds of billions of parameters can not only process massive amounts of information quickly but also understand natural language inputs, perform complex logical reasoning, and demonstrate excellent content generation abilities. AI is transitioning from task-specific solutions to more broadly applicable ones, potentially creating value on a massive scale.

    A productivity revolution is brewing. According to McKinsey's recent report "The Economic Potential of Generative AI: The Next Productivity Frontier," generative AI could add $2.6 to $4.4 trillion annually to the global economy.

    At the recent AWS Summit in New York, "generative AI" was the most frequently mentioned keyword throughout the event.

    "Today, large models can be pre-trained on vast amounts of unlabeled data, making them ready to use out of the box for solving various general problems. With relatively small amounts of labeled data for fine-tuning, they can be adapted for domain-specific applications," said Swami Sivasubramanian, AWS's Global VP for Databases, Data Analytics, and Machine Learning. "The ability to easily customize pre-trained models through fine-tuning is absolutely a game-changer."

    Over the past six months, the battle among large models has intensified. While OpenAI and Google race ahead, the rapidly emerging open-source alternatives cannot be underestimated. It's foreseeable that in the future landscape of large model competition, "no single model will dominate everything."

    Just two months after ChatGPT's release, Anthropic quickly developed its "strongest competitor" Claude, which was upgraded to Claude 2 in early July. LLaMa, hailed as "the most powerful open-source large model in the AI community," was recently upgraded to LLaMa 2, continuously raising the bar for open-source model capabilities.

    As some industry insiders have noted, no proprietary large model provider has an unassailable moat. Whether it's LLaMa or Claude, open-source models have demonstrated advantages in faster iteration, greater customizability, and enhanced privacy.

    The capabilities of these open-source large models are increasingly being brought into the services of Amazon Web Services.

    In April this year, Amazon Web Services launched Amazon Bedrock, a fully managed foundation model service, joining the battle of large models as a 'key infrastructure provider'.

    Today, even though generative AI models are incredibly powerful, they still cannot replace humans in performing certain critical and personalized tasks.

    For example, if a customer wants to inquire about an exchange, an AI customer service agent on an e-commerce platform can quickly inform the customer about the availability of the desired style, size, or color, but it cannot complete subsequent operations like order updates or transaction management.

    Closing this gap is precisely the crucial step in transforming 'generative AI' into 'productivity'.

    The problem is not unsolvable: models can typically be augmented with APIs, plugins, or databases to extend functionality and automate specific tasks for users. For instance, ChatGPT previously introduced a plugin mechanism and provided an open platform for developers, allowing more users to expand its capabilities based on their needs, ideas, and expertise.
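As a rough illustration of that augmentation pattern, the sketch below shows a model-issued tool call being routed by application code to a hypothetical order-update function, the kind of follow-up action (order updates, transaction management) the customer-service example above says the model cannot perform on its own. The tool name, arguments, and JSON shape are assumptions for illustration only, not any particular platform's plugin API.

```python
# Sketch of augmenting a generative model with an external "tool": the model
# emits a structured tool call, and application code performs the real action.
import json

def update_order(order_id: str, new_size: str) -> dict:
    """Hypothetical order-management call that the model itself cannot perform."""
    # In a real system this would hit the e-commerce platform's backend.
    return {"order_id": order_id, "status": "updated", "size": new_size}

# Registry mapping tool names the model may request to real functions.
TOOLS = {"update_order": update_order}

# Suppose the model returned this JSON after the customer asked to exchange
# an item for a different size (the "exchange" scenario described above).
model_tool_call = json.dumps({
    "tool": "update_order",
    "arguments": {"order_id": "A1001", "new_size": "L"},
})

call = json.loads(model_tool_call)
result = TOOLS[call["tool"]](**call["arguments"])  # executed outside the model
print(result)  # the result can be fed back to the model to confirm with the customer
```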

    The Transformation of Search Technology in the Generative AI Era

    Amid the heated discussions on addressing the challenges of deploying large models, the concepts of 'vector search' and 'vector databases' have become increasingly well-known. This represents a retrieval technology transformation occurring in the generative AI era.

    First, as data scales grow, keyword-based retrieval can no longer meet demands, and vector retrieval serves as a supplement to traditional search technologies. By representing data as vectors, models can quickly analyze and understand vast amounts of information, accurately identifying and matching similar items.
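A minimal sketch of that idea, assuming documents and the query have already been embedded as vectors: rank items by cosine similarity to the query. Real systems would use model-generated embeddings and an approximate-nearest-neighbor index rather than the brute-force scan shown here.

```python
# Vector retrieval sketch: represent items as vectors and return the nearest
# neighbors of a query by cosine similarity. The random vectors below are
# stand-ins for real embeddings produced by a model.
import numpy as np

rng = np.random.default_rng(0)
corpus = rng.normal(size=(10_000, 384))   # stand-in document embeddings
query = rng.normal(size=384)              # stand-in query embedding

def cosine_top_k(query_vec, corpus_vecs, k=5):
    """Return indices and scores of the k corpus vectors most similar to the query."""
    corpus_norm = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    query_norm = query_vec / np.linalg.norm(query_vec)
    scores = corpus_norm @ query_norm     # cosine similarity per document
    top = np.argsort(-scores)[:k]
    return top, scores[top]

indices, scores = cosine_top_k(query, corpus)
print(indices, scores)
```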

    Second, while pre-trained large models are highly capable, they still have shortcomings, such as lacking domain knowledge, long-term memory, or factual consistency. In the current landscape of ever-growing data and increasingly precious computing resources, vector databases can act as a 'super brain' for large models, providing a relatively low-cost way to supplement dynamic knowledge and meet users' growing demands.
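A minimal sketch of that 'super brain' pattern, often called retrieval-augmented generation: fetch domain knowledge from a vector store first, then pass it to the model as context. Both search_vector_db and generate below are hypothetical stand-ins for a real vector-database client and a real model API.

```python
def search_vector_db(query: str, k: int = 3) -> list:
    """Hypothetical: return the k stored passages most similar to the query."""
    passages = [
        "Return window is 30 days from delivery.",
        "Exchanges require the original order number.",
    ]
    return passages[:k]

def generate(prompt: str) -> str:
    """Hypothetical: call out to a generative model API."""
    return f"(model answer based on a prompt of {len(prompt)} characters)"

def answer_with_context(question: str) -> str:
    # Retrieve domain knowledge the base model may lack, then prompt with it.
    context = "\n".join(search_vector_db(question))
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return generate(prompt)

print(answer_with_context("Can I exchange an item bought three weeks ago?"))
```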
