2025 AI Industry Development Plan and Growth Scale

    The initial layout of AI computing power has taken shape: domestic AI chips, deep learning frameworks, and other foundational hardware and software products have significantly increased their market share. Computing-power chips have largely achieved self-sufficiency and controllability, the share of domestic hardware has risen markedly and is fully compatible with domestic deep learning frameworks, and the interconnection of AI computing resources is promoting high-quality, self-sufficient, and controllable foundational hardware and software.

    2023 China AI Industry Development Plan and Trends

    Recently, the Beijing Municipal Government issued the "Implementation Plan for Accelerating the Construction of a Globally Influential AI Innovation Hub in Beijing (2023-2025)." The plan proposes leveraging Beijing's innovation resources in the AI field to continuously enhance the city's global influence and further drive leading-edge AI development.

    By 2025, the plan targets a core AI industry scale of 300 billion yuan, maintaining a growth rate above 10%, with the related industries it radiates exceeding 1 trillion yuan in scale. Leading AI enterprises will continue to increase R&D investment, the number of startups will grow, the total number of AI enterprises will remain the highest in the country, and 5-10 new unicorn companies will be cultivated. The depth and breadth of AI applications will further expand, with generative AI products becoming mainstream applications and ecosystem platforms in the domestic market and driving the industry toward the high end.

    It also proposes promoting breakthroughs in domestic artificial intelligence chips. To meet the demand for cloud-based distributed training, general-purpose high-performance training chips will be developed. For low-power edge computing scenarios, multi-modal intelligent sensing chips, autonomous intelligent decision-making and execution chips, and high-efficiency heterogeneous edge intelligence chips will be developed. On innovative chip architectures, reconfigurable computing, compute-in-memory, brain-inspired computing, and chiplet-based routes will be explored. Large-model developers will be actively guided to adopt domestic AI chips, accelerating the localization rate of AI computing power supply.

    Artificial Intelligence (AI) Product Market Share

    On May 30, according to foreign media reports, investment bank J.P. Morgan stated in an investment report that Nvidia is expected to capture up to 60% of the artificial intelligence (AI) product market this year, thanks to hardware such as its GPUs and networking products.

    It is reported that, due to a cyclical slowdown in its gaming business, the company's revenue for the first quarter of fiscal year 2024 fell 13% year-on-year to $7.19 billion. During the same period, however, its data center revenue reached a record $4.28 billion, up 14% year-on-year and accounting for about 60% of total revenue, while gaming revenue was $2.24 billion, down 38% year-on-year and accounting for about 31% of total revenue.

    Currently, Nvidia leads in the AI field, holding approximately 80% of the AI processor market share. Its high-end processors have been used to train and run various chatbots. The company is highly favored by investors and is considered a key supplier to meet AI computing power demands.

    The AI industry chain typically consists of upstream data and computing power layer, midstream algorithm layer, and downstream application layer. Recently, the market has paid more attention to the upstream industry chain, especially the computing power sector. Many new investment opportunities have emerged in AI hardware, as AI software applications rely on the computing power provided by hardware.

    Domestic AI computing power demand will maintain growth momentum

    Driven by the continued catalyst of ChatGPT, domestic AI computing power demand will maintain its growth momentum, and computing-power server manufacturers are expected to benefit. It is estimated that supporting ChatGPT's operation requires 7 to 8 data centers, each with an investment scale of about 3 billion yuan and 500P of computing power. In the era of the digital economy, global data volume and computing power scale will grow rapidly.

    With the simultaneous increase in demand for AI servers and AI chips, it is expected that the shipment volume of AI servers (including those equipped with GPUs, FPGAs, ASICs, and other main chips) will reach nearly 1.2 million units in 2023, a year-on-year increase of 38.4%, accounting for nearly 9% of total server shipments. By 2026, this proportion is expected to further increase to 15%. The institution has also revised the compound annual growth rate of AI server shipments from 2022 to 2026 to 22%, while the shipment volume of AI chips in 2023 is expected to grow by 46%.
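
    As a rough back-of-the-envelope check of how these projections fit together (the inputs below are the figures quoted above; the derived values are illustrative only, not reported data):

```python
# Back-of-the-envelope check of the quoted AI server shipment figures.
# Inputs are the numbers cited in the text; derived values are illustrative only.

ai_2023 = 1.2e6        # ~1.2 million AI servers shipped in 2023
yoy_2023 = 0.384       # +38.4% year-on-year
share_2023 = 0.09      # ~9% of total server shipments in 2023
cagr_22_26 = 0.22      # 22% CAGR for AI server shipments, 2022-2026
share_2026 = 0.15      # projected 15% share of total shipments in 2026

ai_2022 = ai_2023 / (1 + yoy_2023)           # implied 2022 base (~0.87 M)
ai_2026 = ai_2022 * (1 + cagr_22_26) ** 4    # projected 2026 shipments (~1.9 M)
total_2023 = ai_2023 / share_2023            # implied total server market, 2023
total_2026 = ai_2026 / share_2026            # implied total server market, 2026

print(f"Implied 2022 AI server shipments: {ai_2022 / 1e6:.2f} M")
print(f"Projected 2026 AI server shipments: {ai_2026 / 1e6:.2f} M")
print(f"Implied total shipments 2023: {total_2023 / 1e6:.1f} M, 2026: {total_2026 / 1e6:.1f} M")
```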

    The institution stated that NVIDIA GPUs have become the mainstream chips in AI servers, with a market share of about 60-70%, followed by ASIC chips independently developed by cloud computing manufacturers, with a market share of over 20%.

    Compared to general-purpose servers, AI servers use multiple accelerator cards, and their PCBs adopt high-layer-count HDI structures of higher value. In addition, their motherboards have far more layers than those of general-purpose servers, making the PCB value of an AI server 5-6 times that of an ordinary server.

    NVIDIA founder and CEO Jensen Huang announced during his keynote at NVIDIA Computex 2023 that the generative AI engine NVIDIA DGX GH200 has entered mass production. Observations from the demonstration reveal significant architectural changes in the newly launched GH200 server compared to the DGX H100. The GH200's PCB modifications include the reduction of one UBB and one CPU motherboard, while adding three NVLink module boards. With substantial performance improvements in the accelerator cards, the per-unit PCB value is expected to increase, indicating that AI advancements will continue driving value growth in the PCB sector.

    As a fundamental pillar of digital economic development, computing power resources are witnessing diversified application scenarios driven by new digital innovations, business models, and paradigms. The continuous expansion of computing scale has led to rising demand for computational power. According to data from China's Ministry of Industry and Information Technology, the total scale of operational data center racks nationwide exceeded 6.5 million standard units in 2022, with an average annual growth rate of over 25% in computing power scale during the past five years. As computing power becomes widely adopted across industries, different precision levels of computing need to "adapt" to diverse application scenarios. Particularly with the rapid development of artificial intelligence technology, the computing power structure is evolving accordingly, leading to increasing demand for intelligent computing capabilities.

    From a policy perspective, China places high importance on AI industry development, gradually solidifying the foundation for intelligent computing growth. In February 2022, four government departments jointly issued a notice approving the construction of national computing hub nodes in eight regions and planning ten national data center clusters, completing the overall layout design of China's integrated data center system. With the full implementation of the "East Data West Computing" project, the construction of intelligent computing centers has entered a new phase of accelerated development. As the carriers of data and applications, data centers form the foundation for AI development, and in the long term data center demand is expected to recover. The IDC market is projected to reach 612.3 billion yuan by 2024, a compound annual growth rate of 15.9% from 2022 to 2024, indicating that data centers will enter a new upward cycle.
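
    A similarly hedged back-solve of the base year implied by the quoted 2024 projection and CAGR (the earlier-year values below are derived for illustration, not reported figures):

```python
# Back-solve the IDC market base implied by the quoted 2024 projection and CAGR.
idc_2024 = 612.3                         # projected 2024 market size, billion yuan
cagr = 0.159                             # quoted CAGR, 2022-2024

idc_2022 = idc_2024 / (1 + cagr) ** 2    # implied 2022 base (~456 billion yuan)
idc_2023 = idc_2022 * (1 + cagr)         # implied 2023 midpoint (~528 billion yuan)
print(f"Implied 2022: {idc_2022:.1f} bn yuan, implied 2023: {idc_2023:.1f} bn yuan")
```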

    The Future Development Direction of Artificial Intelligence

    Artificial intelligence encompasses many concepts, some of which are difficult to measure and verify. For instance, while machines can output representations of societal norms or responsibilities, it's challenging to confirm whether they truly understand these concepts. Therefore, creating a closed loop around verifiable and measurable concepts is crucial—and embodied intelligence provides precisely such a loop, serving as an excellent starting point toward general intelligence.

    The rapidly advancing large AI models hold promise for breaking through limitations and endowing robots with "intelligence."

    Robotic large models include LLMs (Large Language Models), VLMs (Vision-Language Models), and VNMs (Visual Navigation Models). A robot's "brain" in the AI domain isn't limited to the language models used in ChatGPT. As highlighted in Google's LM-Nav research, the integration of an LLM, a VLM, and a VNM enables a pathway from natural language (verbose, colloquial descriptions) to text (landmark strings) to images (locating the described objects in images), ultimately generating path planning for the robot. This behavioral pattern allows robots to engage in human-machine interaction while achieving a degree of adaptability.
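
    To make that decomposition concrete, here is a minimal toy sketch of the LLM-to-VLM-to-VNM pipeline described above. It is not the actual LM-Nav code: the function names, the word-overlap "grounding" score, and the data are hypothetical stand-ins for the real models.

```python
# Toy sketch of an LM-Nav-style pipeline: instruction -> landmarks (LLM stand-in)
# -> landmark/observation grounding (VLM stand-in) -> route over map nodes (VNM stand-in).
from dataclasses import dataclass


@dataclass
class Observation:
    node_id: int   # node in the robot's topological map
    caption: str   # what a vision-language model "sees" at this node


def extract_landmarks(instruction: str) -> list[str]:
    """LLM stand-in: pull an ordered list of landmark phrases from a verbose instruction."""
    # A real system would prompt an LLM; here we simply split on "then".
    return [p.strip(" .,") for p in instruction.lower().split("then") if p.strip()]


def match_score(landmark: str, obs: Observation) -> int:
    """VLM stand-in: crude word-overlap score between a landmark and an observation."""
    return len(set(landmark.split()) & set(obs.caption.lower().split()))


def plan_route(landmarks: list[str], observations: list[Observation]) -> list[int]:
    """VNM stand-in: visit, in order, the map node that best matches each landmark."""
    return [max(observations, key=lambda o: match_score(lm, o)).node_id for lm in landmarks]


instruction = "Go past the fire hydrant, then the stop sign, then the blue truck"
observations = [
    Observation(0, "a red fire hydrant on the sidewalk"),
    Observation(1, "a stop sign at the corner"),
    Observation(2, "a blue truck parked by the curb"),
]
print(plan_route(extract_landmarks(instruction), observations))  # -> [0, 1, 2]
```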

    Recently, Professor Lu Cewu from Shanghai Jiao Tong University delivered a keynote speech titled "Embodied Intelligence" at the Machine Heart AI Technology Annual Conference, proposing the PIE framework. This approach identifies three modules of embodied intelligence: embodied Perception, embodied Imagination, and embodied Execution, which may accelerate the practical implementation of embodied intelligence.

    Currently, the combination of AI and robotics appears to be the most promising avenue for realizing "embodied intelligence."

    Compared to non-intelligent humanoid robots, embodied intelligence demonstrates significantly higher work efficiency, with its capabilities in comprehension, interaction, and planning making it highly practical for deployment across diverse industries. Additionally, its natural language control feature is a prerequisite for future large-scale collaboration with human workers.

    Therefore, attention should be paid to hardware robots and application scenarios that can be enhanced using large models, such as service robots focused on dialogue, industrial robots, and humanoid robots in complex environments.

    Major tech companies have already begun investing in embodied intelligence. Google released PaLM-E, its largest generalist model to date; Microsoft is exploring ways to extend ChatGPT into robotics; and Alibaba is experimenting with integrating its Qwen large model into industrial robots.

    Among these developments, Tesla's humanoid robot Optimus stands out as particularly impressive.

    Since its debut in October last year, when Optimus couldn't walk autonomously and required human assistance, it has made remarkable progress. By May 17, as shown in Tesla's shareholder meeting video, Optimus can now move flexibly in workshops, grasp objects, and possesses capabilities in environmental exploration and memory, motor torque control, AI training based on human motion tracking, and object manipulation. Moreover, it has integrated the FSD (Full Self-Driving) underlying modules, achieving a certain level of algorithm reuse.
