Baidu VP Qu Jing: Embracing the AI Era Must Be a 'Top Leader' Project

baoshi.rao wrote:

    On December 5, 2023, the awards ceremony of the '2023 New Generation Artificial Intelligence (Shenzhen) Entrepreneurship Competition', jointly hosted by the Shenzhen Internet Information Office, Bao'an District People's Government, and NetEase Media, was successfully held in Shenzhen.

    At the event, Qu Jing, Vice President of Baidu, delivered a keynote speech titled 'Reflections and Opportunities in the AI-Native Era'. She pointed out that there are currently too many large models in the industry but too few AI-native applications. Simply 'reinventing the wheel' with large models would lead to resource waste: 'The AI-native era requires a million AI-native applications, not 100 large AI models.'

    Regarding the potential opportunities in the future AI-native era, Qu Jing stated that powerful foundational large models will spur the explosion of native applications. Future AI-native applications will be developed based on the four core capabilities of large models: understanding, generation, logic, and memory.

    Additionally, Qu Jing emphasized that embracing the AI era must be a 'top leader' project. She noted that only founders can holistically consider the true commercial value AI brings to enterprises from a global perspective, enabling comprehensive planning that genuinely drives businesses to embrace AI's future.

    Below is the transcript of Qu Jing's keynote speech:

    Respected leaders and distinguished guests, good morning!

    First, congratulations once again to all the winning teams! I attended the roadshow presentations all day yesterday and gained immensely. I look forward to seeing today's winners become tomorrow's little giants and unicorns. I am Qu Jing from Baidu, and I am delighted to be invited to this entrepreneurship competition. I would like to take this opportunity to share some of Baidu's perspectives and reflections on large models. The theme of my speech is 'Reflections and Opportunities in the AI-Native Era.'

    First, let's discuss two key "reflections."

    The first reflection is that there are too many large models and too few AI-native applications.

    As of October, China has released 238 large models, compared to 79 in June—a threefold increase in just four months. But how many AI-native applications does China have? I believe most of us here would struggle to name even a few. In contrast, abroad, alongside dozens of foundational large models, there are already thousands of AI-native applications. This highlights that China has an excess of large models but a severe shortage of AI-native applications built on top of them.

    Looking back at the PC era, a vast range of software was built on the Windows operating system. In the mobile era, there were only two operating systems, Android and iOS, yet over 8 million mobile applications. Large models serve as the foundational platform for AI-native applications, much like operating systems. Developing a high-quality, usable large model is extremely challenging, and duplicating that effort is a significant waste of societal resources. Thus, what the AI-native era needs is a million AI-native applications, not a hundred so-called large models.

    The second reflection is that specialized large models without emergent intelligence capabilities have limited value.

    We observe a trend where many industries, enterprises, and even cities are purchasing GPUs and stockpiling chips to train their own specialized large models from scratch. However, these models lack emergent intelligence capabilities. Emergent intelligence arises only when the parameter scale is sufficiently large, the algorithms and data training are correctly implemented, and there is sustained investment. This enables the model to generalize and understand things it was never explicitly taught. Without these conditions, the model's value remains constrained.

    The emergence of intelligence requires massive computing power. Each training session of a model demands over 10^20 calculations, equivalent to 1,000 NVIDIA A100 chips computing for 100 days. If using an abacus, it would require all 8 billion people on Earth to calculate for 1 million years.

    Therefore, the industrialization model for large models should integrate the general capabilities of foundational large models with the specialized abilities of industry domains. This means combining large models with smaller models—specialized small models offer fast response times and low costs, while large models provide greater intelligence and serve as a safety net.

    Since its launch on August 31st, API calls to the Wenxin Large Model have grown exponentially. China has over 200 large models, many of which appear on various rankings and lists, yet most see little actual usage. The call volume of the Wenxin Large Model alone likely exceeds the combined call volume of all 200 of those models.

    I’ve just shared two points of reflection. Now, let me discuss three opportunities in the AI-native era.

    First, powerful foundational large models will spur the explosion of AI-native applications.

    China possesses leading foundational large models, providing a solid foundation for AI-native applications. On March 16th, Baidu was the first to launch the Wenxin Yiyan product, which has since undergone continuous iterations. In October, we released Wenxin Large Model 4.0, which shows significant improvements in understanding, generation, logic, and memory capabilities. For example, in generation, Wenxin Yiyan can create multimodal content like images, videos, and digital humans, covering over 200 writing genres and nearly all writing needs. AI-native applications are built on these four core capabilities—understanding, generation, logic, and memory—which were unattainable in previous eras.

    As of today, neither China nor the United States has seen the emergence of the best AI-native applications. The mobile era gave birth to numerous super apps, and this presents a golden opportunity for entrepreneurs. With its rich scenarios and solid industrial foundation, Shenzhen is poised to become a new fertile ground for innovation and entrepreneurship in the AI era.

    Second, embracing the AI era must be a 'top leader' project.

    Every enterprise and organization is contemplating how to embrace the era of large models and leverage generative AI technology to enhance their competitiveness. Like the adoption process of any new technology, the concept of AI-native will first be accepted by end consumers and startups, followed by small and medium-sized enterprises, and finally by large corporations.

    Baidu has encountered many companies where leadership is highly attentive to this opportunity but lacks a deep understanding of its essence. CEOs often delegate the task to IT leaders, who, along with engineers, may be misled into thinking that developing their own foundational models or selecting a high-scoring large model based on online evaluations equates to embracing AI. However, large models themselves do not generate any value and instead result in significant waste of company and societal resources.

    Why does embracing the AI era require top leadership to drive it? Because only the CEO cares whether new technology positively impacts key business metrics. For the internet industry, this means assessing whether large models improve DAU, user engagement time, retention rates, and monetization efficiency. For all enterprises, it’s about whether large models reduce costs, increase revenue and profits, and accelerate growth. This is the essence of the matter. In small companies, where top leaders oversee everything, it’s easier to develop native applications tailored to their needs.

    At Baidu, we have resolutely carried out an AI-native reconstruction across all our product lines. Baidu Wenku, a decade-old product, has undergone the most thorough transformation. For example, preparing a speech used to take several days of drafting and PPT creation. Now, Baidu Wenku can generate a 20+ page PPT in just one minute, including chart generation and formatting, at almost zero cost. The new Wenku has evolved from a content tool into a productivity tool. Thanks to this AI-native transformation, Wenku's payment rate has risen significantly: in just a few months, 13 million users have used its AI capabilities to create over 2 million PPTs, demonstrating AI's positive impact on key business metrics.

    The greater potential lies in the emergence of entirely new AI-native applications powered by large models. Baidu's intelligent code assistant, Comate, is one such example. Currently, for every 100 new lines of code at Baidu, 20 are AI-generated, and this ratio is rapidly increasing, significantly boosting development efficiency. The transformative potential of AI-native applications is only just beginning.

    Beyond internal reconstruction, Baidu has launched the "Qianfan Large Model Platform," a super factory for large models, to meet diverse customer needs. For customers requiring only computing power, Qianfan offers cost-effective heterogeneous computing services. Those looking to leverage existing large models can access APIs for 45 mainstream domestic and international models, including Baidu's ERNIE. The platform also provides a full lifecycle toolchain for model refinement and the industry's largest collection of high-quality datasets. At the application level, we offer a range of components and frameworks to accelerate enterprise application development. Additionally, customers can conveniently purchase mature AI-native applications through the AI-native app store.

    The following video showcases GBI, a commercial decision-making tool tailored for enterprise clients using the Qianfan platform.

    This is Baidu GBI, a generative business intelligence product. It reduces the time required for data analysis and report writing from weeks to minutes, enabling faster decision-making and a competitive edge.

    Today, the Qianfan Platform has become China's largest and most open large model development platform, with over 20,000 enterprises developing industry models and solutions covering nearly 500 scenarios across sectors such as government affairs, finance, industry, and transportation.

    Good applications drive the market and force market changes. By analogy, China's new energy vehicles account for 65% of the global market share. This is because national policies support the application side, effectively stimulating the rapid growth of the new energy vehicle industry through measures such as vehicle purchase tax exemptions and unrestricted road access. The AI industry is also demand-driven, so efforts should focus on the demand side and application layer, similar to subsidizing new energy vehicle users, encouraging enterprises to leverage large models to develop AI-native applications and using the market to drive industrial development.

    Globally, AI-native applications are becoming a major trend. Microsoft does not have its own foundational large model but has the most successful AI-native application: Office 365. From day one of its AI endeavors, Baidu has placed great emphasis on ecosystem development and now boasts 8 million AI developers.

    Not long ago, Baidu launched a large model plugin platform through which both individuals and enterprises can quickly turn their data and capabilities into plugins.

    Plugins are a special type of AI-native application. They are not only easy to use but also allow enterprises to leverage large model capabilities more conveniently without risking the exposure of private data, significantly lowering the barrier for developers.

    In the future, every enterprise's way of interacting with its customers will be transformed into AI-native applications, greatly enhancing corporate competitiveness and increasing the driving force for economic growth.

    Finally, after discussing large models and AI-native applications at length, the hope is that everyone will take action—use them, experience them, and engage in the innovation of AI-native applications. Together, let's create a flourishing and boundless AI-native era.

    Thank you all!
