
After the Hundred Models War, AI-Native Applications Are 'Struggling to Be Born'

Posted in AI Insights by baoshi.rao
    After the Hundred Models War, exhausted entrepreneurs are gradually realizing: China's real opportunity lies in the application layer, and AI-native applications are the most fertile ground for the next wave.

    Image sourced from the internet

Review the speeches of industry leaders such as Robin Li, Wang Xiaochuan, Zhou Hongyi, and Fu Sheng over the past few months, and one theme recurs: all of them emphasize the enormous opportunities at the application layer.

    Internet giants are all talking about AI-native: Baidu released over 20 AI-native applications at once; ByteDance formed a new team focused on the application layer; Tencent integrated large models into mini-programs; Alibaba plans to revamp all its applications using Tongyi Qianwen; WPS is aggressively giving away AI trial cards...

    Startups are even more fanatical. A single hackathon produced nearly 200 AI-native projects. This year, events hosted by MiraclePlus, Baidu, Founder Park, and others have collectively spawned thousands of projects, yet none have truly stood out.

    It’s undeniable that despite recognizing the vast opportunities at the application layer, large models haven’t revolutionized all applications. Most products are undergoing superficial transformations. Even China’s top product managers seem to have "lost their touch" this time.

    From the explosive popularity of Midjourney in April to now, nine months have passed. Why is it so difficult to develop domestic AI-native applications that carry the "hopes of the entire village"?

Choosing is more important than effort. At this moment, perhaps we need to calmly reflect and find the right way to approach AI-native applications.

    Why are native applications so hard to develop? We might find some answers by examining the "production" process of these applications.

    "We usually run four or five models simultaneously and choose the one with the best performance," a Silicon Valley entrepreneur working on large models mentioned in a conversation with "Self Quadrant." They develop AI applications based on foundational large models but do not commit to any single model initially. Instead, they let each model run and ultimately select the most suitable one.

    Simply put, the horse racing mechanism has now extended to large models.

However, this approach still has drawbacks. Although it tries out different large models, the application ultimately becomes deeply coupled to one specific model. This remains an "end-to-end" development approach: one application tied to one large model.
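The "horse racing" selection described above can be sketched in a few lines: run the same prompt through several candidate models and keep the one whose output scores best. The model functions and the scoring heuristic below are stand-ins, not real SDK calls; a real pipeline would call actual model APIs and evaluate with benchmarks or human review.

```python
# Hypothetical sketch of racing several models on one prompt and
# picking the best performer. All "models" here are placeholders.

from typing import Callable, Dict

def model_a(prompt: str) -> str:
    return f"A: short answer to '{prompt}'"

def model_b(prompt: str) -> str:
    return f"B: a longer, more detailed answer to '{prompt}' with examples"

def score(output: str) -> float:
    # Toy heuristic: prefer longer outputs. Replace with a real
    # evaluation (benchmarks, human ratings) in practice.
    return float(len(output))

def race(models: Dict[str, Callable[[str], str]], prompt: str) -> str:
    """Run every candidate model on the prompt; return the best one's name."""
    results = {name: fn(prompt) for name, fn in models.items()}
    return max(results, key=lambda name: score(results[name]))

winner = race({"model_a": model_a, "model_b": model_b}, "summarize this report")
print(winner)  # → model_b
```

The drawback the article notes follows directly: once the race is over, the application hard-codes the winner and is coupled to it.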

Unlike an application, a foundational large model serves many applications at once, so applications built for the same scenario end up with minimal differentiation. A bigger issue is that current foundational models each have strengths and weaknesses; none has become an all-around leader in every field. Consequently, an application built on a single model struggles to perform evenly across all of its features.

    In this context, decoupling large models from applications has become a new approach.

    The so-called 'decoupling' actually consists of two phases.

First is the decoupling between large models and applications. Large models are the underlying driver of AI-native applications, and their relationship to those applications can be analogized to the automotive industry.

▲ Image from the internet

    For AI-native applications, large models are like the engines of cars. The same engine can be adapted to different car models, and the same car model can be matched with different engines. Through different tuning, it can achieve different positioning, from compact cars to luxury vehicles.

    Therefore, for the entire vehicle, the engine is just one part of the overall configuration and cannot become the core defining feature of the car.

    Analogously, for AI-native applications, foundational large models are the key drivers, but they should not be completely bound to the application. A single large model can power different applications, and the same application should be able to be driven by different large models.

    Such examples have already been demonstrated in current cases. For instance, domestic platforms like Feishu and DingTalk, as well as international tools like Slack, can all adapt to different foundational large models, allowing users to choose based on their needs.
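This kind of decoupling can be sketched as a thin adapter layer: the application codes against an abstract backend interface, and any concrete model can be plugged in at configuration time. All class and vendor names below are illustrative, not real integrations.

```python
# A minimal sketch of decoupling an application from any single model:
# the app depends only on an abstract interface, so the same app can
# be driven by different "engines". Names are hypothetical.

from abc import ABC, abstractmethod

class LLMBackend(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorX(LLMBackend):
    def complete(self, prompt: str) -> str:
        return f"[vendor-x] {prompt}"

class VendorY(LLMBackend):
    def complete(self, prompt: str) -> str:
        return f"[vendor-y] {prompt}"

class AssistantApp:
    """Application logic that never mentions a concrete model."""
    def __init__(self, backend: LLMBackend):
        self.backend = backend

    def summarize(self, text: str) -> str:
        return self.backend.complete(f"Summarize: {text}")

# The same application driven by two different engines:
print(AssistantApp(VendorX()).summarize("Q3 sales"))
print(AssistantApp(VendorY()).summarize("Q3 sales"))
```

This mirrors the engine analogy: the car (application) stays the same while the engine (model) is swapped underneath it.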

    Secondly, in specific applications, large models should be decoupled layer by layer from different application stages.

    A typical example is HeyGen, an AI video company that has gained explosive popularity abroad. Its annual recurring revenue reached $1 million by March this year and surged to $18 million by November.

HeyGen currently employs 25 staff members but has built its own video AI model while integrating large language models from OpenAI and Anthropic, along with Eleven Labs' audio products. When producing a single video, HeyGen uses distinct models at different stages, such as creation, script generation (text), and voice.

▲ Image from HeyGen's official website

Another more direct example is ChatGPT's plugin ecosystem. Recently, the domestic video-editing app Jianying joined ChatGPT's plugin pool. Now, when a user on ChatGPT asks it to call the Jianying plugin to make a video, Jianying automatically generates one under ChatGPT's direction.

In other words, the many-to-many matching between large models and applications can be refined down to choosing the best-suited large model for each individual step. That is, an application is driven not by a single large model but by several, even a whole group of large models working jointly.

    Multiple large models correspond to one application, combining the strengths of all. In such a model, the division of labor in the AI industry chain will also be redefined.

    Just like the current automotive industry chain, where engines, batteries, components, and bodywork are each handled by specialized manufacturers, the main factory only needs to select and assemble them to create differentiated products and bring them to market.

Redefining roles, breaking and rebuilding: without destruction, there is no creation.

    Under the multi-model, multi-application model, a new ecosystem will emerge.

    Following the clues, we attempt to envision the architecture of a new ecosystem based on internet experience.

    When mini-programs first emerged, everyone was uncertain about their capabilities, architecture, and suitable application scenarios. Initially, each enterprise had to learn the functionalities and strategies of mini-programs from scratch, leading to slow development and stagnant growth in their numbers.

It wasn't until WeChat service providers emerged that the situation improved. These providers understood the underlying architecture and framework of mini-programs and bridged the gap between the WeChat ecosystem and enterprise clients, helping businesses develop customized mini-programs tailored to their needs. They also leveraged the broader WeChat ecosystem to attract and retain customers through mini-programs. Among these service providers, companies like Weimob and Youzan stood out.

    In other words, the market may not need vertical large models, but it does need large model service providers.

Similarly, only through actual usage and operation can one truly understand the characteristics of each large model and how to leverage them. Service providers, positioned in the middle layer, can not only interface downward with multiple large models but also work upward with enterprises to build a healthy ecosystem.

    Based on past experience, service providers can be roughly categorized into three types:

    First, the experienced service providers, who understand and master the features and application scenarios of each large model, and work with industry-specific scenarios to open up opportunities through service teams.

Second, the resourceful service providers, similar to how Weimob once secured low-cost advertising space within WeChat and resold it. In the future, open access to large models will not be universal, and service providers who can obtain sufficient permissions will establish early barriers.

Third, the technical service providers, who address how to call and connect the multiple models embedded in an application's underlying layer while ensuring stability and security and resolving various other technical challenges.

According to observations by "Self Quadrant," prototypes of large model service providers have begun to emerge in the past six months, primarily in the form of enterprise services that teach businesses how to apply various large models. The approach to application development is gradually settling into a workflow.

    "Currently, when I create a video, I first propose a script idea to Claude to help me write a story, then copy and paste it into ChatGPT to break it down into a script using its logical capabilities. Next, I use the Jianying plugin to convert text to video and generate the video directly. If some images are inaccurate, I regenerate them using Midjourney to complete the video. If an application could simultaneously call all these capabilities, it would be a truly native application," an entrepreneur shared with us.

    Of course, implementing a multi-model, multi-application ecosystem presents many challenges to solve, such as how to interconnect multiple models, how to maximize model calls through algorithms, and what combinations yield the best solutions. These are both challenges and opportunities.

    From past experience, the development trend of AI applications may initially appear fragmented and scattered, only to gradually become unified and integrated.

For example, we currently use separate applications for tasks like Q&A, image generation, and creating PowerPoint presentations. However, in the future, these functions may be consolidated into a single, comprehensive product, moving toward platformization. This mirrors how services such as ride-hailing, food delivery, and ticket booking, once distinct industries, have now converged into super-apps. Such integration will also pose diversified challenges to model capabilities to meet varying demands.

Beyond this, AI-native solutions will disrupt current business models, leading to a redistribution of capital across the industry chain. Baidu has transformed into a knowledge shelf and Alibaba into a product shelf; all business models are reverting to their most fundamental purpose: addressing consumers' genuine needs while eliminating redundant processes.

    On this foundation, while value creation remains crucial, reconstructing business models emerges as an even more critical question for investors and entrepreneurs to consider.

    At present, we are still on the eve of the explosion of AI-native applications. A clear hierarchy is gradually forming: foundational large models at the bottom, large model service providers in the middle, and various startups at the top. Only with such distinct layers and healthy collaboration can AI-native applications arrive in batches.
