AI "Entrepreneurial Rules" - A Must-Read for AI Startups!

Posted in AI Insights by baoshi.rao

    The era of large models has arrived. Where are the future industrial opportunities? What do AI-native applications really look like? How can foundational models and killer apps coexist symbiotically? Are vertical models and foundational models competitors?

According to official news from Baidu, Robin Li will deliver a one-hour keynote speech at the Baidu World Conference on October 17, titled "Step-by-Step Guide to Building AI-Native Applications." With just 10 days left before the speech, the author has compiled Robin Li's remarks on large models and generative AI from this year, distilling 18 key insights from tens of thousands of words of speech transcripts. These insights are a key to the door of generative AI entrepreneurship: the "entrepreneurial rules."

    Robin Li's thoughts further reveal the essence of AI-native applications, providing inspiration for peers on the path of large-model entrepreneurship and innovation.

    Below are excerpts from the transcript. Enjoy:

    1. In the past, artificial intelligence meant teaching machines specific skills—what we taught, they could do; what we didn’t teach, they couldn’t. With the emergence of "intelligent emergence" in large models, they can now perform tasks they were never explicitly taught. This is why some say we are moving toward Artificial General Intelligence (AGI).

2. How do large models redefine artificial intelligence? The change is primarily in human-computer interaction. Over the past few decades, human-computer interaction has moved through three major paradigms: from command-line interfaces, to graphical user interfaces (GUI), and now, in the AI era, to natural language. In other words, future applications will rely on natural language prompts to invoke AI-native functionality.

    3. The advent of the AI era has expanded the IT technology stack from three layers to four: the bottom layer remains the chip layer, but the primary chips are no longer CPUs—they are now GPUs and other next-generation chips optimized for parallel large-scale floating-point operations. Above this is the framework layer, which includes deep learning frameworks like Baidu’s PaddlePaddle, Meta’s PyTorch, and Google’s TensorFlow. The next layer is the model layer, where ChatGPT, ERNIE Bot, and others reside. At the top is the application layer, where all future AI-native applications will be developed based on large models.

    4. Vertical models are not competitors to foundational models; instead, they should be built on top of robust foundational models. Without a strong foundational model, vertical models will struggle to improve and evolve. However, only a few companies will achieve high proficiency in foundational models.

    5. What entrepreneurial and investment opportunities will generative large models bring? I believe there are at least three major opportunities: first, new forms of cloud computing; second, fine-tuning industry-specific models; and third, application development.

    6. Large models are game-changers—they will completely redefine the rules of cloud computing. In the future, the primary business model for cloud computing companies will shift to MaaS (Model as a Service). Applications will be built on large models rather than relying on cloud computing resources like processing power or storage.

    7. In the era of large models, the greatest opportunities lie neither in foundational services nor in industry-specific services—they lie in applications. Just as in the mobile internet era, the biggest commercial opportunities weren’t operating systems like iOS or Android but applications like WeChat, TikTok, and Taobao. The U.S. has dozens of foundational large models, similar in scale to China’s, but the U.S. already has thousands of "AI-native applications" built on these models, while China has none. This is the biggest difference.

    8. Only when a sufficient number of AI-native applications emerge on top of large models will we have a healthy ecosystem, reflecting the broader technological trend. For entrepreneurs, competing in large models is meaningless—focusing on applications offers far greater opportunities.

    9. What defines an AI-native application? I believe it must meet at least three criteria: First, it must enable natural language interaction—this is the most fundamental change. Second, it must fully leverage capabilities like understanding, generation, reasoning, and memory—abilities that were previously unavailable. Third, the interaction flow for each application should not exceed two menu levels.

10. No AI-native application should have more than two levels of menus. Beyond two levels, users struggle to remember where functions are located. Many features painstakingly developed by engineers end up buried in third- or fourth-level menus, remaining unused. This is evident in tools like PowerPoint and Excel, where an estimated 80% of features go unused simply because they're too hard to find.

11. As long as you can think actively and express yourself clearly, the machine can work for you: this is the essence of AI-native applications.

    12. In the future, we must consciously cultivate an AI-native mindset to reconstruct every product and service. Generative AI has shown us that many tasks can now be accomplished with just a few keywords or mouse clicks, eliminating the need for lengthy descriptions. This revelation makes us realize how many engineer-developed features lie forgotten in deep menus, and how many creative ideas never get feedback. Now, simple prompts can unlock these potentials, so we must have the courage to change our thinking patterns.

13. The most distinctive feature of "AI-native" is prompt engineering, a field that didn't exist before. We never thought interacting with computers required such finesse. But going forward, crafting effective prompts to unleash large models' potential will be fascinating work and likely the area with the most new job opportunities, where compensation will depend on prompt-writing skill.

    14. I've made a bold prediction: in 10 years, 50% of jobs worldwide will relate to prompt engineering. Just as in education, asking the right questions is often more important than solving them. We'll need increasingly more prompt engineers.

    15. Today, Baidu has thousands of engineers skilled in C++ and Python. But when AI-native applications become mainstream, everyone may need to write prompts and evaluate their execution results.

16. The capabilities of a large model are fixed; its effectiveness depends entirely on the prompt. Good prompts yield more intelligent outputs and valuable results; poor prompts produce nonsense or incorrect conclusions.

    17. Future applications will operate through natural language prompts. Writing prompts is a technical skill requiring study. Crafting effective prompts blends technology and art, with perhaps more art than technology.
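The "blend of technology and art" in prompt writing can be made concrete with a small sketch. The helper below assembles a structured prompt from named components; the section labels ("Role", "Task", "Constraints") and the sample marketing task are illustrative conventions of my own, not a format required by any particular model.

```python
def build_prompt(role, task, constraints=(), examples=()):
    """Assemble a structured prompt from named components.

    Hypothetical helper: the section labels used here are one common
    convention for keeping prompts explicit, not a model requirement.
    """
    parts = [f"Role: {role}", f"Task: {task}"]
    if constraints:
        parts.append("Constraints:")
        parts.extend(f"- {c}" for c in constraints)
    if examples:
        parts.append("Examples:")
        parts.extend(f"- {e}" for e in examples)
    return "\n".join(parts)

# A specific, constrained prompt in place of a vague one-liner
prompt = build_prompt(
    role="a marketing copywriter",
    task="write three taglines for an AI note-taking app",
    constraints=["under 8 words each", "no jargon"],
)
```

The point of the structure is the contrast in item 16: "write me some taglines" leaves the model guessing, while spelling out role, task, and constraints pins down what a good answer looks like.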

18. Different large models, such as ERNIE Bot and ChatGPT, have distinct prompt requirements because they are trained differently. If we compare them to people, each has its own "temperament"; we must learn through interaction how to write prompts that work best with each.
