Has the 'Winner Takes All' Rule in AI Startups Disappeared? Maybe You Can Turn the Tables

baoshi.rao wrote:
    Peter Thiel, the renowned Silicon Valley investor and founder of PayPal, once said, 'Competition is for losers. If you want to create and capture lasting value, build a monopoly.'

    This statement is an extreme articulation of the 'Winner Takes All' principle. 'Winner Takes All' refers to a scenario where a product or service that is slightly better than its competitors (e.g., by 1%) can capture a disproportionately large share of revenue (e.g., 90-100%) in its category, leaving competitors far behind.

    This phenomenon is evident in many industries, especially in technology. Globally, IBM dominated computing for decades; Microsoft led the personal computer market; and Amazon continues to single-handedly rule the e-commerce space. Clearly, a defining feature of the internet era is 'Winner Takes All, Loser Takes Nothing.'

    Understanding this is crucial because it changes our investment logic: if the traditional internet playbook of 'burning cash to eliminate competitors—monopolizing the market—leveraging network effects' no longer works, then investors who lived through the 'Groupon Wars' era may need a new investment methodology.

    To find answers, the Shidao Research team reviewed several articles by foreign authors, including Benedict Evans, a partner at A16Z, and Guru Chahal, a partner at Lightspeed Venture Partners, and attempted to distill their converging and opposing viewpoints.

    Overall, the virtuous cycle model of artificial intelligence introduced by Andrew Ng provides the underlying logic for 'Winner Takes All.' Initially, AI products are built with limited data. Over time, as they interact with users, these products collect increasing amounts of data daily. And machine learning is fundamentally based on data—lots of it.

    More data → More accurate models → Better products → More users → More data

    This virtuous cycle formula is considered a key factor in the winner-takes-all nature of the AI market. The combination of big data and machine learning amplifies network effects and returns to scale, further solidifying the dominance of market leaders. This means companies that are already large and possess vast amounts of data will grow even stronger.
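    The flywheel is easy to caricature in code. Below is a toy Python simulation of the loop; every constant (the saturation point, the per-user data contribution, the growth rule) is an invented assumption for illustration, not a measurement:

    ```python
    # Toy simulation of Andrew Ng's virtuous cycle:
    # more data -> better model -> more users -> more data.
    data = 1_000    # initial training examples (assumed)
    users = 100     # initial user base (assumed)

    for month in range(1, 13):
        accuracy = data / (data + 50_000)    # model quality saturates as data grows
        users = int(users * (1 + accuracy))  # a better product attracts more users
        data += users * 10                   # each user contributes ~10 examples/month
        print(f"month {month:2d}: data={data:>10,}  users={users:>8,}  accuracy={accuracy:.2f}")
    ```

    Even this toy version hints at a catch the article returns to later: the accuracy term saturates, so the flywheel spins fastest for whoever accumulated data first.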

    In the domestic Chinese market, data barriers present a significant wall for emerging companies. High-quality Chinese-language corpus data is a major challenge for startups, since accumulating data takes both time and experience. Companies like Baidu, which have gathered data for years through search and a range of internet and IoT applications, start several lengths ahead.

    While data is crucial, A16Z partner and renowned analyst Benedict Evans offers a different perspective on how data functions in practical applications.

    In his article 'Does AI Make Strong Tech Companies Stronger?', Evans points out that while machine learning requires massive amounts of data, the data used must be highly relevant to the specific problem being solved.

    General Electric has vast amounts of telemetry data from gas turbines, Google has massive search data, and American Express has extensive credit card fraud data. However, you can't use turbine data to train a model for detecting fraudulent transactions, nor can you use web search data to train a model for identifying failing gas turbines.

    Every model you train can only do one thing.

    This is very similar to previous waves of automation: just as washing machines can only wash clothes but not wash dishes or cook, and chess programs can't file taxes, machine learning translation systems also cannot recognize cats.

    The applications you build and the datasets you need are strongly correlated with the specific tasks you're trying to solve. (Although this is a moving target, with research attempting to discover how to make machine learning models more transferable between different datasets.)

    This means Google will become increasingly better at being Google, but it doesn't mean it's getting better at everything else.
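    That said, the 'transferability' research mentioned in the parenthetical above already has a practical workhorse: transfer learning. Here is a minimal sketch in Python with PyTorch/torchvision; the 5-class target task is hypothetical, and the pattern is to reuse a model pretrained on one dataset by freezing its body and swapping its head:

    ```python
    import torch.nn as nn
    from torchvision import models

    # Start from a model pretrained on ImageNet (generic visual features).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pretrained layers so they keep their general-purpose features.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the classifier head for a hypothetical 5-class target task;
    # only this layer now needs task-specific data to train.
    model.fc = nn.Linear(model.fc.in_features, 5)
    ```

    The point of the sketch is the division of labor: the expensive, data-hungry part is reused, and the task-specific data requirement shrinks to what a single new layer needs.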

    So, in vertical markets, can leading companies capture the entire market by virtue of their overwhelming data advantage?

    Evans believes the situation will become more complex.

    Questions arise: who owns the data? How unique is it, and at what level? Where is the right place to aggregate and analyze it? The answers will vary across business units, industries, and use cases.

    Let's consider a scenario: if you're creating a company to solve real-world problems using machine learning, you'll face two fundamental data problems:

    1. How do you obtain the initial dataset to train your model and acquire your first customer?

    2. How much data do you need?

    The second question can be broken down into many sub-questions:

    • Can you solve the problem with a small amount of easily accessible data (which many competitors may also be able to obtain)?
    • Or do you need more, harder-to-obtain data to solve it?
    • If so, is there a network effect that can be leveraged? Will one winner take all the data?
    • Does the product improve indefinitely with more data, or is there an S-curve?

    It all depends.
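    The last question, whether returns on data flatten out, is the pivotal one for the winner-takes-all argument. The Python sketch below tabulates two hypothetical learning curves; both functional forms and all constants are assumptions chosen only to show the shapes:

    ```python
    import math

    def power_law(n):
        # keeps improving with more data, though with diminishing returns
        return 1.0 - n ** -0.3

    def s_curve(n, k=100_000):
        # saturates: past roughly k examples, extra data barely helps
        return 1.0 / (1.0 + math.exp(-(math.log10(n) - math.log10(k))))

    for n in (10**3, 10**4, 10**5, 10**6, 10**7):
        print(f"n={n:>10,}  power-law={power_law(n):.3f}  s-curve={s_curve(n):.3f}")
    ```

    If a product sits on the S-curve, the incumbent's extra billion examples buy almost nothing, and the data moat is far shallower than it looks.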

    Some data is unique to a company or product, or carries strong proprietary advantages, such as General Electric's turbine telemetry. But GE's data may not be very useful for analyzing Rolls-Royce's turbines.

    Some data can be applied to use cases across many companies and even industries. Numerous startups have emerged to address common problems faced by various companies or sectors, leveraging data that exhibits network effects.

    However, there are cases where, beyond a certain point, vendors no longer require additional data because the product already functions effectively. Evans notes that this scenario has played out in many startups. For example, Everlaw, a company backed by A16Z, developed legal software capable of sentiment analysis on a million emails without needing further training on specific litigation data from clients.

    A more extreme example involves a major vehicle manufacturer training models to develop a more accurate tire blowout detector. This model is trained on extensive tire data, yet acquiring such data is not particularly challenging.

    In essence, the proliferation of machine learning doesn't necessarily empower giants like Google but enables diverse startups to harness cutting-edge technology more swiftly to build applications and solve problems.

    The future won't see more 'AI startups' per se; instead, they will be industrial process analytics firms, legal platform companies, or sales optimization providers.

    Evans draws a parallel between machine learning and SQL (Structured Query Language).

    In the past, if you didn't use SQL, you would fall behind. For example, one of the key factors in Walmart's success was its use of SQL to manage inventory and logistics more efficiently.

    But today, when you start a retail company and say, '...we will use SQL,' it doesn't make the company appear more valuable because SQL has become a ubiquitous part of everything and has faded from the discourse.
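    How thoroughly SQL has been commoditized is visible in how little ceremony it takes today; a complete, working inventory store fits in a few lines of Python's standard library (the table and values are made up):

    ```python
    import sqlite3

    # SQL as invisible plumbing: an in-memory inventory store in five lines.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE inventory (sku TEXT PRIMARY KEY, qty INTEGER)")
    conn.execute("INSERT INTO inventory VALUES ('A-100', 42)")
    (qty,) = conn.execute("SELECT qty FROM inventory WHERE sku = 'A-100'").fetchone()
    print(qty)  # 42
    ```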

    The same will happen with machine learning in the future.

    The Shidao Investment Research team believes that, regardless of whether 'the winner takes all,' the investment logic of the internet era no longer holds in the age of artificial intelligence.

    The core logic lies in the fact that during the internet era, serving additional 'traffic' was essentially free, which gave rise to the concept of 'network effects': with operating costs fixed, each additional user increased the value of the network. This led to the idea that 'every industry could be reimagined with internet thinking.'

    The era of large models differs in that computing power comes at a cost. Each additional user consumes real computational resources, so scale no longer compounds value the way network effects did. That makes subsidies meaningless: the more new users you acquire, the less profitable you become.
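    A back-of-envelope comparison makes the unit economics concrete; every number below is invented for illustration:

    ```python
    def monthly_profit(users, revenue_per_user, marginal_cost_per_user, fixed_cost):
        return users * (revenue_per_user - marginal_cost_per_user) - fixed_cost

    # Internet-era service: marginal cost per user is close to zero,
    # so subsidized growth eventually compounds into profit.
    print(monthly_profit(1_000_000, 1.00, 0.01, 200_000))  # ~ 790,000

    # Large-model service: GPU inference is paid per user, per request,
    # so growth without pricing power just scales the loss.
    print(monthly_profit(1_000_000, 1.00, 0.90, 200_000))  # ~ -100,000
    ```

    Under these assumed numbers, the same million users that make an internet service profitable make a compute-bound service lose money.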

    Additionally, current large models face issues such as high usage costs, significant inference latency, data leakage, and inaccuracies in specialized tasks. In contrast, smaller, more specialized (fine-tuned and refined) long-tail models have begun to show their advantages.

    Therefore, even though most technologies can contribute to wealth accumulation, and AI giants can indeed amass significant wealth, computational costs and the impossibility of dominating the entire market will cap the total: no single winner takes it all.
