
Robotics 2.0 (1): AI Redefining Robotics

Posted by baoshi.rao in AI Insights
    This article will unveil the mystery of next-generation AI robots and analyze how they will influence our future.

    Artificial intelligence has ushered in a new era of robotics—Robotics 2.0—with the most significant shift being the transition from manually programmed automation to genuine autonomous learning. This article aims to demystify the application of artificial intelligence (AI), helping readers understand how AI robots will shape our future and clarify topics we often hear about but rarely explore in depth or fully comprehend.

    This is the first installment in the "Robotics 2.0" series, discussing the impact of robotics and AI on various industries and future work. We will explore how AI unlocks the potential of robotics, the challenges and opportunities of this new technology, and how it will affect productivity, employment, and daily life. Amidst the hype surrounding AI, we hope these articles encourage more constructive and comprehensive discussions.

    When it comes to robots, our imaginations run wild: from SoftBank's social robot Pepper and Boston Dynamics' backflipping Atlas to the humanoid assassins of the Terminator films and the lifelike android hosts of the TV series Westworld.

    We often hear polarized views: some overestimate robots' ability to mimic humans, believing machines will eventually replace us, while others are overly pessimistic about the potential of new research and technologies.

    Over the past year, many friends in startups, tech, and venture capital have asked me about the "real" progress in AI, particularly in deep reinforcement learning and robotics.

    The most pressing questions include:

    How are AI robots different from traditional ones? Do they truly have the potential to disrupt industries? What are their capabilities and limitations?

    Understanding current technological advancements and industry landscapes is surprisingly difficult, let alone predicting the future. Through this article, I aim to demystify AI applications in robotics and clarify this often-discussed yet poorly understood topic.

    The fundamental questions we must address first: What are AI-enabled robots, and what makes them unique?

    "Machine learning solves problems that were previously 'hard for computers but easy for humans,' or, more understandably, problems where 'it's hard for humans to explain to computers.'"

    —Benedict Evans, Andreessen Horowitz (a16z)

    The greatest achievement of AI in robotics is the shift from "automation" (where engineers program rules for robots to follow) to true "autonomous learning."

    If a robot only handles one task, the presence of AI may not be noticeable. But if it must manage diverse tasks or adapt to human and environmental changes, a degree of autonomy becomes essential.

    We can borrow the classification levels of autonomous vehicles to explain the evolution of robotics:

    Currently, most factory robots operate via open-loop or non-feedback control (Level 1), meaning their actions are independent of sensor feedback.
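The difference between open-loop (Level 1) and feedback-driven (Level 2) control can be illustrated with a toy positioning loop. This is a minimal sketch with invented numbers, not any real controller:

```python
def open_loop_move(target: float) -> float:
    """Level 1: execute a pre-programmed motion; any drift goes uncorrected."""
    drift = 0.3                      # unmodeled error the robot never senses
    return target + drift

def closed_loop_move(target: float, steps: int = 50) -> float:
    """Level 2: repeatedly measure the position error and correct toward it."""
    position = 0.0
    for _ in range(steps):
        error = target - position    # sensor feedback
        position += 0.2 * error      # simple proportional correction
    return position

print(abs(open_loop_move(1.0) - 1.0))    # ≈ 0.3 — the error persists
print(abs(closed_loop_move(1.0) - 1.0))  # shrinks toward zero
```

The open-loop robot repeats the same motion regardless of outcome; the closed-loop robot drives its error down on every iteration, which is why it can tolerate small disturbances.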

    A few factory robots adjust operations based on sensor feedback (Level 2). Collaborative robots (cobots) are simpler and safer, allowing them to work alongside humans, though they lag behind industrial robots in precision and speed.

    While cobots are easier to program, they lack autonomous learning. Any changes in tasks or environments require manual adjustments or reprogramming by humans, as the robots cannot generalize or adapt flexibly.
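The borrowed level scheme can be summarized as a simple enumeration. The wording of each level is my paraphrase of this article's descriptions, not an official standard:

```python
from enum import IntEnum

class RobotAutonomy(IntEnum):
    """Autonomy levels for robots, loosely borrowed from the levels
    used for self-driving cars (paraphrased, illustrative only)."""
    OPEN_LOOP = 1        # pre-programmed motions, no sensor feedback
    CLOSED_LOOP = 2      # adjusts actions based on sensor feedback
    HUMAN_ASSISTED = 3   # learns autonomously, asks a human about unfamiliar cases
    SELF_LEARNING = 4    # improves via trial and error with minimal intervention
    FULL_AUTONOMY = 5    # adapts to any task or environment change unaided

def needs_human(level: RobotAutonomy) -> bool:
    """Levels below 4 still depend on human programming or assistance."""
    return level < RobotAutonomy.SELF_LEARNING

print(needs_human(RobotAutonomy.HUMAN_ASSISTED))  # True
```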

    Deep Learning and Reinforcement Learning enable robots to autonomously handle various objects, minimizing human intervention.

    We are beginning to see pilot projects using AI robots (Levels 3/4). For example, "warehouse picking" is a prime case. In logistics warehouses, workers must pack millions of different products based on customer orders. Traditional computer vision cannot handle such a wide range of items, as each requires prior registration and programming for specific actions.

    Now, thanks to deep learning and reinforcement learning, robots can autonomously learn to handle diverse objects, reducing human involvement. During training, robots may encounter unfamiliar items and require human assistance or demonstration (Level 3). But as they gather more data and learn from trial and error (Level 4), algorithms improve, moving toward full autonomy.
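The trial-and-error learning described above can be sketched as an epsilon-greedy bandit: the robot tries grasp strategies, records which ones succeed, and gradually prefers the reliable ones. The grasp names and success rates are invented for illustration; no vendor's actual algorithm is implied:

```python
import random

def pick_and_learn(trials: int = 5000, epsilon: float = 0.1) -> str:
    """Epsilon-greedy trial and error over a few grasp strategies."""
    true_success = {"top_grasp": 0.4, "side_grasp": 0.7, "suction": 0.9}
    successes = {g: 0 for g in true_success}
    attempts = {g: 0 for g in true_success}

    for _ in range(trials):
        if random.random() < epsilon:        # explore: try an arbitrary grasp
            grasp = random.choice(list(true_success))
        else:                                # exploit: use the best grasp so far
            grasp = max(successes, key=lambda g: successes[g] / (attempts[g] or 1))
        attempts[grasp] += 1
        if random.random() < true_success[grasp]:  # simulated pick outcome
            successes[grasp] += 1

    return max(successes, key=lambda g: successes[g] / (attempts[g] or 1))

random.seed(0)
print(pick_and_learn())  # typically converges on the most reliable grasp
```

No grasp is registered in advance; the preference emerges purely from accumulated outcomes, which is the qualitative difference from the pre-programmed pipelines described earlier.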

    Like the autonomous vehicle industry, robotics startups adopt different strategies: some focus on human-robot collaboration (Level 3), while others believe in full autonomy, skipping Level 3 to target Levels 4 or 5.

    This makes it challenging to assess the industry's current autonomy levels.

    A startup might claim to develop Level 3/4 autonomous systems but rely heavily on remote human operators. Without insight into their software and AI progress, it's hard to distinguish between remote control and true autonomy. Conversely, startups aiming for Levels 4/5 may struggle to achieve quick results, potentially deterring early adopters and complicating data collection.

    In the latter part of this article, I will delve into the varied business strategies of these startups.

    Interestingly, AI's potential in robotics may exceed that in self-driving cars, as robots have diverse applications across industries, making Level 4 autonomy more attainable in some cases.

    The adoption of AI robotic arms in warehouses exemplifies this. Warehouses are "semi-controlled" environments with relatively low uncertainty, and picking tasks, while critical, can tolerate errors.

    Autonomous home or surgical robots, however, remain distant goals due to higher environmental variability and the irreversible, high-stakes nature of some tasks. Still, as technology improves in precision, accuracy, and reliability, we will see AI robots adopted in more industries.

    Many industries have yet to adopt robotic arms due to the limitations of traditional robotics and computer vision.

    There are only about 3 million robotic arms worldwide, mostly used for handling, welding, and assembly. Beyond automotive and electronics, industries like warehousing and agriculture have barely begun using them, primarily due to these limitations.

    In the coming decades, as deep learning (DL), reinforcement learning (RL), and cloud computing unlock robotics' potential, we will witness explosive growth and industry transformation. What opportunities does this present for AI robots? How are startups and incumbents adapting their strategies and business models?

    Next, I will highlight example companies across different market segments. This overview is by no means exhaustive—feel free to contribute additional examples to enrich the discussion.

    AI/Robotics Startup Market Overview (Author's Contribution)

    Examining the new generation of robotics startups reveals two distinct business models.

    The first type is vertical applications: most startups in Silicon Valley focus on developing solutions for specific vertical markets, such as e-commerce logistics, manufacturing, agriculture, and more.

    This approach of providing complete solutions is quite reasonable, given that the related technologies are still in their infancy. Companies do not rely on others to provide key modules or components but instead build end-to-end solutions. Such vertically integrated solutions can enter the market faster and ensure that companies have a more comprehensive grasp of end-user cases and performance.

    However, finding use cases as readily implementable as warehouse picking is not that simple. Picking is a relatively straightforward task with high customer willingness to invest and proven technical feasibility, and almost every warehouse has the same picking needs.

    But in other industries (e.g., manufacturing), assembly tasks may vary from factory to factory. Additionally, tasks performed in manufacturing require higher precision and speed, making them technically more challenging.

    Currently, robots that rely on machine learning still cannot match the precision of traditional closed-loop robots. Learning lets them improve over time, but only by accumulating trial-and-error experience and correcting mistakes, so high precision arrives gradually rather than being engineered in from day one.

    This explains why startups like Mujin and CapSen do not use deep reinforcement learning but instead rely on traditional computer vision.

    However, traditional computer vision requires each object to be pre-registered, ultimately lacking scalability and adaptability. Once deep reinforcement learning (DRL) reaches the performance threshold and gradually becomes mainstream, this traditional method will eventually become obsolete.
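The scalability gap can be seen in a sketch: a registration-based pipeline needs an explicit entry per object, while a learned policy maps raw observations to a grasp. Both "pipelines" here are placeholders with invented SKUs and values, not real APIs:

```python
# Registration-based vision: every object must be modeled in advance.
registered_objects = {
    "sku_001": {"grasp_point": (0.1, 0.2), "width_mm": 40},
    "sku_002": {"grasp_point": (0.0, 0.5), "width_mm": 85},
}

def pick_traditional(sku: str) -> tuple:
    if sku not in registered_objects:
        raise KeyError(f"{sku} not registered: an engineer must model it first")
    return registered_objects[sku]["grasp_point"]

def pick_learned(observation: list) -> tuple:
    """Stand-in for a learned policy: maps raw sensor points to a grasp
    with no per-object registration (here, simply the centroid)."""
    cx = sum(x for x, _ in observation) / len(observation)
    cy = sum(y for _, y in observation) / len(observation)
    return (cx, cy)

print(pick_traditional("sku_001"))     # (0.1, 0.2)
print(pick_learned([(0, 0), (2, 4)]))  # (1.0, 2.0) — works on unseen objects
```

The traditional path fails closed on any unregistered item; the learned path degrades more gracefully, which is exactly the adaptability argument made above.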

    Another issue with these startups is that they are often overvalued. It is common to see Silicon Valley startups raise tens of millions of dollars without being able to promise any concrete revenue stream.

    For entrepreneurs, it is all too easy to paint a bright future for deep reinforcement learning, but in reality it will take several more years to deliver on it. Although these companies are still far from profitability, Silicon Valley venture capitalists are willing to keep betting on these talented, technologically advanced teams.

    On the other hand, horizontal applications are a more practical but rarer model. We can simplify robotics into three parts: sensing (input), processing, and actuation (output), along with development tools.

    (The term "processing" here broadly covers controllers, machine learning, operating systems, and robotic modules—essentially anything that doesn't fall under sensing or actuation.)
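The three-part decomposition can be expressed as minimal interfaces wired into one control cycle. The class and method names are mine, chosen for illustration:

```python
from typing import Protocol

class Sensor(Protocol):          # input
    def read(self) -> dict: ...

class Processor(Protocol):       # "everything in between": control, ML, OS
    def decide(self, observation: dict) -> dict: ...

class Actuator(Protocol):        # output
    def execute(self, command: dict) -> None: ...

def control_cycle(sensor: Sensor, processor: Processor, actuator: Actuator) -> None:
    """One sense → process → act iteration; a real robot loops this continuously."""
    observation = sensor.read()
    command = processor.decide(observation)
    actuator.execute(command)

# A minimal wiring of the three parts (all values invented):
class RangeSensor:
    def read(self) -> dict:
        return {"distance": 2.0}

class SlowdownPolicy:
    def decide(self, observation: dict) -> dict:
        return {"speed": observation["distance"] * 0.5}

class MotorLog:
    def __init__(self) -> None:
        self.commands = []
    def execute(self, command: dict) -> None:
        self.commands.append(command)

motor = MotorLog()
control_cycle(RangeSensor(), SlowdownPolicy(), motor)
print(motor.commands)  # [{'speed': 1.0}]
```

The fragmentation complaint below is, in these terms, the absence of shared `Sensor`/`Processor`/`Actuator` contracts: each vendor ships its own incompatible equivalents, so integrators rewrite the glue for every robot.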

    I believe this area will have the most growth potential in the future. For robotics users, a fragmented market is a thorny issue because all robot manufacturers promote their own languages and interfaces, making it difficult for system integrators and end-users to integrate robots with related systems.

    As the industry matures, more robotic applications are being used outside of automotive and electronics factories, increasing the need for standard operating systems, communication protocols, and interfaces to improve efficiency and reduce time-to-market.

    For example, several startups in Boston are working on related modules. Veo Robotics is developing safety modules to enable industrial robots to work more safely alongside humans, while Realtime Robotics offers solutions to accelerate robotic arm path planning.
