AI Large Models Show Broad Application Prospects in Autonomous Driving

Posted by baoshi.rao in AI Insights

Vehicle intelligence is primarily manifested in autonomous driving and smart cockpits. Advances in AI technology continue to raise the level of vehicle intelligence, with large AI models such as BEV (bird's-eye view) perception models, cognitive models, and NLP models poised to drive automotive intelligence to new levels. In the first half of 2023, several manufacturers mass-produced NOA (Navigation on Autopilot) systems based on BEV large models. The new Mocha DHT-PHEV, released in June 2023, features a cognitive large model, and two further models equipped with NLP language models are slated to launch within the year.

With the continuous progress and falling cost of AI technology, autonomous driving is expected to mature and become more widespread. The national standard "Classification of Driving Automation for Vehicles" (GB/T 40429-2021) took effect in March 2022. The standard references the SAE classification, aligning its levels 0-5 with SAE's L0-L5. Notably, the national standard stipulates that at levels 0-2 the driver and the system share responsibility for responding to incidents, whereas the SAE standard places full responsibility on the driver.

According to data from the Ministry of Industry and Information Technology, the China Federation of Industrial Economics, and the China Society of Automotive Engineers, the penetration rate of Level 2 (L2) assisted driving reached 34% in 2022: 32% for fuel vehicles and 46% for new energy vehicles. However, the penetration of Level 3+ (L3+) high-level autonomous driving remains extremely low both globally and in China. The "Technology Roadmap for Energy-Saving and New Energy Vehicles 2.0," compiled under the leadership of the China Society of Automotive Engineers, sets targets for 2025: over 50% market share for PA (Partial Automation) and CA (Conditional Automation) smart connected vehicles, and commercial application of HA (High Automation) in limited areas and specific scenarios. By 2030, PA and CA vehicles are expected to exceed 70% market share, with HA vehicles reaching 20% and being widely applied on highways and some urban roads. Similarly, Li Yizhong, Chairman of the China Federation of Industrial Economics, predicts that L3 autonomous driving will reach a 70% penetration rate by 2030.

Vehicle intelligence also extends to cockpit upgrades, evolving from traditional mechanical dashboards and radios to smart assistant cockpits with biometric recognition and driver health monitoring, ultimately aiming to create a multifunctional third living space integrating information and entertainment.

AI is key to autonomous driving systems. Mainstream systems are modular, divided into perception, decision-making, and execution layers, with AI at the core of the perception and decision-making modules.
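
To make the layered design concrete, here is a minimal sketch of how the three layers might hand data to one another. The class and field names (PerceptionLayer, WorldModel, Command, and so on) are illustrative assumptions, not any production stack's API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Obstacle:
    x: float          # longitudinal position, meters (ego frame)
    y: float          # lateral position, meters
    speed: float      # m/s

@dataclass
class WorldModel:
    ego_speed: float          # m/s
    obstacles: List[Obstacle]

@dataclass
class Command:
    steer: float      # radians
    throttle: float   # 0..1
    brake: float      # 0..1

class PerceptionLayer:
    def perceive(self, raw_frames: dict) -> WorldModel:
        """Fuse raw camera/lidar/radar frames into a world model."""
        ...

class DecisionLayer:
    def decide(self, world: WorldModel) -> Command:
        """Plan a maneuver and reduce it to a low-level command."""
        ...

class ExecutionLayer:
    def execute(self, cmd: Command) -> None:
        """Send the command to the drive-by-wire actuators."""
        ...
```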

In the perception layer, sensor-processing algorithms are critical. Multi-sensor fusion can operate at the data level, the feature level, or the decision level, with decision-level fusion the most widely used. Tesla's vector map modeling and automatic lane annotation algorithms are vital components of its autonomous driving system, supporting lane trajectory planning.
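
As an illustration of decision-level (late) fusion, the sketch below merges independent detection lists from a camera and a radar by greedy nearest-neighbor matching and confidence-weighted averaging. The field names and the 2 m match radius are assumptions chosen for the example, not a production algorithm.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Detection:
    x: float           # position, meters (ego frame)
    y: float
    confidence: float  # 0..1

def fuse_decision_level(camera: List[Detection],
                        radar: List[Detection],
                        match_radius: float = 2.0) -> List[Detection]:
    """Late fusion: each sensor runs its own detector, and only the
    final detections are merged. Matched pairs are averaged with
    confidence weights; unmatched detections are kept as-is."""
    fused: List[Detection] = []
    unmatched_radar = list(radar)
    for cam in camera:
        best: Optional[Detection] = None
        best_d2 = match_radius ** 2
        for r in unmatched_radar:          # closest radar hit in range
            d2 = (cam.x - r.x) ** 2 + (cam.y - r.y) ** 2
            if d2 < best_d2:
                best, best_d2 = r, d2
        if best is not None:
            unmatched_radar.remove(best)
            w = cam.confidence + best.confidence
            fused.append(Detection(
                x=(cam.x * cam.confidence + best.x * best.confidence) / w,
                y=(cam.y * cam.confidence + best.y * best.confidence) / w,
                confidence=min(1.0, w),
            ))
        else:
            fused.append(cam)              # camera-only detection
    fused.extend(unmatched_radar)          # radar-only detections
    return fused
```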

Decision-making and planning algorithms can be further divided into global path planning, behavioral decision-making, and motion planning. Global planning includes physics-based, intent classification-based, and deep learning-based methods. Behavioral decision-making includes rule-based, learning-based, and hybrid approaches. Motion planning includes policy rule-based, optimal control-based, and machine learning-based methods. Interactive motion planning enhances safety by incorporating human-machine co-driving, vehicle-road coordination, and risk assessment of dynamic environments.
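
To ground the rule-based end of this taxonomy, here is a deliberately tiny behavioral decision rule for a single-lane car-following scenario. The time-headway thresholds (1.0 s and 2.5 s) are assumptions for illustration; real behavioral planners weigh far more context.

```python
def behavior_decision(ego_speed: float, gap_to_lead: float,
                      lead_speed: float) -> str:
    """Toy rule-based behavioral decision for car following.

    Returns 'EMERGENCY_BRAKE', 'FOLLOW', or 'CRUISE' based on the
    time headway to the lead vehicle.
    """
    time_headway = gap_to_lead / max(ego_speed, 0.1)   # seconds
    if time_headway < 1.0:
        return "EMERGENCY_BRAKE"   # dangerously close: brake hard
    if time_headway < 2.5 and lead_speed <= ego_speed:
        return "FOLLOW"            # adapt to the slower lead vehicle
    return "CRUISE"                # free road: hold target speed

# Example: 15 m/s ego, 20 m gap, lead doing 12 m/s -> 'FOLLOW'
print(behavior_decision(15.0, 20.0, 12.0))
```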

End-to-end autonomous driving systems are gaining research attention, demanding even more from AI large models. These systems integrate perception and decision-making, potentially outperforming traditional modular designs and better handling complex road conditions and multi-traffic interactions. In May 2023, Tesla CEO Elon Musk announced plans to use a new end-to-end AI in FSD Beta v12, employing a single neural network to process camera inputs and output driving behaviors like steering and acceleration, continuously improving through human driving data.
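
The sketch below shows the overall shape of such a single-network driver in PyTorch, trained imitation-style against logged human controls. It is a toy under assumed layer sizes, not Tesla's FSD architecture.

```python
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    """Toy end-to-end model: one RGB camera frame in,
    (steering, acceleration) out."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(            # image encoder
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(                # control decoder
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 2),                     # [steer, accel]
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(frame))

# One imitation-learning training step on (placeholder) human data.
model = EndToEndDriver()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.randn(8, 3, 224, 224)      # stand-in camera batch
human_controls = torch.randn(8, 2)        # stand-in [steer, accel] labels
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(frames), human_controls)
loss.backward()
optimizer.step()
```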

However, end-to-end systems face challenges in complexity and safety. These systems rely on a single AI model for input-to-output control, requiring high computational power and cloud coordination. The "black box" nature of these models makes systematic analysis difficult, relying on trial and error for updates, which can lead to regressions. Thus, end-to-end systems are not yet mainstream.

AI large models in NLP and CV are advancing rapidly, with broad applications in autonomous driving perception and decision-making. CV large models support perception tasks such as automated data annotation and sensor fusion. NLP models enhance human-vehicle interaction in smart cockpits. Multimodal models improve perception accuracy, and data-driven multimodal AI models will be crucial for achieving integrated autonomous driving designs.
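
One common pattern for such data-driven multimodal fusion is to project each modality's features into a shared token space and let a Transformer encoder attend across them. A minimal sketch, with all dimensions assumed:

```python
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    """Toy token-level fusion of camera and lidar features
    through a shared Transformer encoder."""
    def __init__(self, cam_dim=256, lidar_dim=64, d_model=128):
        super().__init__()
        self.cam_proj = nn.Linear(cam_dim, d_model)      # to shared space
        self.lidar_proj = nn.Linear(lidar_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, cam_tokens, lidar_tokens):
        # cam_tokens: (B, Nc, cam_dim); lidar_tokens: (B, Nl, lidar_dim)
        tokens = torch.cat(
            [self.cam_proj(cam_tokens), self.lidar_proj(lidar_tokens)],
            dim=1)                        # (B, Nc + Nl, d_model)
        return self.encoder(tokens)       # fused scene tokens

fusion = MultimodalFusion()
fused = fusion(torch.randn(2, 100, 256), torch.randn(2, 50, 64))
```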

AI large models are expected to accelerate the transition from L0-L2 assisted driving to L3+ high-level autonomy. High-level autonomy demands greater precision and complexity, requiring advanced environmental perception and scenario reconstruction. Addressing edge cases and extreme conditions necessitates extensive real-world and test data. Improved data annotation and simulation methods are needed, and data-driven multimodal sensor fusion models will be increasingly important to minimize information loss and enhance perception accuracy.

High-level autonomous driving requires more intelligent and human-like cognitive decision-making. Traditional decision-making and planning methods are transitioning from rule-based approaches to data-driven, learning-based intelligent decision-making. The Transformer+RL architecture has already demonstrated advantages in handling larger datasets and more complex, large-scale environments. In the long run, end-to-end autonomous driving systems are a promising path: they require a single model to handle the entire process from input data to decision and control, and Transformer-based AI large models can meet these requirements.
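
To illustrate the Transformer+RL pattern: a Transformer encoder summarizes the scene tokens, and small actor and critic heads output what an actor-critic RL algorithm (for example PPO) needs to optimize a driving policy. A minimal sketch with assumed sizes:

```python
import torch
import torch.nn as nn

class TransformerDrivingPolicy(nn.Module):
    """Toy Transformer+RL policy: scene tokens in, an action
    distribution and a value estimate out (the pieces an
    actor-critic method needs)."""
    def __init__(self, d_model=128, n_actions=5):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.actor = nn.Linear(d_model, n_actions)  # e.g. discrete maneuvers
        self.critic = nn.Linear(d_model, 1)         # state value for PPO

    def forward(self, scene_tokens):
        h = self.encoder(scene_tokens).mean(dim=1)  # pool over tokens
        dist = torch.distributions.Categorical(logits=self.actor(h))
        return dist, self.critic(h)

policy = TransformerDrivingPolicy()
dist, value = policy(torch.randn(4, 32, 128))  # batch of 4 scenes
action = dist.sample()                         # one maneuver per scene
```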
