
If AI Can Diagnose Illnesses, Who Takes Responsibility for Errors?

    baoshi.rao wrote:

    Alex, a 4-year-old child in the U.S., fell ill. Over three years, he consulted 17 doctors across various specialties, including pediatrics, dentistry, and orthopedics, yet none accurately diagnosed his condition. It wasn't until 2023, when ChatGPT gained popularity, that his mother turned to the AI for help.

    "I meticulously entered every detail from Alex's MRI records into ChatGPT," his mother said. ChatGPT diagnosed him with "tethered cord syndrome." Armed with this diagnosis, they visited a new neurosurgeon, who, after a glance at the MRI, confirmed ChatGPT's conclusion and pinpointed the exact location of the tethering.

    Alex's case undoubtedly fuels the rush of companies into AI healthcare. On October 24, 2023, iFlytek launched its Spark Medical Large Model. According to incomplete statistics from Caijing∙Health, this marks the 32nd generative AI large model released in China's healthcare sector in 2023.

    Behind the flurry of product launches, companies are scrambling to find commercial pathways. The most pressing question: Who will buy these products?

    The industry sees public hospitals as the ideal buyers. "But the common challenge now is that public hospitals lack the motivation to purchase these systems," a representative of an AI healthcare large-model company told Caijing∙Health. "On one hand, these innovations aren't part of the hospitals' evaluation metrics, unlike other software that can earn them points. On the other, hospital data remains largely siloed, limiting the large models' full potential."

    Like an unsolvable curse, the longstanding hurdles in AI healthcare commercialization—data barriers and the absence of payers—remain unresolved.

    "Even though AI's penetration into healthcare is an undeniable trend with vast potential, China's AI healthcare sector is currently in a trough. Breaking through depends on who can master the data," said Yu Jianlin, Deputy General Manager of Gaotou Investment.

    With the rise of generative AI large models, we're inching closer to AI doctors.

    Over the past six months, medical AI large models have proliferated. On October 24, the app "iFlytek Xiaoyi" opened to the public, offering pre-consultation and health report analysis. As early as May, Chunyu Doctor integrated a large model into its online consultation product "Chunyu Huiwen," upgrading the traditional "human-to-human" model to "human-to-machine-to-human."

    AI large models are complex neural networks with vast numbers of parameters that give them depth and capability; trained on massive datasets, they can produce high-quality predictions.
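
    As a rough illustration of that idea (a hypothetical sketch, not anything from the article), the toy code below fits a handful of parameters to synthetic data by gradient descent. Real healthcare large models replace the linear map with deep neural networks, the 16 parameters with billions, and the synthetic rows with medical text and records, but the underlying principle of tuning parameters on data to improve predictions is the same; every number and name here is invented for illustration.

    ```python
    # Toy sketch (illustrative only): fit parameters to data by gradient descent.
    # A stand-in for "more parameters + more data -> better predictions";
    # NOT a medical model and NOT from the article.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "dataset": 1,000 examples with 16 input features each.
    X = rng.normal(size=(1000, 16))
    true_w = rng.normal(size=(16, 1))
    y = X @ true_w + 0.1 * rng.normal(size=(1000, 1))  # noisy targets

    # Model "parameters" (a real large model has billions, not 16).
    w = np.zeros((16, 1))
    lr = 0.01

    # Training loop: repeatedly nudge the parameters to reduce prediction error.
    for _ in range(500):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w -= lr * grad

    print("mean squared error after training:", float(np.mean((X @ w - y) ** 2)))
    ```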

    Public data shows that 20% of internet searches relate to healthcare. But a critical issue remains: If AI can't take responsibility for its answers, the business model won't work.

    Currently, only doctors are accountable for medical diagnoses. A former senior internet healthcare professional noted, "The crux of healthcare lies in responsibility and risk. Compliance is key: only those who can bear the responsibility can profit."

    While AI, like ChatGPT, can provide accurate diagnoses with sufficient symptom details, it doesn't take responsibility for its conclusions.

    This raises a question: If a doctor errs, it's understandable—humans make mistakes. But what if the machine errs? Neither law nor ethics has yet addressed these dilemmas.

    Thus, for AI large models to commercialize, they must first target hospital doctors, aiming to become their assistants.

    In May 2023, a chief physician at a tertiary hospital tested his "new assistant." During patient consultations, the assistant recorded sessions and drafted case notes.

    This consultation-recording robot, built on large language models such as ChatGPT, "has been deployed in over a dozen hospitals, saving doctors time," said Zhang Chao, CEO of Beijing Zuoyi Technology.

    The physician rated his assistant 90 out of 100. He docked 10 points for minor errors, such as the assistant recording details the patient had never been asked about (e.g., menstrual or marital history), which had to be deleted manually.

    Integrating into hospital workflows is a key goal. In September, Baidu launched its "Lingyi" large model to address the "queue for hours, consult for minutes" issue. "Lingyi" excels at triage, matching patients to doctors efficiently. Over 200 medical institutions, including 26 internet platforms and dozens of public hospitals, are trialing it.

    "Pre-consultation" also saves time. Through a collaboration with Shanghai Xinhua Hospital, SenseTime will introduce its "Dayi" AI model for pre-visit patient interactions.

    Developers are exhaustively targeting every hospital niche. But do hospitals need it?

    "Twenty years ago, search companies tried entering hospitals, but infrastructure and demand were lacking," the internet healthcare veteran told Caijing∙Health. Today's AI models face the same hurdles.

    Sales teams know hospitals are tough to crack. First, there is competition with incumbents: new entrants must outperform existing partners in service, brand, or connections. Second, hospitals may not see AI as essential.

    Many hospitals already have Clinical Decision Support Systems (CDSS), which suggest diagnoses and treatments based on symptoms.

    "If large models merely offer diagnostic clues, they're just upgraded search tools," the expert said. For routine cases, doctors need assistants that are reliable rather than overly smart, which "undercuts large models' value."

    Paying for advanced 'search' gives hospital administrators pause.

    ChatGPT's ad- and subscription-based model won't work in healthcare. China's public hospitals dominate the market but won't pay for ads or computing power.

    To win hospital budgets, AI must prove clinical value. Yu Jianlin stressed that in China's healthcare, data outweighs algorithms or computing power.

    Training a clinically viable AI model requires tens of thousands of cases, yet few Chinese AI firms have even a few thousand.


    Large models are trained on extensive datasets, and "textbook" training, which draws on massive volumes of medical textbooks and industry guidelines, forms the foundation of healthcare large models. A representative from iFlytek explained that deep collaborations with publishers such as People's Medical Publishing House, Chinese Medical Association Journals, and Scientific and Technical Documentation Press have given the company a wealth of authoritative medical resources, including books, clinical guidelines, literature, and case studies, significantly expanding the model's professional knowledge coverage and enhancing its understanding and consultation capabilities.

    JD.com's healthcare large model, "Jingyi Qianxun," was developed from over 100 million doctor-patient dialogue records collected on its online consultation platform, covering more than 140 departments and involving doctors, pharmacists, nutritionists, and psychologists.

    "There is still a noticeable gap in quality between online consultation data and offline medical data," Yu Jianlin analyzed. In the short term, over the next three to five years, companies with inherent advantages in the medical field, such as large medical equipment manufacturers, will have a clear edge in data acquisition due to their extensive hospital operations. Combining AI with hardware sales makes commercialization more feasible.

    Internationally, some mature AI-assisted diagnostic products are already being used in large-scale clinical applications. However, strong commercial incentives are the key driver for AI adoption. For example, an AI-powered early cancer screening product was developed with support from insurers who sought earlier diagnoses to reduce claim costs, motivating them to provide high-quality clinical data for its development.

    Insurance-funded models are a proven pathway internationally, as they help reduce costs, making insurers willing participants. However, China's commercial health insurance market is still underdeveloped and lacks the capacity to serve as a robust payer.

    Yu Jianlin told Caijing∙Health that AI in healthcare currently focuses on two main areas: patient-facing applications such as consultation and health management, and tools that assist doctors with AI-aided diagnostics, such as AI imaging.
