Musk's Grok Embarrassed by Hallucinations Using ChatGPT Data

AI Insights
baoshi.rao wrote:
Recently, Elon Musk introduced Grok, a new AI chatbot developed by xAI. It soon emerged, however, that Grok had picked up data from OpenAI's ChatGPT, a discovery that caught its own developers off guard.

    Grok is targeted at Premium+ X subscribers in the U.S. According to official statements, this new chatbot is powered by a generative model called Grok-1. Unlike its competitors, Grok integrates real-time data from the X platform, enabling it to respond instantly to posts on X.


Although Grok is built on a different underlying model (Grok-1) than OpenAI's ChatGPT (GPT-4), Igor Babuschkin, an engineer at xAI, explained that Grok inadvertently incorporated outputs from ChatGPT during training because of the vast amount of web data it was trained on.

    Babuschkin elaborated, "The issue is that the internet is flooded with ChatGPT outputs, so when we trained Grok, we accidentally captured some of them. When we first noticed this problem, it really took us by surprise."

Interestingly, a Grok response shared by a user circulated on social media: "I'm sorry, I can't comply with this request as it violates OpenAI's use case policy." It drew widespread attention because Grok is not bound by OpenAI's policies, so the reply suggested Grok had ingested content generated by OpenAI.
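A common mitigation for this kind of contamination is to screen scraped web text for telltale assistant boilerplate (refusal phrases, "as an AI language model", and so on) before it enters the training set. The sketch below is illustrative only; the phrase list and function names are assumptions, not a description of xAI's actual pipeline.

```python
import re

# Hypothetical markers that suggest a scraped document contains AI-assistant output.
# The exact heuristics xAI uses (if any) have not been disclosed.
REFUSAL_PATTERNS = [
    r"as an ai language model",
    r"i(?:'m| am) sorry, (?:but )?i can(?:'t|not) comply",
    r"violates openai'?s use case policy",
]
_refusal_re = re.compile("|".join(REFUSAL_PATTERNS), re.IGNORECASE)


def looks_like_chatgpt_output(text: str) -> bool:
    """Return True if the text contains a telltale assistant refusal phrase."""
    return bool(_refusal_re.search(text))


def filter_corpus(docs: list[str]) -> list[str]:
    """Drop scraped documents that appear to contain model-generated boilerplate."""
    return [d for d in docs if not looks_like_chatgpt_output(d)]


# Example: the second document would be excluded from the training corpus.
corpus = [
    "Grok integrates real-time data from the X platform.",
    "I'm sorry, I can't comply with this request as it violates OpenAI's use case policy.",
]
print(filter_corpus(corpus))
```

Simple pattern matching like this only catches verbatim boilerplate; it would not detect ChatGPT-generated text that contains no such markers, which is one reason contamination of web-scraped corpora is hard to eliminate entirely.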

Babuschkin reassured users that such occurrences are extremely rare: "It's worth mentioning that we're aware of this issue and are making sure future versions of Grok won't exhibit this problem. Rest assured, Grok wasn't developed using OpenAI's code."

The incident highlights the persistent problem of hallucination in AI, in which chatbots produce responses containing false or misleading information. The developers stressed, however, that they are working to resolve it so that future versions of Grok avoid such unintended influences.
