AI Startup Reka Launches Multimodal AI Assistant Yasa-1 to Compete with ChatGPT

    baoshi.rao wrote on last edited by
    #1

    Reka, an AI startup co-founded by researchers from DeepMind, Google, Baidu, and Meta, recently unveiled its latest product, the multimodal AI assistant Yasa-1. This assistant is designed to comprehend and interact with multiple media formats such as text, images, videos, and audio, positioning itself as a potential rival to OpenAI's ChatGPT.

    Yasa-1

    Yasa-1 is currently in private testing and is positioned against OpenAI's ChatGPT, which has already received multimodal upgrades with GPT-4V and DALL-E 3. The Reka team highlighted its experience building projects such as Google Bard, PaLM, and DeepMind's AlphaCode, which it believes gives Yasa-1 a competitive edge.

    Yasa-1 capabilities

    What sets Yasa-1 apart are its multimodal capabilities: it can combine text prompts with multimedia files to give more specific answers. For example, it can draft a social media post that uses a product image for promotion, or identify a specific sound and its source.

    Yasa-1 features

    Additionally, Yasa-1 can comprehend what is happening in videos, including the topics being discussed, and predict the next possible actions in the footage.


    Beyond its multimodal capabilities, Yasa-1 also supports programming tasks: it can execute code to perform arithmetic, analyze tables, or create visualizations for specific data points. However, like all large language models, Yasa-1 may occasionally generate nonsensical content, so it should not be relied on for critical advice.
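
    As a concrete illustration of the kind of code-execution task described above, the sketch below is a short, self-contained Python program that summarizes a small table and produces a chart. It is a hypothetical example only: the data, column names, and output file are invented for illustration and do not reflect Reka's API or any actual Yasa-1 output.

    ```python
    # Hypothetical example of the kind of task described in the article:
    # simple arithmetic over a table plus a basic visualization.
    # All data and names here are invented for illustration.
    import pandas as pd
    import matplotlib.pyplot as plt

    # A small table an assistant might be asked to analyze.
    sales = pd.DataFrame({
        "month": ["Jan", "Feb", "Mar", "Apr"],
        "revenue": [120, 150, 90, 180],
    })

    # Arithmetic over the table: total and month-over-month change.
    sales["change"] = sales["revenue"].diff()
    print("Total revenue:", sales["revenue"].sum())
    print(sales)

    # A basic visualization of the same data points.
    sales.plot(x="month", y="revenue", kind="bar", legend=False)
    plt.ylabel("revenue")
    plt.title("Monthly revenue")
    plt.savefig("revenue.png")
    ```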


    Reka plans to expand access to Yasa-1 in the coming weeks, enhancing its functionality and addressing some of its current limitations. The startup, which debuted in June 2023, has secured $58 million in funding and is focused on areas such as general intelligence, universal multimodal and multilingual agents, self-improving AI, and model efficiency.

    The launch of Yasa-1 signals intensifying competition among multimodal AI assistants, pointing toward richer interactions across media types and increasingly practical features for users.
