Fu Sheng Unveils Orion-14B Large Model by OrionStar with 14 Billion Parameters

Posted in AI Insights by baoshi.rao
On January 21, OrionStar released its Orion-14B large language model at Fu Sheng's 2024 AI keynote and OrionStar model launch event. The pre-trained multilingual model has 14 billion parameters, supports common languages as well as professional terminology, and achieved best-in-class results on multiple third-party benchmark tests.

The Orion-14B model offers several key features: ultra-long context support (up to 320K tokens); inference speeds of 31 tokens/s on consumer-grade GPUs; strong multilingual capability, with standout performance in Japanese and Korean; and a 70% reduction in model size through quantization, with almost no loss in performance.
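As a back-of-envelope check on the size-reduction claim (a toy calculation, not OrionStar's actual quantization scheme), moving 14 billion weights from 16-bit to 4-bit precision cuts the weight footprint by 75%, which is in line with the roughly 70% figure once quantization overhead such as scaling factors is accounted for:

```python
# Back-of-envelope memory footprint for a 14-billion-parameter model
# at different weight precisions. Illustrative only; real quantized
# models carry extra overhead (scales, zero points, unquantized layers).

PARAMS = 14e9  # parameter count of Orion-14B

def footprint_gb(bits_per_weight):
    """Approximate weight memory in GB at the given precision."""
    return PARAMS * bits_per_weight / 8 / 1e9

fp16 = footprint_gb(16)  # 28 GB
int4 = footprint_gb(4)   # 7 GB
reduction = 1 - int4 / fp16
print(f"fp16: {fp16:.0f} GB, int4: {int4:.0f} GB, reduction: {reduction:.0%}")
# → fp16: 28 GB, int4: 7 GB, reduction: 75%
```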

To meet enterprise needs, OrionStar also introduced a fine-tuning suite, including models optimized for RAG (Retrieval-Augmented Generation) and Agent applications. The RAG toolkit enables rapid integration with corporate knowledge bases to build customized solutions, while the Agent toolkit dynamically selects the best tools for solving complex problems based on user queries. Beyond the base and fine-tuned models, OrionStar introduced applications such as the Juyan HR Assistant, Juyan Cloud Asset Assistant, and Juyan Creative Assistant to help enterprises improve operational efficiency and decision-making.
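The retrieval step that such a RAG toolkit automates can be sketched in miniature. This is a hypothetical illustration using naive word overlap as the relevance score, not OrionStar's actual API; production systems would use embedding similarity instead:

```python
# Toy RAG sketch: retrieve the knowledge-base entries most relevant
# to a query, then prepend them to the prompt sent to the model.

def retrieve(query, knowledge_base, top_k=2):
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, knowledge_base):
    """Assemble a context-augmented prompt for the language model."""
    context = "\n".join(retrieve(query, knowledge_base))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

kb = [
    "Annual leave accrues at 1.5 days per month of service.",
    "Expense reports must be filed within 30 days.",
    "The cafeteria is open from 8am to 6pm.",
]
print(build_prompt("How many days of annual leave do I accrue?", kb))
```

The prompt, not the model, carries the company-specific knowledge — which is why this pattern pairs well with a fixed pre-trained model like Orion-14B.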

At the launch event, Fu Sheng emphasized that enterprises need not just large models but also applications that address pain points by integrating with business processes. OrionStar provides a one-stop solution for AI large model consulting and services, assisting enterprises in achieving AI-assisted decision-making.

The release of Orion-14B is an outcome of OrionStar's years of tracking AI technology advancements and its substantial investment in R&D. The company has a team of top algorithm scientists, experience running global applications serving 2 billion users, and extensive user and token data, providing a solid foundation for model development and optimization. OrionStar is currently training a Mixture of Experts (MoE) model, with a 10-billion-parameter intelligent model as its next milestone.
