Qwen-72B Model Tops Hugging Face Open-Source Large Model Pre-training Leaderboard
Hugging Face, the open-source AI community, has released the latest edition of its open large language model leaderboard, where Qwen (Tongyi Qianwen) stood out in the pre-trained model category and took the top spot.
The Hugging Face Open LLM Leaderboard covers hundreds of leading open-source models worldwide, evaluating each across six benchmarks (ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, and GSM8K) that test capabilities such as reading comprehension, logical reasoning, mathematical computation, and factual question answering.
Among these models, the 72-billion-parameter Qwen-72B performed exceptionally well, achieving an average score of 73.6, the highest of all pre-trained models on the leaderboard.
Tongyi Qianwen (Qwen) is a large language model developed by Alibaba Cloud, with capabilities including multi-turn dialogue, copywriting, logical reasoning, multimodal understanding, and multilingual support. On December 1, Alibaba Cloud announced the open-sourcing of the 72-billion-parameter Qwen-72B model.
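Since the weights are published on Hugging Face, the model can be loaded with the standard transformers workflow. Below is a minimal sketch assuming the "Qwen/Qwen-72B" repository id and sufficient GPU memory (roughly 144 GB in bf16); the prompt text and generation settings are illustrative only.

```python
# Minimal sketch: loading the open-sourced Qwen-72B base model with
# Hugging Face transformers. Assumes the "Qwen/Qwen-72B" repo id and
# a multi-GPU machine; adjust device_map and dtype for your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-72B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-72B",
    device_map="auto",        # shard the weights across available GPUs
    torch_dtype="auto",       # use the checkpoint's native precision
    trust_remote_code=True,   # Qwen ships custom modeling code with the repo
)

# Qwen-72B is a pre-trained base model, so plain text completion
# (not chat formatting) is the appropriate way to prompt it.
inputs = tokenizer("Large language models are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that `device_map="auto"` requires the accelerate package and will automatically split the 72B parameters across whatever GPUs (and, if needed, CPU memory) are available.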
Try it online: https://qianwen.aliyun.com/