NVIDIA RTX Graphics Cards Get 5x Faster AI Inference!

Posted in AI Insights by baoshi.rao

    At the ongoing Microsoft Ignite global technology conference, Microsoft unveiled a series of new AI-optimized models and development tools to help developers better harness hardware performance and expand AI applications.

    For NVIDIA, which currently dominates the AI field, Microsoft has delivered a significant gift. Whether through the TensorRT-LLM wrapper interface for the OpenAI Chat API, or through DirectML performance improvements in RTX drivers for Llama 2 and other popular large language models (LLMs), models can now achieve better acceleration on NVIDIA hardware.

    Among these, TensorRT-LLM is a library designed to accelerate LLM inference, significantly boosting AI inference performance. It is continuously updated to support more language models and is open-source.

    In October, NVIDIA also released TensorRT-LLM for Windows. On desktops and laptops equipped with RTX 30/40 series GPUs, as long as the VRAM is no less than 8GB, users can more easily handle demanding AI workloads.

    Now, TensorRT-LLM for Windows is compatible with OpenAI's widely popular chat API through a new wrapper interface, allowing related applications to run locally without connecting to the cloud. This keeps private and proprietary data on the PC and prevents privacy leaks.

    Any large language model optimized for TensorRT-LLM can work with this wrapper interface, including Llama 2, Mistral, NV LLM, and more.

    For developers, there is no need for cumbersome code rewriting or porting: modifying just one or two lines of code lets an AI application execute quickly on local hardware.
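The "one or two lines" idea can be sketched as follows. An application that already talks to the OpenAI Chat API only needs its base URL switched from the cloud endpoint to the local wrapper; the request body stays the same. The localhost URL and model name below are illustrative assumptions, not documented values.

```python
import json

# Cloud endpoint an existing app would use, and the assumed local
# TensorRT-LLM wrapper endpoint it can be redirected to instead.
CLOUD_BASE_URL = "https://api.openai.com/v1"
LOCAL_BASE_URL = "http://localhost:8000/v1"  # assumed local wrapper address

def chat_request_url(base_url: str) -> str:
    """Build the chat-completions URL for a given backend."""
    return f"{base_url}/chat/completions"

# The payload is unchanged between cloud and local backends.
payload = {
    "model": "llama-2-7b-chat",  # any TensorRT-LLM-optimized model (assumed name)
    "messages": [{"role": "user", "content": "Hello!"}],
}

print(chat_request_url(LOCAL_BASE_URL))
print(json.dumps(payload))
```

Because the wrapper mimics the OpenAI API shape, the only edit in a real application is the base URL (and possibly the model name); everything else, including the message format, is untouched.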

    NVIDIA RTX GPUs Achieve 5x AI Inference Speed Boost! RTX PCs Easily Handle Large Models Locally

    The TensorRT-LLM v0.6.0 update coming at the end of this month will bring up to 5x inference performance improvement on RTX GPUs, supporting more popular LLMs including the new 7B-parameter Mistral and 8B-parameter Nemotron-3, enabling desktops and laptops to run LLMs locally anytime with speed and accuracy.

    According to actual test data, an RTX 4060 GPU with TensorRT-LLM can achieve 319 tokens per second in inference performance, 4.2x faster than (that is, about 5.2x the throughput of) the 61 tokens per second of other backends.

    The RTX 4090 can reach 829 tokens per second, a 2.8x performance improvement.
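The quoted figures are easy to sanity-check: dividing the RTX 4060's 319 tokens per second by the 61 tokens per second baseline gives a ratio of about 5.2x, which corresponds to being 4.2x "faster than" the baseline.

```python
# Sanity-check the throughput figures quoted above (tokens per second).
baseline_tps = 61    # "other backends"
rtx4060_tps = 319    # RTX 4060 with TensorRT-LLM

ratio = rtx4060_tps / baseline_tps
print(round(ratio, 1))      # ratio of throughputs, ~5.2
print(round(ratio - 1, 1))  # speed increase over baseline, ~4.2
```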


    With powerful hardware performance, a rich development ecosystem, and broad application scenarios, NVIDIA RTX is becoming an indispensable assistant for local AI processing. The increasing optimizations, models, and resources are accelerating the adoption of AI features and applications across hundreds of millions of RTX PCs.

    Currently, over 400 partners have released AI applications and games that support RTX GPU acceleration. As model usability continues to improve, we can expect to see more AIGC features emerging on Windows PC platforms.
