NVIDIA Dynamo Open-Source Library Accelerates and Scales AI Reasoning Models

NVIDIA Dynamo Increases Inference Performance While Lowering Costs for Scaling Test-Time Compute; Inference Optimizations on NVIDIA Blackwell Boost Throughput by 30x on DeepSeek-R1


NVIDIA Dynamo is new, fully open-source AI inference-serving software designed to maximize token revenue generation for AI factories deploying reasoning AI models.

SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) -- GTC -- NVIDIA today unveiled NVIDIA Dynamo, open-source inference software for accelerating and scaling AI reasoning models in AI factories at the lowest cost and with the highest efficiency.

Efficiently orchestrating and coordinating AI inference requests across a large fleet of GPUs is crucial to ensuring that AI factories run at the lowest possible cost to maximize token revenue generation.

As AI reasoning goes mainstream, every AI model will generate tens of thousands of tokens used to “think” with every prompt. Increasing inference performance while continually lowering the cost of inference accelerates growth and boosts revenue opportunities for service providers.

NVIDIA Dynamo, the successor to NVIDIA Triton Inference Server™, is new AI inference-serving software designed to maximize token revenue generation for AI factories deploying reasoning AI models. It orchestrates and accelerates inference communication across thousands of GPUs, and uses disaggregated serving to separate the processing and generation phases of large language models (LLMs) on different GPUs. This allows each phase to be optimized independently for its specific needs and ensures maximum GPU resource utilization.
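For a concrete picture of what disaggregated serving means, the minimal Python sketch below separates prompt processing (often called prefill) from token generation (decode) into two worker classes that could run on different GPU pools. All names here (KVCache, PrefillWorker, DecodeWorker, serve_request) are hypothetical illustrations of the concept, not NVIDIA Dynamo's actual API.

```python
from dataclasses import dataclass

@dataclass
class KVCache:
    """Attention key/value state produced by prefill, consumed by decode."""
    prompt: str
    tokens: list[str]

class PrefillWorker:
    """Hypothetical worker on a GPU pool tuned for compute-bound prompt processing."""
    def prefill(self, prompt: str) -> KVCache:
        tokens = prompt.split()  # stand-in for real tokenization and a forward pass
        return KVCache(prompt=prompt, tokens=tokens)

class DecodeWorker:
    """Hypothetical worker on a GPU pool tuned for memory-bandwidth-bound generation."""
    def decode(self, cache: KVCache, max_new_tokens: int) -> str:
        generated = []
        for i in range(max_new_tokens):
            generated.append(f"<tok{i}>")  # stand-in for one autoregressive decode step
        return " ".join(generated)

def serve_request(prompt: str) -> str:
    # Phase 1: prefill on its own pool. Phase 2: hand the KV cache to a
    # decode pool. Each pool can be scaled and optimized independently.
    cache = PrefillWorker().prefill(prompt)
    return DecodeWorker().decode(cache, max_new_tokens=4)

if __name__ == "__main__":
    print(serve_request("Why is the sky blue?"))
```

Because the two phases have different bottlenecks (prefill tends to be compute-bound, decode memory-bandwidth-bound), splitting them onto separate GPUs lets each pool be sized and tuned for its own workload rather than compromising on a single configuration.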

“Industries around the world are training AI models to think and learn in different ways, making them more sophisticated over time,” said Jensen Huang, founder and CEO of NVIDIA. “To enable a future of custom reasoning AI, NVIDIA Dynamo helps serve these models at scale, driving cost savings and efficiencies across AI factories.”

Using the same number of GPUs, Dynamo doubles the performance and revenue of AI factories serving Llama models on today’s NVIDIA Hopper™ platform. When running the DeepSeek-R1 model on a large cluster of GB200 NVL72 racks, NVIDIA Dynamo’s intelligent inference optimizations also boost the number of tokens generated by over 30x per GPU.

To achieve these inference performance improvements, NVIDIA Dynamo incorporates features that increase throughput and reduce costs. It can dynamically add, remove and reallocate GPUs in response to fluctuating request volumes and types, and it can pinpoint specific GPUs in large clusters that minimize response computations and route queries to them. It can also offload inference data to more affordable memory and storage devices and quickly retrieve it when needed, minimizing inference costs.
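As a rough illustration of the routing and offloading ideas described above, the sketch below sends each query to the GPU worker whose cached prompt prefix overlaps it most (so prior computation can be reused) and evicts cold cache entries to a cheaper host-memory tier. The class names (Router, GpuWorker, HostStore), the capacity limit and the eviction policy are all assumptions made for illustration, not Dynamo's implementation.

```python
class GpuWorker:
    """Hypothetical GPU worker tracking which prompt prefixes it has KV caches for."""
    def __init__(self, name: str):
        self.name = name
        self.cached_prefixes: set[str] = set()

    def overlap(self, prompt: str) -> int:
        """Length of the longest cached prefix matching this prompt."""
        return max((len(p) for p in self.cached_prefixes
                    if prompt.startswith(p)), default=0)

class HostStore:
    """Cheaper, slower tier (e.g. CPU memory or storage) for cold KV caches."""
    def __init__(self):
        self.entries: dict[str, object] = {}

class Router:
    def __init__(self, workers: list[GpuWorker], host: HostStore, gpu_capacity: int = 2):
        self.workers = workers
        self.host = host
        self.capacity = gpu_capacity

    def route(self, prompt: str) -> GpuWorker:
        # Pick the worker that can reuse the most cached computation,
        # minimizing how much of the prompt must be reprocessed.
        best = max(self.workers, key=lambda w: w.overlap(prompt))
        self._admit(best, prompt)
        return best

    def _admit(self, worker: GpuWorker, prompt: str) -> None:
        if len(worker.cached_prefixes) >= self.capacity:
            # Offload an arbitrary cold entry to the cheaper tier; a real
            # system would use recency or cost-based eviction.
            evicted = worker.cached_prefixes.pop()
            self.host.entries[evicted] = "kv-blob"  # stand-in for real KV data
        worker.cached_prefixes.add(prompt)

if __name__ == "__main__":
    workers = [GpuWorker("gpu0"), GpuWorker("gpu1")]
    workers[0].cached_prefixes.add("Explain the")
    router = Router(workers, HostStore())
    print(router.route("Explain the theory of relativity").name)  # -> gpu0
```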