| Roadmap | Support Matrix | Docs | Recipes | Examples | Prebuilt Containers | Design Proposals | Blogs
NVIDIA Dynamo
High-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments.
Why Dynamo
Large language models exceed single-GPU capacity. Splitting a model's weights across GPUs with tensor parallelism solves the capacity problem but creates coordination challenges. Dynamo closes this orchestration gap.
Dynamo is inference engine agnostic (supports TRT-LLM, vLLM, SGLang) and provides:
- Disaggregated Prefill & Decode – Maximizes GPU throughput with latency/throughput trade-offs
- Dynamic GPU Scheduling – Optimizes performance based on fluctuating demand
- LLM-Aware Request Routing – Eliminates unnecessary KV cache re-computation
- Accelerated Data Transfer – Reduces inference response time using NIXL
- KV Cache Offloading – Leverages multiple memory hierarchies for higher throughput
Built in Rust for performance and Python for extensibility, Dynamo is fully open-source with an OSS-first development approach.
Backend Feature Support
| | SGLang | TensorRT-LLM | vLLM |
|---|---|---|---|
| Best For | High-throughput serving | Maximum performance | Broadest feature coverage |
| Disaggregated Serving | ✅ | ✅ | ✅ |
| KV-Aware Routing | ✅ | ✅ | ✅ |
| SLA-Based Planner | ✅ | ✅ | ✅ |
| KVBM | 🚧 | ✅ | ✅ |
| Multimodal | ✅ | ✅ | ✅ |
| Tool Calling | ✅ | ✅ | ✅ |
Full Feature Matrix → detailed compatibility, including LoRA, Request Migration, Speculative Decoding, and feature interactions.
Dynamo Architecture
Latest News
- [12/05] Moonshot AI's Kimi K2 achieves 10x inference speedup with Dynamo on GB200
- [12/02] Mistral AI runs Mistral Large 3 with 10x faster inference using Dynamo
- [12/01] InfoQ: NVIDIA Dynamo simplifies Kubernetes deployment for LLM inference
Get Started
| Path | Use Case | Time | Requirements |
|---|---|---|---|
| Local Quick Start | Test on a single machine | ~5 min | 1 GPU, Ubuntu 24.04 |
| Kubernetes Deployment | Production multi-node clusters | ~30 min | K8s cluster with GPUs |
| Building from Source | Contributors and development | ~15 min | Ubuntu, Rust, Python |
Want to help shape the future of distributed LLM inference? See the Contributing Guide.
Local Quick Start
The following examples require a few system-level packages. We recommend Ubuntu 24.04 with an x86_64 CPU. See docs/reference/support-matrix.md for details.
Install Dynamo
Option A: Containers (Recommended)
Containers have all dependencies pre-installed. No setup required.
Pull and run the runtime image for your chosen backend (SGLang, TensorRT-LLM, or vLLM); a hedged sketch follows.
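The image names and tags below are illustrative assumptions only; check Release Artifacts for the actual images published for your version:
```bash
# SGLang (illustrative image name; replace <version> with a released tag)
docker run --gpus all --rm -it nvcr.io/nvidia/ai-dynamo/sglang-runtime:<version> bash

# TensorRT-LLM (illustrative image name)
docker run --gpus all --rm -it nvcr.io/nvidia/ai-dynamo/tensorrt-llm-runtime:<version> bash

# vLLM (illustrative image name)
docker run --gpus all --rm -it nvcr.io/nvidia/ai-dynamo/vllm-runtime:<version> bash
```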
Tip: To run the frontend and a worker in the same container, either run processes in the background with `&` (see below), or open a second terminal and use `docker exec -it <container_id> bash`.
See Release Artifacts for available versions.
Option B: Install from PyPI
The Dynamo team recommends the uv Python package manager, although any Python package manager works. Install uv and create a virtual environment first; a sketch follows.
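A minimal sketch using the official uv installer:
```bash
# Install uv (recommended Python package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Create and activate a virtual environment
uv venv
source .venv/bin/activate
```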
Install system dependencies and the Dynamo wheel for your chosen backend:
SGLang
Note: For CUDA 13 (B300/GB300), the container is recommended. See SGLang install docs for details.
TensorRT-LLM
Note: TensorRT-LLM requires `pip` due to a transitive Git URL dependency that `uv` doesn't resolve. We recommend using the TensorRT-LLM container for broader compatibility.
vLLM
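The package name and extras below are assumptions based on the ai-dynamo wheel naming; confirm the exact extras for your release. One line per backend, with pip used for TensorRT-LLM per the note above:
```bash
# SGLang (extra name assumed)
uv pip install "ai-dynamo[sglang]"

# TensorRT-LLM: use pip because of the transitive Git URL dependency
pip install "ai-dynamo[trtllm]"

# vLLM (extra name assumed)
uv pip install "ai-dynamo[vllm]"
```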
Run Dynamo
Tip (Optional): Before running Dynamo, verify your system configuration with `python3 deploy/sanity_check.py`.
Dynamo provides a simple way to spin up a local set of inference components, including:
- OpenAI-Compatible Frontend – A high-performance, OpenAI-compatible HTTP API server written in Rust.
- Basic and KV-Aware Router – Routes and load-balances traffic across a set of workers.
- Workers – A set of pre-configured LLM serving engines.
Start the frontend:
Tip: To run in a single terminal (useful in containers), append `> logfile.log 2>&1 &` to run processes in the background. Example: `python3 -m dynamo.frontend --store-kv file > dynamo.frontend.log 2>&1 &`
The frontend is an OpenAI-compatible HTTP server that handles prompt templating, tokenization, and routing. For local development, `--store-kv file` avoids etcd (workers and the frontend must share a disk).
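Putting the tip and the flags above together, a minimal foreground invocation:
```bash
python3 -m dynamo.frontend --store-kv file
```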
In another terminal (or the same terminal if using background mode), start a worker for your chosen backend. Note that vLLM takes `--model`, while SGLang and TensorRT-LLM take `--model-path`; a hedged sketch of all three follows.
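A minimal sketch of the worker commands; the `dynamo.sglang`, `dynamo.trtllm`, and `dynamo.vllm` module names and the Qwen/Qwen3-0.6B model are assumptions used for illustration, so substitute the entry points and model from your install:
```bash
# SGLang worker (module name assumed; takes --model-path)
python3 -m dynamo.sglang --model-path Qwen/Qwen3-0.6B

# TensorRT-LLM worker (module name assumed; takes --model-path)
python3 -m dynamo.trtllm --model-path Qwen/Qwen3-0.6B

# vLLM worker (module name assumed; takes --model, not --model-path)
python3 -m dynamo.vllm --model Qwen/Qwen3-0.6B
```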
Note: For dependency-free local development, disable KV event publishing (avoids NATS):
- vLLM: Add `--kv-events-config '{"enable_kv_cache_events": false}'`
- SGLang: No flag needed (KV events disabled by default)
- TensorRT-LLM: No flag needed (KV events disabled by default)
TensorRT-LLM only: The warning `Cannot connect to ModelExpress server/transport error. Using direct download.` is expected and can be safely ignored. See Service Discovery and Messaging for details.
Send a Request
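A minimal request sketch, assuming the frontend listens on the default local port 8000 and that the worker serves Qwen/Qwen3-0.6B (both are assumptions; use your own port and model name):
```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen3-0.6B",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "stream": false,
        "max_tokens": 64
      }'
```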
Rerun with `curl -N` and change `stream` in the request to `true` to get the responses as soon as the engine issues them.
Kubernetes Deployment
For production deployments on Kubernetes clusters with multiple GPUs.
Prerequisites
- Kubernetes cluster with GPU nodes
- Dynamo Platform installed
- HuggingFace token for model downloads
Production Recipes
Pre-built deployment configurations for common models and topologies:
| Model | Framework | Mode | GPUs | Recipe |
|---|---|---|---|---|
| Llama-3-70B | vLLM | Aggregated | 4x H100 | View |
| DeepSeek-R1 | SGLang | Disaggregated | 8x H200 | View |
| Qwen3-32B-FP8 | TensorRT-LLM | Aggregated | 8x GPU | View |
See recipes/README.md for the full list and deployment instructions.
Cloud Deployment Guides
Building from Source
For contributors who want to build Dynamo from source rather than installing from PyPI.
1. Install Libraries
Ubuntu:
sudo apt install -y build-essential libhwloc-dev libudev-dev pkg-config libclang-dev protobuf-compiler python3-dev cmake
macOS:
# if brew is not installed on your system, install it
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install cmake protobuf
## Check that Metal is accessible
xcrun -sdk macosx metal
If Metal is accessible, you should see an error like `metal: error: no input files`, which confirms it is installed correctly.
2. Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
3. Create a Python Virtual Environment
Follow the uv installation guide to install uv if you don't already have it. Once uv is installed, create a virtual environment and activate it.
- Install uv
- Create a virtual environment (see the sketch below)
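A minimal sketch, same as in the quick start:
```bash
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh

# Create and activate a virtual environment
uv venv
source .venv/bin/activate
```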
4. Install Build Tools
uv pip install pip maturin
Maturin is the Rust<->Python bindings build tool.
5. Build the Rust Bindings
cd lib/bindings/python
maturin develop --uv
6. Install GPU Memory Service
The GPU Memory Service is a Python package with a C++ extension. It requires only Python development headers and a C++ compiler (g++).
7. Install the Wheel
cd $PROJECT_ROOT
uv pip install -e .
8. Run the Frontend
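Assuming the same `dynamo.frontend` entry point as the PyPI install (see the Local Quick Start above):
```bash
python3 -m dynamo.frontend
```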
9. Configure for Local Development
- Pass `--store-kv file` to avoid external dependencies (see Service Discovery and Messaging)
- Set `DYN_LOG` to adjust the logging level (e.g., `export DYN_LOG=debug`); it uses the same syntax as `RUST_LOG`. Both settings are combined in the sketch below.
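A sketch combining both settings for a fully local run:
```bash
# Verbose logging (same syntax as RUST_LOG)
export DYN_LOG=debug

# Run without etcd/NATS; workers and the frontend must share a disk
python3 -m dynamo.frontend --store-kv file
```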
Note: VSCode and Cursor users can use the `.devcontainer` folder for a pre-configured dev environment. See the devcontainer README for details.
Advanced Topics
Benchmarking
Dynamo provides comprehensive benchmarking tools:
- Benchmarking Guide – Compare deployment topologies using AIPerf
- SLA-Driven Deployments – Optimize deployments to meet SLA requirements
Frontend OpenAPI Specification
The OpenAI-compatible frontend exposes an OpenAPI 3 spec at /openapi.json. It can also be generated without running the server; the generated file is written to docs/frontends/openapi.json.
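With a frontend already running, the spec can also be fetched directly; this sketch assumes the default local port 8000:
```bash
curl -s http://localhost:8000/openapi.json -o openapi.json
```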
Service Discovery and Messaging
Dynamo uses TCP for inter-component communication. On Kubernetes, native resources (CRDs + EndpointSlices) handle service discovery. External services are optional for most deployments:
| Deployment | etcd | NATS | Notes |
|---|---|---|---|
| Local Development | ❌ Not required | ❌ Not required | Pass --store-kv file; vLLM also needs --kv-events-config '{"enable_kv_cache_events": false}' |
| Kubernetes | ❌ Not required | ❌ Not required | K8s-native discovery; TCP request plane |
Note: KV-Aware Routing requires NATS for prefix caching coordination.
For Slurm or other distributed deployments (and KV-aware routing):
To quickly set up both: `docker compose -f deploy/docker-compose.yml up -d`
See SGLang on Slurm and TRT-LLM on Slurm for deployment examples.