
Roadmap | Support matrix | Docs | Recipes | Examples | Prebuilt containers | Design Proposals | Blogs
NVIDIA Dynamo
High-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments.
Framework Support Matrix
| Feature | vLLM | SGLang | TensorRT-LLM |
|---|---|---|---|
| Disaggregated Serving | ✅ | ✅ | ✅ |
| KV-Aware Routing | ✅ | ✅ | ✅ |
| SLA-Based Planner | ✅ | ✅ | ✅ |
| KVBM | ✅ | 🚧 | ✅ |
| Multimodal | ✅ | ✅ | ✅ |
| Tool Calling | ✅ | ✅ | ✅ |
Full Feature Matrix → — Detailed compatibility including LoRA, Request Migration, Speculative Decoding, and feature interactions.
Latest News
- [12/05] Moonshot AI's Kimi K2 achieves 10x inference speedup with Dynamo on GB200
- [12/02] Mistral AI runs Mistral Large 3 with 10x faster inference using Dynamo
- [12/01] InfoQ: NVIDIA Dynamo simplifies Kubernetes deployment for LLM inference
- [11/20] Dell integrates PowerScale with Dynamo's NIXL for 19x faster TTFT
- [11/20] WEKA partners with NVIDIA on KV cache storage for Dynamo
- [11/13] Dynamo Office Hours Playlist
- [10/16] How Baseten achieved 2x faster inference with NVIDIA Dynamo
The Era of Multi-GPU, Multi-Node
Large language models are quickly outgrowing the memory and compute budget of any single GPU. Tensor-parallelism solves the capacity problem by spreading each layer across many GPUs—and sometimes many servers—but it creates a new one: how do you coordinate those shards, route requests, and share KV cache fast enough to feel like one accelerator? This orchestration gap is exactly what NVIDIA Dynamo is built to close.
Dynamo is designed to be inference-engine agnostic (it supports TensorRT-LLM, vLLM, SGLang, and others) and captures LLM-specific capabilities such as:
- Disaggregated prefill & decode inference – Maximizes GPU throughput and lets you trade off throughput against latency
- Dynamic GPU scheduling – Optimizes performance based on fluctuating demand
- LLM-aware request routing – Eliminates unnecessary KV cache re-computation
- Accelerated data transfer – Reduces inference response time using NIXL
- KV cache offloading – Leverages multiple memory hierarchies for higher system throughput
Built in Rust for performance and in Python for extensibility, Dynamo is fully open source and driven by a transparent, OSS-first development approach.
Installation
The following examples require a few system-level packages. We recommend Ubuntu 24.04 with an x86_64 CPU; see docs/reference/support-matrix.md for details.
1. Initial setup
The Dynamo team recommends the uv Python package manager, although any package manager works. Install uv:
curl -LsSf https://astral.sh/uv/install.sh | sh
Install Python development headers
Backend engines require Python development headers for JIT compilation. Install them with:
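For example, on Ubuntu (the recommended platform above) the headers are provided by the python3-dev package:
sudo apt-get install -y python3-dev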
2. Select an engine
We publish Python wheels specialized for each of our supported engines: vllm, sglang, and trtllm. The examples that follow use SGLang; continue reading for other engines.
uv venv venv
source venv/bin/activate
uv pip install pip
# Choose one
uv pip install "ai-dynamo[sglang]" #replace with [vllm], [trtllm], etc.
3. Run Dynamo
Sanity check (optional)
Before trying out Dynamo, you can run an optional sanity check to verify your system configuration and dependencies. It covers system resources, development tools, LLM frameworks, and Dynamo components.
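As a rough manual sketch of what such a check covers (hypothetical commands, not Dynamo's built-in checker):
# System resources: GPUs visible to the driver
nvidia-smi
# Development tools: Python and compiler versions
python3 --version && cc --version
# LLM frameworks and Dynamo components import cleanly
python -c "import sglang"
python -m dynamo.frontend --help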
Running an LLM API server
Dynamo provides a simple way to spin up a local set of inference components including:
- OpenAI-Compatible Frontend – High-performance, OpenAI-compatible HTTP API server written in Rust.
- Basic and KV-Aware Router – Routes and load-balances traffic across a set of workers.
- Workers – A set of pre-configured LLM serving engines.
# Start an OpenAI compatible HTTP server with prompt templating, tokenization, and routing.
# Pass the TLS certificate and key paths to use HTTPS instead of HTTP.
# Pass --store-kv to use the filesystem instead of etcd. The workers and frontend must share a disk.
python -m dynamo.frontend --http-port 8000 [--tls-cert-path cert.pem] [--tls-key-path key.pem] [--store-kv file]
# Start the SGLang engine, connecting to NATS and etcd to receive requests. You can run several of these,
# both for the same model and for multiple models. The frontend node will discover them.
# Pass --store-kv to use the filesystem instead of etcd. The workers and frontend must share a disk.
python -m dynamo.sglang --model deepseek-ai/DeepSeek-R1-Distill-Llama-8B [--store-kv file]
Send a Request
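For example, assuming the frontend above is listening on port 8000 and serving the DeepSeek-R1-Distill-Llama-8B worker (adjust the model name and port to your setup), the OpenAI-compatible chat completions endpoint can be queried with:
curl localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [{"role": "user", "content": "Explain disaggregated serving in one sentence."}],
    "stream": false,
    "max_tokens": 300
  }'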
Rerun with curl -N and set "stream": true in the request body to receive responses as soon as the engine issues them.
Deploying Dynamo
- Follow the Quickstart Guide to deploy on Kubernetes.
- Check out Backends to deploy various workflow configurations (e.g. SGLang with router, vLLM with disaggregated serving, etc.)
- Run some Examples to learn about building components in Dynamo and exploring various integrations.
Service Discovery and Messaging
Dynamo uses TCP for inter-component communication. External services are optional for most deployments:
| Deployment | etcd | NATS | Notes |
|---|---|---|---|
| Kubernetes | ❌ Not required | ❌ Not required | K8s-native discovery; TCP request plane |
| Local development | ❌ Not required | ❌ Not required | Pass --store-kv file; TCP request plane |
| KV-aware routing | — | ✅ Required | Add NATS for KV event messaging |
For local development, pass --store-kv file to both the frontend and the workers. Distributed non-Kubernetes deployments and KV-aware routing need etcd and/or NATS; to quickly set up both: docker compose -f deploy/docker-compose.yml up -d
Benchmarking Dynamo
Dynamo provides comprehensive benchmarking tools to evaluate and optimize your deployments:
- Benchmarking Guide – Compare deployment topologies (aggregated vs. disaggregated vs. vanilla vLLM) using AIPerf
- SLA-Driven Dynamo Deployments – Optimize your deployment to meet SLA requirements
Engines
Dynamo is designed to be inference engine agnostic. To use any engine with Dynamo, start a Dynamo frontend (python -m dynamo.frontend). For local development, pass --store-kv file to avoid etcd dependency. NATS is optional and only required for KV-aware routing.
vLLM
uv pip install ai-dynamo[vllm]
Run the backend/worker like this:
python -m dynamo.vllm --help
vLLM attempts to allocate enough KV cache for the full context length at startup. If that does not fit in your available memory, pass --context-length <value>.
To specify which GPUs to use, set the environment variable CUDA_VISIBLE_DEVICES.
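As a minimal sketch, assuming dynamo.vllm accepts the same --model flag as the SGLang worker shown earlier (check python -m dynamo.vllm --help for the exact flags):
# Pin the worker to GPU 0 and cap the context if the full length does not fit in memory
CUDA_VISIBLE_DEVICES=0 python -m dynamo.vllm \
  --model deepseek-ai/DeepSeek-R1-Distill-Llama-8B \
  --context-length 16384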
SGLang
# Install libnuma
apt install -y libnuma-dev
uv pip install ai-dynamo[sglang]
Run the backend/worker like this:
python -m dynamo.sglang --help
You can pass any SGLang flags directly to this worker; see https://docs.sglang.ai/advanced_features/server_arguments.html for the full list, including how to use multiple GPUs.
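For example, a tensor-parallel worker across two GPUs, assuming SGLang's --tp-size server argument is forwarded unchanged (verify against the server arguments page above):
# Shard the model across GPUs 0 and 1 with tensor parallelism
CUDA_VISIBLE_DEVICES=0,1 python -m dynamo.sglang \
  --model deepseek-ai/DeepSeek-R1-Distill-Llama-8B \
  --tp-size 2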
TensorRT-LLM
We recommend using the NGC PyTorch container for running the TensorRT-LLM engine.
[!Note] Ensure that you select a PyTorch container image version that matches the version of TensorRT-LLM you are using. For example, if you are using tensorrt-llm==1.2.0rc5, use PyTorch container image version 25.10. To find the correct PyTorch container version for your desired tensorrt-llm release, visit the TensorRT-LLM Dockerfile.multi on GitHub, switch to the branch that matches your tensorrt-llm version, and look for the BASE_TAG line to identify the recommended PyTorch container tag.
[!Important] Launch the container with the following additional settings:
--shm-size=1g --ulimit memlock=-1
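A typical launch, assuming the 25.10 NGC PyTorch image referenced in the note above (nvcr.io/nvidia/pytorch:25.10-py3):
docker run --rm -it --gpus all \
  --shm-size=1g --ulimit memlock=-1 \
  nvcr.io/nvidia/pytorch:25.10-py3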
Install prerequisites
# Optional step: Only required for non-container installations. The PyTorch 25.10 container already includes PyTorch 2.9.0 with CUDA 13.0.
uv pip install torch==2.9.0 torchvision --index-url https://download.pytorch.org/whl/cu130
sudo apt-get -y install libopenmpi-dev
# Optional step: Only required for disaggregated serving
sudo apt-get -y install libzmq3-dev
[!Tip] You can learn more about these prerequisites and known issues with the TensorRT-LLM pip-based installation here.
After installing the prerequisites above, install Dynamo:
pip install --pre --extra-index-url https://pypi.nvidia.com ai-dynamo[trtllm]
[!Note] We use pip instead of uv here because tensorrt-llm has a URL-based git dependency (etcd3) that uv does not currently support.
Run the backend/worker like this:
python -m dynamo.trtllm --help
To specify which GPUs to use, set the environment variable CUDA_VISIBLE_DEVICES.
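As with the other engines, a sketch of a single-GPU worker, assuming --model follows the same pattern (confirm with the --help output above):
CUDA_VISIBLE_DEVICES=0 python -m dynamo.trtllm \
  --model deepseek-ai/DeepSeek-R1-Distill-Llama-8B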
Developing Locally
1. Install libraries
Ubuntu:
sudo apt install -y build-essential libhwloc-dev libudev-dev pkg-config libclang-dev protobuf-compiler python3-dev cmake
macOS:
# if brew is not installed on your system, install it
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install cmake protobuf
## Check that Metal is accessible
xcrun -sdk macosx metal
If Metal is accessible, you should see an error like metal: error: no input files, which confirms it is installed correctly.
2. Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
3. Create a Python virtual env:
Follow the instructions in the uv installation guide to install uv if you don't already have it. Once uv is installed, create a virtual environment and activate it.
- Install uv
- Create a virtual environment
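Using the same uv workflow as the quickstart above:
# Install uv (skip if already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Create and activate a virtual environment
uv venv venv
source venv/bin/activate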
4. Install build tools
uv pip install pip maturin
Maturin is the Rust<->Python bindings build tool.
5. Build the Rust bindings
cd lib/bindings/python
maturin develop --uv
6. Install the wheel
cd $PROJECT_ROOT
uv pip install -e .
You should now be able to run python -m dynamo.frontend.
For local development, pass --store-kv file to avoid external dependencies (see Service Discovery and Messaging section).
Set the environment variable DYN_LOG to adjust the logging level; for example, export DYN_LOG=debug. It has the same syntax as RUST_LOG.
If you use VS Code or Cursor, we have a .devcontainer folder built on Microsoft's dev container extension. See its README for instructions.