Burn multi-backend dispatch.
§Available Backends
The dispatch backend supports the following variants, each enabled via a Cargo feature:
| Backend | Feature | Description |
|---|---|---|
| Cpu | cpu | Rust CPU backend (MLIR + LLVM) |
| Cuda | cuda | NVIDIA CUDA backend |
| Metal | metal | Apple Metal backend via wgpu (MSL) |
| Rocm | rocm | AMD ROCm backend |
| Vulkan | vulkan | Vulkan backend via wgpu (SPIR-V) |
| WebGpu | webgpu | WebGPU backend via wgpu (WGSL) |
| NdArray | ndarray | Pure Rust CPU backend using ndarray |
| LibTorch | tch | LibTorch backend via tch |
| Autodiff | autodiff | Autodiff-enabled backend (used in combination with any of the backends above) |
Note: WGPU-based backends (metal, vulkan, webgpu) are mutually exclusive.
All other backends can be combined freely.
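For example, a Cargo.toml fragment enabling the CUDA backend together with autodiff might look like the following (the crate name and version are placeholders; adjust them to the actual dependency):

```toml
# Hypothetical dependency declaration; crate name and version are placeholders.
[dependencies]
burn = { version = "0.18", features = ["cuda", "autodiff"] }
```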
§WGPU Backend Exclusivity
The WGPU-based backends (metal, vulkan, webgpu) are mutually exclusive due to
the current automatic compilation step, which can only select one shader target at a time.
Enable only one of these features in your Cargo.toml:
- metal
- vulkan
- webgpu
If multiple WGPU features are enabled, the build script will emit a warning and disable all WGPU backends to prevent unintended behavior.
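A check like this can be sketched as a build script that inspects the `CARGO_FEATURE_<NAME>` environment variables Cargo sets for build scripts. This is an illustrative sketch, not the crate's actual build script:

```rust
// Sketch of a build.rs mutual-exclusion check; illustrative only.
// Cargo exposes each enabled feature to build scripts as an environment
// variable named CARGO_FEATURE_<FEATURE> (uppercased).

const WGPU_FEATURES: [&str; 3] = ["METAL", "VULKAN", "WEBGPU"];

/// Returns the WGPU-based features for which `is_enabled` reports true.
fn enabled_wgpu_features(is_enabled: impl Fn(&str) -> bool) -> Vec<&'static str> {
    WGPU_FEATURES.into_iter().filter(|f| is_enabled(f)).collect()
}

fn main() {
    let enabled =
        enabled_wgpu_features(|f| std::env::var(format!("CARGO_FEATURE_{f}")).is_ok());
    if enabled.len() > 1 {
        // `cargo:warning=` lines are surfaced by Cargo during the build.
        println!("cargo:warning=multiple WGPU backends enabled ({enabled:?}); disabling all");
    }
}
```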
Structs§
- AutodiffDevice: A wrapper that enables automatic differentiation for a DispatchDevice.
- Dispatch: The main execution backend in Burn.
Enums§
- BackendTensor: Tensor which points to a backend tensor primitive kind.
- DispatchDevice: Represents a device for the Dispatch backend.
- DispatchTensor: Dispatch tensor that can hold tensors from any enabled backend.
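A tensor type that "can hold tensors from any enabled backend" suggests the classic enum-dispatch pattern: one variant per backend, with operations routed by a match. The sketch below illustrates that pattern with stand-in types; it is not the crate's actual definition of DispatchTensor:

```rust
// Stand-in backend tensor types; the real DispatchTensor wraps
// actual backend tensor primitives.
#[derive(Debug, Clone, PartialEq)]
struct CpuTensor(Vec<f32>);
#[derive(Debug, Clone, PartialEq)]
struct CudaTensor(Vec<f32>);

/// One variant per enabled backend, in the spirit of `DispatchTensor`.
#[derive(Debug, Clone, PartialEq)]
enum DispatchTensor {
    Cpu(CpuTensor),
    Cuda(CudaTensor),
}

impl DispatchTensor {
    /// Dispatch an element-wise scale to whichever backend holds the data.
    fn scale(&self, k: f32) -> DispatchTensor {
        match self {
            DispatchTensor::Cpu(t) => {
                DispatchTensor::Cpu(CpuTensor(t.0.iter().map(|x| x * k).collect()))
            }
            DispatchTensor::Cuda(t) => {
                DispatchTensor::Cuda(CudaTensor(t.0.iter().map(|x| x * k).collect()))
            }
        }
    }
}

fn main() {
    let t = DispatchTensor::Cpu(CpuTensor(vec![1.0, 2.0]));
    let scaled = t.scale(3.0);
    // The result stays on the same backend variant it started on.
    assert_eq!(scaled, DispatchTensor::Cpu(CpuTensor(vec![3.0, 6.0])));
}
```

The enum approach keeps dispatch static and exhaustive: adding a backend variant forces every match to handle it at compile time, unlike trait objects, which would defer that check to runtime.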