#![cfg_attr(not(feature = "std"), no_std)]
#![warn(missing_docs)]

//! # Burn
//!
//! Burn is a comprehensive dynamic deep learning framework built in Rust,
//! with extreme flexibility, compute efficiency, and portability as its primary goals.
//!
//! ## Performance
//!
//! Because we believe the goal of a deep learning framework is to convert computation
//! into useful intelligence, we have made performance a core pillar of Burn.
//! We strive to achieve top efficiency by leveraging multiple optimization techniques:
//!
//! - Automatic kernel fusion
//! - Asynchronous execution
//! - Thread-safe building blocks
//! - Intelligent memory management
//! - Automatic kernel selection
//! - Hardware-specific features
//! - Custom backend extensions
//!
//! ## Training & Inference
//!
//! The whole deep learning workflow is made easy with Burn: you can monitor your training progress
//! with an ergonomic dashboard, and run inference everywhere from embedded devices to large GPU clusters.
//!
//! Burn was built from the ground up with both training and inference in mind. Compared to frameworks
//! like PyTorch, Burn simplifies the transition from training to deployment,
//! eliminating the need for code changes.
//!
//! ## Backends
//!
//! Burn strives to be as fast as possible on as much hardware as possible, with robust implementations.
//! We believe this flexibility is crucial for modern needs where you may train your models in the cloud,
//! then deploy on customer hardware, which varies from user to user.
//!
//! Compared to other frameworks, Burn takes a very different approach to supporting many backends.
//! By design, most code is generic over the `Backend` trait, which allows us to build Burn with swappable backends.
//! This makes it possible to compose backends, augmenting them with additional functionality such as
//! autodifferentiation and automatic kernel fusion.
//!
//! - WGPU (WebGPU): Cross-Platform GPU Backend
//! - Candle: Backend using the Candle bindings
//! - LibTorch: Backend using the LibTorch bindings
//! - Flex: Pure-Rust CPU backend (std, no_std, WebAssembly)
//! - Autodiff: Backend decorator that brings backpropagation to any backend
//! - Fusion: Backend decorator that brings kernel fusion to backends that support it
//!
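//! Because most code is generic over the `Backend` trait, swapping backends is a type-level
//! change, and decorators compose on top of a base backend. A minimal sketch (illustrative,
//! not a complete program; exact module paths may vary by version, and this assumes the
//! `wgpu` and `autodiff` features are enabled):
//!
//! ```rust,ignore
//! use burn::tensor::{backend::Backend, Tensor};
//!
//! // Generic over the backend: works unchanged on WGPU, Candle, LibTorch, ...
//! fn scale<B: Backend>(x: Tensor<B, 2>) -> Tensor<B, 2> {
//!     x * 2.0
//! }
//!
//! // Compose decorators to augment a base backend with new capabilities,
//! // e.g. backpropagation on top of the WGPU backend.
//! type MyBackend = burn::backend::Autodiff<burn::backend::Wgpu>;
//! ```
//!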
//! ## Quantization
//!
//! Quantization techniques perform computations and store tensors in lower-precision data types,
//! such as 8-bit integers instead of floating point. There are multiple approaches to quantizing a
//! deep learning model, categorized as post-training quantization (PTQ) and quantization-aware training (QAT).
//!
//! In post-training quantization, the model is trained in floating point precision and later converted
//! to the lower-precision data type. There are two types of post-training quantization:
//!
//! 1. Static quantization: quantizes the weights and activations of the model. Quantizing the
//!    activations statically requires calibration (i.e., recording the activation values on
//!    representative data to compute the optimal quantization parameters).
//! 2. Dynamic quantization: quantizes the weights ahead of time (like static quantization), but the
//!    activations are quantized dynamically at runtime.
//!
//! Sometimes post-training quantization is not able to achieve acceptable task accuracy. This is
//! where quantization-aware training (QAT) can be used: during training, fake-quantization
//! modules are inserted in the forward and backward passes to simulate quantization effects, allowing
//! the model to learn representations that are more robust to reduced precision.
//!
//! Burn does not currently support QAT; only post-training quantization (PTQ) is implemented at this
//! time.
//!
//! Quantization support in Burn is under active development. It supports the following PTQ modes on some backends:
//!
//! - Per-tensor and per-block quantization to 8-bit, 4-bit and 2-bit representations
//!
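//! At its core, the affine scheme used by PTQ maps a float `x` to an integer
//! `q = clamp(round(x / scale) + zero_point, qmin, qmax)` and recovers
//! `x ≈ (q - zero_point) * scale`. A standalone sketch of that arithmetic (not Burn's
//! quantization API, just the underlying math for the 8-bit per-tensor case):
//!
//! ```rust
//! /// Per-tensor affine quantization of f32 values to i8 (assumes max > min).
//! fn quantize(xs: &[f32]) -> (Vec<i8>, f32, i32) {
//!     let (min, max) = xs
//!         .iter()
//!         .fold((f32::MAX, f32::MIN), |(lo, hi), &x| (lo.min(x), hi.max(x)));
//!     // Spread the observed range over the 256 representable i8 values.
//!     let scale = (max - min) / 255.0;
//!     let zero_point = (-128.0 - min / scale).round() as i32;
//!     let q = xs
//!         .iter()
//!         .map(|&x| ((x / scale).round() as i32 + zero_point).clamp(-128, 127) as i8)
//!         .collect();
//!     (q, scale, zero_point)
//! }
//!
//! /// Recovers approximate float values from the quantized representation.
//! fn dequantize(q: &[i8], scale: f32, zero_point: i32) -> Vec<f32> {
//!     q.iter().map(|&v| (v as i32 - zero_point) as f32 * scale).collect()
//! }
//!
//! let (q, scale, zp) = quantize(&[-1.0, 0.0, 0.5, 1.0]);
//! let approx = dequantize(&q, scale, zp);
//! // Round-tripping loses at most one quantization step per value.
//! assert!(approx.iter().zip([-1.0f32, 0.0, 0.5, 1.0]).all(|(a, b)| (a - b).abs() < scale));
//! ```
//!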
//! ## Feature Flags
//!
//! The following feature flags are available.
//! By default, the feature `std` is activated.
//!
//! - Training
//!   - `train`: Enables features `dataset` and `autodiff` and provides a training environment
//!   - `tui`: Includes Text UI with progress bar and plots
//!   - `metrics`: Includes system info metrics (CPU/GPU usage, etc.)
//! - Dataset
//!   - `dataset`: Includes a datasets library
//!   - `audio`: Enables audio datasets (SpeechCommandsDataset)
//!   - `sqlite`: Stores datasets in SQLite database
//!   - `sqlite_bundled`: Uses a bundled version of SQLite
//!   - `vision`: Enables vision datasets (MnistDataset)
//! - Backends
//!   - `wgpu`: Makes available the WGPU backend
//!   - `webgpu`: Makes available the `wgpu` backend with the WebGPU Shading Language (WGSL) compiler
//!   - `vulkan`: Makes available the `wgpu` backend with the alternative SPIR-V compiler
//!   - `cuda`: Makes available the CUDA backend
//!   - `rocm`: Makes available the ROCm backend
//!   - `candle`: Makes available the Candle backend
//!   - `tch`: Makes available the LibTorch backend
//!   - `flex`: Makes available the Flex backend (pure-Rust CPU, std/no_std/WASM)
//!   - `ndarray`: Makes available the NdArray backend (legacy; prefer `flex` for new projects)
//! - Backend specifications
//!   - `accelerate`: If supported, Accelerate will be used
//!   - `blas-netlib`: If supported, BLAS Netlib will be used
//!   - `openblas`: If supported, OpenBLAS will be used
//!   - `openblas-system`: If supported, the OpenBLAS installed on the system will be used
//!   - `autotune`: Enables running benchmarks to select the best kernel in backends that support it
//!   - `fusion`: Enables operation fusion in backends that support it
//! - Backend decorators
//!   - `autodiff`: Makes available the Autodiff backend
//! - Model Storage
//!   - `store`: Enables model storage with the SafeTensors format and PyTorch interoperability
//! - Others
//!   - `std`: Activates the standard library (deactivate for no_std)
//!   - `server`: Enables the remote server
//!   - `network`: Enables network utilities (currently, only a file downloader with progress bar)
//!
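//! For example, to enable training with the WGPU backend, the dependency in `Cargo.toml`
//! could look like this (feature names as listed above; the version requirement is illustrative,
//! use the release you depend on):
//!
//! ```toml
//! [dependencies]
//! burn = { version = "*", features = ["train", "wgpu"] }
//! ```
//!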
//! You can also check the details in sub-crates [`burn-core`](https://docs.rs/burn-core) and [`burn-train`](https://docs.rs/burn-train).

pub use burn_core::*;

/// Train module.
#[cfg(feature = "train")]
pub mod train {
    pub use burn_train::*;
}

/// Module for reinforcement learning.
#[cfg(feature = "rl")]
pub mod rl {
    pub use burn_rl::*;
}

/// Backend module.
pub mod backend;

#[cfg(feature = "server")]
pub use burn_remote::server;

/// Module for collective operations.
#[cfg(feature = "collective")]
pub mod collective;

/// Module for model storage and serialization.
#[cfg(feature = "store")]
pub mod store {
    pub use burn_store::*;
}

/// Neural network module.
pub mod nn {
    pub use burn_nn::*;
}

pub use burn_std::config::config as runtime_config;

/// Optimizers module.
pub mod optim {
    pub use burn_optim::*;
}

// For backward compat, `burn::lr_scheduler::*`.
/// Learning rate scheduler module.
#[cfg(feature = "std")]
pub mod lr_scheduler {
    pub use burn_optim::lr_scheduler::*;
}

// For backward compat, `burn::grad_clipping::*`.
/// Gradient clipping module.
pub mod grad_clipping {
    pub use burn_optim::grad_clipping::*;
}

#[cfg(feature = "dispatch")]
pub use burn_dispatch::*;

/// CubeCL module re-export.
#[cfg(feature = "cubecl")]
pub mod cubecl {
    pub use cubecl::*;
}

/// Vision module.
#[cfg(feature = "vision")]
pub mod vision {
    pub use burn_vision::*;
}

pub mod prelude {
    //! Structs and macros used by most projects.
    //! Add `use burn::prelude::*` to your code to quickly get started with Burn.
    pub use burn_core::prelude::*;

    pub use crate::nn;
}