//! # Burn
//!
//! Burn is a comprehensive dynamic Deep Learning Framework built in Rust,
//! with extreme flexibility, compute efficiency, and portability as its primary goals.
//!
//! ## Performance
//!
//! Because we believe the goal of a deep learning framework is to convert computation
//! into useful intelligence, we have made performance a core pillar of Burn.
//! We strive to achieve top efficiency by leveraging multiple optimization techniques:
//!
//! - Automatic kernel fusion
//! - Asynchronous execution
//! - Thread-safe building blocks
//! - Intelligent memory management
//! - Automatic kernel selection
//! - Hardware specific features
//! - Custom Backend Extension
//!
//! ## Training & Inference
//!
//! Burn makes the whole deep learning workflow easy: you can monitor training progress
//! on an ergonomic dashboard and run inference everywhere, from embedded devices to large GPU clusters.
//!
//! Burn was built from the ground up with both training and inference in mind. Compared to
//! frameworks like PyTorch, Burn simplifies the transition from training to deployment,
//! eliminating the need for code changes.
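//!
//! As a minimal sketch (assuming the `ndarray` and `autodiff` features are enabled, and
//! using a toy `scale` function as a hypothetical stand-in for a model), the same
//! backend-generic code serves both training and inference without modification:
//!
//! ```rust,ignore
//! use burn::backend::{Autodiff, NdArray};
//! use burn::prelude::*;
//!
//! /// A function generic over the backend: the same code trains and serves.
//! fn scale<B: Backend>(x: Tensor<B, 2>) -> Tensor<B, 2> {
//!     x * 2.0
//! }
//!
//! fn main() {
//!     let device = Default::default();
//!
//!     // Training: wrap the base backend with `Autodiff` to enable backpropagation.
//!     type TrainBackend = Autodiff<NdArray>;
//!     let x = Tensor::<TrainBackend, 2>::ones([2, 2], &device).require_grad();
//!     let _grads = scale(x).sum().backward();
//!
//!     // Inference: drop the decorator; the function itself is unchanged.
//!     let x = Tensor::<NdArray, 2>::ones([2, 2], &device);
//!     let _y = scale(x);
//! }
//! ```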
//!
//! ## Backends
//!
//! Burn strives to be as fast as possible on as many hardware platforms as possible, with robust implementations.
//! We believe this flexibility is crucial for modern needs, where you may train your models in the cloud
//! and then deploy them on customer hardware, which varies from user to user.
//!
//! Compared to other frameworks, Burn takes a very different approach to supporting many backends.
//! By design, most code is generic over the Backend trait, which allows us to build Burn with swappable backends.
//! This makes composing backends possible, augmenting them with additional functionality such as
//! autodifferentiation and automatic kernel fusion (see the sketch after the list below).
//!
//! - WGPU (WebGPU): Cross-Platform GPU Backend
//! - Candle: Backend using the Candle bindings
//! - LibTorch: Backend using the LibTorch bindings
//! - NdArray: Backend using the NdArray primitive as data structure
//! - Autodiff: Backend decorator that brings backpropagation to any backend
//! - Fusion: Backend decorator that brings kernel fusion to backends that support it
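//!
//! As a brief sketch (assuming the `wgpu`, `ndarray`, and `autodiff` features are enabled),
//! backends compose through the type system, so decorators simply wrap a base backend:
//!
//! ```rust,ignore
//! use burn::backend::{Autodiff, NdArray, Wgpu};
//!
//! type Gpu = Wgpu;                       // cross-platform GPU backend
//! type GpuTraining = Autodiff<Wgpu>;     // same backend, with backpropagation added
//! type CpuTraining = Autodiff<NdArray>;  // pure-CPU alternative, no GPU required
//! ```
//!
//! With the `fusion` feature enabled, kernel fusion is applied inside backends that support
//! it (such as WGPU) without any change to these type aliases.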
//!
//! ## Quantization
//!
//! Quantization techniques perform computations and store tensors in lower-precision data types,
//! such as 8-bit integers, instead of floating point. Approaches to quantizing a deep learning
//! model fall into two categories: post-training quantization (PTQ) and quantization-aware training (QAT).
//!
//! In post-training quantization, the model is trained in floating point precision and later converted
//! to the lower precision data type. There are two types of post-training quantization:
//!
//! 1. Static quantization: quantizes both the weights and activations of the model. Quantizing the
//! activations statically requires calibration (i.e., recording activation values on representative
//! data to compute the optimal quantization parameters); the arithmetic is sketched after this list.
//! 2. Dynamic quantization: quantizes the weights ahead of time (like static quantization), but the
//! activations are quantized dynamically at runtime.
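//!
//! To make the quantization parameters concrete, here is a minimal sketch (plain Rust,
//! independent of Burn's API) of the per-tensor affine arithmetic behind 8-bit static
//! quantization, where calibration supplies the observed `min`/`max` of the values:
//!
//! ```rust,ignore
//! /// Compute the scale and zero-point mapping [min, max] onto the i8 range (assumes max > min).
//! fn quantization_params(min: f32, max: f32) -> (f32, i8) {
//!     let (qmin, qmax) = (i8::MIN as f32, i8::MAX as f32);
//!     let scale = (max - min) / (qmax - qmin);
//!     let zero_point = (qmin - min / scale).round().clamp(qmin, qmax) as i8;
//!     (scale, zero_point)
//! }
//!
//! /// Quantize a value: q = round(x / scale) + zero_point, clamped to the i8 range.
//! fn quantize(x: f32, scale: f32, zero_point: i8) -> i8 {
//!     ((x / scale).round() + zero_point as f32).clamp(i8::MIN as f32, i8::MAX as f32) as i8
//! }
//!
//! /// Dequantize back to f32: x ≈ (q - zero_point) * scale.
//! fn dequantize(q: i8, scale: f32, zero_point: i8) -> f32 {
//!     (i32::from(q) - i32::from(zero_point)) as f32 * scale
//! }
//!
//! fn main() {
//!     let (scale, zp) = quantization_params(-1.0, 2.0);
//!     let q = quantize(0.5, scale, zp);
//!     // The round trip recovers the input up to the quantization step (~scale).
//!     assert!((dequantize(q, scale, zp) - 0.5).abs() <= scale);
//! }
//! ```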
//!
//! Sometimes post-training quantization cannot achieve acceptable task accuracy. This is where
//! quantization-aware training (QAT) comes in: during training, fake-quantization modules are
//! inserted in the forward and backward passes to simulate quantization effects, allowing
//! the model to learn representations that are more robust to reduced precision.
//!
//! Burn does not currently support QAT; only post-training quantization (PTQ) is implemented at
//! this time.
//!
//! Quantization support in Burn is in active development. The following PTQ modes are supported on some backends:
//! - Per-tensor and per-block quantization to 8-bit, 4-bit, and 2-bit representations
//!
//! ## Feature Flags
//!
//! The following feature flags are available.
//! By default, the feature `std` is activated.
//!
//! - Training
//!   - `train`: Enables the `dataset` and `autodiff` features and provides a training environment
//!   - `tui`: Includes a text UI with progress bars and plots
//!   - `metrics`: Includes system info metrics (CPU/GPU usage, etc.)
//! - Dataset
//!   - `dataset`: Includes a datasets library
//!   - `audio`: Enables audio datasets (SpeechCommandsDataset)
//!   - `sqlite`: Stores datasets in a SQLite database
//!   - `sqlite_bundled`: Uses a bundled version of SQLite
//!   - `vision`: Enables vision datasets (MnistDataset)
//! - Backends
//!   - `wgpu`: Makes available the WGPU backend
//!   - `webgpu`: Makes available the `wgpu` backend with the WebGPU Shading Language (WGSL) compiler
//!   - `vulkan`: Makes available the `wgpu` backend with the alternative SPIR-V compiler
//!   - `cuda`: Makes available the CUDA backend
//!   - `rocm`: Makes available the ROCm backend
//!   - `candle`: Makes available the Candle backend
//!   - `tch`: Makes available the LibTorch backend
//!   - `ndarray`: Makes available the NdArray backend
//! - Backend specifications
//!   - `accelerate`: If supported, Accelerate will be used
//!   - `blas-netlib`: If supported, BLAS Netlib will be used
//!   - `openblas`: If supported, OpenBLAS will be used
//!   - `openblas-system`: If supported, the OpenBLAS installed on the system will be used
//!   - `autotune`: Enables running benchmarks to select the best kernel in backends that support it
//!   - `fusion`: Enables operation fusion in backends that support it
//! - Backend decorators
//!   - `autodiff`: Makes available the Autodiff backend
//! - Model Storage
//!   - `store`: Enables model storage with the SafeTensors format and PyTorch interoperability
//! - Others:
//!   - `std`: Activates the standard library (deactivate for no_std)
//!   - `server`: Enables the remote server
//!   - `network`: Enables network utilities (currently, only a file downloader with progress bar)
//!
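//! For example, a hypothetical `Cargo.toml` entry selecting a GPU backend with training
//! support might look as follows (the feature names are from the list above; the version
//! is a placeholder):
//!
//! ```toml
//! [dependencies]
//! burn = { version = "*", features = ["train", "wgpu", "fusion", "autotune"] }
//! ```
//!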
//! You can also check the details in sub-crates [`burn-core`](https://docs.rs/burn-core) and [`burn-train`](https://docs.rs/burn-train).
pub use burn_core::*;

/// Train module
#[cfg(feature = "train")]
pub mod train { pub use burn_train::*; }
/// Module for reinforcement learning.
pub mod rl { pub use burn_rl::*; } // re-export target assumed
/// Backend module.
pub mod backend;
#[cfg(feature = "server")]
pub use burn_remote::server;
/// Module for collective operations
pub mod collective { pub use burn_collective::*; } // re-export target assumed
/// Module for model storage and serialization
#[cfg(feature = "store")]
pub mod store { pub use burn_store::*; } // re-export target assumed
/// Neural network module.
pub mod nn { pub use burn_core::nn::*; }
/// Optimizers module.
pub mod optim { pub use burn_core::optim::*; }
/// Learning rate scheduler module.
// For backward compat, `burn::lr_scheduler::*`
pub mod lr_scheduler { pub use burn_core::lr_scheduler::*; }
/// Gradient clipping module.
// For backward compat, `burn::grad_clipping::*`
pub mod grad_clipping { pub use burn_core::grad_clipping::*; }
/// CubeCL module re-export.
pub use cubecl; // re-export assumed
/// Vision module.
pub mod vision { pub use burn_vision::*; } // re-export target assumed