tensorlogic-train 0.1.0

Training loops, loss composition, and optimization schedules for TensorLogic
//! LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning.
//!
//! Implements Hu et al. (2021): weight updates are decomposed as
//! `dW = B @ A` where `B in R^{d x r}`, `A in R^{r x k}`, and
//! `r << min(d, k)`, drastically reducing trainable parameter count.

pub mod adapter;
pub mod config;
pub mod error;
pub mod layer;

#[cfg(test)]
mod tests;

pub use adapter::{LayerStats, LoraAdapter, LoraAdapterSummary};
pub use config::LoraConfig;
pub use error::{LoraError, LoraResult};
pub use layer::LoraLayer;
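
The parameter savings claimed in the module docs follow directly from the shapes of `B` and `A`: a dense update trains `d * k` values, while the low-rank factorization trains only `r * (d + k)`. A minimal standalone sketch of that arithmetic (not part of the crate's API; the dimensions are hypothetical, chosen to match a typical 4096-wide transformer layer):

```rust
fn main() {
    // Hypothetical shapes: a d x k weight matrix adapted at rank r.
    let (d, k, r) = (4096usize, 4096usize, 8usize);

    // Full fine-tuning of dW trains every entry of the d x k matrix.
    let full_params = d * k;

    // LoRA trains B (d x r) plus A (r x k) instead.
    let lora_params = d * r + r * k;

    println!(
        "full: {full_params}, lora: {lora_params}, reduction: {}x",
        full_params / lora_params
    );
    // full: 16777216, lora: 65536, reduction: 256x
}
```

Because `r << min(d, k)`, the ratio `d * k / (r * (d + k))` grows with the layer width, which is why rank values as small as 4 to 16 are common in practice.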