LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning.
Implements Hu et al. (2021): weight updates are decomposed as
ΔW = B·A, where B ∈ ℝ^{d×r}, A ∈ ℝ^{r×k}, and
r ≪ min(d, k), drastically reducing the trainable parameter count.
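As a minimal, dependency-free sketch of the decomposition above (not this crate's API — the `Matrix` type and `merge_lora` function below are illustrative stand-ins; use the re-exported `LoraAdapter` and `LoraLayer` types for the real thing), the merged weight is W_eff = W + (α/r)·B·A, with the α/r scaling as in Hu et al. (2021):

```rust
/// Dense row-major matrix: `rows` x `cols`. Illustrative only.
struct Matrix {
    rows: usize,
    cols: usize,
    data: Vec<f32>,
}

impl Matrix {
    fn zeros(rows: usize, cols: usize) -> Self {
        Matrix { rows, cols, data: vec![0.0; rows * cols] }
    }

    fn get(&self, i: usize, j: usize) -> f32 {
        self.data[i * self.cols + j]
    }

    fn set(&mut self, i: usize, j: usize, v: f32) {
        self.data[i * self.cols + j] = v;
    }
}

/// Computes W_eff = W + (alpha / r) * B @ A.
/// Only B (d x r) and A (r x k) are trained; W (d x k) stays frozen.
fn merge_lora(w: &Matrix, b: &Matrix, a: &Matrix, alpha: f32) -> Matrix {
    assert_eq!(b.cols, a.rows, "rank mismatch between B and A");
    assert_eq!(w.rows, b.rows, "B must have the same row count as W");
    assert_eq!(w.cols, a.cols, "A must have the same column count as W");
    let r = b.cols;
    let scale = alpha / r as f32;
    let mut out = Matrix::zeros(w.rows, w.cols);
    for i in 0..w.rows {
        for j in 0..w.cols {
            // Low-rank delta: (B @ A)[i][j], summed over the r-dim bottleneck.
            let mut delta = 0.0;
            for p in 0..r {
                delta += b.get(i, p) * a.get(p, j);
            }
            out.set(i, j, w.get(i, j) + scale * delta);
        }
    }
    out
}

fn main() {
    // d = 4, k = 3, rank r = 1: only 4*1 + 1*3 = 7 trainable values
    // instead of the 12 entries in the full weight matrix.
    let w = Matrix::zeros(4, 3);
    let mut b = Matrix::zeros(4, 1);
    let mut a = Matrix::zeros(1, 3);
    b.set(0, 0, 1.0);
    a.set(0, 0, 0.5);
    let merged = merge_lora(&w, &b, &a, 1.0);
    println!("W_eff[0][0] = {}", merged.get(0, 0)); // prints 0.5
}
```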
Re-exports
pub use adapter::LayerStats;
pub use adapter::LoraAdapter;
pub use adapter::LoraAdapterSummary;
pub use config::LoraConfig;
pub use error::LoraError;
pub use error::LoraResult;
pub use layer::LoraLayer;