Module lora

LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning.

Implements Hu et al. (2021): weight updates are decomposed as ΔW = B·A, where B ∈ ℝ^{d×r}, A ∈ ℝ^{r×k}, and r ≪ min(d, k). This cuts the trainable parameter count per layer from d·k to r·(d + k).
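
As a concrete illustration of the decomposition, here is a minimal sketch in plain Rust (not this crate's API; `delta_w` and the dimensions are illustrative):

```rust
/// Dense ΔW = B · A, with row-major matrices stored as flat slices.
/// Shapes follow the module docs: B is d×r, A is r×k, with r << min(d, k).
fn delta_w(b: &[f64], a: &[f64], d: usize, r: usize, k: usize) -> Vec<f64> {
    let mut dw = vec![0.0; d * k];
    for i in 0..d {
        for j in 0..k {
            for p in 0..r {
                dw[i * k + j] += b[i * r + p] * a[p * k + j];
            }
        }
    }
    dw
}

fn main() {
    let (d, k, r) = (512, 512, 4);
    // Full fine-tuning trains d·k weights; LoRA trains only r·(d + k).
    println!("full: {} params, lora: {} params", d * k, r * (d + k));

    // With B initialized to zero (as in Hu et al.), ΔW starts at zero,
    // so training begins from the unmodified base weights.
    let b = vec![0.0; d * r];
    let a = vec![0.01; r * k];
    let dw = delta_w(&b, &a, d, r, k);
    assert!(dw.iter().all(|&v| v == 0.0));
}
```

For d = k = 512 and r = 4 this is 262,144 full parameters versus 4,096 trainable ones, a 64× reduction.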

Re-exports

pub use adapter::LayerStats;
pub use adapter::LoraAdapter;
pub use adapter::LoraAdapterSummary;
pub use config::LoraConfig;
pub use error::LoraError;
pub use error::LoraResult;
pub use layer::LoraLayer;

Modules

adapter
Multi-layer LoRA adapter managing named LoRA layers.
config
LoRA configuration.
error
LoRA-specific error types.
layer
Core LoRA layer: low-rank A/B decomposition of a frozen base weight (see the forward-pass sketch below).
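
The `layer` type keeps the base weight frozen and trains only the low-rank pair. Below is a minimal sketch of the resulting forward pass, assuming the standard formulation y = W₀x + s·B(Ax) from Hu et al., where s is a scaling factor (typically α/r); the function signature here is hypothetical, not `LoraLayer`'s actual API:

```rust
/// Illustrative forward pass: y = W0·x + scale · B·(A·x).
/// W0 (d×k) stays frozen; only A (r×k) and B (d×r) receive gradients.
fn lora_forward(
    w0: &[f64], a: &[f64], b: &[f64], x: &[f64],
    d: usize, r: usize, k: usize,
    scale: f64, // assumed alpha / r, per the usual LoRA convention
) -> Vec<f64> {
    // ax = A·x: the cheap r-dimensional bottleneck
    let mut ax = vec![0.0; r];
    for p in 0..r {
        for j in 0..k {
            ax[p] += a[p * k + j] * x[j];
        }
    }
    // y = W0·x + scale · B·ax
    let mut y = vec![0.0; d];
    for i in 0..d {
        for j in 0..k {
            y[i] += w0[i * k + j] * x[j];
        }
        for p in 0..r {
            y[i] += scale * b[i * r + p] * ax[p];
        }
    }
    y
}

fn main() {
    let (d, r, k) = (3, 2, 4);
    let w0 = vec![0.0; d * k]; // frozen base, zeroed for the demo
    let a = vec![1.0; r * k];
    let b = vec![1.0; d * r];
    let x = vec![1.0; k];
    let y = lora_forward(&w0, &a, &b, &x, d, r, k, 0.5);
    println!("{:?}", y); // each element: 0 + 0.5 · (4 + 4) = 4.0
}
```

Note that B·(A·x) never materializes the d×k matrix ΔW, so the adapter adds only O(r·(d + k)) work per input.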