pub struct LoraLayer {
pub weight_a: Vec<Vec<f64>>,
pub weight_b: Vec<Vec<f64>>,
pub base_weight: Vec<Vec<f64>>,
pub config: LoraConfig,
pub merged: bool,
/* private fields */
}
A single LoRA-augmented weight matrix.
Holds the frozen base weight W (d x k) and the low-rank factors
B (d x r, init zeros) and A (r x k, init Gaussian).
Forward: output = input @ (W + scaling * B @ A)^T.
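The forward rule can be checked with plain nested-`Vec` arithmetic. The sketch below is standalone and illustrative (the helper names, dimensions, and values are made up; none of this is the crate's own code):

```rust
// Standalone sketch of output = input @ (W + scaling * B @ A)^T.
// All names and dimensions below are illustrative, not the crate's API.
fn matmul(x: &[Vec<f64>], y: &[Vec<f64>]) -> Vec<Vec<f64>> {
    let (n, m, p) = (x.len(), y.len(), y[0].len());
    let mut out = vec![vec![0.0; p]; n];
    for i in 0..n {
        for j in 0..p {
            for t in 0..m {
                out[i][j] += x[i][t] * y[t][j];
            }
        }
    }
    out
}

fn transpose(x: &[Vec<f64>]) -> Vec<Vec<f64>> {
    (0..x[0].len()).map(|j| x.iter().map(|row| row[j]).collect()).collect()
}

fn lora_forward_demo() -> Vec<Vec<f64>> {
    // d = 2, k = 3, r = 1, scaling = 0.5
    let w = vec![vec![1.0, 0.0, 0.0], vec![0.0, 1.0, 0.0]]; // base W (d, k)
    let b = vec![vec![2.0], vec![0.0]];                     // B (d, r)
    let a = vec![vec![0.0, 0.0, 1.0]];                      // A (r, k)
    let scaling = 0.5;

    // effective weight = W + scaling * B @ A
    let delta = matmul(&b, &a);
    let eff: Vec<Vec<f64>> = w.iter().zip(&delta)
        .map(|(wr, dr)| wr.iter().zip(dr).map(|(x, y)| x + scaling * y).collect())
        .collect();

    // input (n, k) @ eff^T -> output (n, d)
    matmul(&[vec![1.0, 2.0, 3.0]], &transpose(&eff))
}

fn main() {
    println!("{:?}", lora_forward_demo()); // [[4.0, 2.0]]
}
```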
Fields
weight_a: Vec<Vec<f64>>
A matrix (r x k), initialised with random Gaussian N(0, 1/r).
weight_b: Vec<Vec<f64>>
B matrix (d x r), initialised to zeros.
base_weight: Vec<Vec<f64>>
Original frozen weight (d x k).
config: LoraConfig
Configuration.
merged: bool
Whether delta W has been merged into base_weight.
Implementations
impl LoraLayer
pub fn new(base_weight: Vec<Vec<f64>>, config: LoraConfig) -> LoraResult<Self>
Create a new LoRA layer wrapping base_weight (d x k).
weight_a is initialised from N(0, 1/r) and weight_b from zeros,
so the initial delta W is the zero matrix.
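Because weight_b starts at zeros, B @ A is the zero matrix regardless of what the Gaussian draw puts in weight_a, so the freshly constructed layer computes exactly what the frozen base weight would. A quick standalone check of that property (illustrative values, not the crate API):

```rust
// With B = 0, every entry of delta = B @ A is a sum of terms 0 * a,
// so the initial LoRA update contributes nothing to the output.
fn delta_is_zero_at_init() -> bool {
    let (d, r, k) = (3, 2, 4);
    let b = vec![vec![0.0; r]; d]; // zero-initialised B (d, r)
    let a = vec![vec![0.7; k]; r]; // stand-in for the Gaussian A (r, k)
    (0..d).all(|i| {
        (0..k).all(|j| (0..r).map(|t| b[i][t] * a[t][j]).sum::<f64>() == 0.0)
    })
}

fn main() {
    assert!(delta_is_zero_at_init());
    println!("initial delta W is the zero matrix");
}
```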
pub fn effective_weight(&self) -> LoraResult<Vec<Vec<f64>>>
Compute the effective weight W + scaling * B @ A without mutating state.
When merged, returns base_weight (the delta is already folded in).
pub fn forward(&mut self, input: &[Vec<f64>]) -> LoraResult<Vec<Vec<f64>>>
Forward pass: output = input @ effective_weight^T.
input has shape (n, k) and the result has shape (n, d).
When not merged, applies optional dropout on the LoRA branch.
pub fn merge(&mut self) -> LoraResult<()>
Merge scaling * B @ A into base_weight.
pub fn unmerge(&mut self) -> LoraResult<()>
Remove scaling * B @ A from base_weight.
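In exact arithmetic, merge and unmerge are inverses: one adds scaling * B @ A into the stored weight, the other subtracts the same matrix, restoring the original base. A standalone numeric sketch of that round trip (illustrative values chosen to be binary-exact; not the crate's own code):

```rust
// Compute scaling * B @ A for plain nested-Vec matrices (illustrative helper).
fn scaled_delta(b: &[Vec<f64>], a: &[Vec<f64>], scaling: f64) -> Vec<Vec<f64>> {
    b.iter().map(|br| {
        (0..a[0].len()).map(|j| {
            scaling * br.iter().zip(a.iter()).map(|(bv, ar)| bv * ar[j]).sum::<f64>()
        }).collect()
    }).collect()
}

// Returns (merged weight, weight after unmerging again).
fn merge_unmerge_roundtrip() -> (Vec<Vec<f64>>, Vec<Vec<f64>>) {
    let w = vec![vec![1.0, 2.0], vec![3.0, 4.0]]; // base W (d, k)
    let b = vec![vec![1.0], vec![3.0]];           // B (d, r)
    let a = vec![vec![0.5, 0.25]];                // A (r, k)
    let delta = scaled_delta(&b, &a, 2.0);

    // merge: W += scaling * B @ A
    let merged: Vec<Vec<f64>> = w.iter().zip(&delta)
        .map(|(wr, dr)| wr.iter().zip(dr).map(|(x, y)| x + y).collect())
        .collect();
    // unmerge: W -= scaling * B @ A, recovering the original base weight
    let restored: Vec<Vec<f64>> = merged.iter().zip(&delta)
        .map(|(wr, dr)| wr.iter().zip(dr).map(|(x, y)| x - y).collect())
        .collect();
    (merged, restored)
}

fn main() {
    let (merged, restored) = merge_unmerge_roundtrip();
    println!("merged   = {:?}", merged);   // [[2.0, 2.5], [6.0, 5.5]]
    println!("restored = {:?}", restored); // [[1.0, 2.0], [3.0, 4.0]]
}
```

With f64 the restore is only bit-exact when the delta values round-trip cleanly; an implementation tracking the merged flag avoids double-merging or double-unmerging.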
pub fn trainable_params(&self) -> usize
Number of trainable parameters: r * (d + k).
pub fn total_params(&self) -> usize
Total parameter count: d * k + r * (d + k).
pub fn compression_ratio(&self) -> f64
Fraction of parameters that are trainable: trainable_params / total_params.
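For a concrete sense of scale: a 4096 x 4096 weight with rank r = 8 trains r * (d + k) = 65,536 parameters against d * k = 16,777,216 frozen ones, roughly 0.39% of the total. The arithmetic behind the three methods above, as a standalone sketch (the real methods read d, k, and r from the layer itself):

```rust
// Parameter accounting for one LoRA layer: W is (d, k), B is (d, r), A is (r, k).
fn param_counts(d: usize, k: usize, r: usize) -> (usize, usize, f64) {
    let trainable = r * (d + k);   // trainable_params: entries of B plus A
    let total = d * k + trainable; // total_params: frozen W plus the LoRA factors
    (trainable, total, trainable as f64 / total as f64) // compression_ratio
}

fn main() {
    let (trainable, total, ratio) = param_counts(4096, 4096, 8);
    println!("trainable = {trainable}, total = {total}, ratio = {ratio:.5}");
}
```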
Auto Trait Implementations
impl Freeze for LoraLayer
impl RefUnwindSafe for LoraLayer
impl Send for LoraLayer
impl Sync for LoraLayer
impl Unpin for LoraLayer
impl UnwindSafe for LoraLayer
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true.
Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true.
Converts self into a Right variant of Either<Self, Self> otherwise.