//! LoRA (Low-Rank Adaptation) and Adapter layers for parameter-efficient fine-tuning.
//!
//! This module provides implementations of several parameter-efficient fine-tuning (PEFT)
//! techniques:
//!
//! - **LoRA**: Decomposes weight updates into low-rank matrices, dramatically reducing
//!   the number of trainable parameters while maintaining model quality (see the
//!   example below).
//! - **DoRA**: Weight-decomposed LoRA that separately learns magnitude and direction.
//! - **AdaLoRA**: Adaptive rank allocation via SVD parameterisation and importance scoring.
//! - **IA³**: Infused Adapter by Inhibiting and Amplifying Inner Activations; learns
//!   element-wise scaling vectors and is extremely parameter-efficient.
//! - **VeRA**: Vector-based Random Matrix Adaptation; shares frozen random matrices
//!   across layers and learns only tiny per-layer scaling vectors.
//! - **Bottleneck Adapters**: Inserts small trainable bottleneck modules with optional
//!   residual connections.
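//!
//! # Example
//!
//! A minimal, self-contained sketch of the LoRA idea (independent of the layer
//! types re-exported below; shapes and values are illustrative only): the frozen
//! weight `W` is left untouched, and the learned update is the low-rank product
//! `(alpha / r) * B * A`, where `A` is `r x in` and `B` is `out x r`.
//!
//! ```
//! // Frozen base weight W (out = 2, in = 3) and a rank-1 update B * A, scaled by alpha / r.
//! let w = [[1.0_f32, 0.0, 0.0], [0.0, 1.0, 0.0]]; // frozen
//! let a = [[0.1_f32, 0.2, 0.3]];                  // trainable, r x in
//! let b = [[0.5_f32], [0.25]];                    // trainable, out x r
//! let (alpha, r) = (8.0_f32, 1.0_f32);
//! let x = [1.0_f32, 2.0, 3.0];
//!
//! // y = W * x + (alpha / r) * B * (A * x)
//! let ax: f32 = a[0].iter().zip(&x).map(|(ai, xi)| ai * xi).sum();
//! let y: Vec<f32> = (0..2usize)
//!     .map(|o| {
//!         let wx: f32 = w[o].iter().zip(&x).map(|(wi, xi)| wi * xi).sum();
//!         wx + (alpha / r) * b[o][0] * ax
//!     })
//!     .collect();
//!
//! // Only A and B are trained: r * (in + out) = 1 * (3 + 2) = 5 parameters
//! // instead of the full in * out = 6; the savings grow with layer size.
//! assert!((y[0] - 6.6).abs() < 1e-3);
//! assert!((y[1] - 4.8).abs() < 1e-3);
//! ```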
//!
//! # References
//!
//! - Hu et al., "LoRA: Low-Rank Adaptation of Large Language Models", 2021
//! - Houlsby et al., "Parameter-Efficient Transfer Learning for NLP", 2019
//! - Liu et al., "DoRA: Weight-Decomposed Low-Rank Adaptation", 2024
//! - Zhang et al., "Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning", 2023
//! - Liu et al., "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper…", 2022
//! - Kopiczko et al., "VeRA: Vector-Based Random Matrix Adaptation", 2024
// Public re-exports of the adapter layer types. The module paths, and every
// name other than `BottleneckAdapter` and `LoRALinear`, are assumed
// one-per-technique from the documentation above.
pub use adalora::AdaLoRALinear;
pub use adapter::BottleneckAdapter;
pub use dora::DoRALinear;
pub use ia3::IA3Linear;
pub use lora::LoRALinear;
pub use vera::VeRALinear;