//! LoRA configuration.

/// Configuration for a LoRA adapter layer.
///
/// LoRA (Low-Rank Adaptation, Hu et al., 2021) decomposes weight updates as
/// `dW = B @ A` where `B in R^{d x r}`, `A in R^{r x k}`, and `r << min(d, k)`.
/// The low-rank update is scaled by `alpha / rank` before being added to the frozen base weight.
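///
/// # Example
///
/// A minimal sketch of the adapter arithmetic described above (the numbers are
/// illustrative, not defaults of this crate, and the variable names are not part
/// of its API):
///
/// ```
/// let (d, k, rank) = (4096usize, 4096usize, 16usize);
/// let alpha = 32.0f64;
///
/// // Parameters added by the low-rank factors vs. a full `d x k` update.
/// let lora_params = rank * (d + k); // 131_072
/// let full_params = d * k;          // 16_777_216
/// assert!(lora_params < full_params);
///
/// // Effective scaling applied to `B @ A`.
/// let scale = alpha / rank as f64;
/// assert_eq!(scale, 2.0);
/// ```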