§axolotl-rs
YAML-driven configurable fine-tuning toolkit for LLMs.
This crate provides a user-friendly interface for fine-tuning language models, similar to the Python Axolotl project but in pure Rust.
§Features
- YAML Configuration - Define entire training runs in simple config files
- Multiple Adapters - Support for LoRA, QLoRA, and full fine-tuning
- Dataset Handling - Automatic loading and preprocessing
- Multi-GPU - Distributed training support (planned)
§Quick Start (CLI)
# Validate configuration
axolotl validate config.yaml
# Start training
axolotl train config.yaml
# Merge adapters
axolotl merge --config config.yaml --output ./merged-model
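For reference, the config.yaml consumed by these commands might look like the sketch below. This is a hypothetical example rather than a schema reference: the field names mirror the AxolotlConfig fields shown under Building Custom Configurations and assume the struct serializes to YAML with its field names unchanged.
base_model: meta-llama/Llama-2-7b-hf
adapter: lora
lora:
  r: 64
  alpha: 16
dataset:
  path: ./data/train.jsonl
training:
  epochs: 3
  learning_rate: 2e-4
output_dir: ./outputs
seed: 42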
§Quick Start (Library)
use axolotl_rs::{AxolotlConfig, Trainer};
// Load configuration from YAML file
let config = AxolotlConfig::from_file("config.yaml")?;
// Create trainer and start training
let mut trainer = Trainer::new(config)?;
trainer.train()?;
§Using Presets
use axolotl_rs::AxolotlConfig;
// Create mutable config from preset
let mut config = AxolotlConfig::from_preset("llama2-7b")?;
// Customize as needed
config.training.epochs = 5;
config.training.learning_rate = 1e-4;
§Building Custom Configurations
use axolotl_rs::{AxolotlConfig, TrainingConfig};
use axolotl_rs::config::{AdapterType, LoraSettings, DatasetConfig};
let config = AxolotlConfig {
base_model: "meta-llama/Llama-2-7b-hf".to_string(),
adapter: AdapterType::Lora,
lora: LoraSettings {
r: 64,
alpha: 16,
..Default::default()
},
quantization: None,
dataset: DatasetConfig {
path: "./data/train.jsonl".to_string(),
..Default::default()
},
training: TrainingConfig {
epochs: 3,
learning_rate: 2e-4,
..Default::default()
},
output_dir: "./outputs".to_string(),
seed: 42,
};
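A config assembled in code plugs into the same entry point as the YAML path; using only the re-exported Trainer from the Quick Start above:
let mut trainer = Trainer::new(config)?;
trainer.train()?;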
§Re-exports
pub use config::AxolotlConfig;
pub use config::TrainingConfig;
pub use error::AxolotlError;
pub use error::Result;
pub use trainer::Trainer;
§Modules
- adapters - Adapter integration layer.
- cli - CLI utilities.
- config - Configuration parsing and validation.
- dataset - Dataset loading and preprocessing.
- error - Error types for axolotl-rs.
- model - Model loading and adapter merging.
- normalization - Normalization layer wrappers with GPU support fallback.
- optimizer - Optimizer implementations (AdamW, SGD); see the AdamW sketch after this list.
- scheduler - Learning rate schedulers; see the warmup-plus-cosine sketch after this list.
- trainer - Training loop and optimization.
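For readers unfamiliar with the algorithms named above, here is a minimal, self-contained sketch of one AdamW update step with decoupled weight decay. The struct and function names are hypothetical illustrations, not the crate's optimizer API.
struct AdamWState {
    m: Vec<f32>, // first-moment (mean) estimate per parameter
    v: Vec<f32>, // second-moment (uncentered variance) estimate
    t: i32,      // update steps taken so far
}
fn adamw_step(
    params: &mut [f32],
    grads: &[f32],
    s: &mut AdamWState,
    lr: f32,
    beta1: f32,
    beta2: f32,
    eps: f32,
    weight_decay: f32,
) {
    s.t += 1;
    let bc1 = 1.0 - beta1.powi(s.t); // bias-correction terms
    let bc2 = 1.0 - beta2.powi(s.t);
    for i in 0..params.len() {
        s.m[i] = beta1 * s.m[i] + (1.0 - beta1) * grads[i];
        s.v[i] = beta2 * s.v[i] + (1.0 - beta2) * grads[i] * grads[i];
        let m_hat = s.m[i] / bc1;
        let v_hat = s.v[i] / bc2;
        // Decoupled weight decay: applied directly to the parameter
        // rather than folded into the gradient (the "W" in AdamW).
        params[i] -= lr * (m_hat / (v_hat.sqrt() + eps) + weight_decay * params[i]);
    }
}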
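Similarly, a common policy for a learning rate scheduler is linear warmup followed by cosine decay to zero; a sketch under that assumption (the function name and signature are illustrative, not the crate's API):
fn lr_at(step: usize, total_steps: usize, warmup_steps: usize, base_lr: f64) -> f64 {
    if step < warmup_steps {
        // Linear warmup from near zero up to base_lr.
        base_lr * (step + 1) as f64 / warmup_steps as f64
    } else {
        // Cosine decay from base_lr down to zero over the remaining steps.
        let progress = (step - warmup_steps) as f64 / (total_steps - warmup_steps).max(1) as f64;
        base_lr * 0.5 * (1.0 + (std::f64::consts::PI * progress).cos())
    }
}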