## Table of Contents

- What is Entrenar?
- Installation
- Getting Started
- Features
- Usage
- Architecture
- Documentation
- Contributing
## What is Entrenar?
Entrenar (Spanish: "to train") provides everything needed to train neural networks in Rust:
- Autograd Engine - Tape-based automatic differentiation
- Optimizers - SGD, Adam, AdamW with schedulers and gradient clipping
- LoRA/QLoRA - Parameter-efficient fine-tuning (4-bit quantized)
- Quantization - QAT, PTQ, GGUF-compatible Q4_0/Q8_0
- Model Merging - TIES, DARE, SLERP algorithms
- Knowledge Distillation - Multi-teacher, progressive layer-wise
- Training Loop - Callbacks, checkpoints, early stopping
- Monitoring - Real-time metrics, drift detection, Andon alerts
- Explainability - Feature attribution via SHAP, Integrated Gradients
Part of the PAIML Stack, built on trueno for SIMD-accelerated operations.
## Installation

```bash
# From crates.io (assumes the published crate name)
cargo add entrenar

# From source (repository URL assumed)
git clone https://github.com/paiml/entrenar
cd entrenar
cargo build --release
```
## Getting Started

Add to your `Cargo.toml`:

```toml
[dependencies]
entrenar = "0.2"
```
### Basic Training

Import paths below are reconstructed from the crate's module layout and may differ from the published API; treat them as a sketch.

```rust
use entrenar::Tensor;         // core tensor type (path assumed)
use entrenar::autograd::Tape; // tape-based autodiff (path assumed)
use entrenar::optim::Adam;    // Adam optimizer
```
### Declarative Configuration

```yaml
# train.yaml
model:
  path: base-model.gguf
data:
  train: train.parquet
  batch_size: 8
optimizer:
  name: adamw
  lr: 0.0001
lora:
  rank: 64
  alpha: 16
training:
  epochs: 10
  grad_clip: 1.0
```
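In standard LoRA, `alpha` and `rank` together determine the scaling factor α/r applied to the low-rank update. Assuming entrenar follows this common convention, the values above give a factor of 0.25; a minimal check:

```rust
// Common LoRA convention: the effective update is W + (alpha / rank) * B * A.
// With rank = 64 and alpha = 16 from the config above, the adapters are
// scaled by 16 / 64 = 0.25. (Convention assumed, not confirmed for entrenar.)
fn lora_scale(alpha: f32, rank: u32) -> f32 {
    alpha / rank as f32
}

fn main() {
    assert_eq!(lora_scale(16.0, 64), 0.25);
}
```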
## Features
### Autograd

Tape-based automatic differentiation with verified gradients. The op names come from the original; import paths and operands are illustrative:

```rust
use entrenar::autograd::{matmul, softmax, layer_norm, attention}; // paths assumed

let y = matmul(&x, &w);        // matrix multiplication
let s = softmax(&y);           // softmax activation
let n = layer_norm(&s);        // layer normalization
let a = attention(&q, &k, &v); // scaled dot-product attention
```
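The "verified gradients" claim refers to finite-difference checking (see the Quality table below). This self-contained sketch shows the technique itself, independent of entrenar's API:

```rust
// Finite-difference gradient check: compare an analytic derivative
// against the central-difference approximation (f(x+h) - f(x-h)) / 2h.
fn main() {
    let f = |x: f64| x * x * x;    // f(x)  = x^3
    let df = |x: f64| 3.0 * x * x; // f'(x) = 3x^2 (analytic gradient)

    let (x, h) = (1.5, 1e-5);
    let numeric = (f(x + h) - f(x - h)) / (2.0 * h);
    let analytic = df(x);

    // Central differences agree with the analytic value to O(h^2).
    assert!((numeric - analytic).abs() < 1e-8);
}
```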
### Optimizers

Constructor arguments and the scheduler type below are illustrative, not confirmed signatures:

```rust
use entrenar::optim::{SGD, Adam, AdamW, CosineScheduler}; // scheduler name assumed

let sgd = SGD::new(0.01);      // learning rate (argument assumed)
let adam = Adam::new(0.001);
let adamw = AdamW::new(0.001); // Adam with decoupled weight decay

// Learning rate scheduling
let scheduler = CosineScheduler::new(0.001, 1_000); // (base LR, total steps) assumed
```
### LoRA / QLoRA

Parameter-efficient fine-tuning with up to 99.75% parameter reduction. Type names and constructor arguments are illustrative:

```rust
use entrenar::lora::{LoRAConfig, QLoRAConfig}; // names assumed

// Standard LoRA: trainable low-rank adapters over frozen base weights
let lora = LoRAConfig::new(64, 16.0); // (rank, alpha) assumed

// QLoRA: 4-bit quantized base + FP16 adapters
// 7B model: 28 GB -> 3.5 GB memory for the base weights
let qlora = QLoRAConfig::new(64, 16.0);
```
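The headline reduction follows directly from the adapter shapes. A quick self-contained check (the 4096x4096 layer and rank are chosen for illustration):

```rust
// LoRA replaces a trainable d_out x d_in weight update with two low-rank
// factors B (d_out x r) and A (r x d_in), cutting trainable parameters
// from d_out * d_in down to r * (d_out + d_in).
fn main() {
    let (d_in, d_out, r) = (4096u64, 4096u64, 8u64);
    let full = d_in * d_out;       // 16,777,216 trainable parameters
    let lora = r * (d_in + d_out); //     65,536 trainable parameters
    let reduction = 100.0 * (1.0 - lora as f64 / full as f64);
    println!("{lora} vs {full} parameters ({reduction:.2}% fewer)"); // ~99.61%
    // Lower ranks (e.g. r = 4) push the reduction past 99.75%.
}
```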
### Quantization

`percentile` and `q4_0` come from the original; the type names are illustrative:

```rust
use entrenar::quant::{FakeQuantize, Calibrator, GGUFQuantizer}; // names assumed

// QAT with straight-through estimator
let fq = FakeQuantize::new(8); // bit width assumed

// Post-training quantization
let calibrator = Calibrator::percentile(99.9); // percentile value assumed

// GGUF export (llama.cpp compatible)
let quantizer = GGUFQuantizer::q4_0();
```
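Q4_0 is a block format: 32 weights share one FP16 scale, and each weight is stored as a 4-bit value. That works out to 4.5 bits per weight, which is where the memory savings come from:

```rust
// GGUF Q4_0 block: 32 weights as 4-bit values (16 bytes) plus one FP16
// scale (2 bytes) = 18 bytes per 32 weights.
fn main() {
    let weights_per_block = 32.0;
    let block_bytes = 16.0 + 2.0;
    let bits_per_weight = block_bytes * 8.0 / weights_per_block;
    println!("Q4_0: {bits_per_weight} bits/weight"); // 4.5
    println!("vs FP32: {:.1}x smaller", 32.0 / bits_per_weight); // ~7.1x
}
```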
### Model Merging

The `new(..).merge(..)` pattern comes from the original; type names and arguments are illustrative:

```rust
use entrenar::merge::{TIES, DARE, SLERP}; // names assumed

// TIES: trim low-magnitude deltas + sign election
let merged = TIES::new(0.2).merge(&models); // trim ratio assumed

// DARE: drop delta parameters at random + rescale survivors
let merged = DARE::new(0.9).merge(&models); // drop rate assumed

// SLERP: spherical interpolation between two checkpoints
let merged = SLERP::new(0.5).merge(&models); // interpolation t assumed
```
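SLERP interpolates along the great circle between two weight vectors instead of the straight line between them. A minimal generic implementation of the standard formula (independent of entrenar's API):

```rust
// slerp(a, b, t) = [sin((1-t)θ)·a + sin(tθ)·b] / sin(θ),
// where θ is the angle between vectors a and b.
fn slerp(a: &[f64], b: &[f64], t: f64) -> Vec<f64> {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let nb = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    let theta = (dot / (na * nb)).clamp(-1.0, 1.0).acos();
    if theta < 1e-8 {
        // Nearly parallel vectors: fall back to linear interpolation.
        return a.iter().zip(b).map(|(x, y)| x + t * (y - x)).collect();
    }
    let wa = ((1.0 - t) * theta).sin() / theta.sin();
    let wb = (t * theta).sin() / theta.sin();
    a.iter().zip(b).map(|(x, y)| wa * x + wb * y).collect()
}
```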
### Knowledge Distillation

`compute` and `weighted` come from the original; other names and arguments are illustrative:

```rust
use entrenar::distill::{KDLoss, TeacherEnsemble}; // names assumed

// Temperature-scaled KD loss
let kd = KDLoss::new(4.0, 0.7); // (temperature, alpha) assumed
let loss = kd.compute(&student_logits, &teacher_logits, &labels); // args assumed

// Multi-teacher ensemble
let ensemble = TeacherEnsemble::weighted(&teachers, &[0.6, 0.4]); // args assumed
```
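Temperature scaling is what makes distillation work: dividing logits by T > 1 softens the teacher's distribution so the student also learns inter-class similarities. The standard computation (not entrenar's API):

```rust
// Soft targets: softmax(z / T). Higher T spreads probability mass across
// non-argmax classes, which is the extra signal distillation transfers.
fn soft_targets(logits: &[f64], temperature: f64) -> Vec<f64> {
    let scaled: Vec<f64> = logits.iter().map(|z| z / temperature).collect();
    let max = scaled.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = scaled.iter().map(|z| (z - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    let logits = [5.0, 2.0, 1.0];
    println!("T=1: {:?}", soft_targets(&logits, 1.0)); // sharply peaked
    println!("T=4: {:?}", soft_targets(&logits, 4.0)); // much softer
}
```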
### Training Callbacks

The `trainer.add_callback(..)` calls come from the original; the callback type names are assumptions:

```rust
use entrenar::train::{
    EarlyStopping, ModelCheckpoint, LRMonitor, GradientMonitor, FeatureImportance,
}; // callback names assumed

trainer.add_callback(EarlyStopping::new(10));        // patience assumed
trainer.add_callback(ModelCheckpoint::new("ckpt/")); // path assumed
trainer.add_callback(LRMonitor::default());
trainer.add_callback(GradientMonitor::default());    // NaN/Inf detection

// Feature importance tracking
trainer.add_callback(FeatureImportance::default());
```
### Real-Time Monitoring

Toyota Way-inspired quality monitoring. Type names below are illustrative:

```rust
use entrenar::monitor::{MetricsCollector, DriftDetector, Andon, DriftStatus}; // names assumed

let mut collector = MetricsCollector::new();
let mut drift = DriftDetector::new();
let mut andon = Andon::new();

// Automatic drift detection and Andon alerts
if let DriftStatus::Drift = drift.check(&metric) { // enum and argument assumed
    andon.alert("metric drift detected");          // call assumed
}
```
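A drift check of this kind typically compares a recent window of a metric against a reference window. A minimal self-contained illustration of the idea (a simple z-test, not entrenar's detector):

```rust
// Flag drift when the recent mean sits more than `threshold` reference
// standard deviations away from the reference mean.
fn drifted(reference: &[f64], recent: &[f64], threshold: f64) -> bool {
    let mean = |xs: &[f64]| xs.iter().sum::<f64>() / xs.len() as f64;
    let m = mean(reference);
    let var = reference.iter().map(|x| (x - m).powi(2)).sum::<f64>()
        / reference.len() as f64;
    let sd = var.sqrt().max(1e-12);
    ((mean(recent) - m) / sd).abs() > threshold
}

fn main() {
    let reference = [1.0, 1.1, 0.9, 1.0, 1.05];
    assert!(!drifted(&reference, &[1.0, 1.02], 3.0)); // in range
    assert!(drifted(&reference, &[2.0, 2.1], 3.0));   // drift
}
```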
## Usage

### Programmatic

Use entrenar as a library: the Getting Started and Features sections above show the programmatic API.

### CLI Commands

The CLI groups its subcommands into training, model operations, benchmarking & monitoring, and shell completions. Run the binary with `--help` for the concrete commands and flags.
## Architecture

```text
entrenar/
├── autograd/   Tape-based automatic differentiation
├── optim/      SGD, Adam, AdamW, schedulers
├── lora/       LoRA, QLoRA fine-tuning
├── quant/      QAT, PTQ, GGUF quantization
├── merge/      TIES, DARE, SLERP merging
├── distill/    Knowledge distillation
├── train/      Trainer, callbacks, metrics
├── monitor/    Real-time monitoring, Andon
├── config/     Declarative YAML config
└── io/         Model persistence
```
## Quality
| Metric | Value |
|---|---|
| Tests | 2155 passing |
| Coverage | >90% |
| Property Tests | 200K+ iterations |
| Gradient Checking | Finite difference validated |
| Mutation Testing | >80% kill rate |
## PAIML Stack
| Library | Purpose | Version |
|---|---|---|
| trueno | SIMD tensor operations | 0.7.3 |
| entrenar | Training & optimization | 0.2.3 |
| aprender | ML algorithms & explainability | 0.14.0 |
| realizar | GGUF inference | 0.2.1 |
## Documentation
- API Reference
- Book - Comprehensive guide
- Roadmap - 53/53 tickets complete
## Contributing

Contributions welcome! Please follow the PAIML quality standards:

- Fork the repository
- Create a feature branch
- Ensure all tests pass: `cargo test`
- Run quality checks: `cargo clippy -- -D warnings && cargo fmt --check`
- Submit a pull request
## License
MIT License - see LICENSE for details.