§Entrenar: Training & Optimization Library
Entrenar provides a tape-based autograd engine with optimizers, LoRA/QLoRA, quantization (QAT/PTQ), model merging (TIES/DARE/SLERP), and knowledge distillation.
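"Tape-based" here means that each forward operation is recorded on a tape together with its local derivatives, and the backward pass replays that tape in reverse to accumulate gradients. The following self-contained scalar sketch illustrates the mechanism; it is deliberately independent of Entrenar's own `Tensor`/`Context`/`backward` API, whose exact signatures are not shown here.

```rust
// Minimal reverse-mode autodiff on a tape. Illustrative only: Entrenar's
// engine works on tensors, while this sketch records scalar operations.

/// One recorded operation: the tape indices of its inputs and the local
/// partial derivatives d(output)/d(input) captured during the forward pass.
struct TapeEntry {
    inputs: Vec<usize>,
    partials: Vec<f64>,
}

struct Tape {
    entries: Vec<TapeEntry>,
}

impl Tape {
    fn new() -> Self {
        Tape { entries: Vec::new() }
    }

    /// A leaf variable: nothing to propagate into.
    fn leaf(&mut self) -> usize {
        self.entries.push(TapeEntry { inputs: vec![], partials: vec![] });
        self.entries.len() - 1
    }

    /// Record c = a + b; both local partials are 1.
    fn add(&mut self, a: usize, b: usize) -> usize {
        self.entries.push(TapeEntry { inputs: vec![a, b], partials: vec![1.0, 1.0] });
        self.entries.len() - 1
    }

    /// Record c = a * b; the caller supplies the forward values va, vb,
    /// because d(ab)/da = b and d(ab)/db = a.
    fn mul(&mut self, a: usize, b: usize, va: f64, vb: f64) -> usize {
        self.entries.push(TapeEntry { inputs: vec![a, b], partials: vec![vb, va] });
        self.entries.len() - 1
    }

    /// Replay the tape in reverse, accumulating gradients (the backward pass).
    fn backward(&self, output: usize) -> Vec<f64> {
        let mut grads = vec![0.0; self.entries.len()];
        grads[output] = 1.0;
        for i in (0..self.entries.len()).rev() {
            let g = grads[i];
            for (j, &input) in self.entries[i].inputs.iter().enumerate() {
                grads[input] += self.entries[i].partials[j] * g;
            }
        }
        grads
    }
}

fn main() {
    // z = (x + y) * x with x = 2, y = 3  =>  dz/dx = 2x + y = 7, dz/dy = x = 2
    let (x, y) = (2.0, 3.0);
    let mut tape = Tape::new();
    let ix = tape.leaf();
    let iy = tape.leaf();
    let isum = tape.add(ix, iy);
    let iz = tape.mul(isum, ix, x + y, x);
    let grads = tape.backward(iz);
    assert_eq!((grads[ix], grads[iy]), (7.0, 2.0));
}
```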
§Architecture
- autograd: Tape-based automatic differentiation
- optim: Optimizers (SGD, Adam, AdamW)
- lora: Low-rank adaptation with QLoRA support (a minimal merge-step sketch follows this list)
- quant: Quantization-aware training and post-training quantization
- merge: Model merging methods
- distill: Knowledge distillation
- config: Declarative YAML configuration
- train: High-level training loop
- io: Model saving and loading (JSON, YAML formats)
- hf_pipeline: HuggingFace model fetching and distillation
- citl: Compiler-in-the-Loop training with RAG-based fix suggestions (feature-gated)
- efficiency: Cost tracking, device detection, and performance benchmarking
- eval: Model evaluation framework with metrics, comparison, and drift detection
- sovereign: Air-gapped deployment and distribution packaging
- research: Academic research artifacts, citations, and archive deposits
- ecosystem: PAIML stack integrations (Batuta, Realizar, Ruchy)
- dashboard: Real-time training monitoring and WASM bindings
- yaml_mode: Declarative YAML Mode Training (v1.0 spec)
- transformer: Transformer layers with autograd support
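As a concrete picture of what the lora module computes: LoRA freezes the base weight matrix W (d_out × d_in) and trains a low-rank pair B (d_out × r) and A (r × d_in), so that the effective weight is W + (α/r)·BA. The dependency-free sketch below shows that merge step; `merge_lora` and the matrix layout are illustrative assumptions, not Entrenar's API.

```rust
// LoRA merge step: W_eff = W + (alpha / r) * B * A, with r = rank.
// Plain Vec<Vec<f64>> matrices keep the sketch self-contained.

fn matmul(a: &[Vec<f64>], b: &[Vec<f64>]) -> Vec<Vec<f64>> {
    let (n, k, m) = (a.len(), b.len(), b[0].len());
    let mut out = vec![vec![0.0; m]; n];
    for i in 0..n {
        for p in 0..k {
            for j in 0..m {
                out[i][j] += a[i][p] * b[p][j];
            }
        }
    }
    out
}

/// Fold a trained adapter (B, A) back into the frozen base weight W.
fn merge_lora(w: &[Vec<f64>], b: &[Vec<f64>], a: &[Vec<f64>], alpha: f64) -> Vec<Vec<f64>> {
    let r = a.len() as f64; // rank = number of rows of A
    let delta = matmul(b, a); // (d_out x r) * (r x d_in) = d_out x d_in
    w.iter()
        .zip(&delta)
        .map(|(wrow, drow)| {
            wrow.iter().zip(drow).map(|(wij, dij)| wij + (alpha / r) * dij).collect()
        })
        .collect()
}

fn main() {
    let w = vec![vec![0.0; 2]; 2];            // frozen base weight, 2 x 2
    let b = vec![vec![1.0], vec![0.0]];       // d_out x r = 2 x 1
    let a = vec![vec![0.5, 0.5]];             // r x d_in = 1 x 2
    let merged = merge_lora(&w, &b, &a, 2.0); // alpha = 2, r = 1
    assert_eq!(merged[0], vec![1.0, 1.0]);
}
```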
§Re-exports
pub use autograd::backward;
pub use autograd::Context;
pub use autograd::Tensor;
pub use error::Error;
pub use error::Result;
§Modules
- autograd: Tape-based autograd engine
- config: Declarative YAML configuration
- dashboard: Dashboard Module (Phase 2: ENT-003, ENT-004)
- distill: Knowledge Distillation (loss sketched at the end of this page)
- ecosystem: Ecosystem Integration (Phase 9)
- efficiency: Efficiency & Cost Tracking Module (ENT-008 through ENT-012)
- error: Error types for Entrenar
- eval: Model Evaluation Framework (APR-073)
- generative: Generative Models for Code Synthesis
- hf_pipeline: HuggingFace Distillation & Learning Pipeline
- integrity: Behavioral Integrity & Lineage Module (ENT-013, ENT-014, ENT-015)
- io: Model I/O (loading and saving models)
- lora: LoRA (Low-Rank Adaptation) implementation
- merge: Model merging methods (TIES, DARE, SLERP; SLERP is sketched at the end of this page)
- monitor: Real-time Training Monitoring Module
- optim: Optimizers for training neural networks
- prune: Neural network pruning integration for Entrenar
- quality: Quality Gates Module (ENT-005, ENT-006, ENT-007)
- quant: Quantization, QAT and PTQ (the PTQ arithmetic is sketched at the end of this page)
- research: Academic Research Artifacts (Phase 7)
- run: Run struct with Renacer integration (ENT-002)
- search: MCTS (Monte Carlo Tree Search) for Code Generation
- server: REST/HTTP API Server (#67)
- sovereign: Sovereign Deployment Module (ENT-016 through ENT-018)
- storage: Experiment Storage Module (ENT-001)
- tokenizer: Subword Tokenization Module (#26)
- train: High-level training loop
- transformer: Transformer layers with automatic differentiation support
- yaml_mode: YAML Mode Training (declarative, no-code training interface)
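The knowledge-distillation loss behind the distill module is commonly the Hinton-style blend of a temperature-softened KL term with the ordinary hard-label cross-entropy: L = α·T²·KL(softmax(z_t/T) ‖ softmax(z_s/T)) + (1−α)·CE(y, softmax(z_s)). The sketch below implements that formula; the function names are illustrative assumptions, not Entrenar's API.

```rust
// Soft-target knowledge distillation loss (Hinton et al., 2015), standalone.

/// Softmax with temperature; subtracting the max keeps the exponentials stable.
fn softmax(logits: &[f64], temperature: f64) -> Vec<f64> {
    let max = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = logits.iter().map(|z| ((z - max) / temperature).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.into_iter().map(|e| e / sum).collect()
}

/// KL(p || q) for discrete distributions.
fn kl_div(p: &[f64], q: &[f64]) -> f64 {
    p.iter()
        .zip(q)
        .map(|(pi, qi)| if *pi > 0.0 { pi * (pi / qi).ln() } else { 0.0 })
        .sum()
}

fn distillation_loss(
    teacher_logits: &[f64],
    student_logits: &[f64],
    hard_label: usize,
    temperature: f64,
    alpha: f64,
) -> f64 {
    let p_teacher = softmax(teacher_logits, temperature);
    let p_student = softmax(student_logits, temperature);
    // T^2 compensates for the 1/T^2 that the temperature puts into the gradients.
    let soft = temperature * temperature * kl_div(&p_teacher, &p_student);
    // Hard-label cross-entropy at temperature 1.
    let hard = -softmax(student_logits, 1.0)[hard_label].ln();
    alpha * soft + (1.0 - alpha) * hard
}

fn main() {
    let loss = distillation_loss(&[2.0, 0.5], &[1.0, 1.0], 0, 2.0, 0.5);
    assert!(loss > 0.0);
}
```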
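Of the merge methods, SLERP is the most geometric: it interpolates two (flattened) parameter vectors along the great circle between them, slerp(a, b, t) = [sin((1−t)Ω)·a + sin(tΩ)·b] / sin Ω, where Ω is the angle between a and b. A minimal standalone implementation, with the usual fallback to linear interpolation when the vectors are nearly parallel:

```rust
// SLERP between two parameter vectors: the core of SLERP model merging.

fn dot(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn slerp(a: &[f64], b: &[f64], t: f64) -> Vec<f64> {
    // Angle between the vectors, from the normalized dot product.
    let cos_omega = (dot(a, b) / (dot(a, a).sqrt() * dot(b, b).sqrt())).clamp(-1.0, 1.0);
    let omega = cos_omega.acos();
    if omega < 1e-7 {
        // Nearly parallel: SLERP degenerates to plain LERP.
        return a.iter().zip(b).map(|(x, y)| (1.0 - t) * x + t * y).collect();
    }
    let wa = ((1.0 - t) * omega).sin() / omega.sin();
    let wb = (t * omega).sin() / omega.sin();
    a.iter().zip(b).map(|(x, y)| wa * x + wb * y).collect()
}

fn main() {
    // Midpoint between two orthogonal unit vectors lies back on the unit circle.
    let merged = slerp(&[1.0, 0.0], &[0.0, 1.0], 0.5);
    assert!((merged[0] - 0.5f64.sqrt()).abs() < 1e-9);
}
```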
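On the quant side, post-training quantization ultimately reduces to an affine map: pick a scale s and zero-point z from the observed value range, store q = clamp(round(x/s) + z), and reconstruct x̂ = s·(q − z). A standalone u8 sketch of that arithmetic (not Entrenar's internal representation):

```rust
// Asymmetric affine PTQ to u8: quantize with a scale and zero-point, then
// reconstruct. The range is widened to include 0 so zero quantizes exactly.

fn quantize_u8(values: &[f32]) -> (Vec<u8>, f32, u8) {
    let min = values.iter().cloned().fold(0.0f32, f32::min);
    let max = values.iter().cloned().fold(0.0f32, f32::max);
    // Guard against an all-zero input, where the range would collapse.
    let scale = ((max - min) / 255.0).max(f32::EPSILON);
    let zero_point = (-min / scale).round() as u8;
    let q = values
        .iter()
        .map(|&x| ((x / scale).round() + zero_point as f32).clamp(0.0, 255.0) as u8)
        .collect();
    (q, scale, zero_point)
}

fn dequantize(q: &[u8], scale: f32, zero_point: u8) -> Vec<f32> {
    q.iter().map(|&v| scale * (v as f32 - zero_point as f32)).collect()
}

fn main() {
    let weights = [-0.4f32, 0.0, 0.9, 0.25];
    let (q, scale, zero_point) = quantize_u8(&weights);
    let restored = dequantize(&q, scale, zero_point);
    // Round-trip error is bounded by half a quantization step per element.
    for (orig, rec) in weights.iter().zip(&restored) {
        assert!((orig - rec).abs() <= scale / 2.0 + f32::EPSILON);
    }
}
```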