# MLMF - Machine Learning Model Files

[![Crates.io](https://img.shields.io/crates/v/mlmf.svg)](https://crates.io/crates/mlmf)
[![Documentation](https://docs.rs/mlmf/badge.svg)](https://docs.rs/mlmf)
[![License](https://img.shields.io/badge/license-MIT%20OR%20Apache--2.0-blue.svg)](https://github.com/CireSnave/mlmf#license)

**MLMF** (Machine Learning Model Files) is a Rust crate for working with ML model files. It provides loading, saving, conversion, and dynamic tensor-name mapping for transformer models across multiple formats, including SafeTensors, GGUF, ONNX, PyTorch, and AWQ, replacing per-format boilerplate with a single, efficient API for model file operations.

## Features

- πŸ—οΈ **Architecture Detection**: Automatically detects model architecture (LLaMA, GPT-2, GPT-NeoX) from tensor names
- πŸ“¦ **Multiple Formats**: Comprehensive support for SafeTensors, GGUF, ONNX, PyTorch, and AWQ formats
- πŸ—ΊοΈ **Name Mapping**: Intelligent tensor name mapping between HuggingFace and custom formats
- πŸ’Ύ **Memory Efficient**: Memory-mapped loading for large models (30GB+)
- πŸ”§ **Device Management**: Automatic CUDA detection with CPU fallback
- πŸ“Š **Progress Reporting**: Optional progress callbacks for long-running operations
- πŸ›‘οΈ **Type Safety**: Comprehensive error handling with detailed context
- πŸ”„ **Model Conversion**: Direct format conversion with batch processing and progress tracking

## Quick Start

Add `mlmf` to your `Cargo.toml`:

```toml
[dependencies]
mlmf = "0.1"
```

### Basic Usage

```rust
use mlmf::{LoadOptions, loader};
use candle_core::{Device, DType};

// Load a LLaMA model from SafeTensors
let device = Device::cuda_if_available(0).unwrap_or(Device::Cpu);
let options = LoadOptions {
    device: device.clone(),
    dtype: DType::F16,
    use_mmap: true,
    validate_cuda: false,
    progress: Some(mlmf::progress::default_progress()),
};

let loaded_model = loader::load_safetensors("./models/llama-7b", options)?;

// Access components
let var_builder = loaded_model.var_builder;
let config = loaded_model.config;
let name_mapper = loaded_model.name_mapper;

// Use name mapper to convert HF names to your format
if let Some(mapped_name) = name_mapper.map_name("model.layers.0.self_attn.q_proj.weight") {
    println!("Mapped name: {}", mapped_name);
}
```
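
The `var_builder` can be handed directly to model-construction code. Continuing from the snippet above, here is a minimal sketch that assumes `var_builder` is a standard `candle_nn::VarBuilder` and that these snippets run in a `Result`-returning context (the `4096` hidden size is illustrative; read the real value from `config`):

```rust
// Scope the builder to one attention block; `pp` pushes a name prefix.
let vb = var_builder.pp("model.layers.0.self_attn");

// Fetch the query projection weight, stating the shape we expect.
// 4096 is illustrative; take the real hidden size from `config`.
let hidden_size = 4096;
let q_proj = vb.get((hidden_size, hidden_size), "q_proj.weight")?;
println!("q_proj shape: {:?}", q_proj.shape());
```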

### Architecture Detection

```rust
use mlmf::name_mapping::{TensorNameMapper, Architecture};

let tensor_names = vec![
    "model.embed_tokens.weight".to_string(),
    "model.layers.0.self_attn.q_proj.weight".to_string(),
    "model.norm.weight".to_string(),
];

let mapper = TensorNameMapper::from_tensor_names(&tensor_names)?;
assert_eq!(mapper.architecture(), Architecture::LLaMA);
```

### Model Conversion

```rust
use mlmf::conversion::{convert_model, ConversionFormat, ConversionOptions};
use std::path::Path;

// Convert from SafeTensors to ONNX
let options = ConversionOptions::default();
let result = convert_model(
    Path::new("model.safetensors"),
    Path::new("model.onnx"),
    ConversionFormat::ONNX,
    options,
)?;

println!("Conversion completed in {:.2}s", result.duration.as_secs_f64());
```
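
The same call works for batch jobs by looping over inputs. A minimal sketch reusing only the `convert_model` API shown above (the file names are illustrative):

```rust
use mlmf::conversion::{convert_model, ConversionFormat, ConversionOptions};
use std::path::Path;

// Convert each SafeTensors file to ONNX next to the original.
let inputs = ["llama-7b.safetensors", "gpt2.safetensors"];
for input in inputs {
    let output = Path::new(input).with_extension("onnx");
    let result = convert_model(
        Path::new(input),
        &output,
        ConversionFormat::ONNX,
        ConversionOptions::default(),
    )?;
    println!("{input} -> {} in {:.2}s", output.display(), result.duration.as_secs_f64());
}
```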

## Architecture

MLMF provides a modular architecture with the following components:

- **`loader`**: High-level loading API
- **`conversion`**: Direct model format conversion with batch processing
- **`name_mapping`**: Architecture detection and tensor name mapping
- **`config`**: HuggingFace config parsing with field aliases (see the sketch after this list)
- **`formats`**: Format-specific loaders and exporters (SafeTensors, GGUF, ONNX, PyTorch, AWQ)
- **`validation`**: CUDA validation and dtype checking
- **`progress`**: Progress reporting utilities
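
The "field aliases" in `config` deal with HuggingFace configs naming the same quantity differently across architectures (`hidden_size` vs. `n_embd`, `num_hidden_layers` vs. `n_layer`). Here is a minimal, self-contained sketch of the technique using `serde`; it illustrates the idea only and does not use mlmf's actual types:

```rust
use serde::Deserialize;

// GPT-2 configs say `n_embd`/`n_layer`; LLaMA configs say
// `hidden_size`/`num_hidden_layers`. Serde aliases collapse both
// spellings onto one canonical field.
#[derive(Debug, Deserialize)]
struct Config {
    #[serde(alias = "n_embd")]
    hidden_size: usize,
    #[serde(alias = "n_layer")]
    num_hidden_layers: usize,
}

fn main() -> serde_json::Result<()> {
    let llama: Config = serde_json::from_str(r#"{"hidden_size": 4096, "num_hidden_layers": 32}"#)?;
    let gpt2: Config = serde_json::from_str(r#"{"n_embd": 768, "n_layer": 12}"#)?;
    println!("{llama:?}\n{gpt2:?}");
    Ok(())
}
```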

## Supported Models

- **LLaMA Family**: LLaMA 2/3, TinyLlama, Qwen, Mistral
- **GPT Family**: GPT-2, GPT-J
- **GPT-NeoX Family**: GPT-NeoX, Pythia, StableLM
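
Detection works the same way for the other families. A short sketch with GPT-2 style tensor names; note that the exact `Architecture` variant name for GPT-2 is an assumption here, so check the crate docs:

```rust
use mlmf::name_mapping::{TensorNameMapper, Architecture};

// GPT-2 checkpoints use `wte`/`h.N`/`ln_f` naming rather than
// LLaMA's `model.layers.N` scheme.
let tensor_names = vec![
    "wte.weight".to_string(),
    "h.0.attn.c_attn.weight".to_string(),
    "ln_f.weight".to_string(),
];

let mapper = TensorNameMapper::from_tensor_names(&tensor_names)?;
// `Architecture::GPT2` is assumed; the variant may be spelled differently.
assert_eq!(mapper.architecture(), Architecture::GPT2);
```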

## Examples

See the [`examples/`](examples/) directory for complete working examples:

- [`load_llama.rs`](examples/load_llama.rs) - Loading LLaMA models from SafeTensors
- [`load_gpt2.rs`](examples/load_gpt2.rs) - Loading GPT-2 models
- [`load_gguf.rs`](examples/load_gguf.rs) - Loading quantized GGUF models

## Performance

MLMF is optimized for performance:

- **Memory-mapped loading**: Loads 70B models (130GB) in ~10 seconds
- **Architecture detection**: Typically completes in <100ms
- **Zero-copy**: Direct tensor access without unnecessary copying
- **Incremental builds**: Changes compile in <10 seconds

## Contributing

Contributions are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

## License

This project is licensed under either of:

- Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE))
- MIT License ([LICENSE-MIT](LICENSE-MIT))

at your option.