pmetal 0.2.0


PMetal

High-performance LLM fine-tuning framework for Apple Silicon.

This crate re-exports the PMetal sub-crates behind feature flags for convenient single-dependency usage:

```toml
[dependencies]
# Default features: core, gguf, metal, hub, mlx, models, lora, trainer
pmetal = "0.2"

# Or enable everything:
# pmetal = { version = "0.2", features = ["full"] }
```

Feature Flags

| Feature | Crate | Default |
|---------|-------|---------|
| `core` | `pmetal-core` | yes |
| `gguf` | `pmetal-gguf` | yes |
| `metal` | `pmetal-metal` | yes |
| `hub` | `pmetal-hub` | yes |
| `mlx` | `pmetal-mlx` | yes |
| `models` | `pmetal-models` | yes |
| `lora` | `pmetal-lora` | yes |
| `trainer` | `pmetal-trainer` | yes |
| `data` | `pmetal-data` | no |
| `distill` | `pmetal-distill` | no |
| `merge` | `pmetal-merge` | no |
| `vocoder` | `pmetal-vocoder` | no |
| `mhc` | `pmetal-mhc` | no |
| `easy` | all training/inference crates | yes |
| `full` | all of the above | no |
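Non-default features are enabled per-dependency in the usual Cargo way. As a sketch, pulling in the optional data-pipeline and model-merging crates on top of the defaults (feature names taken from the table above) might look like:

```toml
[dependencies]
pmetal = { version = "0.2", features = ["data", "merge"] }
```

Disabling `default-features` and listing only what you need also works, at the cost of spelling out each feature explicitly.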

Easy API

The `easy` module provides high-level builders for common workflows:

```rust
async fn example() -> Result<(), Box<dyn std::error::Error>> {
    // Fine-tune a model in one call
    let result = pmetal::easy::finetune("Qwen/Qwen3-0.6B", "data.jsonl")
        .lora(16, 32.0)
        .epochs(3)
        .learning_rate(2e-4)
        .output("./output")
        .run()
        .await?;

    // Run inference with the trained LoRA adapter
    let result = pmetal::easy::infer("Qwen/Qwen3-0.6B")
        .lora("./output/lora_weights.safetensors")
        .generate("What is 2+2?")
        .await?;
    Ok(())
}
```