# hf-fetch-model
A Rust library and CLI for downloading HuggingFace models at maximum speed. Multi-connection parallel downloads, file filtering, checksum verification, automatic retries — and a search command to find models before you download them.
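Multi-connection downloads work by issuing HTTP Range requests for disjoint byte spans of the same file and fetching them concurrently. A minimal sketch of the range-splitting step — illustrative only, not the crate's actual implementation:

```rust
/// Split a file of `len` bytes into contiguous byte ranges
/// (inclusive start, exclusive end), one per connection.
fn split_ranges(len: u64, parts: u64) -> Vec<(u64, u64)> {
    assert!(parts > 0, "need at least one part");
    // Ceiling division so the ranges cover the whole file; never zero,
    // because `step_by` panics on a zero step.
    let chunk = ((len + parts - 1) / parts).max(1);
    (0..len)
        .step_by(chunk as usize)
        .map(|start| (start, (start + chunk).min(len)))
        .collect()
}

fn main() {
    // A 1000-byte file over 4 connections: four 250-byte ranges.
    println!("{:?}", split_ranges(1000, 4));
}
```

Each range then maps directly onto a `Range: bytes=start-end` request header, and the responses are written into the file at their respective offsets.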
## Install

## Try it
```console
$ hf-fm search mistral,3B,instruct
Models matching "mistral,3B,instruct" (by downloads):
  hf-fm mistralai/Ministral-3-3B-Instruct-2512       (159.7K downloads)
  hf-fm mistralai/Ministral-3-3B-Instruct-2512-BF16  (62.6K downloads)
  hf-fm mistralai/Ministral-3-3B-Instruct-2512-GGUF  (32.7K downloads)
  ...

$ hf-fm search mistralai/Ministral-3-3B-Instruct-2512 --exact
Exact match:
  hf-fm mistralai/Ministral-3-3B-Instruct-2512  (159.7K downloads)
  License:   apache-2.0
  Pipeline:  text-generation
  Library:   vllm
  Languages: en, fr, es, de, it, pt, nl, zh, ja, ko, ar

$ hf-fm list-files mistralai/Ministral-3-3B-Instruct-2512 --preset safetensors
File                              Size      SHA256
model-00001-of-00002.safetensors  3.68 GiB  a1b2c3d4e5f6
model-00002-of-00002.safetensors  2.88 GiB  f6e5d4c3b2a1
config.json                       856 B     —
...
7 files, 6.57 GiB total

$ hf-fm mistralai/Ministral-3-3B-Instruct-2512 --preset safetensors --dry-run
Repo:     mistralai/Ministral-3-3B-Instruct-2512
Revision: main
File                              Size      Status
model-00001-of-00002.safetensors  3.68 GiB  to download
model-00002-of-00002.safetensors  2.88 GiB  to download
...
Total: 6.57 GiB (7 files, 0 cached, 7 to download)
Recommended config:
  concurrency:      2
  connections/file: 8
  chunk threshold:  100 MiB

$ hf-fm mistralai/Ministral-3-3B-Instruct-2512 --preset safetensors
Downloaded to: ~/.cache/huggingface/hub/models--mistralai--Ministral-3-3B.../snapshots/...
```
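The dry-run output recommends a concurrency/connections split. One plausible heuristic for producing such a recommendation — sketched purely as an illustration, not necessarily how hf-fetch-model computes it — is to give large files multiple connections while capping the total socket count:

```rust
/// Hypothetical tuning heuristic: given file sizes and a total socket
/// budget, pick (files downloaded at once, connections per large file).
/// Files below `chunk_threshold` would each use a single connection.
fn recommend(sizes: &[u64], socket_budget: u32, chunk_threshold: u64) -> (u32, u32) {
    let large = sizes.iter().filter(|&&s| s >= chunk_threshold).count() as u32;
    if large == 0 {
        // All small files: download several at once, one connection each.
        return (socket_budget.min(sizes.len() as u32).max(1), 1);
    }
    // Download large files a few at a time, splitting the budget among them.
    let concurrency = large.min(2);
    (concurrency, (socket_budget / concurrency).max(1))
}

fn main() {
    // Two multi-GiB shards plus a tiny config.json, 16-socket budget.
    let sizes = [3_951_369_912, 3_092_376_453, 856];
    println!("{:?}", recommend(&sizes, 16, 100 * 1024 * 1024));
}
```

With the repository above (two safetensors shards over the 100 MiB threshold) and a budget of 16 sockets, this toy heuristic lands on 2 concurrent files with 8 connections each, matching the shape of the dry-run output.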
## Library quick start

```rust
// `download` is a configured download future built via the builder (setup elided).
let outcome = download.await?;
println!("{outcome:?}"); // assumes the outcome type derives Debug
```

Filter, progress, auth, and more via the builder — see Configuration.
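The retries mentioned in the feature list are commonly implemented as capped exponential backoff. A generic sketch of such a delay schedule — illustrative, not the crate's actual retry policy:

```rust
use std::time::Duration;

/// Capped exponential backoff: 500 ms, 1 s, 2 s, 4 s, ... up to `cap`.
fn backoff_delay(attempt: u32, cap: Duration) -> Duration {
    let base = Duration::from_millis(500);
    // Clamp the shift to avoid overflow, then cap the resulting delay.
    base.checked_mul(1u32 << attempt.min(16)).unwrap_or(cap).min(cap)
}

fn main() {
    for attempt in 0..5 {
        println!("attempt {attempt}: wait {:?}", backoff_delay(attempt, Duration::from_secs(10)));
    }
}
```

In practice a small random jitter is usually added to each delay so that parallel connections retrying after the same failure do not stampede the server at once.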
## Documentation
| Topic | Covers |
|---|---|
| CLI Reference | All subcommands, flags, and output examples |
| Search | Comma filtering, --exact, model card metadata |
| Configuration | Builder API, presets, progress callbacks |
| Architecture | How hf-fetch-model relates to hf-hub and candle-mi |
| Diagnostics | --verbose output, tracing setup for library users |
| Changelog | Release history and migration notes |
## Used by
- candle-mi — Mechanistic interpretability toolkit for transformer models
## License

Licensed under either the Apache License, Version 2.0 or the MIT License, at your option.
## Development
- Exclusively developed with Claude Code (dev) and Augment Code (review)
- Git workflow managed with Fork
- All code follows CONVENTIONS.md, derived from Amphigraphic-Strict's Grit — a strict Rust subset designed to improve AI coding accuracy.