onde — On-device inference abstraction layer for SplitFire AI.
This crate centralises all mistral.rs-backed model management so that individual Tauri apps remain thin command wrappers:
- `inference::models` — model ID constants and rich metadata used across the app (download size, display name, org, description).
- `inference::token` — HuggingFace token resolution (build-time literal or on-disk cache; required on iOS, where the filesystem is sandboxed).
- `hf_cache` — HuggingFace hub cache inspection, repair, and model download with a progress-callback API that is decoupled from Tauri.
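A Tauri-decoupled progress callback typically means the download routine reports through a plain closure rather than emitting Tauri events directly, so a CLI, a test, or a Tauri command can each decide how to surface progress. A minimal sketch of that shape — `DownloadProgress` and `download_with_progress` are illustrative names, not the crate's real API, and the download itself is simulated:

```rust
/// Progress event emitted during a (simulated) model download.
#[derive(Debug, Clone, Copy)]
pub struct DownloadProgress {
    pub bytes_done: u64,
    pub bytes_total: u64,
}

/// Simulates a chunked download, reporting progress through a plain
/// closure so the caller decides how to surface it (Tauri event,
/// progress bar, log line, ...).
pub fn download_with_progress<F>(total: u64, chunk: u64, mut on_progress: F)
where
    F: FnMut(DownloadProgress),
{
    let mut done = 0;
    while done < total {
        done = (done + chunk).min(total);
        on_progress(DownloadProgress { bytes_done: done, bytes_total: total });
    }
}

fn main() {
    let mut seen = Vec::new();
    download_with_progress(10, 4, |p| seen.push(p.bytes_done));
    println!("{:?}", seen); // → [4, 8, 10]
}
```

Because the callback is an ordinary `FnMut`, the same function can back a Tauri command (which forwards events to the frontend) and a headless test (which just collects them), which is the decoupling the crate description points at.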
## Re-exports
mistralrs, hf_hub, and mistralrs_core are re-exported so that apps
depending on onde do not need their own direct dependency on those crates.
Access them as onde::mistralrs, onde::hf_hub, and onde::mistralrs_core.
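The facade pattern behind this can be sketched with a local module standing in for the real `mistralrs` dependency (all names here are hypothetical stand-ins; in the real crate the re-export would be `pub use mistralrs;` and friends):

```rust
/// Stand-in for the `onde` facade crate.
pub mod onde_like {
    // In the real crate this line would be `pub use mistralrs;`,
    // exposing the dependency at `onde::mistralrs` so downstream
    // apps need no direct dependency of their own.
    pub mod mistralrs {
        pub fn version() -> &'static str {
            "0.0-stub"
        }
    }
}

fn main() {
    // Downstream code reaches the dependency through the facade path,
    // mirroring `onde::mistralrs` in the real crate.
    println!("{}", onde_like::mistralrs::version()); // → 0.0-stub
}
```

Routing access through the facade also keeps the dependency's version pinned in one place, so apps cannot drift onto an incompatible `mistralrs` release.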