Rust ONNX inference for LiquidAI LFM2.5-VL (vision-language) models.
See docs/superpowers/specs/2026-05-03-lfm-vlm-wrapper-design.md
for the full design rationale.
§Model weights license
This crate is dual-licensed under MIT OR Apache-2.0. The model
weights it wraps are NOT — LFM2.5-VL-450M ships under the LFM
Open License v1.0 (lfm1.0, see https://www.liquid.ai/lfm-license).
Verify your use case complies with Liquid AI’s terms separately
from this crate’s license.
Re-exports§
pub use chat_template::BOS;
pub use chat_template::BOS_TOKEN_ID;
pub use chat_template::EOS_TOKEN_ID;
pub use chat_template::IM_END;
pub use chat_template::IM_START;
pub use chat_template::IMAGE_END;
pub use chat_template::IMAGE_START;
pub use chat_template::IMAGE_THUMBNAIL;
pub use chat_template::IMAGE_TOKEN;
pub use chat_template::IMAGE_TOKEN_ID;
pub use chat_template::ImagePlaceholderInfo;
pub use chat_template::PAD;
pub use chat_template::PAD_TOKEN_ID;
pub use chat_template::TOOL_CALL_END;
pub use chat_template::TOOL_CALL_START;
pub use chat_template::expand_image_placeholders;
pub use chat_template::BUNDLED_CHAT_TEMPLATE_JINJA; (crate feature inference)
pub use chat_template::ContentItem; (crate feature inference)
pub use chat_template::Message; (crate feature inference)
pub use chat_template::UserContent; (crate feature inference)
pub use chat_template::apply_chat_template; (crate feature inference)
pub use error::Error;
pub use error::Result;
pub use options::ImageBudget;
pub use options::Options;
pub use options::RequestOptions;
pub use options::ThreadOptions;
pub use preproc::decode_bytes_with_orientation; (crate feature decoders)
pub use preproc::decode_with_orientation; (crate feature decoders, non-WebAssembly)
pub use preproc::PreprocessedImage;
pub use preproc::Preprocessor;
pub use preproc::TileGrid;
Modules§
- chat_template - Chat-template rendering for LFM2.5-VL.
- error - Error type for the lfm crate.
- options - Configuration types: RequestOptions, ImageBudget, ThreadOptions, Options.
- preproc - Image preprocessing for LFM2.5-VL. Wasm-compatible.
Structs§
- ChatMessage - One message in a multi-turn conversation.
- Engine (crate features inference and decoders) - Public engine for LFM2.5-VL inference.
- EnginePaths (crate features inference and decoders) - Paths to the four model files used by Engine::from_paths.
- ImageAnalysis - Structured single-image VLM output. Construct via an engine's ImageAnalysisTask::parse (the Task::parse impl) or, for tests/builders, ImageAnalysis::new followed by with_* chains. All fields are private; the accessor surface follows the rest of the crate's scenesdetect-style getter / with_* / set_* convention.
- ImageAnalysisTask - The scene-analysis task. Construct via ImageAnalysisTask::new.
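The getter / with_* / set_* convention described for ImageAnalysis can be illustrated with a self-contained toy struct. This is a sketch of the pattern only: the struct and field names here are hypothetical, not the crate's real ImageAnalysis surface.

```rust
/// Toy struct mirroring the crate's getter / with_* / set_* accessor
/// convention. Fields stay private; all access goes through methods.
#[derive(Debug, Default, Clone)]
pub struct Analysis {
    caption: String,
    tags: Vec<String>,
}

impl Analysis {
    pub fn new() -> Self {
        Self::default()
    }

    // Consuming `with_*` builders allow chained construction in tests.
    pub fn with_caption(mut self, caption: impl Into<String>) -> Self {
        self.caption = caption.into();
        self
    }

    pub fn with_tags(mut self, tags: Vec<String>) -> Self {
        self.tags = tags;
        self
    }

    // In-place `set_*` mutators for post-construction updates.
    pub fn set_caption(&mut self, caption: impl Into<String>) {
        self.caption = caption.into();
    }

    // Borrowing getters are the only read access to the private fields.
    pub fn caption(&self) -> &str {
        &self.caption
    }

    pub fn tags(&self) -> &[String] {
        &self.tags
    }
}
```

This split (consuming with_* for chained construction, set_* for mutation, bare-noun getters) matches the common Rust API naming guideline the docs allude to.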
Enums§
- ChatContent - Content payload of a ChatMessage.
- ContentPart - One part inside a ChatContent::Parts multimodal message.
- GraphOptimizationLevel (crate feature inference) - ONNX Runtime provides various graph optimizations to improve performance. Graph optimizations are essentially graph-level transformations, ranging from small graph simplifications and node eliminations to more complex node fusions and layout optimizations.
- ImageInput - An image supplied to generate: either a file path or raw bytes.
- JsonParseError - Convenience parse-error type for crate::Task implementations whose model output is JSON. Available behind the json feature.
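ImageInput's two variants (a file path or raw bytes) suggest a shape like the following self-contained sketch. The enum, variant, and method names below are assumptions for illustration, not the crate's actual definition.

```rust
use std::path::PathBuf;

/// Hypothetical mirror of a path-or-bytes image input: either a file on
/// disk or an already-loaded encoded buffer (e.g. PNG/JPEG bytes).
#[derive(Debug, Clone)]
pub enum ImageSource {
    Path(PathBuf),
    Bytes(Vec<u8>),
}

impl ImageSource {
    /// Resolve to raw bytes, touching the filesystem only for `Path`.
    pub fn into_bytes(self) -> std::io::Result<Vec<u8>> {
        match self {
            ImageSource::Path(p) => std::fs::read(p),
            ImageSource::Bytes(b) => Ok(b),
        }
    }
}
```

Keeping both variants in one enum lets a generate-style API accept either form through a single parameter type.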
Traits§
- Task - A structured-output task description.
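A structured-output task of the kind Task describes typically pairs a prompt with a parser from raw model text to a typed value. A minimal sketch under that assumption (the real trait's methods and associated items are not shown on this page, so everything below is illustrative):

```rust
/// Hypothetical minimal shape of a structured-output task: render a
/// prompt, then parse the model's raw text into a typed output.
pub trait ToyTask {
    type Output;
    fn prompt(&self) -> String;
    fn parse(&self, raw: &str) -> Result<Self::Output, String>;
}

/// Example task: ask for a count and parse the reply as an integer.
pub struct CountTask;

impl ToyTask for CountTask {
    type Output = u32;

    fn prompt(&self) -> String {
        "How many people are in the image? Answer with a number.".to_string()
    }

    fn parse(&self, raw: &str) -> Result<u32, String> {
        raw.trim().parse().map_err(|e| format!("not a number: {e}"))
    }
}
```

An engine can then drive any such task generically: render the prompt, run inference, and hand the decoded text back to parse, which is how a JsonParseError-style helper would slot in for JSON-emitting tasks.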