Crate lfm

Rust ONNX inference for LiquidAI LFM2.5-VL (vision-language) models.

See docs/superpowers/specs/2026-05-03-lfm-vlm-wrapper-design.md for the full design rationale.

§Model weights license

This crate is dual-licensed under MIT OR Apache-2.0. The model weights it wraps are NOT — LFM2.5-VL-450M ships under the LFM Open License v1.0 (lfm1.0, see https://www.liquid.ai/lfm-license). Verify your use case complies with Liquid AI’s terms separately from this crate’s license.
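As an orientation aid, here is a hypothetical end-to-end sketch built only from names that appear on this page (`Engine`, `EnginePaths`, `Engine::from_paths`, `ImageInput`, `RequestOptions`, `generate`). The exact signatures, fields, and parameter order are assumptions, not the crate's confirmed API; consult the individual item docs before use.

```rust
// Sketch only: the item names come from this page, but every signature
// below is assumed. EnginePaths's fields and generate()'s parameters
// are placeholders, not the real API.
use lfm::{Engine, EnginePaths, ImageInput, RequestOptions};

fn main() -> lfm::Result<()> {
    // EnginePaths holds "paths to the four model files" (see Structs);
    // its constructor/field shape is an assumption here.
    let paths = EnginePaths { /* four model file paths */ };
    let engine = Engine::from_paths(paths)?; // documented constructor

    // ImageInput is "either a file path or raw bytes" (see Enums).
    let image = ImageInput::Path("photo.jpg".into());

    // generate() is referenced in the ImageInput docs; the prompt and
    // options arguments shown here are assumptions.
    let text = engine.generate("Describe this image.", image, RequestOptions::default())?;
    println!("{text}");
    Ok(())
}
```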

Re-exports§

pub use chat_template::BOS;
pub use chat_template::BOS_TOKEN_ID;
pub use chat_template::EOS_TOKEN_ID;
pub use chat_template::IM_END;
pub use chat_template::IM_START;
pub use chat_template::IMAGE_END;
pub use chat_template::IMAGE_START;
pub use chat_template::IMAGE_THUMBNAIL;
pub use chat_template::IMAGE_TOKEN;
pub use chat_template::IMAGE_TOKEN_ID;
pub use chat_template::ImagePlaceholderInfo;
pub use chat_template::PAD;
pub use chat_template::PAD_TOKEN_ID;
pub use chat_template::TOOL_CALL_END;
pub use chat_template::TOOL_CALL_START;
pub use chat_template::expand_image_placeholders;
pub use chat_template::BUNDLED_CHAT_TEMPLATE_JINJA; (inference feature)
pub use chat_template::ContentItem; (inference feature)
pub use chat_template::Message; (inference feature)
pub use chat_template::UserContent; (inference feature)
pub use chat_template::apply_chat_template; (inference feature)
pub use error::Error;
pub use error::Result;
pub use options::ImageBudget;
pub use options::Options;
pub use options::RequestOptions;
pub use options::ThreadOptions;
pub use preproc::decode_bytes_with_orientation; (decoders feature)
pub use preproc::decode_with_orientation; (decoders feature, non-WebAssembly)
pub use preproc::PreprocessedImage;
pub use preproc::Preprocessor;
pub use preproc::TileGrid;

Modules§

chat_template
Chat-template rendering for LFM2.5-VL.
error
Error type for the lfm crate.
options
Configuration types: RequestOptions, ImageBudget, ThreadOptions, Options.
preproc
Image preprocessing for LFM2.5-VL. Wasm-compatible.

Structs§

ChatMessage
One message in a multi-turn conversation.
Engine (inference and decoders features)
Public engine for LFM2.5-VL inference.
EnginePaths (inference and decoders features)
Paths to the four model files used by Engine::from_paths.
ImageAnalysis
Structured single-image VLM output. Construct via an engine’s ImageAnalysisTask::parse (the Task::parse impl) or, for tests and builders, via ImageAnalysis::new followed by with_* chains. All fields are private; the accessor surface follows the crate’s scenesdetect-style getter / with_* / set_* convention.
ImageAnalysisTask
The scene-analysis task. Construct via ImageAnalysisTask::new.
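The ImageAnalysis entry above describes a private-field struct exposed through a getter / with_* / set_* convention. A minimal self-contained illustration of that convention follows; the `caption` field is invented for the example and is not one of ImageAnalysis's real fields.

```rust
// Illustration of the getter / with_* / set_* accessor convention
// described for ImageAnalysis. `caption` is a made-up example field.
#[derive(Debug, Default, Clone)]
pub struct Analysis {
    caption: String, // private: reachable only through the accessors below
}

impl Analysis {
    pub fn new() -> Self {
        Self::default()
    }

    // Getter: borrows the field, no `get_` prefix.
    pub fn caption(&self) -> &str {
        &self.caption
    }

    // with_*: consuming builder style, chains after new().
    pub fn with_caption(mut self, caption: impl Into<String>) -> Self {
        self.caption = caption.into();
        self
    }

    // set_*: in-place mutation of an existing value.
    pub fn set_caption(&mut self, caption: impl Into<String>) {
        self.caption = caption.into();
    }
}

fn main() {
    let mut a = Analysis::new().with_caption("a red bicycle");
    assert_eq!(a.caption(), "a red bicycle");
    a.set_caption("two red bicycles");
    assert_eq!(a.caption(), "two red bicycles");
}
```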

Enums§

ChatContent
Content payload of a ChatMessage.
ContentPart
One part inside a ChatContent::Parts multimodal message.
GraphOptimizationLevel (inference feature)
ONNX Runtime provides various graph optimizations to improve performance. Graph optimizations are essentially graph-level transformations, ranging from small graph simplifications and node eliminations to more complex node fusions and layout optimizations.
ImageInput
An image supplied to generate: either a file path or raw bytes.
JsonParseError
Convenience parse-error type for crate::Task implementations whose model output is JSON. Available behind the json feature.

Traits§

Task
A structured-output task description.
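Task is documented here only as "a structured-output task description", and the JsonParseError entry suggests that implementations parse raw model text into typed values. The following std-only sketch shows that general pattern with an invented trait shape (an associated Output type and a parse method); the crate's actual Task definition may differ.

```rust
// Sketch of the structured-output task pattern: a task turns raw model
// text into a typed value. This trait shape is invented for illustration;
// see the crate's Task docs for the real definition.
trait Task {
    type Output;
    type Error;
    fn parse(&self, raw: &str) -> Result<Self::Output, Self::Error>;
}

/// Toy task: expects the model to answer with "label: <word>".
struct LabelTask;

impl Task for LabelTask {
    type Output = String;
    type Error = String;

    fn parse(&self, raw: &str) -> Result<String, String> {
        raw.trim()
            .strip_prefix("label:")
            .map(|rest| rest.trim().to_string())
            .ok_or_else(|| format!("missing `label:` prefix in {raw:?}"))
    }
}

fn main() {
    let task = LabelTask;
    assert_eq!(task.parse("label: bicycle"), Ok("bicycle".to_string()));
    assert!(task.parse("no structure here").is_err());
}
```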