Crate autoagents_onnx

Liquid Edge - Generic Edge Inference Runtime§

A lightweight, efficient inference runtime designed for edge computing environments. It supports multiple backends for running deep learning models on edge devices.

Re-exports§

pub use device::cpu;
pub use device::cpu_with_threads;
pub use device::Device;
pub use error::EdgeError;
pub use error::EdgeResult;
pub use model::Model;
pub use runtime::inference::OnnxBackend;
pub use runtime::inference::OnnxModel;
pub use runtime::onnx_model;
pub use runtime::InferenceInput;
pub use runtime::InferenceOutput;
pub use runtime::InferenceRuntime;
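The re-exports above suggest a load-once, infer-many workflow: pick a `Device`, construct a model, and run inference through a runtime trait, with fallible calls returning `EdgeResult`. As a rough illustration only, the self-contained sketch below mocks that shape; the trait signature, struct fields, and the doubling "inference" are assumptions for demonstration, not the crate's actual API.

```rust
// Hypothetical sketch: names mirror the re-exports above, but every
// signature here is an assumption, not the crate's real interface.

#[derive(Debug, Clone, Copy, PartialEq)]
enum Device {
    Cpu { threads: Option<usize> },
}

// Assumed constructors matching `device::cpu` / `device::cpu_with_threads`.
fn cpu() -> Device {
    Device::Cpu { threads: None }
}

fn cpu_with_threads(threads: usize) -> Device {
    Device::Cpu { threads: Some(threads) }
}

#[derive(Debug)]
enum EdgeError {
    Inference(String),
}

type EdgeResult<T> = Result<T, EdgeError>;

// Assumed shape of the runtime trait: load once, infer many times.
trait InferenceRuntime {
    fn run(&self, input: &[f32]) -> EdgeResult<Vec<f32>>;
}

struct OnnxModel {
    device: Device,
}

impl InferenceRuntime for OnnxModel {
    fn run(&self, input: &[f32]) -> EdgeResult<Vec<f32>> {
        // Placeholder: a real backend would execute the ONNX graph here.
        let _ = self.device;
        Ok(input.iter().map(|x| x * 2.0).collect())
    }
}

fn main() -> EdgeResult<()> {
    // Select a CPU device with an explicit thread count, then infer.
    let model = OnnxModel { device: cpu_with_threads(4) };
    let output = model.run(&[1.0, 2.0])?;
    println!("{:?}", output);
    Ok(())
}
```

The point of the sketch is the separation of concerns the re-exports imply: device selection (`Device`), error propagation (`EdgeResult`/`EdgeError`), and backend-agnostic execution behind a runtime trait.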

Modules§

chat
autoagents-onnx backend implementation for local model inference.
device
Device abstraction for liquid-edge inference
error
Error types for autoagents-onnx runtime
model
Model abstraction layer for autoagents-onnx
runtime
ONNX inference runtime for edge computing