Liquid Edge - Generic Edge Inference Runtime
A lightweight, efficient inference runtime designed for edge-computing environments. It supports multiple backends for running deep-learning models on edge devices.
Re-exports
pub use device::cpu;
pub use device::cpu_with_threads;
pub use device::cuda;
pub use device::cuda_default;
pub use device::Device;
pub use error::EdgeError;
pub use error::EdgeResult;
pub use model::Model;
pub use runtime::InferenceInput;
pub use runtime::InferenceOutput;
pub use runtime::InferenceRuntime;
pub use runtime::RuntimeBackend;
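The names above suggest a device-selection plus pluggable-backend design. The following is a minimal, self-contained sketch of that pattern; the type and function names mirror the re-exports, but the bodies (the `Device` variants, the `EdgeResult` alias, and the `IdentityBackend`) are hypothetical stand-ins, not the crate's actual API.

```rust
/// Hypothetical device handle, mirroring the `cpu`/`cuda` constructors above.
#[derive(Debug, Clone, PartialEq)]
enum Device {
    Cpu { threads: usize },
    Cuda { ordinal: usize },
}

/// Sketch of a constructor like the re-exported `device::cpu_with_threads`.
fn cpu_with_threads(threads: usize) -> Device {
    Device::Cpu { threads }
}

/// Stand-in for the crate's `EdgeResult<T>` / `EdgeError` pair.
type EdgeResult<T> = Result<T, String>;

/// Sketch of a runtime trait: a backend runs inference on a chosen device.
trait InferenceRuntime {
    fn run(&self, input: &[f32]) -> EdgeResult<Vec<f32>>;
}

/// Toy backend that just echoes its input, for demonstration only.
struct IdentityBackend {
    device: Device,
}

impl InferenceRuntime for IdentityBackend {
    fn run(&self, input: &[f32]) -> EdgeResult<Vec<f32>> {
        if input.is_empty() {
            return Err("empty input tensor".to_string());
        }
        Ok(input.to_vec())
    }
}

fn main() -> EdgeResult<()> {
    let runtime = IdentityBackend { device: cpu_with_threads(4) };
    let output = runtime.run(&[1.0, 2.0, 3.0])?;
    assert_eq!(output, vec![1.0, 2.0, 3.0]);
    println!("ran on {:?} -> {:?}", runtime.device, output);
    Ok(())
}
```

Keeping the backend behind a trait like this is what lets a caller swap CPU and CUDA execution without changing inference code, which is presumably why `RuntimeBackend` and the device constructors are re-exported side by side.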