Generic inference runtime for edge computing.
This module provides a generic interface for running inference on deep learning models using interchangeable backends.
Modules
- onnx - ONNX Runtime backend for liquid-edge inference
Structs
- InferenceInput - Generic input for inference operations
- InferenceOutput - Generic output from inference operations
- InferenceRuntime - Main inference runtime that manages different backends
Traits
- RuntimeBackend - Generic inference runtime trait