Trait InferenceEngine

pub trait InferenceEngine:
    Send
    + Sync
    + Debug {
    type Input: Send + Sync + Debug;
    type Output: Send + Sync + Debug;

    // Required methods
    fn infer(&self, input: &Self::Input) -> Result<Self::Output, OCRError>;
    fn engine_info(&self) -> String;

    // Provided method
    fn validate_inference_input(
        &self,
        _input: &Self::Input,
    ) -> Result<(), OCRError> { ... }
}

Trait for inference engine operations.

This trait handles running the actual model inference, whether through ONNX Runtime, TensorRT, PyTorch, or other backends.
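A minimal sketch of an implementor, using a simplified stand-in for the crate's `OCRError` (the real type presumably carries richer error variants) and a hypothetical toy backend, `MeanEngine`, that reduces a flattened `f32` tensor to a single score:

```rust
use std::fmt::Debug;

// Simplified stand-in for the crate's `OCRError`, so the sketch is
// self-contained; the real error type is assumed to be richer.
#[derive(Debug)]
pub struct OCRError(pub String);

pub trait InferenceEngine: Send + Sync + Debug {
    type Input: Send + Sync + Debug;
    type Output: Send + Sync + Debug;

    fn infer(&self, input: &Self::Input) -> Result<Self::Output, OCRError>;
    fn engine_info(&self) -> String;

    // Provided method: accept every input by default.
    fn validate_inference_input(&self, _input: &Self::Input) -> Result<(), OCRError> {
        Ok(())
    }
}

// Hypothetical toy backend: averages a flattened f32 tensor into one score.
#[derive(Debug)]
pub struct MeanEngine;

impl InferenceEngine for MeanEngine {
    type Input = Vec<f32>;
    type Output = f32;

    fn infer(&self, input: &Self::Input) -> Result<Self::Output, OCRError> {
        // Reuse the validation hook before touching the data.
        self.validate_inference_input(input)?;
        Ok(input.iter().sum::<f32>() / input.len() as f32)
    }

    fn engine_info(&self) -> String {
        "MeanEngine (toy backend, f32 input)".to_string()
    }

    // Override the provided method to reject empty tensors.
    fn validate_inference_input(&self, input: &Self::Input) -> Result<(), OCRError> {
        if input.is_empty() {
            Err(OCRError("empty input tensor".to_string()))
        } else {
            Ok(())
        }
    }
}
```

The `Send + Sync` supertraits mean any implementor can be shared across threads, e.g. behind an `Arc`, which matters when inference backends are driven from a thread pool.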

Required Associated Types

type Input: Send + Sync + Debug

Input type for inference (typically a tensor)

type Output: Send + Sync + Debug

Output type from inference (typically a tensor)

Required Methods

fn infer(&self, input: &Self::Input) -> Result<Self::Output, OCRError>

Perform inference on preprocessed input.

Arguments
  • input - Preprocessed input ready for inference
Returns

Raw inference output or an error

fn engine_info(&self) -> String

Get information about the inference engine.

Returns

String describing the inference engine (model type, backend, etc.)

Provided Methods

fn validate_inference_input(&self, _input: &Self::Input) -> Result<(), OCRError>

Validate that the input is suitable for inference.

Arguments
  • input - Input to validate
Returns

Ok(()) if the input is suitable for inference, or an OCRError describing why validation failed
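Because validation is exposed separately from inference, a caller can check an input before committing to a (potentially expensive) inference run. A sketch of such a generic driver, again with simplified stand-ins for the trait and `OCRError` so it is self-contained, and a hypothetical `Doubler` engine for illustration:

```rust
use std::fmt::Debug;

// Simplified stand-ins so the sketch compiles on its own.
#[derive(Debug)]
pub struct OCRError(pub String);

pub trait InferenceEngine: Send + Sync + Debug {
    type Input: Send + Sync + Debug;
    type Output: Send + Sync + Debug;
    fn infer(&self, input: &Self::Input) -> Result<Self::Output, OCRError>;
    fn engine_info(&self) -> String;
    fn validate_inference_input(&self, _input: &Self::Input) -> Result<(), OCRError> {
        Ok(())
    }
}

// Generic driver: validate first, then run inference. Works for any
// implementor because it is written against the trait, not a backend.
pub fn run_checked<E: InferenceEngine>(
    engine: &E,
    input: &E::Input,
) -> Result<E::Output, OCRError> {
    engine.validate_inference_input(input)?;
    engine.infer(input)
}

// Hypothetical engine that doubles an integer and rejects negatives.
#[derive(Debug)]
pub struct Doubler;

impl InferenceEngine for Doubler {
    type Input = i32;
    type Output = i32;

    fn infer(&self, input: &Self::Input) -> Result<Self::Output, OCRError> {
        Ok(input * 2)
    }

    fn engine_info(&self) -> String {
        "Doubler (toy)".to_string()
    }

    fn validate_inference_input(&self, input: &Self::Input) -> Result<(), OCRError> {
        if *input < 0 {
            Err(OCRError("negative input".to_string()))
        } else {
            Ok(())
        }
    }
}
```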

Implementors