Trait ComputeModel
pub trait ComputeModel<InP: Payload, OutP: Payload> {
    // Required methods
    fn init(&mut self) -> Result<(), InferenceError>;
    fn infer_one(
        &mut self,
        inp: &InP,
        out: &mut OutP,
    ) -> Result<(), InferenceError>;
    fn drain(&mut self) -> Result<(), InferenceError>;
    fn reset(&mut self) -> Result<(), InferenceError>;
    fn metadata(&self) -> ModelMetadata;

    // Provided method
    fn infer_batch(
        &mut self,
        inps: Batch<'_, InP>,
        outs: &mut [OutP],
    ) -> Result<(), InferenceError> { ... }
}

A loaded model that can perform inference.

Required Methods

fn init(&mut self) -> Result<(), InferenceError>

Prepare internal state (allocate work buffers, compile kernels, etc.).


fn infer_one(&mut self, inp: &InP, out: &mut OutP) -> Result<(), InferenceError>

Single-item inference: one input, one output.


fn drain(&mut self) -> Result<(), InferenceError>

Ensure outstanding device work is complete (if any).


fn reset(&mut self) -> Result<(), InferenceError>

Reset internal state to a known baseline (drop caches, etc.).


fn metadata(&self) -> ModelMetadata

Return model metadata (I/O placement preferences, limits).

Provided Methods

fn infer_batch(&mut self, inps: Batch<'_, InP>, outs: &mut [OutP]) -> Result<(), InferenceError>

Batched inference. The default implementation loops over the batch, calling infer_one for each input/output pair; implementors can override it to perform true batched execution.

Implementors

impl ComputeModel<Tensor<u32, TEST_TENSOR_ELEMENT_COUNT, 2>, Tensor<u32, TEST_TENSOR_ELEMENT_COUNT, 2>> for TestTensorModel