pub struct EimModel { /* private fields */ }
Edge Impulse Model Runner for Rust
This module provides functionality for running Edge Impulse machine learning models on Linux systems. It handles model lifecycle management, communication, and inference operations.
§Key Components
- EimModel: Main struct for managing Edge Impulse models
- SensorType: Enum representing supported sensor input types
- ContinuousState: Internal state management for continuous inference mode
- MovingAverageFilter: Smoothing filter for continuous inference results
§Features
- Model process management and Unix socket communication
- Support for both single-shot and continuous inference modes
- Debug logging and callback system
- Moving average filtering for continuous mode results
- Automatic retry mechanisms for socket connections
- Visual anomaly detection (FOMO AD) support with normalized scores
§Example Usage
use edge_impulse_runner::{EimModel, InferenceResult};
// Create a new model instance
let mut model = EimModel::new("path/to/model.eim").unwrap();
// Run inference with some features
let features = vec![0.1, 0.2, 0.3];
let result = model.infer(features, None).unwrap();
// For visual anomaly detection models, normalize the results
if let InferenceResult::VisualAnomaly { anomaly, visual_anomaly_max, visual_anomaly_mean, visual_anomaly_grid } = result.result {
let (normalized_anomaly, normalized_max, normalized_mean, normalized_regions) =
model.normalize_visual_anomaly(
anomaly,
visual_anomaly_max,
visual_anomaly_mean,
&visual_anomaly_grid.iter()
.map(|bbox| (bbox.value, bbox.x as u32, bbox.y as u32, bbox.width as u32, bbox.height as u32))
.collect::<Vec<_>>()
);
println!("Anomaly score: {:.2}%", normalized_anomaly * 100.0);
}
§Communication Protocol
The model communicates with the Edge Impulse process using JSON messages over Unix sockets:
- Hello message for initialization
- Model info response
- Classification requests
- Inference responses
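The request side of this protocol can be sketched as plain newline-delimited JSON strings. The exact field names (`hello`, `classify`, `id`) are assumptions drawn from the message types listed above, not a verified wire format:

```rust
// Sketch of newline-delimited JSON framing for the messages above.
// Field names are assumptions for illustration, not a protocol spec.
fn frame_hello(id: u32) -> String {
    format!("{{\"hello\": 1, \"id\": {}}}\n", id)
}

fn frame_classify(id: u32, features: &[f32]) -> String {
    // Join the feature values into a JSON array literal.
    let joined: Vec<String> = features.iter().map(|f| f.to_string()).collect();
    format!("{{\"classify\": [{}], \"id\": {}}}\n", joined.join(", "), id)
}
```

Each framed message would then be written to the Unix socket, with the matching response identified by its `id`.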
§Error Handling
The module uses a custom EimError type for error handling, covering:
- Invalid file paths
- Socket communication errors
- Model execution errors
- JSON serialization/deserialization errors
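The variants below are a hypothetical mirror of the error categories listed above (the real `EimError` enum is defined by the crate, and its variant names may differ); they illustrate how calling code might match on inference errors exhaustively:

```rust
// Hypothetical stand-in for the crate's EimError, mirroring the four
// documented error categories. Variant names are assumptions.
#[derive(Debug)]
enum EimError {
    InvalidPath(String),
    SocketError(String),
    ExecutionError(String),
    JsonError(String),
}

// Map each error category to a short description, e.g. for logging.
fn describe(err: &EimError) -> &'static str {
    match err {
        EimError::InvalidPath(_) => "invalid file path",
        EimError::SocketError(_) => "socket communication error",
        EimError::ExecutionError(_) => "model execution error",
        EimError::JsonError(_) => "JSON (de)serialization error",
    }
}
```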
§Visual Anomaly Detection
For visual anomaly detection models (FOMO AD):
- Scores are normalized relative to the model’s minimum anomaly threshold
- Results include overall anomaly score, maximum score, mean score, and anomalous regions
- Region coordinates are provided in the original image dimensions
- All scores are clamped to [0,1] range and displayed as percentages
- Debug mode provides detailed information about thresholds and regions
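A minimal sketch of the clamping and percentage display described above; the actual normalization against `min_anomaly_score` happens inside the crate, so only the documented clamp-to-[0,1] and percent formatting are shown:

```rust
// Clamp a (possibly already normalized) anomaly score to [0, 1],
// as the documentation above states all scores are.
fn clamp_score(raw: f32) -> f32 {
    raw.clamp(0.0, 1.0)
}

// Format a clamped score as a percentage for display.
fn as_percent(score: f32) -> String {
    format!("{:.2}%", clamp_score(score) * 100.0)
}
```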
§Threshold Configuration
Models can be configured with different thresholds:
- Anomaly detection: min_anomaly_score threshold for visual anomaly detection
- Object detection: min_score threshold for object confidence
- Object tracking: keep_grace, max_observations, and threshold parameters
Thresholds can be updated at runtime using set_learn_block_threshold.
Implementations§
impl EimModel
pub fn new<P: AsRef<Path>>(path: P) -> Result<Self, EimError>
Creates a new EimModel instance from a path to the .eim file.
This is the standard way to create a new model instance. The function will:
- Validate the file extension
- Spawn the model process
- Establish socket communication
- Initialize the model
§Arguments
path - Path to the .eim file. Must be a valid Edge Impulse model file.
§Returns
Returns Result<EimModel, EimError> where:
- Ok(EimModel) - Successfully created and initialized model
- Err(EimError) - Failed to create model (invalid path, process spawn failure, etc.)
§Examples
use edge_impulse_runner::EimModel;
let model = EimModel::new("path/to/model.eim").unwrap();
pub fn new_with_socket<P: AsRef<Path>, S: AsRef<Path>>(
    path: P,
    socket_path: S,
) -> Result<Self, EimError>
Creates a new EimModel instance with a specific Unix socket path.
Similar to new(), but allows specifying the socket path for communication.
This is useful when you need control over the socket location or when running
multiple models simultaneously.
§Arguments
path - Path to the .eim file
socket_path - Custom path where the Unix socket should be created
pub fn new_with_debug<P: AsRef<Path>>(
    path: P,
    debug: bool,
) -> Result<Self, EimError>
Create a new EimModel instance with debug output enabled
pub fn new_with_socket_and_debug<P: AsRef<Path>, S: AsRef<Path>>(
    path: P,
    socket_path: S,
    debug: bool,
) -> Result<Self, EimError>
Create a new EimModel instance with debug output enabled and a specific socket path
pub fn set_debug_callback<F>(&mut self, callback: F)
Set a debug callback function to receive debug messages
When debug mode is enabled, this callback will be invoked with debug messages from the model runner. This is useful for logging or displaying debug information in your application.
§Arguments
callback - Function that takes a string slice and handles the debug message
pub fn socket_path(&self) -> &Path
Get the socket path used for communication
pub fn sensor_type(&self) -> Result<SensorType, EimError>
Get the sensor type for this model
pub fn parameters(&self) -> Result<&ModelParameters, EimError>
Get the model parameters
pub fn infer(
    &mut self,
    features: Vec<f32>,
    debug: Option<bool>,
) -> Result<InferenceResponse, EimError>
Run inference on the input features
This method automatically handles both continuous and non-continuous modes:
§Non-Continuous Mode
- Each call is independent
- All features must be provided in a single call
- Results are returned immediately
§Continuous Mode (automatically enabled for supported models)
- Features are accumulated across calls
- Internal buffer maintains sliding window of features
- Moving average filter smooths results
- Initial calls may return empty results while buffer fills
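The smoothing step described above can be sketched as a simple sliding-window average. This is an illustrative stand-in, not the crate's private MovingAverageFilter implementation:

```rust
use std::collections::VecDeque;

// Illustrative moving-average filter: keeps the last `window` scores
// and returns their mean on each update, mirroring the smoothing
// behaviour described for continuous mode.
struct MovingAverageFilter {
    window: usize,
    values: VecDeque<f32>,
}

impl MovingAverageFilter {
    fn new(window: usize) -> Self {
        Self { window, values: VecDeque::new() }
    }

    // Push a new score, evicting the oldest once the window is full,
    // and return the current average.
    fn update(&mut self, value: f32) -> f32 {
        if self.values.len() == self.window {
            self.values.pop_front();
        }
        self.values.push_back(value);
        self.values.iter().sum::<f32>() / self.values.len() as f32
    }
}
```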
§Arguments
features - Vector of input features
debug - Optional debug flag to enable detailed output for this inference
§Returns
Returns Result<InferenceResponse, EimError> containing inference results
pub fn input_size(&self) -> Result<usize, EimError>
Get the required number of input features for this model
Returns the number of features expected by the model for each classification. This is useful for:
- Validating input size before classification
- Preparing the correct amount of data
- Padding or truncating inputs to match model requirements
§Returns
The number of input features required by the model
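The pad-or-truncate preparation mentioned above can be done in one step with Vec::resize; fit_features here is a hypothetical helper built around the length that input_size() would report, not part of the crate:

```rust
// Resize a feature vector to the model's required input size:
// zero-pads short inputs and truncates long ones.
fn fit_features(mut features: Vec<f32>, required: usize) -> Vec<f32> {
    features.resize(required, 0.0);
    features
}
```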