Detector factory and runtime configuration for CortenForge inference.
This crate bridges the models crate (Burn modules) and the vision_core::interfaces::Detector
trait. The InferenceFactory loads weights and constructs detector implementations for use in
vision pipelines.
§Backend Selection
- backend-wgpu: Uses WGPU for GPU acceleration (recommended for production).
- Default: Falls back to the NdArray CPU backend.
§Model Selection
- convolutional_detector: Uses MultiboxModel for multi-box detection.
- Default: Uses LinearClassifier for binary classification.
Type aliases InferenceModel and InferenceModelConfig adapt to the selected features.
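The feature-gated alias pattern described above can be sketched as follows. This is a minimal, self-contained illustration, not the crate's actual source: the backend and model types are placeholder stubs standing in for Burn's backends and the models crate, and only the alias names InferenceModel and InferenceBackend echo the documentation.

```rust
use std::marker::PhantomData;

// Placeholder types standing in for Burn backends and the models crate.
// All definitions here are illustrative assumptions.
#[allow(dead_code)]
struct NdArrayBackend;
#[allow(dead_code)]
struct WgpuBackend;
#[allow(dead_code)]
struct LinearClassifier<B>(PhantomData<B>);
#[allow(dead_code)]
struct MultiboxModel<B>(PhantomData<B>);

// Backend selection mirrors the `backend-wgpu` feature flag.
#[cfg(feature = "backend-wgpu")]
type InferenceBackend = WgpuBackend;
#[cfg(not(feature = "backend-wgpu"))]
type InferenceBackend = NdArrayBackend;

// Model selection mirrors the `convolutional_detector` feature flag.
#[cfg(feature = "convolutional_detector")]
type InferenceModel = MultiboxModel<InferenceBackend>;
#[cfg(not(feature = "convolutional_detector"))]
type InferenceModel = LinearClassifier<InferenceBackend>;

fn main() {
    // With neither feature enabled, the CPU defaults are chosen.
    let name = std::any::type_name::<InferenceModel>();
    assert!(name.contains("LinearClassifier"));
    println!("{name}");
}
```

Because the aliases resolve at compile time, downstream code can name InferenceModel once and pick up whichever backend and model the enabled Cargo features select.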
Re-exports§
pub use factory::InferenceFactory;
pub use factory::InferenceThresholds;
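The factory pattern this crate describes, a factory that loads weights and hands back a detector satisfying a shared trait, can be sketched as below. Everything here is a stub under stated assumptions: the Detector trait, the threshold struct, and the build method signature are placeholders, not the real factory module's API.

```rust
// Hypothetical sketch of a detector factory; names and signatures are
// assumptions, not the crate's actual InferenceFactory API.
trait Detector {
    /// Return the indices of inputs that pass the confidence threshold.
    fn detect(&self, scores: &[f32]) -> Vec<usize>;
}

struct Thresholds {
    confidence: f32,
}

struct StubDetector {
    thresholds: Thresholds,
}

impl Detector for StubDetector {
    fn detect(&self, scores: &[f32]) -> Vec<usize> {
        scores
            .iter()
            .enumerate()
            .filter(|(_, &s)| s >= self.thresholds.confidence)
            .map(|(i, _)| i)
            .collect()
    }
}

struct Factory;

impl Factory {
    // Stands in for a factory that would load weights, then construct
    // and box a concrete detector behind the trait object.
    fn build(thresholds: Thresholds) -> Box<dyn Detector> {
        Box::new(StubDetector { thresholds })
    }
}

fn main() {
    let detector = Factory::build(Thresholds { confidence: 0.5 });
    let hits = detector.detect(&[0.1, 0.9, 0.6]);
    assert_eq!(hits, vec![1, 2]);
    println!("{hits:?}");
}
```

Returning a boxed trait object keeps pipeline code independent of which concrete model the enabled features selected.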