// Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license
//! # Ultralytics YOLO Inference Library
//!
//! [crates.io](https://crates.io/crates/ultralytics-inference)
//! [docs.rs](https://docs.rs/ultralytics-inference)
//! [License](https://github.com/ultralytics/inference/blob/main/LICENSE)
//!
//! High-performance YOLO model inference library written in Rust, providing a safe
//! and efficient interface for running [Ultralytics](https://ultralytics.com) YOLO
//! models on images, videos, and streams.
//!
//! ## Features
//!
//! - **High Performance** - Pure Rust with zero-cost abstractions and SIMD-optimized preprocessing
//! - **ONNX Runtime** - Leverages ONNX Runtime for cross-platform hardware acceleration
//! - **Supported YOLO Versions** - `YOLO11` and `YOLO26` (including YOLO26 end-to-end NMS-free exports)
//! - **All Tasks** - Detection, segmentation, pose estimation, classification, and OBB
//! - **Ultralytics API** - Results API matches the Python package for easy migration
//! - **Multiple Backends** - CPU, CUDA, `TensorRT`, `CoreML`, `OpenVINO`, and more
//! - **Multiple Sources** - Images, directories, glob patterns, video, webcam, streams
//!
//! ## Installation
//!
//! Add to your `Cargo.toml`:
//!
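//! ```toml
//! [dependencies]
//! # Version shown is illustrative; use the latest release published on crates.io
//! ultralytics-inference = "0.1"
//! ```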
//!
//! Or install the CLI tool:
//!
//! ```bash
//! cargo install ultralytics-inference
//! ```
//!
//! ## Quick Start (Library)
//!
//! ```no_run
//! use ultralytics_inference::YOLOModel;
//!
//! fn main() -> Result<(), Box<dyn std::error::Error>> {
//!     // Load model - metadata (classes, task, imgsz) is read automatically
//!     let mut model = YOLOModel::load("yolo26n.onnx")?;
//!
//!     // Run inference
//!     let results = model.predict("image.jpg")?;
//!
//!     // Process results
//!     for result in &results {
//!         if let Some(ref boxes) = result.boxes {
//!             println!("Found {} detections", boxes.len());
//!             for i in 0..boxes.len() {
//!                 let cls = boxes.cls()[i] as usize;
//!                 let conf = boxes.conf()[i];
//!                 let name = result.names.get(&cls).map(|s| s.as_str()).unwrap_or("unknown");
//!                 println!("  {} {:.2}", name, conf);
//!             }
//!         }
//!     }
//!
//!     Ok(())
//! }
//! ```
//!
//! ## CLI Usage
//!
//! The `ultralytics-inference` CLI provides a command-line interface for running YOLO inference:
//!
//! ```bash
//! # Install the CLI
//! cargo install ultralytics-inference
//!
//! # Run with defaults (auto-downloads model and sample images)
//! ultralytics-inference predict
//!
//! # Select task — auto-downloads the matching nano model
//! ultralytics-inference predict --task segment
//! ultralytics-inference predict --task pose
//! ultralytics-inference predict --task obb
//! ultralytics-inference predict --task classify
//!
//! # Run on a specific image
//! ultralytics-inference predict --model yolo26n.onnx --source image.jpg
//!
//! # Run on a directory of images
//! ultralytics-inference predict --model yolo26n.onnx --source images/
//!
//! # With custom thresholds
//! ultralytics-inference predict -m yolo26n.onnx -s image.jpg --conf 0.5 --iou 0.7
//!
//! # Filter by class IDs
//! ultralytics-inference predict --source image.jpg --classes "0,1,2"
//!
//! # With visualization window
//! ultralytics-inference predict --model yolo26n.onnx --source video.mp4 --show
//!
//! # Save annotated results
//! ultralytics-inference predict --model yolo26n.onnx --source image.jpg --save
//!
//! # Save individual frames for video input
//! ultralytics-inference predict --source video.mp4 --save-frames
//!
//! # Show help
//! ultralytics-inference help
//!
//! # Show version
//! ultralytics-inference version
//! ```
//!
//! **CLI Options:**
//!
//! | Option | Short | Description | Default |
//! |--------|-------|-------------|---------|
//! | `--model` | `-m` | Path to ONNX model file; auto-downloaded if a known YOLO11/YOLO26 name | `yolo26n.onnx` |
//! | `--task` | | Task type (`detect`, `segment`, `pose`, `obb`, `classify`); selects nano model when `--model` is omitted | `detect` |
//! | `--source` | `-s` | Input source (image, directory, glob, video, webcam index, or URL) | Task-dependent sample assets |
//! | `--conf` | | Confidence threshold | `0.25` |
//! | `--iou` | | `IoU` threshold for NMS | `0.7` |
//! | `--max-det` | | Maximum number of detections | `300` |
//! | `--imgsz` | | Inference image size | Model metadata |
//! | `--rect` | | Enable rectangular inference (minimal padding) | `true` |
//! | `--batch` | | Batch size for inference | `1` |
//! | `--half` | | Use FP16 half-precision inference | `false` |
//! | `--save` | | Save annotated results to runs/\<task\>/predict | `true` |
//! | `--save-frames` | | Save individual frames for video input | `false` |
//! | `--show` | | Display results in a window | `false` |
//! | `--device` | | Device (cpu, cuda:0, mps, coreml, directml:0, openvino, tensorrt:0, xnnpack) | `cpu` |
//! | `--verbose` | | Show verbose output | `true` |
//! | `--classes` | | Filter by class IDs, e.g. `0` or `"0,1,2"` or `"[0, 1, 2]"` | all classes |
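//!
//! The `--iou` option controls non-maximum suppression: any detection whose overlap
//! with a higher-confidence box exceeds this IoU is suppressed. As a refresher, a
//! minimal standalone IoU sketch (not the crate's internal implementation):
//!
//! ```rust
//! /// Intersection-over-Union of two axis-aligned `[x1, y1, x2, y2]` boxes.
//! fn iou(a: [f32; 4], b: [f32; 4]) -> f32 {
//!     let ix = (a[2].min(b[2]) - a[0].max(b[0])).max(0.0); // intersection width
//!     let iy = (a[3].min(b[3]) - a[1].max(b[1])).max(0.0); // intersection height
//!     let inter = ix * iy;
//!     let area = |r: [f32; 4]| (r[2] - r[0]) * (r[3] - r[1]);
//!     inter / (area(a) + area(b) - inter)
//! }
//!
//! assert_eq!(iou([0.0, 0.0, 2.0, 2.0], [0.0, 0.0, 2.0, 2.0]), 1.0); // identical boxes
//! assert_eq!(iou([0.0, 0.0, 1.0, 1.0], [2.0, 2.0, 3.0, 3.0]), 0.0); // disjoint boxes
//! ```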
//!
//! ## Task-Specific Examples
//!
//! The library supports all YOLO tasks. Export models from Python:
//!
//! ```bash
//! # Detection (default)
//! yolo export model=yolo26n.pt format=onnx
//!
//! # Segmentation
//! yolo export model=yolo26n-seg.pt format=onnx
//!
//! # Pose Estimation
//! yolo export model=yolo26n-pose.pt format=onnx
//!
//! # Classification
//! yolo export model=yolo26n-cls.pt format=onnx
//!
//! # Oriented Bounding Boxes
//! yolo export model=yolo26n-obb.pt format=onnx
//! ```
//!
//! The task is auto-detected from ONNX metadata:
//!
//! ```no_run
//! use ultralytics_inference::YOLOModel;
//!
//! # fn main() -> Result<(), Box<dyn std::error::Error>> {
//! // Segmentation model - returns masks
//! let mut model = YOLOModel::load("yolo26n-seg.onnx")?;
//! let results = model.predict("image.jpg")?;
//! if let Some(ref masks) = results[0].masks {
//!     println!("Found {} instance masks", masks.len());
//! }
//!
//! // Pose model - returns keypoints
//! let mut model = YOLOModel::load("yolo26n-pose.onnx")?;
//! let results = model.predict("image.jpg")?;
//! if let Some(ref keypoints) = results[0].keypoints {
//!     println!("Found {} poses", keypoints.len());
//! }
//!
//! // Classification model - returns probabilities
//! let mut model = YOLOModel::load("yolo26n-cls.onnx")?;
//! let results = model.predict("image.jpg")?;
//! if let Some(ref probs) = results[0].probs {
//!     println!("Top-1: class {} ({:.1}%)", probs.top1(), probs.top1conf() * 100.0);
//! }
//! # Ok(())
//! # }
//! ```
//!
//! ## Custom Configuration
//!
//! Use the builder pattern to customize inference settings:
//!
//! ```rust
//! use ultralytics_inference::InferenceConfig;
//!
//! let config = InferenceConfig::new()
//!     .with_confidence(0.5)  // Confidence threshold
//!     .with_iou(0.45)        // NMS IoU threshold
//!     .with_max_det(300)     // Max detections per image
//!     .with_imgsz(640, 640); // Input image size
//! ```
//!
//! ## Hardware Acceleration
//!
//! Enable hardware acceleration with Cargo features:
//!
//! ```bash
//! # NVIDIA CUDA
//! cargo build --release --features cuda
//!
//! # NVIDIA TensorRT
//! cargo build --release --features tensorrt
//!
//! # Apple CoreML
//! cargo build --release --features coreml
//!
//! # Intel OpenVINO
//! cargo build --release --features openvino
//! ```
//!
//! ## Results API
//!
//! The [`Results`] struct provides access to inference outputs:
//!
//! - [`Boxes`] - Bounding boxes with `xyxy()`, `xywh()`, `xyxyn()`, `xywhn()`, `conf()`, `cls()` methods
//! - [`Masks`] - Segmentation masks with `data`, `orig_shape` fields
//! - [`Keypoints`] - Pose keypoints with `xy()`, `xyn()`, `conf()` methods
//! - [`Probs`] - Classification probabilities with `top1()`, `top5()`, `top1conf()`, `top5conf()` methods
//! - [`Obb`] - Oriented bounding boxes with `xyxyxyxy()`, `xywhr()`, `conf()`, `cls()` methods
//!
//! ## Module Overview
//!
//! | Module | Description |
//! |--------|-------------|
//! | [`model`] | Core [`YOLOModel`] for loading models and running inference |
//! | [`results`] | Output types ([`Results`], [`Boxes`], [`Masks`], etc.) |
//! | [`inference`] | [`InferenceConfig`] for customizing inference settings |
//! | [`source`] | Input source handling ([`Source`], [`SourceIterator`]) |
//! | [`task`] | YOLO task types ([`Task`]: Detect, Segment, Pose, etc.) |
//! | [`mod@error`] | Error types ([`InferenceError`], [`Result`]) |
//! | [`preprocessing`] | Image preprocessing utilities |
//! | [`postprocessing`] | Detection post-processing (NMS, decode) |
//! | [`metadata`] | ONNX model metadata parsing |
//!
//! ## Feature Flags
//!
//! | Feature | Description |
//! |---------|-------------|
//! | `annotate` | Image annotation support (default) |
//! | `visualize` | Real-time window display (default) |
//! | `video` | Video file support (`FFmpeg`) |
//! | `cuda` | NVIDIA CUDA acceleration |
//! | `tensorrt` | NVIDIA `TensorRT` optimization |
//! | `coreml` | Apple `CoreML` (macOS/iOS) |
//! | `openvino` | Intel `OpenVINO` |
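//!
//! To enable an acceleration feature when depending on the crate as a library
//! (version shown is a placeholder):
//!
//! ```toml
//! [dependencies]
//! ultralytics-inference = { version = "0.1", features = ["cuda"] }
//! ```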
//!
//! ## License
//!
//! This project is dual-licensed under [AGPL-3.0](https://github.com/ultralytics/inference/blob/main/LICENSE)
//! for open-source use or [Ultralytics Enterprise License](https://ultralytics.com/license)
//! for commercial applications.
// Modules
pub mod error;
pub mod inference;
pub mod metadata;
pub mod model;
pub mod postprocessing;
pub mod preprocessing;
pub mod results;
pub mod source;
pub mod task;

// Re-export main types for convenience
pub use error::{InferenceError, Result};
pub use inference::{Device, InferenceConfig};
pub use model::YOLOModel;
pub use results::{Boxes, Keypoints, Masks, Obb, Probs, Results};
pub use source::{Source, SourceIterator};
pub use task::Task;

// Re-export metadata for advanced use
pub use metadata::ModelMetadata;

// Re-export preprocessing utilities
pub use preprocessing::*;
/// Library version.
pub const VERSION: &str = env!("CARGO_PKG_VERSION");
/// Library name.
pub const NAME: &str = env!("CARGO_PKG_NAME");