usls is a Rust library integrated with ONNXRuntime that provides a collection of state-of-the-art models for Computer Vision and Vision-Language tasks, including:
- YOLO Models: YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOv9, YOLOv10
- SAM Models: SAM, SAM2, MobileSAM, EdgeSAM, SAM-HQ, FastSAM
- Vision Models: RTDETR, RTMO, DB, SVTR, Depth-Anything-v1-v2, DINOv2, MODNet, Sapiens
- Vision-Language Models: CLIP, BLIP, GroundingDINO, YOLO-World
## Supported Models
| Model | Task / Type | Example | CUDA f32 | CUDA f16 | TensorRT f32 | TensorRT f16 |
|---|---|---|---|---|---|---|
| YOLOv5 | Classification, Object Detection, Instance Segmentation | demo | ✅ | ✅ | ✅ | ✅ |
| YOLOv6 | Object Detection | demo | ✅ | ✅ | ✅ | ✅ |
| YOLOv7 | Object Detection | demo | ✅ | ✅ | ✅ | ✅ |
| YOLOv8 | Object Detection, Instance Segmentation, Classification, Oriented Object Detection, Keypoint Detection | demo | ✅ | ✅ | ✅ | ✅ |
| YOLOv9 | Object Detection | demo | ✅ | ✅ | ✅ | ✅ |
| YOLOv10 | Object Detection | demo | ✅ | ✅ | ✅ | ✅ |
| RTDETR | Object Detection | demo | ✅ | ✅ | ✅ | ✅ |
| FastSAM | Instance Segmentation | demo | ✅ | ✅ | ✅ | ✅ |
| SAM | Segment Anything | demo | ✅ | ✅ | | |
| SAM2 | Segment Anything | demo | ✅ | ✅ | | |
| MobileSAM | Segment Anything | demo | ✅ | ✅ | | |
| EdgeSAM | Segment Anything | demo | ✅ | ✅ | | |
| SAM-HQ | Segment Anything | demo | ✅ | ✅ | | |
| YOLO-World | Object Detection | demo | ✅ | ✅ | ✅ | ✅ |
| DINOv2 | Vision-Self-Supervised | demo | ✅ | ✅ | ✅ | ✅ |
| CLIP | Vision-Language | demo | ✅ | ✅ | ✅ Visual, ❌ Textual | ✅ Visual, ❌ Textual |
| BLIP | Vision-Language | demo | ✅ | ✅ | ✅ Visual, ❌ Textual | ✅ Visual, ❌ Textual |
| DB | Text Detection | demo | ✅ | ✅ | ✅ | ✅ |
| SVTR | Text Recognition | demo | ✅ | ✅ | ✅ | ✅ |
| RTMO | Keypoint Detection | demo | ✅ | ✅ | ❌ | ❌ |
| YOLOPv2 | Panoptic Driving Perception | demo | ✅ | ✅ | ✅ | ✅ |
| Depth-Anything | Monocular Depth Estimation | demo | ✅ | ✅ | ❌ | ❌ |
| MODNet | Image Matting | demo | ✅ | ✅ | ✅ | ✅ |
| GroundingDINO | Open-Set Detection With Language | demo | ✅ | ✅ | | |
| Sapiens | Body Part Segmentation | demo | ✅ | ✅ | | |
## ⛳️ ONNXRuntime Linking

You have two options for linking the ONNXRuntime library:

- **Option 1: Manual Linking**

  For detailed setup instructions, refer to the ORT documentation.

  For Linux or macOS users:

  1. Download the ONNX Runtime package from the Releases page.
  2. Set up the library path by exporting the `ORT_DYLIB_PATH` environment variable:

     ```shell
     export ORT_DYLIB_PATH=/path/to/onnxruntime/lib/libonnxruntime.so.1.19.0
     ```

- **Option 2: Automatic Download**

  Just use the `auto` feature:

  ```shell
  cargo run -r --example yolo --features auto
  ```
## 🎈 Demo

```shell
cargo run -r --example yolo   # blip, clip, yolop, svtr, db, ...
```
## 🥂 Integrate Into Your Own Project

- Add `usls` as a dependency to your project's `Cargo.toml`:

  ```shell
  cargo add usls
  ```

  Or use a specific commit:

  ```toml
  [dependencies]
  usls = { git = "https://github.com/jamjamjon/usls", rev = "commit-sha" }
  ```

- Follow the pipeline:

  1. Build a model with the provided `models` and `Options`.
  2. Load images, videos, and streams with `DataLoader`.
  3. Run inference.
  4. Annotate the inference results with `Annotator`.
  5. Retrieve the inference results from `Vec<Y>`.
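The pipeline above can be sketched as a single Rust program. This is a minimal sketch, not verbatim usls API: the builder method names (`with_model`, `with_saveout`), the model filename, and the `forward` signature are assumptions for illustration — consult the repository's `examples/` directory for the current interface.

```rust
// Hypothetical sketch of the usls pipeline; method names and signatures
// are assumptions -- see the repo's examples for the actual API.
use usls::{models::YOLO, Annotator, DataLoader, Options};

fn main() -> anyhow::Result<()> {
    // 1. Build the model from Options (model path is illustrative).
    let options = Options::default().with_model("yolov8m-dyn.onnx")?;
    let mut model = YOLO::new(options)?;

    // 2. Load images, a video, or a stream with DataLoader.
    let dataloader = DataLoader::default().load("./assets")?;

    // 4. The Annotator draws results and saves annotated images.
    let annotator = Annotator::default().with_saveout("YOLO-Demo");

    // 3 & 5. Run inference batch by batch; each forward pass
    // yields a Vec<Y> of results to annotate or inspect.
    for (images, _paths) in dataloader {
        let results = model.forward(&images)?;
        annotator.annotate(&images, &results);
    }
    Ok(())
}
```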
## 📌 License

This project is licensed under LICENSE.