# fpo-rust
Fast License Plate OCR inference in pure Rust. A high-performance port of fast-plate-ocr that provides fast and accurate license plate character recognition across multiple countries.
## Features

- Pure Rust implementation using `tract-onnx` for ONNX model inference
- Multiple pre-trained models for global and regional license plates
- Character-level confidence scores for result validation
- Region detection for plates with regional information (e.g., country/state)
- Efficient batch processing with automatic image resizing
- Offline support with model caching and custom model paths
- Cross-platform support (Linux, macOS, Windows)
## Installation

### As a Library

Add this to your `Cargo.toml`:
```toml
[dependencies]
fpo-rust = "0.1"
```
Or use the git repository:
```toml
[dependencies]
fpo-rust = { git = "https://github.com/alparslanahmed/fpo-rust" }
```
Check the crates.io page for the latest version.
### As a CLI Tool

Clone the repository and build:
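A typical workflow, assuming a standard Cargo project layout:

```shell
# Fetch the source and compile an optimized binary
git clone https://github.com/alparslanahmed/fpo-rust
cd fpo-rust
cargo build --release
```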
The binary will be available at `target/release/fpo-rust`.
## Quick Start

### Using as a Library
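A minimal sketch of library usage based on the API reference below; the crate path `fpo_rust` and the `OcrModel` variant name are assumptions:

```rust
use fpo_rust::{LicensePlateRecognizer, OcrModel, PlateInput};

fn main() -> anyhow::Result<()> {
    // Load a pre-trained hub model (downloaded and cached on first use)
    let recognizer = LicensePlateRecognizer::from_hub(OcrModel::CctSV2Global, false)?;

    // Run inference on a single image, requesting confidence scores
    // and stripping padding characters from the result
    let pred = recognizer.run_one(PlateInput::from("plate.jpg"), true, true)?;
    println!("Plate: {} (avg confidence {:.2})", pred.plate, pred.avg_char_confidence());
    Ok(())
}
```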
### Using the CLI

Run inference on images:

- Using a hub model (downloads the model on first run)
- Using a custom model and config
- Keeping padding characters in the output
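Illustrative invocations; only `--model` and the `benchmark` subcommand are mentioned elsewhere in this README, so the other flag names are assumptions:

```shell
# Using a hub model (downloads the model on first run)
fpo-rust --model cct-s-v2-global-model plate.jpg

# Using a custom model and config (flag names assumed)
fpo-rust --onnx my-model.onnx --config my-config.yaml plate.jpg

# Keep padding characters in output (flag name assumed)
fpo-rust --model cct-s-v2-global-model --keep-pad plate.jpg
```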
Run a benchmark:

- Benchmark with 500 iterations, batch size 1
- Custom benchmark settings
- Include pre/post-processing time in the benchmark
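Sketches of benchmark invocations; `--model` and `--batch` appear elsewhere in this README, while `--iterations` and `--include-processing` are assumed flag names:

```shell
# Benchmark with 500 iterations, batch size 1
fpo-rust benchmark --model cct-s-v2-global-model

# Custom benchmark settings
fpo-rust benchmark --model cct-s-v2-global-model --iterations 1000 --batch 8

# Include pre/post-processing time
fpo-rust benchmark --model cct-s-v2-global-model --include-processing
```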
## Available Models

### Global Models (Recommended)

- `cct-s-v2-global-model` ⭐ - Compact Convolutional Transformer Small v2, optimized for global plates
- `cct-xs-v2-global-model` - XSmall v2, faster but less accurate
- `cct-s-v1-global-model` - Small v1, previous version
- `cct-xs-v1-global-model` - XSmall v1, lightweight
### Regional Models

- `european-plates-mobile-vit-v2-model` - Optimized for European license plates
- `global-plates-mobile-vit-v2-model` - Mobile-optimized global model
- `argentinian-plates-cnn-model` - CNN model for Argentinian plates
- `argentinian-plates-cnn-synth-model` - CNN model trained with synthetic data
## Offline Usage & Custom Models

### Automatic Caching

Models are automatically downloaded and cached on first use. The default cache location is:

- Linux/macOS: `~/.cache/fpo-rust/`
- Windows: `%APPDATA%\fpo-rust\`
### Using Custom Models

If you have your own ONNX model and config file:
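A sketch using `from_files` from the API reference below; the file names are placeholders and the crate path `fpo_rust` is an assumption:

```rust
use fpo_rust::{LicensePlateRecognizer, PlateInput};
use std::path::Path;

fn main() -> anyhow::Result<()> {
    // Load a custom ONNX model together with its YAML config
    let recognizer = LicensePlateRecognizer::from_files(
        Path::new("my-model.onnx"),
        Path::new("my-config.yaml"),
    )?;
    let pred = recognizer.run_one(PlateInput::from("plate.jpg"), true, true)?;
    println!("Plate: {}", pred.plate);
    Ok(())
}
```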
### Save Models to Custom Directory

Download models to a specific directory for offline use:
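A sketch using `from_hub_to_dir`; the directory name is a placeholder and the `OcrModel` variant name is an assumption:

```rust
use fpo_rust::{LicensePlateRecognizer, OcrModel};
use std::path::Path;

fn main() -> anyhow::Result<()> {
    // Download (if needed) and load the model, keeping its files in ./models
    let recognizer = LicensePlateRecognizer::from_hub_to_dir(
        OcrModel::CctSV2Global,
        Path::new("models"),
        false, // force_download
    )?;
    // recognizer can now run inference fully offline
    let _ = recognizer;
    Ok(())
}
```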
### CLI: Download Models Offline

Download all models to a directory for offline use:

- Create a models directory
- Run the CLI with a custom model path (this will download the model if not present)
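Illustrative commands; the `--model-dir` flag name is an assumption:

```shell
# Create a models directory
mkdir -p models

# Run the CLI with a custom model path (downloads if not present)
fpo-rust --model cct-s-v2-global-model --model-dir models plate.jpg
```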
## API Reference

### Main Types

#### LicensePlateRecognizer

The main inference engine.

Methods:

- `from_hub(model: OcrModel, force_download: bool) -> Result<Self>` - Load a model from the hub with automatic caching
- `from_hub_to_dir(model: OcrModel, save_dir: &Path, force_download: bool) -> Result<Self>` - Load a model from the hub, saving it to a specific directory
- `from_files(onnx_path: impl AsRef<Path>, config_path: impl AsRef<Path>) -> Result<Self>` - Load a custom ONNX model with its config
- `run(inputs: &[PlateInput], return_confidence: bool, remove_pad_char: bool) -> Result<Vec<PlatePrediction>>` - Run inference on multiple images
- `run_one(input: PlateInput, return_confidence: bool, remove_pad_char: bool) -> Result<PlatePrediction>` - Run inference on a single image
#### PlateInput<'a>

Represents a single plate image input. It can be constructed from several sources (the exact `From` impls may differ):

```rust
// From a file path
let input = PlateInput::from("plate.jpg");

// From a Path object
let input = PlateInput::from(Path::new("plate.jpg"));

// From a pre-loaded DynamicImage
use image::open;
let img = open("plate.jpg")?;
let input = PlateInput::from(&img);
```
#### PlatePrediction

Output of a single inference.

Fields:

- `plate: String` - Recognized plate text
- `region: Option<String>` - Region/country if detected
- `region_prob: Option<f32>` - Confidence of the region (0.0-1.0)
- `char_probs: Option<Vec<f32>>` - Per-character confidence scores

Methods:

- `avg_char_confidence() -> f32` - Average confidence across all characters
#### OcrModel

Enum of the available hub models; its variants correspond to the model names listed under Available Models above.
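A sketch of what the enum might look like; every variant name here is an assumption derived from the hub model names:

```rust
// Hypothetical variant names, one per hub model
pub enum OcrModel {
    CctSV2Global,              // cct-s-v2-global-model
    CctXsV2Global,             // cct-xs-v2-global-model
    CctSV1Global,              // cct-s-v1-global-model
    CctXsV1Global,             // cct-xs-v1-global-model
    EuropeanPlatesMobileVitV2, // european-plates-mobile-vit-v2-model
    GlobalPlatesMobileVitV2,   // global-plates-mobile-vit-v2-model
    ArgentinianPlatesCnn,      // argentinian-plates-cnn-model
    ArgentinianPlatesCnnSynth, // argentinian-plates-cnn-synth-model
}
```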
## Output Format

### Library Output

```rust
let pred = recognizer.run_one(PlateInput::from("plate.jpg"), true, true)?;
println!("Plate: {}", pred.plate);
println!("Region: {:?}", pred.region);
println!("Avg char confidence: {:.2}", pred.avg_char_confidence());
```
### CLI Output

```
plate.jpg: 34PE7523 [Turkey] (98.7%) - Char Confidence: 0.99
```

- `34PE7523` - Recognized plate text
- `[Turkey]` - Detected region (if available)
- `(98.7%)` - Region confidence score
- `0.99` - Average character confidence (0.0-1.0)
## Performance
Performance varies by model size and hardware. Example benchmarks on standard hardware:
| Model | Inference Time | Memory |
|---|---|---|
| cct-xs-v2-global | ~10-15ms | ~50MB |
| cct-s-v2-global | ~20-30ms | ~100MB |
| mobile-vit-v2-global | ~30-50ms | ~150MB |
Use `fpo-rust benchmark --model <name>` to measure performance on your own hardware.
## Configuration

The library respects the `XDG_CACHE_HOME` environment variable on Linux/macOS, so you can point the model cache at a custom directory.
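For example (the cache path is a placeholder):

```shell
# Use a custom cache directory
XDG_CACHE_HOME=/path/to/cache fpo-rust --model cct-s-v2-global-model plate.jpg
```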
## Building for Production

Optimize the binary:
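Assuming a standard Cargo setup:

```shell
# Build with optimizations enabled
cargo build --release
```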
The optimized binary will be at `target/release/fpo-rust`.

**Static linking (optional):** For maximum portability, you can create a statically linked binary. This requires additional setup depending on your platform.
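On Linux, one common approach is the musl target; this is a general Rust technique rather than anything specific to this crate:

```shell
# Add the musl target and build a fully static binary
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl
```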
## Troubleshooting

### Model Download Fails

- Check your internet connection
- Verify that GitHub is not blocked on your network
- Run with `RUST_LOG=debug` for more details
### Low Confidence Scores
- Try a different model optimized for your region
- Ensure plate images have sufficient contrast and resolution
- Pre-process images (crop, rotate) if plates are at odd angles
### Out of Memory

- Use a smaller model (e.g., `cct-xs-v2-global-model`)
- Reduce the batch size in benchmarks: `--batch 1`
## Development

### Running Tests
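Assuming the standard Cargo test harness:

```shell
# Run the crate's unit and integration tests
cargo test
```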
### Running Benchmarks
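A sketch using the CLI's `benchmark` subcommand via Cargo; `--iterations` is an assumed flag name:

```shell
# Benchmark a hub model in release mode
cargo run --release -- benchmark --model cct-s-v2-global-model --iterations 500
```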
## Dependencies

- `tract-onnx` - Pure-Rust ONNX runtime
- `image` - Image loading and processing
- `serde` - Serialization framework
- `serde_yml` - YAML parsing for config files
- `ureq` - Lightweight HTTP client for model downloads
- `anyhow` - Error handling
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## References
- fast-plate-ocr - Original Python implementation
- tract - Pure Rust ONNX runtime
- ONNX - Open Neural Network Exchange format
## Contributing
Contributions are welcome! Please feel free to submit issues and pull requests.