docs.rs failed to build paddle-inference-rs-0.1.0
Please check the build logs for more information.
See Builds for ideas on how to fix a failed build, or Metadata for how to configure docs.rs builds.
If you believe this is docs.rs' fault, open an issue.
paddle-inference-rs
Rust bindings for the PaddlePaddle inference library, providing safe and ergonomic access to PaddlePaddle's C API for deep-learning inference.
Features
- Safe Rust API: Type-safe wrappers around PaddlePaddle's C inference API
- Cross-platform: Supports Windows, Linux, and macOS
- Async support: Optional async/await support for inference operations
- Memory safe: Proper resource management with RAII patterns
- Zero-cost abstractions: Minimal overhead compared to direct C API usage
Installation
Add this to your Cargo.toml:
```toml
[dependencies]
paddle-inference-rs = "0.1.0"
```
Prerequisites
You need to have the PaddlePaddle inference library installed. The library expects the following structure:
```text
paddle/
├── include/
│   ├── pd_common.h
│   ├── pd_config.h
│   ├── pd_inference_api.h
│   ├── pd_predictor.h
│   ├── pd_tensor.h
│   ├── pd_types.h
│   └── pd_utils.h
└── lib/
    ├── paddle_inference_c.dll (Windows)
    ├── paddle_inference_c.so (Linux)
    └── paddle_inference_c.dylib (macOS)
```
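One way to make the layout above visible to the linker is a Cargo build script. This is an illustrative sketch, not the crate's actual build script; the `PADDLE_LIB_DIR` environment-variable name is an invented convention, though the `cargo:` directives themselves are standard Cargo syntax.

```rust
// build.rs (illustrative sketch): point rustc at the Paddle C library.
// PADDLE_LIB_DIR is a hypothetical variable name, not defined by the crate.
fn main() {
    if let Ok(dir) = std::env::var("PADDLE_LIB_DIR") {
        // Add <dir> to the native library search path.
        println!("cargo:rustc-link-search=native={dir}");
    }
    // Link against the Paddle C inference shared library.
    println!("cargo:rustc-link-lib=dylib=paddle_inference_c");
}
```

On Linux and macOS you would typically also make the shared library discoverable at runtime, e.g. via `LD_LIBRARY_PATH` or `DYLD_LIBRARY_PATH`.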
Usage
Basic Example
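The original snippet did not survive extraction, so the following is a hypothetical sketch. The `Config`, `Predictor`, and tensor types are named in the API Overview below, but every method name and the model paths here are assumptions, not the crate's confirmed API.

```rust
// Hypothetical usage sketch; method names and paths are assumptions.
use paddle_inference_rs::{Config, Predictor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Configure the model (exported model and parameter files are placeholders).
    let mut config = Config::new();
    config.set_model("model/inference.pdmodel", "model/inference.pdiparams");
    config.enable_memory_optim();

    // Build a predictor from the config.
    let predictor = Predictor::new(&config)?;

    // Feed a 1x3x224x224 float tensor and run inference.
    let mut input = predictor.input_tensor(0)?;
    input.reshape(&[1, 3, 224, 224]);
    input.copy_from_cpu(&vec![0.0f32; 1 * 3 * 224 * 224]);

    predictor.run()?;

    // Read back the first output tensor.
    let output = predictor.output_tensor(0)?;
    println!("output shape: {:?}", output.shape());
    Ok(())
}
```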
Advanced Example with Async
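The async example was also lost in extraction; the stripped `use task;` line suggests a task module such as tokio's. The sketch below assumes the crate's optional async feature and a tokio runtime, and all method names are assumptions. Since the underlying C call is blocking, a common pattern is to run it on a worker thread via `spawn_blocking`.

```rust
// Hypothetical async sketch; assumes a tokio runtime and the crate's
// optional async feature. Method names are assumptions.
use paddle_inference_rs::{Config, Predictor};
use tokio::task;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut config = Config::new();
    config.set_model("model/inference.pdmodel", "model/inference.pdiparams");
    let predictor = Predictor::new(&config)?;

    // Run the blocking inference call on a worker thread so the
    // async executor is not stalled.
    let result = task::spawn_blocking(move || predictor.run()).await?;
    result?;
    Ok(())
}
```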
API Overview
Config
- Model configuration and optimization settings
- Hardware backend selection (CPU/GPU/XPU)
- Precision settings (FP32/FP16/INT8)
- Memory optimization options
Predictor
- Main inference interface
- Input/output tensor management
- Batch inference support
- Thread-safe operations
Tensor
- Multi-dimensional data container
- Data type support (Float32, Int32, Int64, UInt8, Int8)
- Shape manipulation and data copying
- LoD (level-of-detail) support for variable-length sequences
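Paddle's LoD convention encodes variable-length sequences as an offset array over a flattened batch. The self-contained helper below (not part of this crate) illustrates the convention: `lod = [0, 2, 5]` means the flat buffer holds one sequence of 2 rows followed by one of 3 rows.

```rust
/// Split a flattened batch into sequences using a Paddle-style LoD
/// offset array: lod = [0, 2, 5] yields slices 0..2 and 2..5.
fn split_by_lod<'a, T>(flat: &'a [T], lod: &[usize]) -> Vec<&'a [T]> {
    lod.windows(2).map(|w| &flat[w[0]..w[1]]).collect()
}

fn main() {
    let flat = [10, 20, 30, 40, 50];
    let seqs = split_by_lod(&flat, &[0, 2, 5]);
    // Two sequences: [10, 20] and [30, 40, 50].
    assert_eq!(seqs, vec![&flat[0..2], &flat[2..5]]);
    println!("{:?}", seqs); // prints [[10, 20], [30, 40, 50]]
}
```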
Building from Source
- Clone the repository
- Build with cargo
- Regenerate the bindings if needed (requires bindgen)
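The commands for these steps were stripped from the page; the sketch below shows the usual shape. The repository URL is a placeholder, and the `bindgen` feature name is an assumption.

```shell
# Hypothetical repository URL; substitute the actual one.
git clone https://github.com/<owner>/paddle-inference-rs.git
cd paddle-inference-rs

# Build the crate (assumes the PaddlePaddle C library is discoverable).
cargo build --release

# Regenerate the FFI bindings ("bindgen" feature name is an assumption).
cargo build --features bindgen
```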
Platform Support
- Windows: Requires Visual Studio build tools and PaddlePaddle Windows binaries
- Linux: Requires gcc/clang and PaddlePaddle Linux binaries
- macOS: Requires Xcode command line tools and PaddlePaddle macOS binaries
Performance
The library provides near-native performance with minimal overhead:
- <1% overhead compared to direct C API calls
- Zero-copy data transfer when possible
- Efficient memory management with RAII
- Thread-safe operations for concurrent inference
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- PaddlePaddle team for the excellent inference library
- Rust community for amazing tools and libraries
- Contributors and users of this crate
Support
If you encounter any issues or have questions:
- Check the documentation
- Search existing issues
- Create a new issue with detailed information
Version Compatibility
| paddle-inference-rs | PaddlePaddle | Rust |
|---|---|---|
| 0.1.x | 2.4+ | 1.65+ |
Made with ❤️ for the Rust and AI communities