§ToRSh FFI - Foreign Function Interface for ToRSh Deep Learning Framework
This crate provides comprehensive foreign function interface (FFI) bindings for the ToRSh deep learning framework, enabling seamless integration across multiple programming languages and platforms. Built with production-grade performance, safety, and ease of use in mind.
§Supported Languages & Platforms
§Core C API
- Direct C API: High-performance C interface with comprehensive tensor operations
- Error Handling: Advanced error reporting with 25+ specific error types
- Memory Management: Efficient memory pooling and automatic cleanup
- Thread Safety: Full thread-safe operation with internal synchronization
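Error reporting across a C boundary usually means stable integer codes plus static message strings. The sketch below shows one common pattern for exposing such an API from Rust; the variant names and `torsh_error_message` function are illustrative assumptions, not the crate's actual definitions.

```rust
use std::ffi::CStr;
use std::os::raw::c_char;

/// Error codes as they might cross the C boundary (hypothetical names;
/// the real crate defines its own error variants).
#[repr(C)]
#[derive(Clone, Copy, PartialEq, Debug)]
pub enum TorshError {
    Success = 0,
    InvalidShape = 1,
    OutOfMemory = 2,
    DeviceUnavailable = 3,
}

/// Map an error code to a static, NUL-terminated message that C callers
/// can hold without worrying about Rust lifetimes.
#[no_mangle]
pub extern "C" fn torsh_error_message(err: TorshError) -> *const c_char {
    let msg: &'static [u8] = match err {
        TorshError::Success => b"success\0",
        TorshError::InvalidShape => b"tensor shapes are incompatible\0",
        TorshError::OutOfMemory => b"allocation failed\0",
        TorshError::DeviceUnavailable => b"requested device is unavailable\0",
    };
    msg.as_ptr() as *const c_char
}

fn main() {
    // Round-trip a message through the C-style pointer.
    let ptr = torsh_error_message(TorshError::InvalidShape);
    let text = unsafe { CStr::from_ptr(ptr) }.to_str().unwrap();
    println!("{}", text); // prints "tensor shapes are incompatible"
}
```

Returning `'static` strings avoids any ownership handoff: foreign callers never need to free the message.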
§Language Bindings
- Python (pyo3): Full PyTorch-compatible API with NumPy integration
- Ruby: Native Ruby bindings with familiar syntax
- Java (JNI): Java Native Interface for enterprise applications
- C# (P/Invoke): .NET integration for Windows/Linux/macOS
- Go (CGO): Go bindings for high-performance services
- Swift: Native iOS/macOS integration with C interop
- R: Statistical computing integration for data science
- Julia: High-performance scientific computing bindings
- MATLAB (MEX): MATLAB integration for mathematical computing
- Lua: Lightweight scripting and embedding support
- Node.js (N-API): JavaScript/TypeScript server-side integration
§Architecture Overview
┌─────────────────────────────────────────────────────────────┐
│                      Language Bindings                      │
│  Python │ Java │ C# │ Go │ Swift │ Ruby │ R │ Julia │ ...   │
└────────────────────────────┬────────────────────────────────┘
                             │
┌────────────────────────────▼────────────────────────────────┐
│                        C API Layer                          │
│  • Tensor Operations       • Neural Networks                │
│  • Memory Management       • Optimizers                     │
│  • Error Handling          • Device Management              │
└────────────────────────────┬────────────────────────────────┘
                             │
┌────────────────────────────▼────────────────────────────────┐
│                      ToRSh Core Engine                      │
│  • torsh-tensor            • torsh-autograd                 │
│  • torsh-nn                • torsh-optim                    │
│  • SciRS2 Integration      • Backend Abstraction            │
└─────────────────────────────────────────────────────────────┘
§Key Features
§Performance & Efficiency
- Memory Pooling: Automatic buffer reuse reduces allocation overhead
- Batched Operations: Execute multiple operations efficiently
- Async Queue: Non-blocking operation execution
- Operation Caching: Intelligent caching of computation results
- SIMD Optimization: Vectorized operations via SciRS2 integration
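The memory-pooling idea above can be sketched in a few lines: freed buffers are kept in size buckets and handed back on the next request of the same size, skipping the allocator. This is a minimal illustration of the technique, not the crate's actual pool implementation.

```rust
use std::collections::HashMap;

/// Minimal size-bucketed buffer pool: freed buffers are retained per
/// length and reused on the next request of that length.
struct BufferPool {
    free: HashMap<usize, Vec<Vec<f32>>>,
}

impl BufferPool {
    fn new() -> Self {
        Self { free: HashMap::new() }
    }

    /// Reuse a pooled buffer when one exists, otherwise allocate fresh.
    fn get(&mut self, len: usize) -> Vec<f32> {
        match self.free.get_mut(&len).and_then(|v| v.pop()) {
            Some(mut buf) => {
                buf.clear();
                buf.resize(len, 0.0); // stays within capacity: no realloc
                buf
            }
            None => vec![0.0; len],
        }
    }

    /// Return a buffer to its size bucket for later reuse.
    fn put(&mut self, buf: Vec<f32>) {
        self.free.entry(buf.len()).or_default().push(buf);
    }
}

fn main() {
    let mut pool = BufferPool::new();
    let a = pool.get(1024);
    let a_ptr = a.as_ptr();
    pool.put(a);
    let b = pool.get(1024); // the same allocation comes back
    println!("reused allocation: {}", b.as_ptr() == a_ptr);
}
```

Bucketing by exact length keeps the sketch simple; a production pool would also cap per-bucket retention so idle buffers do not accumulate.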
§Safety & Reliability
- Memory Safety: Rust's ownership system prevents use-after-free, double-free, and dangling-pointer bugs
- Thread Safety: Safe concurrent access with proper synchronization
- Error Recovery: Comprehensive error handling with recovery suggestions
- Type Safety: Strong typing across language boundaries
- Resource Management: Automatic cleanup and leak detection
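One common way to get the thread safety and leak detection listed above is to give foreign callers opaque integer handles instead of raw pointers, with every access going through a lock-protected registry. A minimal sketch follows; the `tensor_new`/`tensor_free` names are illustrative, not the crate's actual API.

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Mutex, OnceLock};

static NEXT_ID: AtomicU64 = AtomicU64::new(1);
static REGISTRY: OnceLock<Mutex<HashMap<u64, Vec<f32>>>> = OnceLock::new();

fn registry() -> &'static Mutex<HashMap<u64, Vec<f32>>> {
    REGISTRY.get_or_init(|| Mutex::new(HashMap::new()))
}

/// Hand out an opaque integer handle instead of a raw pointer.
fn tensor_new(data: Vec<f32>) -> u64 {
    let id = NEXT_ID.fetch_add(1, Ordering::Relaxed);
    registry().lock().unwrap().insert(id, data);
    id
}

/// A stale or repeated handle is rejected instead of corrupting memory.
fn tensor_free(id: u64) -> bool {
    registry().lock().unwrap().remove(&id).is_some()
}

fn main() {
    let id = tensor_new(vec![1.0, 2.0, 3.0]);
    assert!(tensor_free(id));  // first free succeeds
    assert!(!tensor_free(id)); // second free is detected and rejected
    println!("double-free rejected");
}
```

Because the registry owns every live tensor, anything still present at shutdown is by definition leaked, which is what makes automatic leak reporting straightforward.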
§Developer Experience
- PyTorch Compatibility: Familiar API for PyTorch users
- Comprehensive Documentation: Extensive examples and API docs
- Performance Profiling: Built-in profiling and benchmarking tools
- Integration Utilities: Tools for migrating from other frameworks
- Custom Exceptions: Rich error context with actionable suggestions
§Quick Start Examples
§Python (PyTorch-like API)
import torsh

# Create tensors
x = torsh.randn([2, 3])
y = torsh.ones([3, 4])
target = torsh.randn([2, 4])

# Neural network operations
linear = torsh.Linear(3, 4)
optimizer = torsh.Adam(linear.parameters(), lr=0.001)

# Forward pass
output = linear(x)
loss = torsh.mse_loss(output, target)

# Backward pass
loss.backward()
optimizer.step()

§C API
#include <stdio.h>
#include <stdbool.h>
#include "torsh.h"

int main() {
    // Initialize ToRSh
    if (torsh_init() != TORSH_SUCCESS) {
        fprintf(stderr, "Failed to initialize ToRSh\n");
        return -1;
    }

    // Create tensors
    size_t shape[] = {2, 3};
    TorshTensor* x = torsh_tensor_randn(shape, 2);
    TorshTensor* y = torsh_tensor_ones(shape, 2);
    TorshTensor* result = torsh_tensor_zeros(shape, 2);

    // Perform operations
    TorshError status = torsh_tensor_add(x, y, result);
    if (status != TORSH_SUCCESS) {
        const char* error = torsh_get_last_error();
        fprintf(stderr, "Operation failed: %s\n", error);
    }

    // Neural network
    TorshModule* linear = torsh_linear_new(3, 2, true);
    TorshOptimizer* adam = torsh_adam_new(0.001, 0.9, 0.999, 1e-8);

    // Cleanup
    torsh_tensor_free(x);
    torsh_tensor_free(y);
    torsh_tensor_free(result);
    torsh_linear_free(linear);
    torsh_optimizer_free(adam);
    torsh_cleanup();
    return 0;
}

§Java
import com.torsh.*;

public class Example {
    public static void main(String[] args) {
        // Initialize ToRSh
        TorshNative.init();

        // Create tensor
        float[] data = {1.0f, 2.0f, 3.0f, 4.0f};
        int[] shape = {2, 2};
        Tensor tensor = new Tensor(data, shape);

        // Operations
        Tensor result = tensor.relu();
        float[] output = result.getData();

        // Cleanup
        tensor.free();
        result.free();
        TorshNative.cleanup();
    }
}

§Advanced Usage
§Performance Monitoring
use torsh_ffi::performance::{profile_operation, get_performance_stats};
// Profile an operation
let result = profile_operation("matrix_multiply", || {
// Your computation here
expensive_computation()
});
// Get statistics
let stats = get_performance_stats();
println!("Average operation time: {:.2}ms", stats.avg_time_ms);
println!("Cache hit rate: {:.1}%", stats.cache_hit_rate() * 100.0);

§Memory Management
use torsh_ffi::performance::{get_pooled_buffer, return_pooled_buffer};
// Use memory pool for efficient allocation
let buffer = get_pooled_buffer(1024);
// ... use buffer ...
return_pooled_buffer(buffer); // Return to pool for reuse

§Error Handling (Python)
import torsh
try:
result = torsh.matmul(tensor_a, tensor_b)
except torsh.ShapeError as e:
print(f"Shape mismatch: {e}")
print(f"Suggestion: {e.suggestion}")
print(f"Operation: {e.operation}")
print(f"Recoverable: {e.recoverable}")

§Production Features
- Comprehensive Testing: 95%+ test coverage across all bindings
- Benchmarking Suite: Performance regression detection
- Memory Leak Detection: Automatic resource tracking
- Cross-Platform: Windows, Linux, macOS support
- CI/CD Integration: Automated testing and deployment
- Semantic Versioning: Stable API with clear upgrade paths
§Module Structure
This crate is organized into focused modules for maintainability and clarity:
Re-exports§
Modules§
- android: Android-specific bindings with Kotlin Coroutines, Flow, NNAPI, and Jetpack Compose
- api_docs: API documentation generator for FFI bindings
- benchmark_suite: Comprehensive benchmark suite for performance testing
- binding_generator: Binding generator for automatically generating FFI bindings
- c_api: C FFI exports (base API for all language bindings)
- conversions: Unified type conversion utilities to reduce code duplication
- csharp: C# P/Invoke bindings for .NET integration
- dotnet6: .NET 6+ modern async/await and high-performance features
- error: Error types for FFI operations
- go: Go CGO bindings for Go language integration
- graalvm: GraalVM integration for polyglot JVM support and native image compilation
- ios: iOS-specific bindings with Swift Concurrency, Combine, Core ML, and Metal support
- java: Java JNI bindings for Java Native Interface integration
- julia: Julia language bindings for high-performance scientific computing
- migration_tools: Migration tools for transitioning from other ML frameworks to ToRSh
- model_optimization: Model optimization (pruning, distillation, fusion)
- numpy_compatibility: NumPy compatibility layer for seamless integration
- pandas_support: Pandas support for data manipulation and analysis
- performance: Performance optimizations and batched operations
- prelude
- python: Python bindings for ToRSh via PyO3
- quantization: Model quantization and compression for edge deployment
- r_lang: R language bindings for statistical computing integration
- ruby: Ruby FFI bindings using direct C API calls
- scipy_integration: SciPy integration for scientific computing functionality
- swift: Swift C interop bindings for iOS/macOS integration
- test_generator: Test generator for automatic test suite generation
- type_system: Unified type system for consistent cross-language type handling
- wasm: WebAssembly bindings for browser and edge deployment
- webgpu: WebGPU hardware acceleration for WASM (browser GPU support)