# ringkernel-ecosystem
Ecosystem integrations for RingKernel.
## Overview
This crate provides optional integrations with popular Rust ecosystem libraries for actors, web frameworks, data processing, and observability. All integrations are opt-in via feature flags.
**Key feature:** Persistent GPU kernel integration enables 11,327x faster command injection (~0.03 µs vs ~317 µs) compared to traditional kernel-launch patterns.
## Feature Flags
| Feature | Description |
|---|---|
| `persistent` | Core persistent GPU kernel traits (backend-agnostic) |
| `persistent-cuda` | CUDA implementation of `PersistentHandle` |
| `actix` | Actix actor framework bridge with `GpuPersistentActor` |
| `tower` | Tower service middleware with `PersistentKernelService` |
| `axum` | Axum web framework with persistent GPU state |
| `axum-ws` | WebSocket support for Axum streaming |
| `axum-sse` | Server-Sent Events support for Axum streaming |
| `grpc` | gRPC server with streaming RPCs via Tonic |
| `arrow` | Apache Arrow data processing |
| `polars` | Polars DataFrame operations |
| `candle` | Candle ML framework bridge |
| `config` | Configuration file management |
| `tracing-integration` | Enhanced tracing support |
| `prometheus` | Prometheus metrics export |
| `persistent-full` | Full persistent ecosystem (CUDA + all web frameworks) |
| `full` | All integrations enabled |
## Installation
```toml
[dependencies]
ringkernel-ecosystem = { version = "0.1", features = ["axum", "persistent"] }
```
## Persistent GPU Integration
The persistent GPU integration leverages RingKernel's persistent actor model for ultra-low-latency command injection:
| Operation | Traditional | Persistent | Speedup |
|---|---|---|---|
| Inject command | 317 µs | 0.03 µs | 11,327x |
| Mixed workload | 40.5 ms | 15.3 ms | 2.7x |
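As a quick sanity check, the speedup column follows directly from the two latency columns. The rounded table values give roughly 10,567x; the quoted 11,327x presumably comes from the unrounded benchmark measurements:

```rust
fn main() {
    // Per-command latencies from the table above, in microseconds (rounded).
    let traditional_us: f64 = 317.0;
    let persistent_us: f64 = 0.03;

    // ~10,567x from the rounded values; the README's 11,327x figure
    // would come from the unrounded benchmark numbers.
    let speedup = traditional_us / persistent_us;
    assert!(speedup > 10_000.0);
    println!("speedup: {:.0}x", speedup);
}
```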
### Core Trait

```rust
use ringkernel_ecosystem::persistent::PersistentHandle; // module path assumed
```
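The exact `PersistentHandle` API is not shown above. As a rough, hypothetical sketch of what a backend-agnostic persistent-kernel handle looks like (the trait name `PersistentHandleLike`, its methods, and the CPU mock below are all illustrative, not the crate's real API):

```rust
// Hypothetical sketch of a backend-agnostic persistent-kernel handle.
// The real `PersistentHandle` trait in ringkernel-ecosystem may differ.
pub trait PersistentHandleLike {
    type Error;

    /// Launch the persistent kernel once, up front (the expensive step).
    fn start(&mut self) -> Result<(), Self::Error>;
    /// Inject a command into the already-running kernel (the cheap step).
    fn inject(&mut self, command: u32) -> Result<(), Self::Error>;
    /// Stop the kernel and release resources.
    fn stop(&mut self) -> Result<(), Self::Error>;
}

/// CPU-only mock, used here purely to demonstrate the trait contract.
pub struct MockHandle {
    running: bool,
    pub injected: Vec<u32>,
}

impl MockHandle {
    pub fn new() -> Self {
        Self { running: false, injected: Vec::new() }
    }
}

impl PersistentHandleLike for MockHandle {
    type Error = String;

    fn start(&mut self) -> Result<(), String> {
        self.running = true;
        Ok(())
    }

    fn inject(&mut self, command: u32) -> Result<(), String> {
        if !self.running {
            return Err("kernel not running".into());
        }
        self.injected.push(command);
        Ok(())
    }

    fn stop(&mut self) -> Result<(), String> {
        self.running = false;
        Ok(())
    }
}
```

The key design point this illustrates: the kernel is launched once via `start()`, after which each `inject()` is a lightweight handoff rather than a fresh kernel launch.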
## Actix Integration
GPU-backed actor with persistent kernel:
```rust
use ringkernel_ecosystem::actix::GpuPersistentActor; // import path assumed

// Wrap a persistent handle in an Actix actor (type annotation and
// constructor arguments were elided in the source).
let handle = create_persistent_handle(/* ... */);
let actor = GpuPersistentActor::new(handle).start();

// Ultra-low-latency commands (~0.03µs)
actor.send(/* command message */).await?;
actor.send(/* command message */).await?;
```
## Axum Integration
REST API with SSE streaming:
```rust
use ringkernel_ecosystem::axum::*; // import path assumed; exact items elided

let state = /* persistent GPU state (constructor elided) */;
let app = Router::new()
    .merge(/* API routes */)          // POST /api/step, /api/impulse, GET /api/stats
    .route(/* SSE streaming route */) // SSE streaming
    .with_state(state);
```
## Tower Integration
Tower service with middleware support:
```rust
use ringkernel_ecosystem::tower::PersistentKernelService; // import path assumed
use tower::ServiceBuilder;

let service = ServiceBuilder::new()
    .timeout(/* duration */)
    .rate_limit(/* requests, per period */)
    .service(/* PersistentKernelService (constructor elided) */);
```
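Tower composes middleware by wrapping an inner service in layers. The idea can be sketched in plain Rust without the `tower` crate; everything below (`SimpleService`, `Kernel`, `Budget`) is an illustrative stand-in, not the crate's API:

```rust
/// Deliberately simplified "service": a function from request to response.
trait SimpleService {
    fn call(&mut self, req: u32) -> Result<u32, String>;
}

/// Innermost service: stand-in for injecting a command into the kernel.
struct Kernel;
impl SimpleService for Kernel {
    fn call(&mut self, req: u32) -> Result<u32, String> {
        Ok(req + 1) // placeholder for real work
    }
}

/// Middleware layer: rejects requests once a budget is exhausted,
/// loosely analogous to `rate_limit` in the snippet above.
struct Budget<S> {
    inner: S,
    remaining: u32,
}

impl<S: SimpleService> SimpleService for Budget<S> {
    fn call(&mut self, req: u32) -> Result<u32, String> {
        if self.remaining == 0 {
            return Err("budget exhausted".into());
        }
        self.remaining -= 1;
        self.inner.call(req) // delegate to the wrapped service
    }
}
```

Each layer only sees the service it wraps, which is why `ServiceBuilder` can stack timeouts, rate limits, and the kernel service without any of them knowing about the others.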
## gRPC Integration
Unary and streaming RPCs:
```rust
use ringkernel_ecosystem::grpc::*; // import path assumed; exact items elided

let service = /* gRPC service constructor (elided) */;

// Unary RPC
let response = service.run_steps(/* request */).await?;

// Server-streaming RPC
let mut stream = service.stream_responses(/* request */);
while let Some(item) = stream.next().await {
    /* handle each streamed response */
}
```
## CUDA Bridge
For NVIDIA GPU support, use `CudaPersistentHandle`:
```rust
// Import paths assumed; the CUDA types come from RingKernel's CUDA backend.
use ringkernel_ecosystem::actix::GpuPersistentActor;
use CudaDevice;           // crate path elided in source
use CudaPersistentHandle; // crate path elided in source

// Create CUDA simulation (type names and arguments elided in source)
let device = CudaDevice::new(/* ordinal */)?;
let config = /* Config */::new().with_tile_size(/* size */);
let simulation = /* Simulation */::new(/* device, config */)?;

// Create ecosystem handle
let handle = CudaPersistentHandle::new(/* simulation */);
handle.start()?;

// Use with any ecosystem integration
let actor = GpuPersistentActor::new(handle).start();
```
## Examples
Run the Axum REST API example (the example name below may differ; check the crate's `examples/` directory):

```bash
cargo run --example <axum-example> --features "axum,persistent"
```
This demonstrates:
- Setting up Axum routes for persistent kernel control
- REST endpoints for step execution, impulse injection, and stats
- Graceful shutdown handling
## Traditional Integration
For applications not using persistent kernels:
### Axum
```rust
use ringkernel_ecosystem::axum::*; // import path assumed; exact items elided

let app = Router::new()
    .route(/* path, handler */)
    .with_state(/* runtime state */);
```
### Prometheus Metrics
```rust
use ringkernel_ecosystem::prometheus::PrometheusExporter; // import path assumed

let exporter = PrometheusExporter::new(/* args elided */);
exporter.register_runtime_metrics(/* runtime */);
```
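What a Prometheus exporter ultimately produces is text in the Prometheus exposition format. As a rough illustration (pure Rust, no crates; the function and metric name below are hypothetical, not part of ringkernel-ecosystem):

```rust
/// Render a single gauge metric in the Prometheus text exposition format.
/// Illustrative only; the real exporter handles registration, labels,
/// and scrape endpoints for you.
fn render_gauge(name: &str, help: &str, value: f64) -> String {
    format!("# HELP {name} {help}\n# TYPE {name} gauge\n{name} {value}\n")
}

fn main() {
    let out = render_gauge("ringkernel_pending_commands", "Commands queued", 3.0);
    print!("{out}");
}
```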
### Configuration
```rust
use ringkernel_ecosystem::config::ConfigManager; // import path assumed

let config = ConfigManager::load(/* path */)?; // receiver assumed
let runtime = config.create_runtime().await?;
```
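The load-then-build flow above can be sketched with a minimal flat `key = value` parser in plain Rust. This is purely illustrative; the real `ConfigManager` likely reads files in a richer format (e.g. TOML) and the keys shown are made up:

```rust
use std::collections::HashMap;

/// Parse a flat `key = value` configuration, skipping blanks and `#` comments.
fn parse_config(text: &str) -> HashMap<String, String> {
    text.lines()
        .filter_map(|line| {
            let line = line.trim();
            if line.is_empty() || line.starts_with('#') {
                return None; // skip blank lines and comments
            }
            let (k, v) = line.split_once('=')?;
            Some((k.trim().to_string(), v.trim().to_string()))
        })
        .collect()
}

fn main() {
    // Hypothetical keys, for illustration only.
    let cfg = parse_config("# demo\ntile_size = 16\nbackend = cuda\n");
    assert_eq!(cfg.get("tile_size").map(String::as_str), Some("16"));
}
```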
## Testing
```bash
# Basic tests
cargo test

# With persistent features
cargo test --features persistent

# All features
cargo test --all-features
```
## License
Apache-2.0