# SIDS - Actor-Based Data Collection in Rust
SIDS is an experimental actor-model library in Rust. It includes runnable examples and architecture notes so you can evaluate the model quickly.
## Getting Started

The project ships an example logging demonstration, a streaming example, and an actor-critic machine learning example. Run them with `cargo run --example <name>`; running `cargo run --example` without a name lists the available examples. For instance, the streaming example is run with:

```bash
cargo run --example source --features streaming
```
## What You Get
This project focuses on building concurrent systems in Rust with:
- Actor Model: A message-passing architecture with isolated, concurrent actors
- Streaming Pipelines: Functional reactive programming patterns for data processing
SIDS supports both Tokio-based async actors and blocking actors in the same system.
## Basic Concepts
An actor implements an Actor<MType, Response> trait that includes a receive function accepting a
message type of Message<MType, Response>.
The Message struct covers common actor behaviors (stop, response handling, and payload transport).
MType can be any type that is Send + 'static (for example, String, u32, or an enum).
Enums are often a good fit for message protocols; see the Rust documentation on enum types for more information.
Response is any enum used for replies. A generic
ResponseMessage can be used by default.
Once you choose an MType, the ActorSystem uses the same message type throughout the system.
Currently, one MType is used per actor system.
```rust
// Start a new actor system (sketch; the exact function name and type
// parameters may differ in your version of SIDS):
let mut actor_system = sids::actors::start_actor_system::<MType, Response>();
```
Starting an actor system initializes the system and runs a 'boss' actor called the Guardian with
an id of 0. You can ping the boss with `sids::actors::ping_actor_system(&actor_system);`.
You can add an actor to the system by creating a struct that implements the `Actor<MType>` trait.
All actors must receive a `Message<MType>`.
```rust
// Sketch only: module paths and trait signatures may differ in your SIDS version.
use sids::actors::{Actor, Message, ResponseMessage};
use log::info;

// You can include some attributes, like a name, if you wish.
struct MyActor {
    name: String,
}

impl Actor<String, ResponseMessage> for MyActor {
    async fn receive(&mut self, message: Message<String, ResponseMessage>) {
        info!("{} received: {:?}", self.name, message);
    }
}
```
## Streaming Module
The streaming module provides a functional reactive programming approach to data processing built on top of the actor system. It allows you to create pipelines that process data through various transformations in a non-blocking, efficient manner.
### Key Components
**Source**: Entry point for data into the pipeline. Generates or reads data and emits it downstream.
**Flow**: Transforms messages as they pass through. Can modify, filter, or enrich data in the pipeline.
**Sink**: Terminal point in a pipeline that consumes messages and performs side effects.
**Materializer**: Executes the pipeline by connecting sources, flows, and sinks within the actor system.
### Streaming Example
```bash
# Run the streaming example with the feature enabled
cargo run --example source --features streaming
```
This example wires a Source through the pipeline into a Sink and runs it on the actor system; see `examples/source.rs` for the full code.
### Building with Streaming

The streaming module is an optional feature. To build and test with streaming enabled:

```bash
# Build with streaming
cargo build --features streaming

# Run tests with streaming
cargo test --features streaming
```
## Actor-Critic Reinforcement Learning Example

```bash
# Run the actor-critic reinforcement learning example
# (example name assumed; `cargo run --example` lists the exact names)
cargo run --example actor_critic
```
This example builds a small actor-critic loop over a 3-armed bandit:
- Environment: Multi-armed bandit with 3 arms (different reward probabilities)
- Actor Agent: Learns which arm to pull (action policy)
- Critic Agent: Evaluates expected rewards (value function)
- Coordinator: Manages the training loop with message passing
The example includes:
- Multiple coordinating actors working together
- Request-response patterns using `get_response_handler`
- Temporal difference (TD) learning with actor-critic updates
- An end-to-end training loop with message passing
After 500 episodes, the actor typically shifts toward the arm with the highest reward probability (50%).
## Testing and Coverage
Current test coverage across modules:
- Streaming module: 36 tests
- Actor module: 30 tests
- Actor System module: 19 tests
- Total: 85 tests
### Running Tests

```bash
# Run all tests with streaming feature
cargo test --features streaming

# Run specific module tests (cargo filters tests by name)
cargo test actors --features streaming
```
## Configuration

This library can be configured using a TOML file. Keep your real config out of git and start from a copy of the shipped example file.
Example snippet (the section and key names below are illustrative placeholders; the real names are in the shipped example file):

```toml
# illustrative names only
[actor_system]
mailbox_size = 100
timeout_ms = 5000
```
Usage in code:

```rust
// Sketch; the exact module path, file name, and function signatures may differ.
use sids::config::SidsConfig;

let config = SidsConfig::load_from_file("config.toml")
    .expect("failed to load configuration");
let mut actor_system = start_actor_system_with_config(config);
```
## Code Coverage
The project uses cargo-llvm-cov for code coverage analysis, configured to comply with institutional file system constraints.
Quick start:

```powershell
.\run-coverage.ps1
```
This will automatically:
- Install `cargo-llvm-cov` if needed
- Run tests with coverage instrumentation
- Generate an HTML coverage report
- Open the report in your browser
- Keep all artifacts within the project directory
For more details, see docs/COVERAGE.md.
## Documentation
- GUIDE.md - Complete user guide covering error handling, response handling, supervision, and shutdown
- CHANGELOG.md - Version history and migration guides
- CONTRIBUTING.md - Contribution guidelines for developers
- docs/STABILITY.md - API stability policy and semantic versioning guarantees
- docs/architecture/ - System architecture and design documentation
- API Documentation - Complete API reference on docs.rs
## Contributing
We welcome contributions! Please see CONTRIBUTING.md for:
- Development setup
- Testing guidelines
- Code style requirements
- Pull request process
## Project Status
The project is in a stable prototype phase. Future changes are expected to focus on performance, safety, and maintenance improvements.
## Citations

The following resources were especially helpful while building this demonstration.
- Mara Bos (2023). *Rust Atomics and Locks: Low-Level Concurrency in Practice*. O'Reilly Media.
- Alice Ryhl (2021). "Actors with Tokio" [blog post].