# nexus-rt

Single-threaded, event-driven runtime primitives with pre-resolved dispatch.
nexus-rt provides the building blocks for constructing runtimes where
user code runs as handlers dispatched over shared state. It is not an
async runtime — there is no task scheduler, no work stealing, no Future
polling. Your main() is the executor.
## Philosophy
nexus-rt is a lightweight, single-threaded runtime for event-driven
systems. It provides the state container, dependency injection, lifecycle
management, and dispatch infrastructure — but no implicit executor. Your
main() is the event loop. You decide what polls, in what order, and when.
The core idea: declare what your functions need, and the framework wires
it up at build time. Write plain Rust functions with Res<T> and
ResMut<T> parameters. The framework resolves those dependencies once when
you build the handler, then dispatches with zero framework overhead — a
single pointer deref per resource, no hashing, no lookups, no allocation.
What nexus-rt is:

- A typed singleton store (`World`) with direct-pointer access
- A dependency injection system for plain functions
- Composable handler and pipeline abstractions
- Single-threaded by design — for latency, not by accident
What nexus-rt is not:

- Not an async runtime (no `Future`, no `async`/`await`)
- Not a game engine ECS (no entities, no components, no archetypes)
- Not opinionated about IO, networking, or wire protocols — bring your own
If you need an analogy: it's the Bevy SystemParam + World model,
stripped down to singletons and adapted for sequential event processing
instead of parallel frame-based simulation.
## What in the World?!

### The World
Everything in nexus-rt revolves around the World — a typed singleton
store where each registered type gets exactly one value:
```rust
use nexus_rt::WorldBuilder;

// Reconstructed example; the exact registration signature may differ.
let mut builder = WorldBuilder::new();
builder.register(0u64);          // one u64, initialized to 0
builder.register(String::new()); // one String
let mut world = builder.build(); // freeze — no more registration
```
WorldBuilder is mutable — you register types into it. build() produces
a frozen World. After that, no types can be added or removed. This
constraint enables direct pointer access: each type gets a ResourceId
that is a direct pointer to its storage, and dispatch-time access is a
single pointer deref — zero framework overhead.
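The mechanism can be sketched in plain Rust (a simplified, hypothetical stand-in — `WorldSketch` and `resolve` are not nexus-rt APIs, and nexus-rt stores a raw pointer rather than an index): the type lookup happens once at build time, and every later access is a direct slot read.

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

// Simplified sketch: TypeId → slot index resolved at build time,
// dispatch-time access by plain index (stand-in for a raw pointer).
struct WorldSketch {
    index: HashMap<TypeId, usize>, // consulted only at build time
    slots: Vec<Box<dyn Any>>,
}

#[derive(Clone, Copy)]
struct ResourceId(usize); // nexus-rt uses a direct pointer instead

impl WorldSketch {
    fn resolve<T: 'static>(&self) -> ResourceId {
        // Build-time lookup — panics if the type was never registered.
        ResourceId(self.index[&TypeId::of::<T>()])
    }
    fn get<T: 'static>(&self, id: ResourceId) -> &T {
        // Dispatch-time: no hashing, just a slot access + downcast.
        self.slots[id.0].downcast_ref::<T>().unwrap()
    }
}

fn main() {
    let mut index = HashMap::new();
    index.insert(TypeId::of::<u64>(), 0);
    let world = WorldSketch { index, slots: vec![Box::new(7u64)] };

    let id = world.resolve::<u64>(); // once, at build time
    assert_eq!(*world.get::<u64>(id), 7); // cheap, repeatable access
}
```

The freeze-after-build constraint is what makes the cached `ResourceId` safe to reuse: nothing can move or remove the storage it points at.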
Outside of handlers, you can read and write resources directly:
```rust
let mut builder = WorldBuilder::new();
builder.register(0u64);
let mut world = builder.build();

// Accessor names approximate — direct reads/writes outside dispatch:
assert_eq!(*world.get::<u64>(), 0);
*world.get_mut::<u64>() = 42;
assert_eq!(*world.get::<u64>(), 42);
```
### Res&lt;T&gt; and ResMut&lt;T&gt; — Dependency Injection
The real power is that handler functions declare their dependencies in their signatures. You don't pass resources manually — the framework resolves them:
```rust
use nexus_rt::{Res, ResMut};

// A plain function with declared dependencies (function name illustrative):
fn on_price(counter: Res<u64>, log: ResMut<String>, event: f64) {
    // read via Deref, write via DerefMut
}
```

This function declares:

- `Res<u64>` — "I need shared read access to the `u64` resource"
- `ResMut<String>` — "I need exclusive write access to the `String` resource"
- `event: f64` — "I receive an `f64` as my event" (always the last parameter)
When you convert this function into a handler, the framework resolves each
parameter against the World's registry. At dispatch time, it fetches
the resources by direct pointer — no HashMap lookup, no type checking,
just a pointer deref.
ResMut<T> provides exclusive write access via DerefMut. For change
detection, use the reactor system's interest-based notification — mark
data sources when resources change, and subscribed reactors wake
automatically.
### Handlers — Connecting Functions to the World
IntoHandler converts a plain function into a Handler — the object-safe
dispatch trait. The conversion resolves parameters; after that, calling
.run() is a direct dispatch through pre-resolved indices:
```rust
use nexus_rt::{IntoHandler, ResMut, WorldBuilder};

// Event type () chosen for illustration; run/accessor signatures approximate.
fn tick(mut count: ResMut<u64>, _event: ()) {
    *count += 1;
}

let mut builder = WorldBuilder::new();
builder.register(0u64);
let mut world = builder.build();

let mut handler = tick.into_handler(world.registry());
handler.run(&mut world, ());
handler.run(&mut world, ());
assert_eq!(*world.get::<u64>(), 2);
```
The event parameter is always last. Everything before it is resolved as a
Param from the registry. If a required resource isn't registered,
into_handler panics at build time — not at dispatch time. Fail fast.
Named functions only. Closures do not work with `IntoHandler` for arity-1+ (functions with `Param` arguments). This is a Rust type inference limitation with HRTBs and GATs — the same limitation Bevy has. Arity-0 pipeline steps (no `Param`) do accept closures.
### Plugins — Composable Registration
When you have a group of related resources, package them as a Plugin:
```rust
use nexus_rt::WorldBuilder;

let mut builder = WorldBuilder::new();
builder.install_plugin(MarketDataPlugin::default()); // plugin type illustrative
// PriceCache and RiskLimits are now registered
```
Plugins are consumed by value — fire and forget. They're for organizing registration, not for runtime behavior. Compose your system from multiple plugins, each owning a domain's resources.
### Lifecycle — Startup, Run, Shutdown
After build(), you often need to initialize state that depends on
multiple resources being present. run_startup runs a system once with
full dependency injection:
```rust
use nexus_rt::WorldBuilder;

let mut builder = WorldBuilder::new();
builder.register(PriceCache::default()); // types illustrative
builder.register(RiskLimits::default());
let mut world = builder.build();

// Runs once, with full dependency injection (init function illustrative):
world.run_startup(init_from_resources);
```
For the event loop itself, world.run() polls until a handler triggers
shutdown:
```rust
use nexus_rt::Shutdown;

// Handler triggers shutdown when done
world.run(|world| {
    // poll drivers, dispatch events ...
});
```
`world.run()` is a convenience — it's just `while !shutdown { f(self) }`.
You can also write the loop yourself if you need access to the shutdown
handle, custom exit conditions, or pre/post-iteration bookkeeping. Both
patterns are equivalent; world.run() is shorter when a shutdown flag is
all you need.
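The equivalence can be sketched in plain Rust with no nexus-rt types (`ShutdownFlag` below is a hypothetical stand-in for the crate's `Shutdown`/`ShutdownHandle` pair):

```rust
use std::cell::Cell;
use std::rc::Rc;

// Stand-in for Shutdown (handler side) and ShutdownHandle (loop side).
#[derive(Clone)]
struct ShutdownFlag(Rc<Cell<bool>>);

impl ShutdownFlag {
    fn new() -> Self { ShutdownFlag(Rc::new(Cell::new(false))) }
    fn trigger(&self) { self.0.set(true); }
    fn is_shutdown(&self) -> bool { self.0.get() }
}

// run(): nothing more than a shutdown-checked loop around your closure.
fn run(flag: &ShutdownFlag, mut f: impl FnMut()) -> u32 {
    let mut iterations = 0;
    while !flag.is_shutdown() {
        f();
        iterations += 1;
    }
    iterations
}

fn main() {
    let flag = ShutdownFlag::new();
    let handler_side = flag.clone();
    let mut events = 0;
    let n = run(&flag, || {
        events += 1;
        if events == 3 {
            handler_side.trigger(); // a "handler" requests shutdown
        }
    });
    assert_eq!(n, 3); // loop exits on the iteration after the trigger check
}
```

Writing the loop by hand, as here, is exactly where pre/post-iteration bookkeeping or custom exit conditions would go.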
Shutdown is automatically registered by WorldBuilder::build(). The
event loop owns a ShutdownHandle (obtained via world.shutdown_handle()
if needed outside world.run()). With the signals feature,
shutdown.enable_signals() registers SIGINT/SIGTERM handlers
automatically.
The full lifecycle:
```text
WorldBuilder::new()
  → register resources
  → install_plugin(plugin)
  → install_driver(installer)  → returns poller
  → build()
      → World (frozen)
          → run_startup(init_fn)   // one-shot init
          → run(|world| { ... })   // poll loop until shutdown
```
## Design
nexus-rt is heavily inspired by Bevy ECS.
Handlers as plain functions, Param for declarative dependency
injection, Res<T> / ResMut<T> wrappers, the Plugin trait for
composable registration — these
are Bevy's ideas, and in many cases the implementation follows Bevy's
patterns closely (including the HRTB double-bound trick that makes
IntoHandler work). Credit where it's due: Bevy's system model is
an excellent piece of API design.
Where nexus-rt diverges is the target workload. Bevy is built for
simulation: many entities mutated per frame, parallel schedules,
component queries over archetypes. nexus-rt is built for event-driven
systems: singleton resources, sequential dispatch, and monotonic sequence
numbers instead of frame ticks. There are no entities, no components,
no archetypes — just a typed resource store where each event advances
a sequence counter and causality is tracked per-resource.
The result is a much smaller surface area tuned for low-latency event processing rather than game-world state management.
## Architecture
```text
     Build Time                        Dispatch Time
┌──────────────────┐            ┌──────────────────────┐
│                  │            │                      │
│   WorldBuilder   │            │        World         │
│                  │            │                      │
│ ┌────────────┐   │  build()   │ ┌────────────────┐   │
│ │  Registry  │───┼───────────►│ │ ResourceSlot[] │   │
│ │ TypeId→Idx │   │            │ │  ptr → value   │   │
│ └────────────┘   │            │ └───────┬────────┘   │
│                  │            │         │            │
│  install_plugin  │            │   get(id) ~3 cyc     │
│  install_driver  │            │         │            │
└──────────────────┘            └─────────┼────────────┘
         │                                │
         │ returns Poller                 │
         ▼                                ▼
┌──────────────────┐            ┌──────────────────────┐
│  Driver Poller   │            │  poll(&mut World)    │
│                  │            │                      │
│  Pre-resolved    │───────────►│ 1. next_sequence()   │
│  ResourceIds     │            │ 2. get resources     │
│                  │            │ 3. poll IO source    │
│  Owns pipeline   │            │ 4. dispatch events   │
│  or handlers     │            │    via pipeline      │
└──────────────────┘            └──────────────────────┘
```
### Flow

- **Build** — Register resources into `WorldBuilder`. Install plugins (fire-and-forget resource registration) and drivers (returns a poller).
- **Freeze** — `builder.build()` produces an immutable `World`. All `ResourceId` values are direct pointers, valid for the lifetime of the World.
- **Poll loop** — Your code calls `driver.poll(&mut world)` in a loop. Each driver owns its event lifecycle internally: poll IO, decode events, dispatch through its pipeline, mutate world state.
- **Sequence** — Each event gets a monotonic sequence number via `world.next_sequence()`. Drivers are responsible for calling this before dispatching each event — the built-in timer and mio pollers do this automatically. `world.run()` does not advance the sequence; it is purely a shutdown-checked loop.
### Dispatch tiers
| Tier | Purpose | Overhead |
|---|---|---|
| Pipeline | Pre-resolved step chains inside drivers. The workhorse. | ~2 cycles p50 |
| Callback | Dynamic per-instance context + pre-resolved params. | ~2 cycles p50 |
| Handler | `Box<dyn Handler<E>>` for type-erased dispatch. | ~2 cycles p50 |
| Template | Pre-resolved handler stamping for re-registration. | ~1 cycle p50 (generate) |
| DAG | Monomorphized fan-out / merge data-flow graphs. | ~1-3 cycles p50 |
| FanOut / Broadcast | Static or dynamic fan-out by reference. | ~2 cycles p50 |
| Reactor | Interest-based per-instance dispatch with dedup. | ~19 cycles p50 (amortized) |
All tiers resolve Param state at build time. Dispatch-time cost is
a direct pointer deref — no hashing, no searching, no bounds check,
no Vec indirection.
See BENCHMARKS.md for full criterion numbers.
## Driver Model
Drivers are event sources. The Installer trait handles installation;
the returned poller is a concrete type with its own poll() signature.
The executor is your `main()` (installer constructors and poll signatures approximate):

```rust
use nexus_rt::{MioInstaller, TimerInstaller, WorldBuilder};

let mut wb = WorldBuilder::new();
wb.install_plugin(MarketDataPlugin::default()); // plugin illustrative
let mut timer = wb.install_driver(TimerInstaller::new());
let mut io = wb.install_driver(MioInstaller::new());
let mut world = wb.build();

let shutdown = world.shutdown_handle();
loop {
    // Each poller defines its own poll signature — NOT a trait method.
    timer.poll(&mut world /*, now */);
    io.poll(&mut world /*, timeout */);
    if shutdown.is_shutdown() { break; }
}
```
## Features
For Bevy users: Many concepts map directly — `World` (singletons only, no entities/archetypes), `Res<T>`/`ResMut<T>` (same semantics), `SystemParam` → `Param`, `IntoSystem`/`System` → `IntoHandler`/`Handler`, `Plugin` (same pattern), `Local<T>` (same). The divergence is the execution model: sequential event dispatch instead of parallel frame-based schedules.
### World — typed singleton store
Type-erased resource storage with direct ResourceId pointers.
Dispatch-time access is a single pointer deref — zero framework overhead.
Frozen after build — no inserts, no removes.
### Res / ResMut — resource parameters
Declare resource dependencies in function signatures. Res<T> for shared
reads, ResMut<T> for exclusive writes. See
Dependency Injection above.
### Optional resources
Option<Res<T>> and Option<ResMut<T>> resolve to None if the type
was not registered, rather than panicking at build time. Useful for
handlers that can operate with or without a particular resource.
### Param — build-time / dispatch-time resolution
The Param trait is the mechanism behind Res<T>, ResMut<T>,
Local<T>, and all other handler parameters. Two-phase resolution:
- **Build time** — `Param::init(registry)` resolves opaque state (e.g. a `ResourceId`) and panics if the required type isn't registered.
- **Dispatch time** — `Param::fetch(world, state)` uses the cached state to produce a reference via a single pointer deref — zero framework overhead.
Built-in impls: Res<T>, ResMut<T>, Option<Res<T>>,
Option<ResMut<T>>, Local<T>, RegistryRef, (), and tuples up to
8 params.
Access conflicts are caught at build time. If two parameters in the
same handler would borrow the same resource (e.g. Res<T> + ResMut<T>,
or two ResMut<T> for the same T), into_handler / .then() panics
with "conflicting access". Pipeline and DAG steps enforce the same check
per-step. This is a build-time guarantee — dispatch never hits a conflict.
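The conflict check can be sketched in plain Rust (hypothetical types — `Access` and `check_conflicts` are not nexus-rt APIs): collect each parameter's declared access and reject overlapping borrows before anything dispatches.

```rust
use std::any::TypeId;

// Hypothetical access descriptor: which resource type a param borrows, and how.
#[derive(Clone, Copy)]
enum Access { Read(TypeId), Write(TypeId) }

fn type_of(a: Access) -> TypeId {
    match a { Access::Read(t) | Access::Write(t) => t }
}

// Err on Res<T>+ResMut<T> or ResMut<T>+ResMut<T> for the same T;
// any number of Res<T> readers is fine.
fn check_conflicts(params: &[Access]) -> Result<(), &'static str> {
    for (i, &a) in params.iter().enumerate() {
        for &b in &params[i + 1..] {
            let same_type = type_of(a) == type_of(b);
            let any_write = matches!(a, Access::Write(_)) || matches!(b, Access::Write(_));
            if same_type && any_write {
                return Err("conflicting access");
            }
        }
    }
    Ok(())
}

fn main() {
    let u64_t = TypeId::of::<u64>();
    let str_t = TypeId::of::<String>();
    // Res<u64> + ResMut<String>: fine
    assert!(check_conflicts(&[Access::Read(u64_t), Access::Write(str_t)]).is_ok());
    // Res<u64> + ResMut<u64>: rejected at build time
    assert!(check_conflicts(&[Access::Read(u64_t), Access::Write(u64_t)]).is_err());
}
```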
### Handler / IntoHandler — fn-to-handler conversion
IntoHandler converts a plain fn into a Handler trait object.
Event E is always the last parameter; everything before it is resolved
as Param from a Registry. Named functions only — closures do not
work with IntoHandler due to Rust's HRTB inference limitations with
GATs. See Handlers
above.
### Pipeline — pre-resolved processing chains
Typed composition chains where each step is a named function with
Param dependencies resolved at build time.
```rust
let reg = world.registry();

// Step functions are named fns; builder entry point approximate.
let mut pipeline = Pipeline::new()
    .then(validate, reg)    // Order → Result<Order, Error>
    .and_then(enrich, reg)  // Order → Result<Order, Error>
    .catch(log_error, reg)  // Error → () (side effect)
    .map(to_receipt, reg)   // Order → Receipt
    .build();               // → Pipeline<Order, _> (concrete)

pipeline.run(&mut world, order);
```
Option and Result combinators (.map(), .and_then(), .catch(),
.filter(), .unwrap_or(), etc.) enable typed flow control without
runtime overhead. .splat() destructures a tuple output (2-5 elements)
into individual function arguments for the next step — see
Splat below. Pipeline implements
Handler<In>, so it can be boxed or stored alongside other handlers.
### Batch pipeline — per-item processing over a buffer
build_batch(capacity) produces a BatchPipeline that owns a
pre-allocated input buffer. Each item flows through the same chain
independently — errors are handled per-item, not per-batch.
```rust
let reg = world.registry();

// Builder entry point and capacity value approximate.
let mut batch = Pipeline::new()
    .then(validate, reg)    // Order → Result<Order, Error>
    .catch(log_error, reg)  // handle error, continue batch
    .map(enrich, reg)       // runs for valid items only
    .then(submit, reg)
    .build_batch(1024);

// Driver fills input buffer
batch.input_mut().extend_from_slice(&orders);
batch.run(&mut world); // drains buffer, no allocation
```
No intermediate buffers between steps. The compiler monomorphizes the per-item chain identically to the single-event pipeline.
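The per-item flow can be sketched in plain Rust (no nexus-rt types; `validate`, `enrich`, and `run_batch` are illustrative): each item runs through the whole chain before the next item starts, and errors are handled per-item rather than aborting the batch.

```rust
fn validate(x: i32) -> Result<i32, String> {
    if x >= 0 { Ok(x) } else { Err(format!("negative: {x}")) }
}

fn enrich(x: i32) -> i32 { x * 10 }

fn run_batch(input: &mut Vec<i32>, errors: &mut Vec<String>) -> Vec<i32> {
    let mut out = Vec::new();
    for item in input.drain(..) {
        // Whole chain per item — no buffer between "steps".
        match validate(item) {
            Ok(v) => out.push(enrich(v)),
            Err(e) => errors.push(e), // per-item catch; the batch continues
        }
    }
    out
}

fn main() {
    let mut input = vec![1, -2, 3];
    let mut errors = Vec::new();
    let out = run_batch(&mut input, &mut errors);
    assert_eq!(out, vec![10, 30]); // -2 was caught, 1 and 3 passed through
    assert_eq!(errors.len(), 1);
    assert!(input.is_empty()); // buffer drained
}
```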
### DAG Pipeline — fan-out, merge, and data-flow graphs
DagBuilder builds a monomorphized data-flow graph where topology is
encoded in the type system. After monomorphization the entire DAG is
a single flat function — all values are stack locals, no arena, no
vtable dispatch.
```rust
use nexus_rt::{DagBuilder, WorldBuilder};

// Step functions (names illustrative); arms borrow the fork output.
fn source(e: u32) -> u32 { e }
fn add_one(x: &u32) -> u32 { x + 1 }
fn triple(x: &u32) -> u32 { x * 3 }
fn sum(a: &u32, b: &u32) -> u32 { a + b }

let mut wb = WorldBuilder::new();
wb.register(0u32);
let mut world = wb.build();
let reg = world.registry();

let mut dag = DagBuilder::new() // entry point approximate
    .root(source, reg)
    .fork()
    .arm(|a| a.then(add_one, reg))
    .arm(|a| a.then(triple, reg))
    .merge(sum, reg)
    .then(store_result, reg) // writes into ResMut<u32>; fn illustrative
    .build();

dag.run(&mut world, 10);
// root: 10, arm_a: 11, arm_b: 30, merge: 41
assert_eq!(*world.get::<u32>(), 41); // accessor approximate
```
Fan-out arms borrow the fork output by reference — no Clone needed.
Option and Result combinators (.map(), .and_then(), .catch(),
etc.) work on both the main chain and within arms. Dag implements
Handler<E>, so it can be boxed or stored alongside other handlers.
For linear chains without fan-out, prefer Pipeline.
### DAG combinator quick reference

| Category | Combinator | Signature | Effect |
|---|---|---|---|
| Topology | `.root(fn, reg)` | `E → T` | Entry point — takes event by value |
| | `.then(fn, reg)` | `&T → U` | Chain step — input by reference |
| | `.fork()` | | Begin fan-out — arms observe `&T` |
| | `.arm(\|a\| a.then(...))` | | Build one arm of a fork |
| | `.merge(fn, reg)` | `&A, &B → T` | Combine arm outputs |
| | `.join()` | | Terminate fork without merge (all arms → `()`) |
| Flow control | `.guard(fn, reg)` | `&T → Option<T>` | Wrap in Option via predicate |
| | `.tap(fn, reg)` | `&T → &T` | Observe without consuming |
| | `.route(pred, reg, arm_t, arm_f)` | `&T → U` | Binary conditional routing |
| | `.tee(arm)` | `&T → &T` | Side-effect arm, chain continues |
| | `.scan(init, fn, reg)` | `&mut Acc, &T → U` | Stateful transform with accumulator |
| | `.dedup()` | `T → Option<T>` | Suppress consecutive duplicates |
| `Option<T>` | `.map(fn, reg)` | `&T → U` | Map inner value (Some only) |
| | `.filter(fn, reg)` | `&T → Option<T>` | Keep on true, None on false |
| | `.inspect(fn, reg)` | `&T → &T` | Observe Some values |
| | `.and_then(fn, reg)` | `&T → Option<U>` | Flat-map inner value |
| | `.on_none(fn, reg)` | | Side effect on None |
| | `.ok_or(fn, reg)` | `→ Result<T, E>` | Convert None to Err |
| | `.ok_or_else(fn, reg)` | `→ Result<T, E>` | Convert None to Err (produced) |
| | `.unwrap_or(default)` | `→ T` | Unwrap with fallback |
| | `.unwrap_or_else(fn, reg)` | `→ T` | Unwrap with produced fallback |
| `Result<T, E>` | `.map(fn, reg)` | `&T → U` | Map Ok value |
| | `.and_then(fn, reg)` | `&T → Result<U, E>` | Flat-map Ok value |
| | `.catch(fn, reg)` | `E → ()` | Handle Err, continue with Ok |
| | `.map_err(fn, reg)` | `E → E2` | Transform error type |
| | `.or_else(fn, reg)` | `E → Result<T, E2>` | Recover from error |
| | `.inspect(fn, reg)` | `&T → &T` | Observe Ok values |
| | `.inspect_err(fn, reg)` | `&E → &E` | Observe Err values |
| | `.ok()` | `→ Option<T>` | Discard Err |
| | `.unwrap_or(default)` | `→ T` | Unwrap with fallback |
| | `.unwrap_or_else(fn, reg)` | `→ T` | Unwrap Err with produced fallback |
| Bool | `.not()` | `bool → bool` | Logical NOT |
| | `.and(fn, reg)` | `bool → bool` | Short-circuit AND |
| | `.or(fn, reg)` | `bool → bool` | Short-circuit OR |
| | `.xor(fn, reg)` | `bool → bool` | Logical XOR |
| Tuple | `.splat()` | `&(A, B, ...) → (&A, &B, ...)` | Destructure tuple so next `.then()` sees `&A, &B, ...` args |
| Terminal | `.dispatch(handler)` | `&T → ()` | Hand off to a Handler |
| | `.cloned()` | `&T → T` | Clone reference to owned |
| | `.build()` | | Finalize into `Dag<E>` |
All combinators accepting functions resolve Param dependencies at build
time via IntoStep, IntoRefStep, or IntoProducer — named functions
get direct-pointer access. Arity-0 closures work everywhere. Raw
&mut World closures are available as an escape hatch via Opaque.
### Splat — tuple destructuring
Pipeline and DAG steps follow a single-value-in, single-value-out convention.
When a step naturally produces multiple outputs (e.g. splitting an order into
an ID and a price), .splat() destructures the tuple so the next step
receives individual arguments instead of the whole tuple:
```rust
// Step and builder names illustrative.

// Pipeline (by value): fn(Params..., A, B) -> Out
Pipeline::new()
    .then(split_order, reg) // Order → (OrderId, f64)
    .splat()                // (OrderId, f64) → individual args
    .then(submit, reg)      // receives OrderId, f64 separately
    .build();

// DAG (by reference): fn(Params..., &A, &B) -> Out
DagBuilder::new()
    .root(split_order, reg)
    .splat()                // (OrderId, f64) → &OrderId, &f64
    .then(submit_ref, reg)
    .build();
```
Supported for tuples of 2-5 elements. Beyond 5 arguments, use a named struct — if a combinator stage needs that many inputs, the data likely deserves its own type.
### FanOut / Broadcast — handler-level fan-out
FanOut dispatches the same event by reference to a fixed set of
handlers. Zero allocation, concrete types, monomorphizes to direct
calls. Macro-generated for arities 2-8.
Broadcast is the dynamic variant — stores Vec<Box<dyn RefHandler<E>>>
for runtime-determined handler counts.
```rust
use nexus_rt::{fan_out, IntoHandler, ResMut, WorldBuilder};

// Both handlers receive the event by reference (&u32); names illustrative.
fn write_a(mut a: ResMut<u32>, e: &u32) { *a = *e; }
fn write_b(mut b: ResMut<u64>, e: &u32) { *b = *e as u64; }

let mut builder = WorldBuilder::new();
builder.register(0u32);
builder.register(0u64);
let mut world = builder.build();

let h1 = write_a.into_handler(world.registry());
let h2 = write_b.into_handler(world.registry());

let mut fan = fan_out!(h1, h2); // macro arguments approximate
fan.run(&mut world, &7u32);
assert_eq!(*world.get::<u32>(), 7); // accessors approximate
assert_eq!(*world.get::<u64>(), 7);
```
Handlers inside combinators receive &E. Use Cloned or Owned
adapters for handlers that expect owned events.
For fan-out with merge (data flowing back together), use DagBuilder.
### Change detection
Per-resource change detection has been replaced by the reactor system
(behind the reactors feature). Event handlers call ReactorNotify::mark(source)
to signal which data changed. Subscribed reactors wake automatically with
dedup — per-instrument, per-strategy granularity.
```rust
// Setup: register data sources and spawn reactors (keys and reactor
// values illustrative; signatures approximate)
let btc_md = world.register_source(btc_instrument);
world.spawn_reactor(quote_reactor).subscribe(btc_md);

// Event handler: mark which data changed
// notify.mark(btc_md);   // via ResMut<ReactorNotify>

// Post-frame: dispatch woken reactors (deduped)
world.dispatch_reactors();
```
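The wake/dedup mechanism can be sketched in plain Rust (hypothetical structure — nexus-rt's reactor storage differs): marking a source wakes each subscribed reactor at most once, and the post-frame dispatch drains the wake list.

```rust
// Simplified interest-based notification with O(1) dedup.
struct Reactors {
    subscriptions: Vec<Vec<usize>>, // subscriptions[source] = interested reactors
    woken: Vec<bool>,               // dedup bitmap: wake each reactor at most once
    wake_list: Vec<usize>,
}

impl Reactors {
    fn new(sources: usize, reactors: usize) -> Self {
        Reactors {
            subscriptions: vec![Vec::new(); sources],
            woken: vec![false; reactors],
            wake_list: Vec::new(),
        }
    }
    fn subscribe(&mut self, reactor: usize, source: usize) {
        self.subscriptions[source].push(reactor);
    }
    // Called by event handlers: "this data source changed".
    fn mark(&mut self, source: usize) {
        for i in 0..self.subscriptions[source].len() {
            let r = self.subscriptions[source][i];
            if !self.woken[r] {
                self.woken[r] = true; // dedup: second mark is a no-op
                self.wake_list.push(r);
            }
        }
    }
    // Post-frame: each woken reactor runs exactly once.
    fn dispatch(&mut self) -> Vec<usize> {
        for &r in &self.wake_list { self.woken[r] = false; }
        std::mem::take(&mut self.wake_list)
    }
}

fn main() {
    let mut rs = Reactors::new(2, 2);
    rs.subscribe(0, 0); // reactor 0 ← source 0
    rs.subscribe(1, 0); // reactor 1 ← source 0
    rs.subscribe(1, 1); // reactor 1 ← source 1
    rs.mark(0);
    rs.mark(1); // reactor 1 already woken — deduped
    assert_eq!(rs.dispatch(), vec![0, 1]);
    assert!(rs.dispatch().is_empty()); // nothing marked since
}
```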
### Local — per-handler state
Local<T> is state stored inside the handler instance, not in World.
Initialized with Default::default() at handler creation time. Each
handler instance gets its own independent copy — two handlers created
from the same function have separate Local values.
```rust
use nexus_rt::{IntoHandler, Local};

// Each handler instance gets its own counter (function name illustrative).
fn count_events(mut count: Local<u64>, _event: ()) {
    *count += 1;
}

let mut handler_a = count_events.into_handler(world.registry());
let mut handler_b = count_events.into_handler(world.registry());
handler_a.run(&mut world, ()); // handler_a local=1
handler_b.run(&mut world, ()); // handler_b local=1 (independent)
handler_a.run(&mut world, ()); // handler_a local=2
```
### Callback — context-owning handlers
Callback<C, F, Params> is a handler with per-instance owned context.
Use it when each handler instance needs private state that isn't shared
via World — per-timer metadata, per-connection codec state, protocol
state machines.
Convention: fn handler(ctx: &mut C, params..., event: E) — context
first, Param-resolved resources in the middle, event last.
```rust
// Context first, Param-resolved resources in the middle, event last.
// ConnState and the into_callback argument order are illustrative.
fn on_timeout(ctx: &mut ConnState, mut log: ResMut<String>, event: u64) {
    ctx.timeouts += 1;
    log.push_str("timeout\n");
}

let mut cb = on_timeout.into_callback(world.registry(), ConnState::default());
cb.run(&mut world, 42);

// Context is pub — accessible outside dispatch
assert_eq!(cb.context.timeouts, 1); // field name approximate
```
### HandlerTemplate / CallbackTemplate — resolve once, stamp many
When handlers are created repeatedly on the hot path — IO readiness
re-registration, timer rescheduling, connection accept loops — each
into_handler(registry) call pays for HashMap lookups to resolve the
same ResourceId values every time.
Templates resolve parameters once, then generate() stamps out
handlers by copying pre-resolved state — a flat memcpy vs ~20-70 cycles
of HashMap lookups for into_handler.
A [Blueprint] declares the event and parameter types. The template
resolves them against the registry once:
```rust
use nexus_rt::{HandlerTemplate, ResMut, WorldBuilder};

// Handler and Blueprint declaration illustrative (see the macro below).
fn bump(mut count: ResMut<u64>, _event: ()) { *count += 1; }

let mut builder = WorldBuilder::new();
builder.register(0u64);
let mut world = builder.build();

// Resolve params against the registry once (constructor approximate):
let template = HandlerTemplate::new(bump, world.registry());

// Stamp out handlers — no HashMap lookups, just Copy.
let mut h1 = template.generate();
let mut h2 = template.generate();
h1.run(&mut world, ());
h2.run(&mut world, ());
assert_eq!(*world.get::<u64>(), 2); // accessor approximate
```
For context-owning handlers, CallbackTemplate works the same way —
each generate(ctx) takes an owned context value:
```rust
// Each generate(ctx) takes an owned context value (constructor approximate;
// ConnState illustrative):
let cb_template = CallbackTemplate::new(on_timeout, world.registry());
let mut cb = cb_template.generate(ConnState::default());
cb.run(&mut world, 42);
assert_eq!(cb.context.timeouts, 1); // field name approximate
```
Convenience macros reduce Blueprint boilerplate:
```rust
use nexus_rt::handler_blueprint;

// Declares a Blueprint for an event + param list (see crate docs for the syntax):
handler_blueprint!(/* ... */);
```
Constraints:

- `P::State: Copy` — excludes `Local<T>` with non-Copy state (incompatible with template stamping). All World-backed params (`Res`, `ResMut`, `Option` variants) have `State = ResourceId`, which is `Copy`.
- Zero-sized callables only — named functions and captureless closures. Capturing closures and function pointers are rejected at compile time.
Handler state sizes (for capacity planning with inline storage):
ResourceId is pointer-sized (8 bytes on 64-bit). Each resource param
(Res<T>, ResMut<T>, Option<Res<T>>, Option<ResMut<T>>) stores
one ResourceId (8 bytes). Handler base overhead is 16 bytes (&str
name). Callbacks add the context size.
| Handler type | 0 params | 1 param | 2 params | 4 params | 8 params |
|---|---|---|---|---|---|
| HandlerFn (no ctx) | 16 B | 24 B | 32 B | 48 B | 80 B |
| Callback (8 B ctx) | 24 B | 32 B | 40 B | 56 B | 88 B |
Formula: 16 + (8 × params) + context_size. All fit comfortably
within 256-byte inline buffers (FlatVirtual, InlineTimerWheel).
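As arithmetic, the table rows are just the stated formula (helper name illustrative):

```rust
// Handler state size per the formula above: 16-byte base (&str name),
// 8 bytes (one ResourceId) per resource param, plus any context.
fn handler_size(params: usize, context_size: usize) -> usize {
    16 + 8 * params + context_size
}

fn main() {
    // HandlerFn rows (no context)
    assert_eq!(handler_size(0, 0), 16);
    assert_eq!(handler_size(4, 0), 48);
    assert_eq!(handler_size(8, 0), 80);
    // Callback rows (8-byte context)
    assert_eq!(handler_size(2, 8), 40);
    assert_eq!(handler_size(8, 8), 88);
    // All comfortably under a 256-byte inline buffer
    assert!(handler_size(8, 8) <= 256);
}
```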
### RegistryRef — runtime handler creation
RegistryRef is a Param that provides read-only access to the
Registry during handler dispatch. Enables handlers to create new
handlers at runtime via IntoHandler::into_handler or
IntoCallback::into_callback.
### Installer — event source installation
Installer is the install-time trait for event sources. The installer
registers its resources into WorldBuilder and returns a concrete
poller whose poll() method drives the event lifecycle. See the
Driver Model section for the full pattern.
### Timer driver (feature: timer)
Integrates nexus_timer::Wheel as a driver. TimerInstaller registers
the wheel into WorldBuilder and returns a TimerPoller.
- `TimerPoller::poll(world, now)` drains expired timers and fires handlers
- Handlers reschedule themselves via `ResMut<TimerWheel<S>>`
- `Periodic` helper for recurring timers
- Inline storage variants behind `smartptr` feature: `InlineTimerWheel`, `FlexTimerWheel`
### Mio driver (feature: mio)
Integrates mio as an IO driver. MioInstaller registers the
MioDriver (wrapping mio::Poll + handler slab) and returns a
MioPoller.
- `MioPoller::poll(world, timeout)` polls for readiness and fires handlers
- Move-out-fire pattern: handler is removed from slab, fired, and must re-insert itself to receive more events
- Stale tokens (already removed) are silently skipped
- Inline storage variants behind `smartptr` feature: `InlineMio`, `FlexMio`
### Virtual / FlatVirtual / FlexVirtual — storage aliases
Type aliases for type-erased handler storage:
```rust
use nexus_rt::Virtual;

// Heap-allocated (default); Event type illustrative
let handler: Virtual<Event> = Box::new(my_handler.into_handler(registry));

// Behind "smartptr" feature — inline storage via nexus-smartptr
// use nexus_rt::FlatVirtual;
// let handler: FlatVirtual<Event> = flat!(my_handler.into_handler(registry));
```
Virtual<E> for heap-allocated. FlatVirtual<E> for fixed inline
(panics if handler doesn't fit). FlexVirtual<E> for inline with
heap fallback.
### System / IntoSystem — reconciliation logic
Handlers react to individual events. But some computations need to run after a batch of events has been processed — recomputing a theoretical price after market data updates, checking risk limits after fills, etc. These are reconciliation passes: they read the current state of the world, decide if anything changed, and propagate downstream if so.
System is the dispatch trait for this. Distinct from Handler<E>,
systems take no event parameter and return bool to control downstream
propagation in a DAG scheduler.
| | Handler | System |
|---|---|---|
| Trigger | Per-event | Per-scheduler-pass |
| Event param | Yes (`E`) | No |
| Return | `()` | `bool` |
| Purpose | React | Reconcile |
IntoSystem accepts two signatures:

- `fn(params...) -> bool` — returns propagation decision for scheduler DAGs
- `fn(params...)` — void return, always propagates (`true`). Useful for `run_startup` and systems that unconditionally propagate.
```rust
// Bool-returning: controls DAG propagation (types illustrative)
fn compute_theo(md: Res<MarketData>, mut theo: ResMut<Theo>) -> bool {
    // ... recompute from current state ...
    true // "my outputs changed — run downstream"
}

// Void-returning: always propagates
fn log_state(md: Res<MarketData>) { /* ... */ }
```

Convert via `IntoSystem` (same HRTB pattern as `IntoHandler`):

```rust
use nexus_rt::IntoSystem;

let mut system = compute_theo.into_system(world.registry());
let changed = system.run(&mut world); // run signature approximate
```
### DAG Scheduler — topological system execution
SchedulerInstaller builds a DAG of Systems executed in topological order.
Root systems (no upstreams) always run. Non-root systems run only if at
least one upstream returned true (OR semantics).
```rust
use nexus_rt::SchedulerInstaller;

// System fns and method signatures approximate.
let mut installer = SchedulerInstaller::new();
let theo = installer.add(compute_theo);
let quotes = installer.add(compute_quotes);
let risk = installer.add(check_risk);
installer.after(quotes, theo); // quotes runs after theo
installer.after(risk, quotes); // risk runs after quotes

let mut scheduler = wb.install_driver(installer);
let mut world = wb.build();

// In event loop: run scheduler after event processing
let systems_run = scheduler.run(&mut world);
```
Propagation is tracked via a u64 bitmask (one bit per system), limiting
the scheduler to MAX_SYSTEMS (64) systems. Systems return bool to
control downstream execution — true means "my outputs changed, run
downstream." For per-item change detection, use the reactor system.
### Reactor system (feature: reactors)
Interest-based per-instance dispatch with O(1) dedup. Replaces per-resource change detection with explicit, fine-grained notification.
```rust
// Setup — auto-registered by WorldBuilder::build()
// (keys, reactor values, and signatures approximate)
let btc_md = world.register_source(btc_instrument);
world.spawn_reactor(quote_reactor).subscribe(btc_md);

// Event handler marks data source:
// notify.mark(btc_md);   // via ResMut<ReactorNotify>

// Post-frame dispatch (deduped — each reactor runs at most once)
world.dispatch_reactors();
```
- `ReactorNotify` — World resource: reactor storage, data source fan-out, registration. Event handlers mark via `ResMut`.
- `SourceRegistry` — maps domain keys (`InstrumentId`, `StrategyId`, tuples) to `DataSource` values for runtime lookup.
- `DeferredRemovals` — reactors self-remove by pushing their token. Cleanup runs after dispatch completes.
- `PipelineReactor` — reactor body is a `CtxPipeline` or `CtxDag`. Pipeline internals fully monomorphized; one `Box` per reactor.
- ~19 cycles per reactor (amortized at 50 reactors). See BENCHMARKS.md.
### Startup & Lifecycle
Shutdown is an interior-mutable flag automatically registered by
WorldBuilder::build(). Handlers trigger shutdown via Res<Shutdown>;
the event loop checks via ShutdownHandle:
```rust
use nexus_rt::{Res, Shutdown, WorldBuilder};

// Handler side
fn on_done(shutdown: Res<Shutdown>, _event: ()) {
    shutdown.request(); // method name approximate
}

// Event loop side
let mut world = WorldBuilder::new().build();
let shutdown = world.shutdown_handle();
while !shutdown.is_shutdown() {
    // poll drivers ...
}
```
With the signals feature, ShutdownHandle::enable_signals() registers
SIGINT/SIGTERM handlers (Linux only) that flip the shutdown flag
automatically.
### CatchAssertUnwindSafe — panic resilience
Wraps a handler to catch panics during run(), ensuring the handler is
never lost during move-out-fire dispatch (timer wheels, IO slabs). The
caller asserts that the handler and resources can tolerate partial writes.
```rust
use nexus_rt::CatchAssertUnwindSafe;

let handler = tick.into_handler(world.registry());
let guarded = CatchAssertUnwindSafe::new(handler); // constructor approximate
let mut boxed: Box<dyn Handler<u64>> = Box::new(guarded); // event type illustrative

// Panics inside run() are caught — handler survives for re-dispatch
```
### Testing — TestHarness and TestTimerDriver
TestHarness provides isolated handler testing without wiring up drivers.
It owns a World and auto-advances the sequence counter before each dispatch.
```rust
use nexus_rt::{IntoHandler, ResMut, TestHarness, WorldBuilder};

fn accumulate(mut total: ResMut<u64>, event: u64) { *total += event; }

let mut builder = WorldBuilder::new();
builder.register(0u64);
let mut harness = TestHarness::new(builder); // constructor approximate

let mut handler = accumulate.into_handler(harness.registry());
harness.dispatch(&mut handler, 5);
harness.dispatch(&mut handler, 7);
assert_eq!(*harness.world().get::<u64>(), 12); // accessors approximate
```
TestTimerDriver (feature: timer) wraps TimerPoller with virtual time
control — advance(duration), set_now(instant), poll(world) — for
deterministic timer testing without wall-clock waits.
### ByRef / Cloned / Owned — event-type adapters
Adapters bridge between owned and reference event types:
- `ByRef<H>` — wraps `Handler<&E>` to implement `Handler<E>` (borrow before dispatch)
- `Cloned<H>` — wraps `Handler<E>` to implement `Handler<&E>` (clone before dispatch)
- `Owned<H, E>` — wraps `Handler<E::Owned>` to implement `Handler<&E>` via `ToOwned`
Primary use: including owned-event handlers in reference-based contexts
(FanOut, Broadcast), or vice versa.
```rust
use nexus_rt::{Cloned, IntoHandler, Owned, ResMut};

// Handler expects owned u32 (function names illustrative)
fn process(mut total: ResMut<u64>, event: u32) { *total += event as u64; }

// Adapt for &u32 context (FanOut dispatches by reference)
let h = process.into_handler(world.registry());
let adapted = Cloned(h); // now implements Handler<&u32>

// For &str → String:
fn append(mut log: ResMut<String>, event: String) { log.push_str(&event); }
let h = append.into_handler(world.registry());
let adapted = Owned::new(h); // implements Handler<&str>; constructor approximate
```
Adapt<F, H> is a separate adapter for wire-format decoding: F: FnMut(Wire) -> Option<T>
filters and transforms before dispatching to Handler<T>.
## When to Use What

| Situation | Use | Why |
|---|---|---|
| One-time setup, test harness | `IntoHandler` / `IntoCallback` | Simple, direct. Construction cost paid once. |
| Pipeline steps inside a driver | `Pipeline` / `BatchPipeline` | Zero-cost monomorphized chains, typed flow control. |
| IO re-registration (accept, echo) | `HandlerTemplate` / `CallbackTemplate` | Handler recreated every event — template eliminates per-event HashMap lookups. |
| Timer rescheduling | `HandlerTemplate` / `CallbackTemplate` | Same pattern — recurring handlers should not pay construction cost repeatedly. |
| Type-erased handler storage | `Box<dyn Handler<E>>` / `Virtual<E>` | When you need heterogeneous collections (driver slabs, timer wheels). |
| Per-instance private state | `Callback` (via `IntoCallback`) or `CallbackTemplate` | Context-owning handlers for connection state, timer metadata, etc. |
| Composable resource registration | `Plugin` | Fire-and-forget, consumed by `WorldBuilder`. |
| Fan-out with merge | `DagBuilder` → `Dag` | Monomorphized data-flow graph. Zero vtable, all stack locals. |
| Static fan-out (known count) | `FanOut` / `fan_out!` | Dispatch `&E` to N handlers. Zero allocation, concrete types. |
| Dynamic fan-out (runtime count) | `Broadcast` | `Vec<Box<dyn RefHandler>>`. One heap alloc per handler, zero clones. |
Rule of thumb: If a handler is created once, use IntoHandler. If
it's created repeatedly on every event (move-out-fire pattern), use a
template. For data that must fan out and merge back, use DagBuilder.
For fire-and-forget fan-out, use FanOut (static) or Broadcast
(dynamic).
## Practical Guidance
### Boxing recommendation
Pipeline, DAG, and composed handler types are fully monomorphized — the
concrete types are deeply nested generics, often unnameable, and can be
very large. Strongly recommend Box<dyn Handler<E>> (or Virtual<E>)
for storage.
The cost is a single vtable dispatch at the handler boundary. All internal dispatch within the handler/pipeline/DAG remains zero-cost monomorphized. One vtable call amortized over many internal steps is the design:
```rust
// Concrete type is unnameable — box it (event type illustrative)
let handler: Box<dyn Handler<Order>> = Box::new(pipeline);
```
### Named functions vs closures
Arity-0 closures work in Pipeline and DAG steps. Arity-1+ (with Param
arguments) requires named functions. This is a feature, not a limitation:
- Named functions are testable in isolation
- Named functions are inspectable (handler `.name()` returns the function path)
- Named functions are reusable across pipelines
For cases where you need &mut World access in a closure (e.g. dynamic
resource lookup), pass a |world: &mut World, input| { ... } closure —
it resolves via the Opaque marker with no Param overhead. The same
pattern works for OpaqueHandler (closures as Handler<E>).
Keep step functions small and focused — one function per transformation.
Pipeline vs DAG
| | Pipeline | DAG |
|---|---|---|
| Topology | Linear chain | Fan-out / merge |
| Value flow | By value (move) | By reference within arms |
| Clone needed | No | No (shared &T) |
| Use when | Steps are sequential | Data needs to go to multiple places |
Both compose into Handler<E> via .build(). Use Pipeline for the common
case; reach for DAG when you need .fork().
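The by-value vs by-reference distinction above can be sketched with plain functions. This is stand-in code, not the nexus-rt API — the function names and bodies are illustrative:

```rust
// Stand-in sketch, not the nexus-rt API.

/// Pipeline topology: a linear chain — each step consumes (moves)
/// the previous step's output.
fn pipeline(input: u32) -> u32 {
    let a = input + 1; // step 1 consumes input
    let b = a * 2;     // step 2 consumes step 1's output
    b
}

/// DAG topology: fan-out — both arms borrow the same value (&T, no
/// clone), then a merge step joins the arm outputs.
fn dag(input: &u32) -> u32 {
    let left = *input + 1;  // arm 1 reads a shared reference
    let right = *input * 2; // arm 2 reads the same reference
    left + right            // merge
}

fn main() {
    assert_eq!(pipeline(3), 8);
    assert_eq!(dag(&3), 10);
}
```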
Performance
All measurements in CPU cycles, pinned to a single core with turbo boost disabled.
Dispatch (hot path)
| Operation | p50 | p99 | p999 |
|---|---|---|---|
| Baseline hand-written fn | 2 | 3 | 4 |
| 3-stage pipeline (bare) | 2 | 2 | 4 |
| 3-stage pipeline (Res<T>) | 2 | 3 | 5 |
| Handler + Res<T> (read) | 2 | 4 | 5 |
| Handler + ResMut<T> (write) | 3 | 8 | 8 |
| Box<dyn Handler> | 2 | 9 | 9 |
Pipeline dispatch matches hand-written code — zero-cost abstraction confirmed.
Batch throughput
Total cycles for 100 items through the same pipeline chain.
| Operation | p50 | p99 | p999 |
|---|---|---|---|
| Batch bare (100 items) | 130 | 264 | 534 |
| Linear bare (100 calls) | 196 | 512 | 528 |
| Batch Res<T> (100 items) | 390 | 466 | 612 |
| Linear Res<T> (100 calls) | 406 | 550 | 720 |
Batch dispatch amortizes to ~1.3 cycles/item for compute-heavy chains (~1.5x faster than individual calls).
Construction (cold path)
| Operation | p50 | p99 | p999 |
|---|---|---|---|
| into_handler (1 param) | 21 | 30 | 79 |
| into_handler (4 params) | 45 | 86 | 147 |
| into_handler (8 params) | 93 | 156 | 221 |
| .then() (2 params) | 28 | 48 | 96 |
Construction cost is paid once at build time, never on the dispatch hot path.
Template generation (hot path handler creation)
| Operation | p50 | p99 | p999 |
|---|---|---|---|
| generate (1 param) | 1 | 1 | 2 |
| generate (2 params) | 1 | 1 | 2 |
| generate (4 params) | 1 | 1 | 1 |
| generate (8 params) | 1 | 1 | 1 |
| generate callback (2 params) | 1 | 2 | 2 |
| generate callback (4 params) | 1 | 1 | 1 |
generate() copies pre-resolved ResourceId values — a flat memcpy
at every arity. Compare with into_handler above: 24-70x faster for
handlers created on every event (IO re-registration, timer rescheduling).
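The resolve-once, copy-cheaply idea behind templates can be illustrated with stand-in types. `Template`, `build`, and the string registry below are hypothetical; only the `ResourceId` name and the memcpy behavior of `generate()` come from the text above:

```rust
// Stand-in sketch of the template pattern (not the real API).
// Resolution (name -> index lookup) happens once when the template is
// built; generate() then just copies the pre-resolved ids — a flat
// memcpy — so per-event handler creation stays cheap.

#[derive(Clone, Copy, Debug, PartialEq)]
struct ResourceId(usize);

#[derive(Clone, Copy)]
struct Template {
    ids: [ResourceId; 2], // pre-resolved at build time
}

impl Template {
    /// Cold path: resolve names to indices once.
    fn build(registry: &[&str], names: [&str; 2]) -> Self {
        let resolve = |n: &str| {
            ResourceId(registry.iter().position(|r| *r == n).expect("unknown resource"))
        };
        Template { ids: [resolve(names[0]), resolve(names[1])] }
    }

    /// Hot path: a Copy of the pre-resolved ids — no hashing, no lookups.
    fn generate(&self) -> [ResourceId; 2] {
        self.ids
    }
}

fn main() {
    let registry = ["Config", "Clock", "Socket"];
    let tmpl = Template::build(&registry, ["Socket", "Clock"]);
    // Created on every event: just a memcpy of two ids.
    let ids = tmpl.generate();
    assert_eq!(ids, [ResourceId(2), ResourceId(1)]);
}
```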
Running benchmarks
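Assuming the perf_* programs listed under Examples below are cargo examples, a typical invocation (illustrative; exact names and flags may differ) would be:

```shell
# Illustrative — run a benchmark example in release mode, e.g.:
cargo run --release --example perf_pipeline
cargo run --release --example perf_dag
```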
DAG dispatch (hot path)
| Operation | p50 | p99 | p999 |
|---|---|---|---|
| DAG linear 3 stages | 1 | 2 | 3 |
| DAG linear 5 stages | 1 | 2 | 3 |
| DAG diamond fan=2 (5 stages) | 1 | 3 | 5 |
| DAG fan-out 2 (join) | 2 | 6 | 9 |
| DAG complex (fan+linear+merge) | 1 | 4 | 5 |
| DAG complex+Res<T> (Param fetch) | 3 | 3 | 5 |
| DAG linear 3 via Box<dyn Handler> | 1 | 4 | 4 |
| DAG diamond-2 via Box<dyn Handler> | 2 | 2 | 5 |
DAG dispatch matches Pipeline dispatch — topology adds no measurable overhead. Boxing adds ~1 cycle at the boundary.
Scheduler dispatch
| Operation | p50 | p99 | p999 |
|---|---|---|---|
| Flat 1 system | 11 | 20 | 48 |
| Flat 4 systems | 25 | 41 | 82 |
| Flat 8 systems | 43 | 67 | 124 |
| Chain 4 systems (all propagate) | 25 | 42 | 84 |
| Chain 8 systems (all propagate) | 44 | 73 | 124 |
| Diamond fan=4 (6 systems) | 35 | 53 | 93 |
| Skipped chain 8 (1 runs, 7 skip) | 17 | 28 | 68 |
| Skipped chain 32 (1 runs, 31 skip) | 46 | 76 | 118 |
Scheduler overhead is ~8-12 cycles per system. Skipped systems
(upstream returned false) cost ~2 cycles each (bitmask check).
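The cheap skip path can be sketched as a gating check in the dispatch loop. This is stand-in code under the assumption that the real scheduler gates downstream systems on an upstream boolean; all names are hypothetical:

```rust
// Stand-in sketch (not the real scheduler API): a chain of systems
// where each system's return value gates the systems after it.
// A skipped system pays only the cheap enabled test — no dispatch,
// no resource fetch.

fn schedule_chain(systems: &[fn() -> bool]) -> (u32, u32) {
    let mut ran_mask = 0u32;
    let mut skipped = 0u32;
    let mut enabled = true;
    for (i, sys) in systems.iter().enumerate() {
        if !enabled {
            skipped += 1; // cheap skip path
            continue;
        }
        ran_mask |= 1 << i;
        enabled = sys(); // false => all downstream systems skip
    }
    (ran_mask, skipped)
}

fn sys_run() -> bool { true }
fn sys_stop() -> bool { false }

fn main() {
    // 8-system chain where the first system returns false:
    // 1 runs, 7 skip — as in the "Skipped chain 8" row above.
    let systems: [fn() -> bool; 8] =
        [sys_stop, sys_run, sys_run, sys_run, sys_run, sys_run, sys_run, sys_run];
    let (ran, skipped) = schedule_chain(&systems);
    assert_eq!(ran, 0b1);
    assert_eq!(skipped, 7);
}
```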
Limitations
Named functions only
IntoHandler, IntoCallback, and IntoStep (arity 1+) require named
fn items — closures do not work due to Rust's HRTB inference limitations
with GATs. This is the same limitation as Bevy's system registration.
Arity-0 pipeline steps (no Param) do accept closures:
```rust
// (Closure and function bodies below are illustrative reconstructions.)

// Works — arity-0 closure
pipeline.then(|x: u32| x + 1);

// Does NOT work — arity-1 closure with Param
// pipeline.then(|config: Res<Config>, x: u32| x, registry);

// Works — named function
pipeline.then(step_with_config, registry);
```
Single-threaded
World is !Sync by design. All dispatch is single-threaded, sequential.
This is intentional — for latency-sensitive event processing, eliminating
coordination overhead matters more than parallelism.
Frozen after build
No resources can be added or removed after WorldBuilder::build(). All
registration happens at build time. This enables stable pointers and
eliminates runtime bookkeeping.
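Why freezing enables stable pointers can be sketched with a minimal typed store. `WorldSketch` is a hypothetical stand-in, not the real World; the point is only that boxed values that are never added or removed after build have addresses that stay valid, so handlers can pre-resolve them once:

```rust
// Stand-in sketch (not the real World API): after build(), nothing is
// inserted or removed, so a pointer taken at handler-build time stays
// valid for the life of the store.

use std::any::{Any, TypeId};
use std::collections::HashMap;

struct WorldSketch {
    // Each value is boxed: its heap address is stable even if the map
    // itself rehashes, and the frozen store never drops or moves it.
    slots: HashMap<TypeId, Box<dyn Any>>,
}

impl WorldSketch {
    fn build(values: Vec<Box<dyn Any>>) -> Self {
        let slots = values.into_iter().map(|b| ((*b).type_id(), b)).collect();
        WorldSketch { slots }
    }

    /// Resolve once at handler-build time.
    fn get<T: 'static>(&self) -> &T {
        self.slots[&TypeId::of::<T>()].downcast_ref().unwrap()
    }
}

struct Config { retries: u32 }

fn main() {
    let world = WorldSketch::build(vec![Box::new(Config { retries: 3 })]);
    // Pre-resolve once; the pointer stays valid because the store is frozen.
    let cfg: *const Config = world.get::<Config>();
    // Hot path: a single pointer deref — no hashing, no lookup.
    assert_eq!(unsafe { (*cfg).retries }, 3);
}
```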
Examples
- `mock_runtime` — Complete driver model: plugin registration, driver installation, explicit poll loop
- `pipeline` — Pipeline composition: bare value, Option, Result with catch, combinators, build into Handler
- `dag` — DAG pipeline: linear, diamond, fan-out, route, tap, tee, dedup, guard, boxing
- `scheduler_dag` — DAG scheduler: reconciliation systems, boolean propagation, change detection
- `handlers` — Handler composition: IntoHandler, Callback, boxing, FanOut, Broadcast, adapters
- `templates` — Template generation: HandlerTemplate, CallbackTemplate, handler_blueprint macro
- `testing_example` — TestHarness usage for isolated handler unit testing
- `local_state` — Per-handler state with `Local<T>`, independent across handler instances
- `optional_resources` — Optional dependencies with `Option<Res<T>>` / `Option<ResMut<T>>`
- `perf_pipeline` — Dispatch latency benchmarks with codegen inspection probes
- `perf_dag` — DAG dispatch latency benchmarks across topologies
- `perf_scheduler` — Scheduler dispatch latency benchmarks
- `perf_construction` — Construction-time latency benchmarks at various arities
- `perf_template` — Template generation vs `into_handler` construction benchmarks
- `perf_fetch` — Fetch dispatch strategy benchmarks
- `mio_timer` — Echo server combining mio and timer drivers with template construction benchmarks
License
See workspace root for license details.