Crate reactive_mutiny
reactive-mutiny crate
async Event-Driven Reactive Library for Rust with advanced & optimized containers (channels) and Stream executors
Browse the Docs.
Rust’s reactive-mutiny was designed to allow building efficient & elegant asynchronous event-processing pipelines (using
Streams – a.k.a. “async Iterators”), easing the construction of flexible & decoupled microservice architectures (distributed or not), ready for production*.
The core of this library is composed of a Uni and a Multi – hence the name “Mutiny”. Both process streams of events:
- `Uni` allows a single listener OR multiple consumers for each produced payload – in other words, a single event-processing pipeline;
- `Multi` allows multiple listeners AND multiple consumers for each produced payload, allowing several event-processing pipelines – or, in Kafka parlance, several consumer groups.

`Multi` may do what `Uni` does, but `Uni` does it faster – hence, justifying its existence: `Uni` doesn’t use any reference counting for the payloads and uses a single queue/channel (whereas `Multi` requires as many as there are listeners).
Moreover, zero-cost metrics & logs are available – they get optimized away if not used.
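The `Uni` vs. `Multi` trade-off above can be pictured with plain standard-library channels – this is NOT reactive-mutiny’s API, just a conceptual analogy: a `Uni`-like setup owns each payload outright on a single channel, while a `Multi`-like setup reference-counts each payload and fans it out to one channel per listener (each acting as its own “consumer group”):

```rust
use std::sync::{mpsc, Arc};
use std::thread;

fn main() {
    // "Uni"-like: one channel, one event-processing pipeline, no reference counting
    let (tx, rx) = mpsc::channel::<u32>();
    let pipeline = thread::spawn(move || rx.iter().map(|event| event * 2).sum::<u32>());
    for event in 1..=3 {
        tx.send(event).unwrap();
    }
    drop(tx);   // closing the producer ends the pipeline's stream
    assert_eq!(pipeline.join().unwrap(), 12);

    // "Multi"-like: every payload is shared via `Arc` with one channel per
    // listener, so several independent pipelines see every event
    let (tx_a, rx_a) = mpsc::channel::<Arc<u32>>();
    let (tx_b, rx_b) = mpsc::channel::<Arc<u32>>();
    let group_a = thread::spawn(move || rx_a.iter().map(|event| *event).sum::<u32>());
    let group_b = thread::spawn(move || rx_b.iter().count());
    for event in 1..=3 {
        let payload = Arc::new(event);
        tx_a.send(Arc::clone(&payload)).unwrap();   // ref-count bump per listener
        tx_b.send(payload).unwrap();
    }
    drop(tx_a);
    drop(tx_b);
    assert_eq!(group_a.join().unwrap(), 6);   // both groups received all 3 events
    assert_eq!(group_b.join().unwrap(), 3);
}
```

The extra `Arc` clone per listener is exactly the per-payload overhead `Uni` avoids – which is why `Uni` is preferred whenever a single pipeline suffices.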
Taste the library in this excerpt:

```rust
use reactive_mutiny::prelude::*;

fn logic_1(events_stream: impl Stream<Item=InputEventType>) -> impl Stream<Item=OutputEventType> {
    // your logic goes here using Rust's Stream / Iterator functions
}

// `logic_2` (analogous to `logic_1`) and `send` are elided for brevity

fn main() {
    // build the event processing pipeline
    let events_handle = UniZeroCopy::<InputEventType, 1024, 1>::new()
        .spawn_non_futures_non_fallible_executor("Consumer of InputEventType and issuer of OutputEventType",
                                                 |events_stream| {
                                                     logic_2(logic_1(events_stream))
                                                         .inspect(|outgoing_event| send(outgoing_event))
                                                 },
                                                 |_executor| async { /* on-close logic */ });
}
// see more details in examples/uni-microservice
```

Core components:
- A set of channels through which events are sent from producers to consumers – all context-switch-free (AKA “lock-free”) – including zero-copy & mmap log based ones;
- Custom allocators, for superior performance and flexibility;
- A set of generic `Stream` executors for all possible combinations of Future/non-Future & Fallible/non-Fallible event types, with the option of enforcing (or not) a timeout on the resolution of each event’s `Future`. The API was carefully designed to allow the compiler to fully optimize everything: most of the time, all of the reactive code ends up in the executors and the whole Multi / Uni abstractions are zeroed out;
- Instrumentation & metrics collectors for visibility into performance and operation;
- The main `Multi` and `Uni` objects, along with a set of prelude type aliases binding the channels and allocators together.
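The “zeroed out” claim about composed pipelines can be illustrated with `Iterator`, the synchronous sibling of `Stream`, using only the standard library (the stage names here are hypothetical, not part of this crate): composing adaptor-returning functions still yields a single fused loop for the optimizer to chew on.

```rust
// keep only even events
fn stage_1(events: impl Iterator<Item = u32>) -> impl Iterator<Item = u32> {
    events.filter(|event| event % 2 == 0)
}

// render each surviving event as a message
fn stage_2(events: impl Iterator<Item = u32>) -> impl Iterator<Item = String> {
    events.map(|event| format!("event #{event}"))
}

fn main() {
    // the composed pipeline is one lazy chain; nothing runs until `collect()`
    let processed: Vec<String> = stage_2(stage_1(1..=6)).collect();
    assert_eq!(processed, ["event #2", "event #4", "event #6"]);
    println!("{processed:?}");
}
```

In Release mode the compiler inlines these adaptors into a tight loop, which is the same property the `Stream` executors rely on for the Multi / Uni abstractions to vanish.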
WARNING: * This crate is still taking its first steps into production usage: no known bugs exist, speed is amazing and the API is reasonably stable, but docs & code cleanup will still be (slowly) improved, along with any evolutions from community feedback and any API symmetry adjustments detected along the way.
Performance
This crate was very loosely inspired by SmallRye’s Mutiny library for Java, from which some names were borrowed.
Little had to be done to bring the same functionality to Rust, thanks to the language’s native functional approach, powerful error
handling, async support and wonderful & flexible Iterator/Stream APIs, so the focus of this work went into
bringing events to their maximum processing speed & operability: special queues, topics, stacks, channels and Stream executors have
been created, offering superior performance over Rust’s native & community versions – inspect the benches folder for details:
- performance characteristics of the standard/community vs. our provided raw senders of payloads from one thread to another;
- performance characteristics of the standard vs. our provided type wrappers and allocators used for zero-copy channels – with raw memcpy and allocator baselines.
Where to go next
Docs will still be improved. Meanwhile, the following sequence is suggested for new users of this crate:
- Look at the `examples/`;
- Inspect the `reactive-socket` crate;
- Study the type aliases in `reactive-mutiny::prelude::advanced::*` – at this point, it is safe to trust the docs will provide everything you’ll need.
Comparisons
If you’re familiar with SmallRye’s Mutiny, here are some key differences:
- Both our `Uni` and `Multi` process streams of events. In the original library, a Uni is like a single “async future” and, since we don’t need that in Rust, the names were repurposed: the original Multi corresponds to our `Uni` (and may also work as our `Multi` when using “subscriptions”), while the original Uni’s behavior is achieved by just using Rust’s async calls & handling any `Result<>` for error treatment;
- Each event fed into the pipeline will be executed, regardless of whether there is an answer at the end; also, there is no “subscription” (subscription is achieved by adding pipelines to a `Multi`);
- Executors & their settings are set when the producer/pipeline pair comes to be (when the `Uni`/`Multi` object is created): there is no .merge() nor .executeAt() to call in the pipeline;
- No Multi/Uni pipeline conversion and the corresponding plethora of functions – they are simply not needed;
- No Uni retries, as it just doesn’t make sense to restrict retries to a particular type. See more at the end of this README;
- No timeouts are set in the pipeline – they are a matter for the executor, which will simply cancel events (that are `Future`s) taking longer than the configured executor’s maximum (SmallRye’s Uni timeouts are attainable using Tokio’s “futures” timeouts, just like one would do for any async function call);
- Incredibly faster: Rust’s compiler makes your pipelines (and most of this library) behave as zero-cost abstractions (when compiled in Release mode). Const generics play a great role in such optimizations – but this requires some complex types.
- To fully get the original Mutiny’s behavior, you’ll have to use:
  - Rust’s `reactive-mutiny` (for reactive async event processing);
  - Tokio (to get responses from Futures and to specify timeouts in async calls, async sleeps… saving a ton of APIs for this crate);
  - Streams (the original Mutiny kind of mixes Multi & Stream & Iterator functionalities – which, in practice, leads to inefficient abuses of the original Java library’s abstractions, for using a new instance of their Multi where a Stream or Iterator could be used is a common bad practice / anti-pattern);
  - A general retry mechanism to simulate what SmallRye’s Uni has – but for all `Result<>` types rather than just a particular type – see it in action in `examples/error-handing-and-retrying` and observe the meaningful & contextful error messages.
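A type-agnostic retry mechanism of the kind described above can be sketched in a few lines of standard-library Rust – a hypothetical helper, not this crate’s API: it works for any operation returning `Result<T, E>` instead of being tied to a dedicated retryable type.

```rust
use std::{thread, time::Duration};

/// Runs `op` up to `attempts` times, sleeping `delay` between attempts,
/// returning the first `Ok` or the last `Err`. Panics if `attempts` is 0.
fn retry<T, E>(attempts: usize,
               delay: Duration,
               mut op: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    let mut last_err = None;
    for attempt in 0..attempts {
        if attempt > 0 {
            thread::sleep(delay);
        }
        match op() {
            Ok(value) => return Ok(value),
            Err(err)  => last_err = Some(err),
        }
    }
    Err(last_err.expect("`attempts` must be >= 1"))
}

fn main() {
    // an operation that only succeeds on its third call
    let mut calls = 0;
    let result = retry(3, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 { Err("not yet") } else { Ok(calls) }
    });
    assert_eq!(result, Ok(3));
}
```

Because the helper is generic over `E`, context can be attached to the error type itself, which is where the “meaningful & contextful error messages” come from.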
Modules
- Allows creating multis, which represent pairs of (producer, event pipeline) that may be used to `produce()` asynchronous payloads to be processed by multiple event-pipeline Streams – and executed by async tasks.
- Resting place for MutinyStream.
- Taken from ogre-std while it is not made open-source.
- Set of re-exported types & aliases to allow clients to use this lib. See also advanced for advanced/flexible type aliases.
- Contains the logic for executing [Stream] pipelines on their own Tokio tasks, with zero-cost Instruments options for gathering stats, logging and so on. Four executors are provided to attend to the different Stream output item types.
- Common types across this module.
- Allows creating unis, which represent pairs of (producer, event pipeline) that may be used to `produce()` asynchronous payloads to be processed by a single event-pipeline Stream – and executed by one or more async tasks.
Macros
- Macro to close, atomically-ish, all Multis passed in as parameters.
- Macro to close, atomically, all Unis passed in as parameters. TODO: the closer may receive a time limit to wait – returning how many elements were still there after it gave up waiting. TODO: busy wait ahead – is it possible to get rid of that without having to poll & sleep? (Anyway, this function is not meant to be used in production – unless when quitting the app… so this is not a priority.)
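The timed close mentioned in the TODO above could look like the following standard-library sketch – a hypothetical helper, not this crate’s API: it polls a pending-events counter with a short sleep (rather than busy-waiting) until the channel drains or the deadline expires, and reports how many events were left behind.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;
use std::time::{Duration, Instant};

/// Waits for `pending` to drain to 0, giving up after `timeout`;
/// returns how many elements were still there when it gave up.
fn close_with_timeout(pending: &AtomicUsize, timeout: Duration) -> usize {
    let deadline = Instant::now() + timeout;
    loop {
        let left = pending.load(Ordering::Relaxed);
        if left == 0 || Instant::now() >= deadline {
            return left;
        }
        thread::sleep(Duration::from_millis(1));   // poll & sleep instead of spinning
    }
}

fn main() {
    // nothing drains the counter, so the close gives up at the deadline
    let pending = AtomicUsize::new(2);
    assert_eq!(close_with_timeout(&pending, Duration::from_millis(10)), 2);

    // an already-drained channel closes immediately
    pending.store(0, Ordering::Relaxed);
    assert_eq!(close_with_timeout(&pending, Duration::from_millis(10)), 0);
}
```

A fully wake-up-driven version (no polling at all) would need the producers to signal the closer, e.g. via a `Condvar` notified on every dequeue – heavier machinery that is hard to justify for a shutdown-only path.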