§Juliet protocol implementation
This crate implements the Juliet multiplexing protocol version 1.0.1 as laid out in the Juliet RFC. It aims to be a secure, simple, easy to verify/review implementation that is still reasonably performant.
§Benefits
The Juliet protocol comes with a core set of features, such as
- carefully designed with security and DoS resilience as its foremost goal,
- customizable frame sizes,
- up to 256 multiplexed, interleaved channels,
- backpressure support fully baked in, and
- low overhead (4 bytes per frame + 1-5 bytes depending on payload length; see the sketch below).
This crate’s implementation includes benefits such as
- a side-effect-free implementation of the Juliet protocol,
- an `async` IO layer integrated with the `bytes` crate to use it, and
- a type-safe RPC layer built on top.
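The variable portion of the overhead figure above comes from encoding the payload length as a variable-length integer (see the varint module listed below). As a rough sketch, assuming a conventional little-endian base-128 scheme (not necessarily the crate's exact wire format, which the varint module documents), a `u32` length occupies between 1 and 5 bytes:

```rust
/// Encodes `value` as a little-endian base-128 varint: 7 payload bits per
/// byte, with the high bit set on every byte except the last. A `u32`
/// always fits in 1 to 5 bytes, which is where the "+ 1-5 bytes" in the
/// overhead figure above comes from.
fn encode_varint32(mut value: u32) -> Vec<u8> {
    let mut out = Vec::with_capacity(5);
    loop {
        let byte = (value & 0x7F) as u8;
        value >>= 7;
        if value == 0 {
            out.push(byte); // last byte: continuation bit left clear
            return out;
        }
        out.push(byte | 0x80); // more bytes follow
    }
}

fn main() {
    for len in [0u32, 127, 128, 16_384, u32::MAX] {
        println!("{:>10} -> {} byte(s)", len, encode_varint32(len).len());
    }
}
```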
§Examples
For a quick usage example, see examples/fizzbuzz.rs.
§tracing support
The crate has an optional dependency on the tracing crate, which, if enabled, allows detailed insights through logs. If the feature is not enabled, no log statements are compiled in.
Log levels in general are used as follows:
- `ERROR` and `WARN`: Actual issues that are not protocol level errors – peer errors are expected and do not warrant a `WARN` level.
- `INFO`: Insights into received high level events (e.g. connection, disconnection, etc.), except information concerning individual requests/messages.
- `DEBUG`: Detailed insights down to the level of individual requests, but not frames. A multi-megabyte single message transmission will NOT clog the logs.
- `TRACE`: Like `DEBUG`, but also including frame and wire-level information, as well as local functions being called.
At `INFO`, it is thus conceivable for a peer to maliciously spam local logs, although doing so takes some effort if connection attempts are rate limited. At `DEBUG` or lower, this becomes trivial.
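To actually see this output during development, the application has to install a subscriber. A minimal sketch using the widely-used `tracing-subscriber` crate (an assumption of this example, not a dependency of this crate), with the crate built so that its tracing support is compiled in:

```rust
// Minimal sketch: install a global subscriber so log statements emitted by
// this crate (when compiled with tracing support) become visible.
// Assumes `tracing` and `tracing-subscriber` are listed as dependencies.
fn main() {
    tracing_subscriber::fmt()
        // DEBUG shows per-request detail; use TRACE for frame/wire level.
        .with_max_level(tracing::Level::DEBUG)
        .init();

    tracing::info!("subscriber installed; log output will appear here");
}
```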
§General usage
This crate is split into three layers, whose usage depends on an application’s specific use
case. At the very core sits the protocol module, which is a side-effect-free implementation
of the protocol. The caller is responsible for all IO flowing in and out, but it is instructed
by the state machine what to do next.
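As an illustration of that interaction (a simplified stand-in, not the actual types from the protocol module), the sketch below shows the general sans-IO shape: the caller reads bytes however it likes, feeds them into a pure parsing function, and acts on the returned outcome. The `Outcome` shape, `parse_frame` and the toy length-prefixed framing are all hypothetical:

```rust
/// Result of feeding bytes to a side-effect-free parser (illustrative only).
enum Outcome<T> {
    /// At least this many more bytes are needed before progress can be made.
    Incomplete(usize),
    /// A complete item was parsed, consuming `consumed` bytes from the buffer.
    Success { value: T, consumed: usize },
}

/// Toy parser: a "frame" here is a single length-prefixed byte string.
fn parse_frame(buf: &[u8]) -> Outcome<Vec<u8>> {
    match buf.first() {
        None => Outcome::Incomplete(1),
        Some(&len) => {
            let needed = 1 + len as usize;
            if buf.len() < needed {
                Outcome::Incomplete(needed - buf.len())
            } else {
                Outcome::Success {
                    value: buf[1..needed].to_vec(),
                    consumed: needed,
                }
            }
        }
    }
}

fn main() {
    // The caller owns all IO: here the "socket" is just an in-memory chunk list.
    let chunks: &[&[u8]] = &[&[3, b'f'], &[b'o', b'o', 2, b'h', b'i']];
    let mut buffer = Vec::new();

    for chunk in chunks {
        buffer.extend_from_slice(chunk);
        // Drain as many complete frames as the buffer currently holds.
        loop {
            match parse_frame(&buffer) {
                Outcome::Incomplete(_) => break, // wait for more input
                Outcome::Success { value, consumed } => {
                    buffer.drain(..consumed);
                    println!("frame: {:?}", String::from_utf8_lossy(&value));
                }
            }
        }
    }
}
```

The real state machine additionally instructs the caller what to write back out; the principle of keeping all IO outside the protocol logic is the same.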
If there is no need to roll custom IO, the io layer provides a complete tokio-based
solution that operates on tokio::io::AsyncRead and tokio::io::AsyncWrite. It handles
multiplexing input, output and scheduling, as well as buffering messages using a wait and a
ready queue.
Most users of the library will likely use the highest level layer, rpc, instead. It sits on
top of the raw io layer and wraps all the functionality in safe Rust types, making misuse of
the underlying protocol hard, if not impossible.
Modules§
- `header`: `juliet` header parsing and serialization.
- `io`: `juliet` IO layer.
- `protocol`: Protocol parsing state machine.
- `rpc`: RPC layer.
- `varint`: Variable length integer encoding.
Macros§
- `try_outcome`: `try!` for `Outcome`.
Structs§
- `ChannelConfiguration`: Channel configuration values that need to be agreed upon by all clients.
- `ChannelId`: A channel identifier.
- `Id`: An identifier for a `juliet` message.
Enums§
- `Outcome`: The outcome of a parsing operation on a potentially incomplete buffer.