# Middleman
Middleman is a library for sending and receiving serializable data structures over a TCP connection, abstracting away the raw bytes. This project draws inspiration from an older library, `wire`, but is intended to play nicely with the mio polling system.
```
  structs             structs
     ↓                   ▲
  Middleman          Middleman
     ↓  bytes   bytes    ▲
  ~~~~~~~~~~ TCP ~~~~~~~~~~
```
## Using it yourself
If you want to build some kind of TCP-based network program, you'll need to do a few things. Many of these are common to any mio program, but let's start somewhere. For our example, I will consider the case of setting up a single client-server connection as a baseline.
Before we begin, this is at the core of what you conceptually want on either end of a communication link:

- One `Middleman` that exposes non-blocking functions `send(&T)` and `recv() -> Option<T>`, where `T` is the type of the message structure(s) you wish to send over the network. Easy.
Old versions of middleman stopped there. This always presented a problem: when should you call `recv`? The naive solution is to just keep calling it all the time. Thankfully, mio exists to help with exactly that. It relies on polling to do work lazily and to unblock when something can potentially progress. So here we see how to get this all working together smoothly:
- Setup your messages
  - Define which message types you wish to send over the network (called `T` in the description above).
  - Make these structures serializable with `serde`. I would suggest relying on the macros in `serde_derive`.
  - Implement the marker trait `middleman::Message` for your messages.

  All in all, it may leave things looking like this:
- Setup your mio loop
  - For each participant, somehow acquire a `mio::net::TcpStream` object connected to the relevant peer(s). This stuff is not unique to `middleman`.
  - Wrap each TCP stream in a `Middleman`.
  - Register your middlemen with their respective `Poll` objects (as you would with the `mio::TcpStream` itself).
  - Inside the mio poll loop, call some variant of `Middleman::recv` at the appropriate time. Your job is to ensure that you always drain all the waiting messages. `recv` will never block, so feel free to spuriously try to recv something.
  - Use your middlemen to `send` as necessary.
That's it. The flow isn't very different from that of the typical TCP setting. The bulk of the work involves getting your `Poll`, `Middleman`, and `TcpStream` objects to all work nicely together. For more detailed examples, see the tests.
## Where Mio ends and Middleman begins
When implementing high-level algorithms, one likes to think not of bytes and packets, but of discrete messages. Enums and structs map more neatly onto these theoretical constructs than byte sequences do. Middleman aims to hide all the byte-level stuff, but to hide nothing more.
Anyone familiar with using mio's select-loop-like construct to poll the progress of one or more `Evented` structures will find that middleman doesn't change much.
At a high level, your code may look something like this:
```rust
let poll = Poll::new().unwrap();
... // setup other mio stuff
let mut mm = Middleman::new(stream);
poll.register(&mm, Token(0), Ready::readable(), PollOpt::edge()).unwrap();
loop {
    // on readable events: drain all waiting messages with `recv`,
    // handle them, and `send` replies as needed.
    ...
}
```
There are different ways to approach precisely how to get at the messages, when to deserialize them, and what to do next, but this is the crux of it: when you get a notification from poll, you try to read all waiting messages and handle them. That's it. At any point you can send a message the other way using `mm.send::<MyType>(&msg)`. No extra threads are needed. No busy-wait spinning is required (thanks to `mio::Poll`).
## The special case of `recv_blocking`
mio is asynchronous and non-blocking by nature. However, sometimes a blocking receive is a more ergonomic fit, such as when exactly one message is expected. The functions `recv_blocking` and `recv_blocking_solo` exist as a compact means of temporarily hijacking the polling-loop flow until a message is ready. See the documentation for more details and the tests for some examples.
## A note on message size
This library concentrates on flexibility. Messages of the same type can be represented with different sizes at runtime (e.g., an empty hashmap takes fewer bytes than a full one). You don't have much to fear as far as the byte size of your messages is concerned, but still watch out for the effect some pathological cases may have on the size in memory.
Running the tests may print something like:

```
packed bytes 9
memory bytes 264
```
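To see where a gap like this can come from, here is a small self-contained sketch. The length-prefixed encoding below is a hand-rolled stand-in for illustration only, not middleman's actual wire format:

```rust
use std::collections::HashMap;
use std::mem;

// Hypothetical wire encoding: a 4-byte entry count, then 8 bytes
// (key + value) per entry. A stand-in for the real serializer.
fn packed_len(map: &HashMap<u32, u32>) -> usize {
    4 + map.len() * 8
}

fn main() {
    let empty: HashMap<u32, u32> = HashMap::new();
    let mut full: HashMap<u32, u32> = HashMap::new();
    for i in 0..32 {
        full.insert(i, i);
    }
    // An empty map packs down to almost nothing on the wire...
    println!("empty: packed {} bytes", packed_len(&empty));
    // ...while a full one costs both wire bytes and heap memory.
    println!("full:  packed {} bytes", packed_len(&full));
    // The in-memory struct size is fixed regardless of contents,
    // and the heap allocation behind it can be much larger still.
    println!("HashMap struct: {} bytes", mem::size_of::<HashMap<u32, u32>>());
}
```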