
Crate ipc_channel_mux

ipc-channel-mux1 is a multiplexing, inter-process implementation of Rust channels (which were inspired by CSP2).

A Rust channel is a unidirectional, FIFO queue of messages which can be used to send messages between threads in a single operating system process. For an excellent introduction to Rust channels, see Using Message Passing to Transfer Data Between Threads in the Rust reference.

ipc-channel-mux extends Rust channels to support inter-process communication (IPC) in a single operating system instance. ipc-channel-mux multiplexes subchannels over IPC primitives to reduce the consumption of such primitives. The serde library is used to serialize and deserialize messages sent over ipc-channel-mux.

§Important caveats

  1. The author of this crate makes no commitment to maintain the code henceforth.
  2. Some later changes were implemented using Claude Code. If this is not acceptable, those areas would need to be re-implemented.
  3. There is at least one undiagnosed issue on Windows which surfaces as intermittent failures in CI testing. It seems likely that the unsafe Windows code in ipc-channel is responsible, but it is conceivable that this crate is to blame.

§Design goals

  • Resource efficiency: Multiplex subchannels over shared IPC channels to reduce OS resource consumption (file descriptors, sockets, etc.). Subsenders can be cloned and sent without consuming additional OS resources. See When is multiplexing beneficial? for more detail.
  • Drop-in replacement for Rust channels: The API mirrors channel() / Sender<T> / Receiver<T> as closely as possible. See the mapping table below and Semantic differences from Rust channels for the differences.
  • Sender mobility: SubSender implements Serialize and Deserialize, so subsenders can be sent over subchannels to other processes, enabling dynamic communication topologies. See Subsender serialization for how this is implemented efficiently.
  • Disconnection detection: Detect when all senders or the receiver of a subchannel have been dropped, even across process boundaries and even when subsenders are in-flight (being sent over a subchannel but not yet received). See Subsender lifecycle for the mechanism.
  • Deadlock avoidance: Proactively drain IPC channels to prevent buffer-full blocking, which could cause deadlocks when many subchannels share an IPC channel. See Blocking sends and deadlocks for background.

As much as possible, ipc-channel-mux has been designed to be a drop-in replacement for Rust channels. The mapping from the Rust channel APIs to subchannel APIs is as follows:

  • channel() → mux::Channel::new().unwrap().sub_channel()
  • Sender<T> → mux::SubSender<T> (requires T: Serialize)
  • Receiver<T> → mux::SubReceiver<T> (requires T: Deserialize)
  • ipc::bytes_channel() → mux::Channel::new().unwrap().bytes_sub_channel()
  • IpcBytesSender → mux::BytesSubSender
  • IpcBytesReceiver → mux::BytesSubReceiver

Note that SubSender<T> implements Serialize and Deserialize, so you can send subsenders over subchannels freely, just as you can with Rust channels. However, you cannot send or receive subreceivers; the reason is explained below.

The easiest way to make your types implement Serialize and Deserialize is to enable the derive feature of the serde crate and annotate the types you want to send with #[derive(Serialize, Deserialize)]. In many cases, that's all you need to do: the compiler generates all the tedious boilerplate code needed to serialize and deserialize instances of your types.

§Bootstrapping channels between processes

ipc-channel-mux provides a one-shot server to help establish a subchannel between two processes. When a one-shot server is created, a server name is generated and returned along with the server.

The client process calls connect(), passing the server name; this returns the sender end of a subchannel from the client to the server. Note that there is a restriction: connect() may be called at most once per one-shot server.

The server process calls accept() on the server to accept a connect request from a client. accept() blocks until a client has connected to the server and sent a message. It then returns a pair consisting of the receiver end of the subchannel from client to server and the first message received from the client.

So, in order to bootstrap a subchannel between processes, you create an instance of the SubOneShotServer type, pass the resultant server name into the client process (perhaps via an environment variable or command line flag), and connect to the server in the client. See spawn_sub_one_shot_server_client() in multiplex_integration_test.rs for an example of how to do this using a command to spawn the client process.
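The bootstrap flow above can be sketched as follows. Note that the environment variable name and the client binary name here are illustrative, not part of the crate's API:

```rust
use std::process::{Child, Command};

// Hypothetical sketch: pass the one-shot server name to the client process
// via an environment variable. MUX_SERVER_NAME and "client-executable" are
// made-up names for illustration only.
fn spawn_client(server_name: &str) -> std::io::Result<Child> {
    Command::new("client-executable")        // assumed client binary
        .env("MUX_SERVER_NAME", server_name) // client reads this, then calls connect()
        .spawn()
}

// In the client process: recover the server name from the environment.
fn server_name_from(var: &str) -> Option<String> {
    std::env::var(var).ok()
}
```

A command-line flag works equally well; the only requirement is that the client obtains the exact server name the server generated.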

§API overview

Let’s look at the two ways of creating a channel: directly constructing a channel and using a one-shot server.

§Direct channel construction

Creating a subchannel requires a multiplexing IPC channel to be created first:

let channel = mux::Channel::new().unwrap();
...
let (tx, rx) = channel.sub_channel();

§One-shot servers

Multiplexing one-shot servers are used like this:

let (server, server_name) = mux::SubOneShotServer::new().unwrap();
...
let tx = mux::SubSender::connect(server_name).unwrap(); // Typically in another process

let (rx, data) = server.accept().unwrap();

An advantage of using a one-shot server to create a subchannel, rather than an IPC channel, is that the subchannel can then be used to transmit subsenders without consuming scarce operating system resources, such as file descriptors on Linux.

§Blocking receives

SubReceiver supports blocking receives via recv, analogous to the corresponding method on IpcReceiver and std::sync::mpsc::Receiver:

use ipc_channel_mux::mux;

let channel = mux::Channel::new().unwrap();
let (tx, rx) = channel.sub_channel();

tx.send(42).unwrap();
assert_eq!(rx.recv().unwrap(), 42);

recv blocks until a message is available or all senders have been dropped, in which case it returns MuxError::Disconnected.

§Non-blocking receives

SubReceiver supports non-blocking receives via try_recv and try_recv_timeout, analogous to the corresponding methods on IpcReceiver and std::sync::mpsc::Receiver:

use ipc_channel_mux::mux;
use std::time::Duration;

let channel = mux::Channel::new().unwrap();
let (tx, rx) = channel.sub_channel();

// try_recv returns immediately with Empty if no message is available.
match rx.try_recv() {
    Err(mux::TryRecvError::Empty) => (), // no message yet
    _ => unreachable!(),
}

tx.send(42).unwrap();
assert_eq!(rx.try_recv().unwrap(), 42);

// try_recv_timeout waits for up to the specified duration.
match rx.try_recv_timeout(Duration::from_millis(1)) {
    Err(mux::TryRecvError::Empty) => (), // timed out, no message
    _ => unreachable!(),
}

§Routing

The router routes messages from subreceivers to Crossbeam channels. This allows receiving code to utilise Crossbeam features.

The router is in the mux::subchannel_router module.

use ipc_channel_mux::mux;
use mux::subchannel_router::{ROUTER, RouterProxy};

let channel = RouterProxy::new_router_channel(&ROUTER).unwrap();
let (tx, crossbeam_rx) = channel
    .route_to_new_crossbeam_receiver::<i32>()
    .unwrap();

tx.send(42).unwrap();
assert_eq!(crossbeam_rx.recv().unwrap(), 42);

§Bytes subchannels

BytesSubSender and BytesSubReceiver send and receive raw byte data, analogous to ipc-channel’s IpcBytesSender and IpcBytesReceiver:

use ipc_channel_mux::mux;

let channel = mux::Channel::new().unwrap();
let (tx, rx) = channel.bytes_sub_channel();

tx.send(b"hello bytes").unwrap();
assert_eq!(rx.recv().unwrap(), b"hello bytes");

BytesSubSender can be cloned and sent over subchannels, just like SubSender<T>. BytesSubReceiver supports recv, try_recv, and try_recv_timeout.

§Shared memory

mux::SharedMemory is a shared memory region that can be sent over subchannels. It is analogous to ipc-channel’s IpcSharedMemory and is transported efficiently via OS shared memory primitives:

use ipc_channel_mux::mux;

let channel = mux::Channel::new().unwrap();
let (tx, rx) = channel.sub_channel();

let shmem = mux::SharedMemory::from_bytes(b"hello shared world");
tx.send(shmem).unwrap();

let received: mux::SharedMemory = rx.recv().unwrap();
assert_eq!(&*received, b"hello shared world");

SharedMemory can also be included as a field in user-defined message types that derive Serialize and Deserialize.

§Interoperation with ipc-channel

ipc-channel-mux provides bridge types for processes that are migrating from raw ipc-channel incrementally, or that need to exchange endpoints across the boundary between migrated and un-migrated code:

Bridge type                 | What it carries           | Direction
mux::IpcSender<T>           | A raw ipc::IpcSender<T>   | Through a subchannel
mux::IpcReceiver<T>         | A raw ipc::IpcReceiver<T> | Through a subchannel
mux::IpcChannelSubSender<T> | A mux::SubSender<T>       | Through a raw IPC channel

§IPC senders and receivers

mux::IpcSender<T> and mux::IpcReceiver<T> are wrappers that allow ipc-channel senders and receivers to be transmitted over subchannels. They are analogous to mux::SharedMemory for IpcSharedMemory: OS handles are transported via ipc-channel’s serialization layer.

use ipc_channel::ipc;
use ipc_channel_mux::mux;

let channel = mux::Channel::new().unwrap();
let (tx, rx) = channel.sub_channel();

// Create a raw ipc-channel and wrap the sender end.
let (raw_tx, raw_rx) = ipc::channel::<u32>().unwrap();
let wrapped_tx = mux::IpcSender::from(raw_tx);

// Send the wrapped sender over the subchannel.
tx.send(wrapped_tx).unwrap();

// On the receiving side, unwrap to recover the raw sender.
let received: mux::IpcSender<u32> = rx.recv().unwrap();
let raw_tx: ipc::IpcSender<u32> = received.into_inner();

raw_tx.send(99).unwrap();
assert_eq!(raw_rx.recv().unwrap(), 99);

mux::IpcReceiver<T> works the same way. Call into_inner() after receiving to get the underlying ipc::IpcReceiver<T>. An IpcReceiver may only be serialized (sent) once; a second attempt returns an error.

mux::IpcSender<T> and mux::IpcReceiver<T> can also be included as fields in user-defined message types that derive Serialize and Deserialize.

§Subchannel senders over IPC channels

mux::IpcChannelSubSender<T> is the reverse of mux::IpcSender<T>: it wraps a SubSender<T> for transmission over a raw ipc-channel IPC channel. This is useful when bootstrapping a complex communication topology where processes need to exchange subsenders before a subchannel is established between the processes.

use ipc_channel::ipc;
use ipc_channel_mux::mux;

let channel = mux::Channel::new().unwrap();
let (tx, rx) = channel.sub_channel::<u32>();

// Wrap the SubSender for IPC channel transport (consuming it).
let (raw_tx, raw_rx) = ipc::channel().unwrap();
raw_tx.send(mux::IpcChannelSubSender::from(tx)).unwrap();

// On the receiving side, reconstruct the SubSender.
let transport: mux::IpcChannelSubSender<u32> = raw_rx.recv().unwrap();
let tx: mux::SubSender<u32> = transport.into_sub_sender().unwrap();

tx.send(42).unwrap();
assert_eq!(rx.recv().unwrap(), 42);

From<SubSender<T>> is consuming; clone the SubSender first if the original is also needed. IpcChannelSubSender<T> can be included as a field in any user-defined message type. The reconstructed SubSender<T> is fully functional: it detects subreceiver disconnection and sends Disconnect when dropped.

§Opaque senders and receivers

OpaqueSubSender and OpaqueSubReceiver are type-erased versions of SubSender<T> and SubReceiver<T>. They are useful when the message type is not known statically or when handling heterogeneous channels. For example, the router uses OpaqueSubReceiver internally so it can manage receivers of different message types together.

To convert between typed and opaque forms, use to_opaque() and to::<T>():

let opaque_tx: OpaqueSubSender = tx.to_opaque();
let tx: SubSender<MyMessage> = opaque_tx.to();

let opaque_rx: OpaqueSubReceiver = rx.to_opaque();
let rx: SubReceiver<MyMessage> = opaque_rx.to();

§Semantic differences from Rust channels

  • Rust channels can be either unbounded or bounded whereas subchannels are always unbounded and send() never blocks.
  • Rust channels do not consume OS IPC resources whereas subchannels consume IPC resources such as sockets, file descriptors, shared memory segments, named pipes, and such like, depending on the OS.
  • Rust channels transfer ownership of messages whereas subchannels serialize and deserialize messages.
  • Rust channels are type safe whereas subchannels depend on client and server programs using identical message types (or at least message types with compatible serial forms).

§Semantic differences from IPC channels

IPC channels are provided by Servo's ipc-channel crate, which ipc-channel-mux uses for its IPC communication.

  • Subchannel creation requires the underlying IPC channel to have been created already. Reusing the underlying channel when creating multiple subchannels enables those subchannels to be multiplexed over the underlying channel.
  • Subchannel receivers, or subreceivers, may not be sent or received. This is a consequence of the MPSC nature of the underlying IPC channel: sending a subreceiver would entail sending the underlying IPC receiver and this would break any other subreceivers using that IPC receiver.
  • IPC channel creation can fail, as can multiplexing IPC channel creation, but subchannel creation never fails.3
  • IPC receivers can be moved into an IpcReceiverSet and then monitored together using a “select” operation. There is no corresponding feature in the ipc-channel-mux API: scenarios in which subreceivers share an underlying IPC channel, with some in one set, some in another, and some in no set at all, give rise to liveness and fairness difficulties without much practical benefit. The main practical use of IpcReceiverSet is in implementing routing, which is implemented in ipc-channel-mux without adding a subreceiver set construct to the API.

§When is multiplexing beneficial?

Readers familiar with ipc-channel may be experiencing some déjà vu at this point since ipc-channel-mux is built on top of ipc-channel and has a similar API. The main difference is that ipc-channel-mux multiplexes subchannels over the IPC channels provided by ipc-channel.

We’ll now explore when it’s worth using ipc-channel-mux instead of ipc-channel. First, it’s important to note some other differences between the two kinds of channel:

  • Subchannel senders, or subsenders, may be sent and received without consuming scarce operating system resources, such as file descriptors on Unix variants.4 (Servo has encountered process crashes due to IPC channels consuming all the file descriptors for a process.)
  • In order to communicate subreceiver drop to all the subchannel senders, one additional IPC channel is needed per sender of the IPC channel underlying the subchannel. The additional IPC channel's consumption of scarce operating system resources, such as file descriptors on Unix variants, is amortised across multiple subchannels which share the sender of the IPC channel underlying the original subchannel.
  • Subchannels sharing the same underlying IPC channel could interfere with each other’s performance. For example, message latency on a subchannel sharing the same underlying IPC channel as a busy subchannel could be increased.

To replace an IPC channel with a subchannel and get some benefit, it is necessary to either:

  • multiplex other subchannels over the subchannel’s underlying IPC channel, or
  • send multiple subsenders over the subchannel.4

Using a one-shot server to create a subchannel means that only that one subchannel can be multiplexed over the underlying IPC channel. So, to replace an IPC one-shot server with a multiplexed one-shot server and get some benefit, it is necessary to either:

  • set up other subchannels between the sending process (the one which called connect()) and the receiving process (the one which called accept()), or
  • send multiple subsenders over the subchannel.4

§Packaging

ipc-channel-mux is packaged in its own repository and crate, separate from ipc-channel. This has the following advantages:

  • The code is more easily navigated, since it is portable rather than containing per-platform variants.
  • Changes may be promoted more easily, since IPC channel committers need not be involved.
  • The crate can be published to crates.io for ease of consumption by Servo5 while avoiding “infecting” the published IPC channel crate and its public API with experimental code which might be ditched if multiplexing turns out not to be useful to Servo.
  • Documentation, especially this overview, is focused on multiplexing.
  • Tests run fast since the IPC channel tests are elsewhere.6
  • The dependencies of ipc-channel-mux are kept separate from those of IPC channel.
  • Implementing ipc-channel-mux using the public API of IPC channel makes the projects easier to understand than if they were combined.
  • If multiplexing proves useful and is applied to some IPC channel usecases in Servo, it will be possible to release a version of ipc-channel-mux and keep enhancing it and experimenting with applying it to other Servo usecases without giving it the (possibly misleading) status of being part of the IPC channel API. In particular, the multiplexing API can be changed as necessary without impacting backwards compatibility of IPC channel.

One possible disadvantage is that ipc-channel-mux cannot use IPC channel internals, which would have been possible if they were in the same repository.

Another disadvantage is that Servo will require an additional dependency. However, it would be feasible to merge ipc-channel-mux into the IPC channel repository later.

§Testing

To run the tests, issue:

cargo test

To run the benchmarks, issue:

cargo bench

Linux is the reference platform for ipc-channel-mux, meaning that bugs encountered on other platforms should be reproduced on Linux so that a complete regression test suite is available on Linux.

§Diagnostics

ipc-channel-mux uses the log crate to produce log messages when logging is enabled for one or more processes.

You can emit these log messages from an executable by setting the environment variable RUST_LOG to debug or, for more detail, trace. For example:

RUST_LOG=debug someexecutable

If you want to see the log messages from a test, pass the --nocapture flag to the test executable, e.g.

RUST_LOG=trace cargo test mux_test::multiplex_simple -- --nocapture

Note: RUST_LOG is not automatically propagated between processes, so you have to ensure this is done if you want to enable logging for launched processes.
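One way to propagate the setting is to forward it explicitly when spawning a child process, as in this sketch (the binary name is illustrative):

```rust
use std::process::Command;

// Sketch: forward the parent's RUST_LOG (if set) to a spawned process so
// that its logging is enabled too. Command::env sets the variable for the
// child process only.
fn command_with_logging(program: &str) -> Command {
    let mut cmd = Command::new(program);
    if let Ok(level) = std::env::var("RUST_LOG") {
        cmd.env("RUST_LOG", level);
    }
    cmd
}
```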

For more information, see Configure Logging in The Rust Cookbook.

§Implementation overview

ipc-channel-mux multiplexes its subchannels over IPC channels provided by ipc-channel which is implemented in terms of native IPC primitives: file descriptor passing over Unix sockets on Unix variants, Mach ports on macOS, and named pipes on Windows.

Multiplexed one-shot servers are implemented using IPC channel one-shot servers. One-shot server names are implemented as a file system path (for Unix variants, with the file system path bound to the socket) or other kinds of generated names on macOS and Windows.

The following sections describe the principles of multiplexing subchannels over IPC channels and some of the design considerations.

§Subchannel identifiers

Each subchannel needs a separate identifier. This is used to tag messages for that subchannel before they are sent to the IPC channel underlying the subchannel. On message receipt, the subchannel id is used to route the message to the appropriate subchannel.
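The tagging-and-routing idea can be modelled like this (an illustrative sketch only; these are not the crate's real types, and the real subchannel id is a UUID):

```rust
use std::collections::HashMap;

type SubChannelId = u64; // the real crate uses a UUID

#[derive(Default)]
struct Demux {
    // One queue of raw payloads per subchannel.
    queues: HashMap<SubChannelId, Vec<Vec<u8>>>,
}

impl Demux {
    // Called for each (scid, payload) received from the shared IPC channel:
    // route the payload to the queue for its subchannel.
    fn route(&mut self, scid: SubChannelId, payload: Vec<u8>) {
        self.queues.entry(scid).or_default().push(payload);
    }

    // A subreceiver pulls the next payload routed to its subchannel, if any.
    fn try_recv(&mut self, scid: SubChannelId) -> Option<Vec<u8>> {
        let q = self.queues.get_mut(&scid)?;
        if q.is_empty() { None } else { Some(q.remove(0)) }
    }
}
```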

§Subsender serialization

When a subsender is sent over a subchannel, the underlying IPC sender must be transmitted to the receiving process. To avoid redundantly transmitting the same IPC sender multiple times, the implementation uses a UUID-based optimization:

  • The first time a subsender is sent over a particular IPC channel, both the IPC sender and a UUID identifying it are transmitted.
  • Subsequent sends of clones of the same subsender over the same IPC channel transmit only the UUID — the receiving process already has the IPC sender from the first transmission.

This is tracked using two complementary data structures: a Source (using weak references to track which endpoints have been sent from the sending side) and a Target (mapping UUIDs to endpoints on the receiving side). Thread-local context is used during serialization and deserialization to pass this metadata without changing serde’s signatures.
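The dedup idea can be sketched as follows (the names here are illustrative, not the crate's real Source/Target types): the first send over a channel carries the full serialized sender plus its UUID; later sends of clones carry only the UUID.

```rust
use std::collections::{HashMap, HashSet};

type Uuid = u128; // stand-in for a real UUID

enum Wire {
    SenderAndId(Vec<u8>, Uuid), // first transmission: serialized sender + id
    IdOnly(Uuid),               // subsequent transmissions: id only
}

// Sending side: remembers which sender UUIDs this channel has already carried.
struct Source { sent: HashSet<Uuid> }

impl Source {
    fn encode(&mut self, id: Uuid, sender_bytes: Vec<u8>) -> Wire {
        if self.sent.insert(id) { Wire::SenderAndId(sender_bytes, id) }
        else { Wire::IdOnly(id) }
    }
}

// Receiving side: maps UUIDs to senders seen so far.
struct Target { known: HashMap<Uuid, Vec<u8>> }

impl Target {
    fn decode(&mut self, w: Wire) -> Vec<u8> {
        match w {
            Wire::SenderAndId(s, id) => { self.known.insert(id, s.clone()); s }
            Wire::IdOnly(id) => self.known[&id].clone(),
        }
    }
}
```

The real implementation additionally uses weak references on the sending side so that dropped endpoints do not pin the tracking state.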

§Subsender lifecycle

Subsenders have a complex lifecycle because they can be cloned, sent over subchannels to other processes, and dropped independently. A subsender that has been sent over a subchannel but not yet received by the other process is said to be in-flight.

It would be incorrect to report a subchannel as disconnected while a subsender is still in-flight, since the receiving process may yet receive it and use it to send messages. The SubSenderStateMachine manages this by tracking:

  • Sources: the set of processes that currently hold a copy of the subsender.
  • In-flight entries: subsenders that have been serialized and sent but not yet deserialized and received.

A subchannel is only considered disconnected when all sources have dropped their copies and no copies are in-flight. Periodic probing detects process crashes that might prevent in-flight subsenders from ever being received.
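The disconnection rule reduces to a simple predicate, sketched here with illustrative types (not the crate's real SubSenderStateMachine):

```rust
use std::collections::HashSet;

struct SubSenderState {
    sources: HashSet<u64>, // processes currently holding a copy of the subsender
    in_flight: usize,      // copies serialized and sent but not yet received
}

impl SubSenderState {
    // Disconnected only when no process holds a copy AND none is in transit.
    fn is_disconnected(&self) -> bool {
        self.sources.is_empty() && self.in_flight == 0
    }
}
```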

§Shared memory transport

SharedMemory is a thin wrapper around IpcSharedMemory with custom serialization that works with the mux’s two-stage serialization model. ipc-channel uses thread-local storage to transport IpcSharedMemory values out-of-band via OS shared memory primitives. The mux’s inner serialization (using postcard) would lose these values, so SharedMemory uses its own thread-local mechanism:

  1. Serialization (send path): When a SharedMemory value is serialized during inner (postcard) serialization, the underlying IpcSharedMemory is captured into a mux-managed thread-local and only an index is written into the payload bytes. After inner serialization completes, the captured values are included in the protocol message as Vec<IpcSharedMemory>, so that ipc-channel’s outer serialization transports them efficiently via OS shared memory.

  2. Deserialization (receive path): The outer deserialization reconstructs the Vec<IpcSharedMemory> from the protocol message. Before inner (postcard) deserialization, these values are placed in a mux-managed thread-local. The SharedMemory deserializer reads the index from the payload and retrieves the corresponding IpcSharedMemory from the thread-local.

This approach avoids any modifications to ipc-channel while still benefiting from its efficient OS-level shared memory transport.
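The thread-local index-capture mechanism can be sketched like this (illustrative names; Vec<u8> stands in for IpcSharedMemory):

```rust
use std::cell::RefCell;

thread_local! {
    // Values captured during inner serialization, to be lifted into the
    // protocol message after serialization completes.
    static CAPTURED: RefCell<Vec<Vec<u8>>> = RefCell::new(Vec::new());
}

// Send path, step 1: stash the region and write only its index into the payload.
fn capture(region: Vec<u8>) -> usize {
    CAPTURED.with(|c| {
        let mut v = c.borrow_mut();
        v.push(region);
        v.len() - 1
    })
}

// Send path, step 2: drain the captured values into the protocol message.
fn take_captured() -> Vec<Vec<u8>> {
    CAPTURED.with(|c| c.borrow_mut().drain(..).collect())
}

// Receive path: the deserializer reads the index from the payload and looks
// up the region transported out-of-band.
fn restore(index: usize, transported: &[Vec<u8>]) -> Vec<u8> {
    transported[index].clone()
}
```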

§IPC sender and receiver transport

mux::IpcSender<T> and mux::IpcReceiver<T> use the same thread-local mechanism as SharedMemory. ipc-channel’s OpaqueIpcSender and OpaqueIpcReceiver types would be lost if passed through postcard inner serialization, so the wrappers lift them out-of-band into the protocol message:

  1. Serialization (send path): When an IpcSender<T> or IpcReceiver<T> is serialized during inner (postcard) serialization, the underlying opaque handle is captured into a mux-managed thread-local and only an index is written into the payload bytes. After inner serialization completes, the captured handles are included in the protocol message as Vec<OpaqueIpcSender> / Vec<SyncOpaqueIpcReceiver>, so that ipc-channel’s outer serialization transports them as OS handles.

  2. Deserialization (receive path): The outer deserialization reconstructs the handle vecs from the protocol message. Before inner (postcard) deserialization, these handles are placed in mux-managed thread-locals. The IpcSender/IpcReceiver deserializers read the index from the payload and retrieve the corresponding handle from the thread-local.

IpcReceiver<T> stores its handle in a RefCell<Option<…>> so the handle can be moved out during Serialize (which takes &self). Attempting to serialize an IpcReceiver a second time returns an error.

An internal SyncOpaqueIpcReceiver wrapper adds unsafe impl Sync to OpaqueIpcReceiver, which is required because OpaqueIpcReceiver is !Sync (it contains a Cell<i32>) and MultiMessage must be Sync for the ROUTER static. The unsafe is safe in practice because MultiMessage values are serialized and sent immediately after construction and are never shared between threads.

§Subchannel sender over IPC channel transport

IpcChannelSubSender<T> takes the opposite approach to IpcSender<T> / IpcReceiver<T>: instead of hiding OS handles from ipc-channel, it exposes them directly. It derives Serialize/Deserialize with an embedded IpcSender<MultiMessage> field, so ipc-channel’s own OS handle mechanism transports it without any mux thread-locals.

Send path (From<SubSender<T>>): the conversion extracts the subchannel ID, clones the underlying IpcSender<MultiMessage>, creates a keepalive IPC channel, and sends MultiMessage::SendingViaIpcChannel { scid, keepalive_rx } to the demuxer. This registers an in-flight entry in the SubSenderStateMachine with a probe on keepalive_rx, preventing premature disconnection and enabling crash detection.

Receive path (into_sub_sender()): connect_sender creates a response channel and sends MultiMessage::Connect to the demuxer so the reconstructed sender can receive SubReceiverDisconnected notifications. A SubReceiverProxy is inserted for the subchannel so is_receiver_connected can detect subreceiver disconnection. A new source UUID is generated and MultiMessage::Received is sent to transition the in-flight entry to a registered source. A SubSenderTracker disconnector is created that sends MultiMessage::Disconnect when the reconstructed SubSender is eventually dropped.

To ensure SubReceiverDisconnected is delivered even if the SubReceiver is dropped immediately after into_sub_sender() returns, SubChannelReceiver::drop drains all pending IPC messages (including any in-flight Connect) before broadcasting the disconnection notification.

§When to block

Generally, sends are non-blocking (but see Blocking sends and deadlocks below), so the main blocking consideration is for receives. A receive on a subchannel may have to receive from the underlying IPC channel, unless the message has already been received (and placed on a standard Rust channel corresponding to the subchannel receiver).

On subchannel receive, we first of all issue a non-blocking receive (try_recv) on the corresponding standard channel. If this returns a message, we can return the message as the result of subchannel receive.

If the corresponding standard channel is empty, we can safely issue a blocking receive on the IPC channel underlying the multi-receiver. (This wouldn’t be true if the code supported multi-threading.)

Once a message is received, we can re-try the non-blocking receive on the standard channel to see if a message has been received for the subreceiver. If not, we can block again on the IPC channel.
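The receive loop described above can be sketched as follows (simplified; the real code also polls with a timeout and probes, as described in the next sections):

```rust
use std::sync::mpsc;

// Sketch: receive for one subreceiver. `local` is the standard channel
// holding messages already routed to this subreceiver; `drain_ipc_once`
// stands in for blocking on the shared IPC channel and routing one message.
fn recv_for<T>(
    local: &mpsc::Receiver<T>,
    drain_ipc_once: &mut dyn FnMut(),
) -> Result<T, mpsc::RecvError> {
    loop {
        match local.try_recv() {
            // A message was already routed to this subreceiver.
            Ok(msg) => return Ok(msg),
            // All senders hung up: report disconnection.
            Err(mpsc::TryRecvError::Disconnected) => return Err(mpsc::RecvError),
            // Nothing local yet: block on the IPC channel, then retry.
            Err(mpsc::TryRecvError::Empty) => drain_ipc_once(),
        }
    }
}
```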

§Polling

In the last section, we mentioned issuing a blocking receive on the IPC channel underlying a multi-receiver. It's actually a little more complicated than that, because we need to poll to detect when in-flight subsenders have been destroyed. We do this by probing the response channel associated with the IPC channel used to transmit the subsender.

Each MultiSender has a dedicated response channel from the receiving side. When the receiving process exits or the response channel’s sender is dropped, try_recv on this response channel returns IpcError::Disconnected. The probe caches this disconnected state so that once disconnection is detected, subsequent probes immediately return false without calling try_recv again. This caching is necessary because multiple subsender state machines may share the same MultiSender (due to the subsender serialization UUID optimization), and try_recv consumes the disconnection error — without caching, only the first state machine to probe would detect disconnection, while others would see an empty channel and incorrectly conclude the remote process is still alive.
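The probe-with-cache behaviour can be sketched like this (an illustrative model, using a standard channel in place of the IPC response channel):

```rust
use std::sync::mpsc;

struct Probe {
    rx: mpsc::Receiver<()>,
    // Cached so that every state machine sharing this probe observes the
    // disconnection, even though try_recv consumes the error.
    disconnected: bool,
}

impl Probe {
    /// Returns true while the remote side is believed alive.
    fn alive(&mut self) -> bool {
        if self.disconnected {
            return false;
        }
        match self.rx.try_recv() {
            Err(mpsc::TryRecvError::Disconnected) => {
                self.disconnected = true; // cache the one-shot error
                false
            }
            _ => true, // empty or a message: remote still alive
        }
    }
}
```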

Polling is implemented by issuing a try_recv_timeout on the IPC channel. When the timeout occurs, probing can be initiated and we can then drop the sender half of the standard channel for a subreceiver whose “other half” (meaning the senders for all clients) has hung up. This will cause the non-blocking receive on such standard channels to return with an error and we can then return Disconnected from the corresponding subchannel receives.

The receive on the multi-receiver’s IPC channel also serves the purpose of detecting Disconnect messages generated when a subsender and all its clones on a particular client (approximately equivalent to an IPC sender) have been dropped. That’s another way that the sending side of a subchannel can “hang up”, after which a receive from the subchannel should fail with Disconnected.

§Blocking sends and deadlocks

It turns out that a send to an IPC channel can block when the buffer fills up. So we have to be careful to take every opportunity to receive messages from IPC channels when we can, for example before generating Disconnect messages when a subsender and all its clones on a particular client have been dropped.

Failure to do this can result in deadlocks. For example, if a process creates a large number of subchannels and then drops them, messages are sent to notify the “other side” that one side has hung up. If these messages are not received, drop of a subsender or subreceiver can block.

This risk of deadlock was present for non-multiplexed IPC channels, but the risk was lower because fewer messages were sent on each IPC channel. With multiplexing, a potentially large number of messages can be sent. Fortunately, a multi-receiver will tend to drain messages when receiving on behalf of a subreceiver. Provided that the application code issues receives fairly frequently, the underlying IPC channels shouldn't fill up.

§Interprocess protocol

This is described in PROTOCOL.md which, if you are reading the documentation, is reproduced below.

§Migrating from ipc-channel

This is described in MIGRATION.md which, if you are reading the documentation, is reproduced below.

§Major missing features

  • Each one-shot server accepts only one client connect request. This is fine if you simply want to use this API to split your application up into a fixed number of mutually untrusting processes, but it’s not suitable for implementing a system service.

§Terminology

  • Rust channel: MPSC (multi-producer, single-consumer) channels in the Rust standard library. The implementation consists of a single-consumer wrapper around a port of Crossbeam channel.
  • Crossbeam channel: extends Rust channels to be more like their Go counterparts. Crossbeam channels are MPMC (multi-producer, multi-consumer).
  • IPC channel: the IPC channels which ipc-channel-mux is implemented on top of.
  • Channels: provides Sender and Receiver types for communicating with a channel-like API across generic IO streams.

§Interprocess Protocol

The following describes the interprocess protocol used by ipc-channel-mux to multiplex subchannels over a single IPC channel.

§Overview

Multiple typed subchannels are multiplexed over one underlying ipc-channel IPC channel (the forward channel). Each subchannel is identified by a UUID (SubChannelId). A second IPC channel (the response channel) carries reverse-direction control messages from the receiver back to the sender.

All protocol messages are variants of two enums which are serialized by ipc-channel:

  • MultiMessage – sent over the forward channel from sender to receiver.
  • MultiResponse – sent over the response channel from receiver to sender.

User-level message payloads are serialized and carried inside MultiMessage::Data as opaque bytes.
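A simplified sketch of the shape of these enums follows; the real definitions carry ipc-channel types (IpcSender, IpcSharedMemory, and so on) that are elided here, and the MultiResponse variant shown is illustrative only:

```rust
type SubChannelId = u128; // stand-in for a UUID
type ClientId = u128;     // stand-in for a UUID

#[allow(dead_code)]
enum MultiMessage {
    // Registers a sending client (the real variant also carries an
    // IpcSender<MultiResponse> for the response channel).
    Connect(ClientId),
    // A user-level payload tagged with its target subchannel (the real
    // variant also carries embedded subsenders and shared memory regions).
    Data(SubChannelId, Vec<u8>),
    // Further variants, described below:
    // SubChannelId, Sending, SendingViaIpcChannel, Disconnect, ...
}

#[allow(dead_code)]
enum MultiResponse {
    // Reverse-direction control, e.g. notifying subreceiver disconnection.
    SubReceiverDisconnected(SubChannelId),
}
```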

§Identifiers

| Name | Type | Purpose |
|---|---|---|
| ClientId | UUID | Identifies a sending process / connection. |
| SubChannelId | UUID | Identifies a subchannel. |
| IPC sender UUID | UUID | Identifies an underlying IpcSender<MultiMessage> for deduplication. |

§Forward Channel Messages (MultiMessage)

§Connect(IpcSender<MultiResponse>, ClientId)

Registers a new sending client with the receiver. The included IpcSender<MultiResponse> is the sender half of a response channel that the receiver will use to send MultiResponse messages back to this client.

§Data(SubChannelId, Vec<u8>, Vec<(SubChannelId, IpcSenderAndOrId)>, Vec<IpcSharedMemory>)

Carries a user-level message. Fields:

  1. Target subchannel – the SubChannelId that the message is destined for.
  2. Payload – the user message serialized to bytes.
  3. Embedded subsenders – a list of (SubChannelId, IpcSenderAndOrId) pairs for any SubSender values that were serialized inside the payload (see Subsender Transmission below).
  4. Shared memory regions – a list of IpcSharedMemory values extracted during serialization (see Shared Memory below).

§SubChannelId(SubChannelId, String)

Advertises a new subchannel to the receiver. Sent during inter-process bootstrapping (the one-shot server flow) immediately after Connect. The String is the server name used to correlate the subchannel with the SubOneShotServer that is accepting.

§Sending { scid, via, via_chan }

Notifies the receiver that a subsender for subchannel scid is in flight, being transmitted inside a Data message on subchannel via. The via_chan field carries the IpcSenderAndOrId of the IPC sender used by the channel carrying the subsender.

§SendingViaIpcChannel { scid, keepalive }

Notifies the receiver that a subsender for subchannel scid is being transported via an IpcChannelSubSender (i.e. over a raw IPC channel to another process). The keepalive field is an IpcReceiver<()> whose sender end (keepalive_tx) is embedded in the IpcChannelSubSender and held by the remote process for the lifetime of the wrapper or the SubSender reconstructed from it. When the remote process drops the sender or crashes, the receiver end closes and the probe detects the disconnection.

This message is used instead of Sending for IPC channel transport because Sending’s probe mechanism (checking the response channel of the carrying IPC sender) does not detect remote-process crashes when the carrying sender belongs to the local process.

§Received { scid, via, new_source }

Confirms that a subsender for subchannel scid, which was in flight via subchannel via, has been successfully deserialized at a new source identified by new_source (a UUID). This transitions the subsender’s lifecycle from in flight to connected from a new source.

§ReceiveFailed { scid, via }

Indicates that a subsender for subchannel scid, which was in flight via subchannel via, could not be received (e.g. because the target subchannel’s receiver was already dropped). This removes the in-flight entry. If no sources remain and nothing is in flight, the subchannel is considered disconnected.

§Disconnect(SubChannelId, Uuid)

Indicates that all copies of a subsender for the given SubChannelId at the source identified by the given UUID have been dropped. Once all sources and in-flight transmissions for a subchannel have disconnected, the subchannel’s receiver is notified of disconnection.

§Response Channel Messages (MultiResponse)

§SubReceiverDisconnected(SubChannelId)

Sent from the receiver to all connected clients when a SubReceiver is dropped. This allows senders to detect early that the receiving end of a subchannel is gone, so that subsequent send calls can return MuxError::Disconnected without attempting the IPC send.
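
The variant descriptions above can be collected into a sketch of the two protocol enums. This is a hedged reconstruction from this section, not the crate's source: the real definitions use uuid::Uuid and ipc-channel types, for which simple placeholder aliases stand in here so the sketch compiles on its own.

```rust
#[allow(dead_code)]
type Uuid = u128; // stands in for uuid::Uuid
type ClientId = Uuid;
type SubChannelId = Uuid;
type ResponseSender = String;    // stands in for IpcSender<MultiResponse>
type KeepaliveReceiver = String; // stands in for IpcReceiver<()>
type IpcSharedMemory = Vec<u8>;  // stands in for ipc_channel::ipc::IpcSharedMemory

#[allow(dead_code)]
enum IpcSenderAndOrId {
    IpcSender(String /* sender handle */, String /* sender UUID */),
    IpcSenderId(String /* sender UUID */),
}

#[allow(dead_code)]
enum MultiMessage {
    Connect(ResponseSender, ClientId),
    Data(
        SubChannelId,                          // target subchannel
        Vec<u8>,                               // serialized payload
        Vec<(SubChannelId, IpcSenderAndOrId)>, // embedded subsenders
        Vec<IpcSharedMemory>,                  // shared memory regions
    ),
    SubChannelId(SubChannelId, String /* one-shot server name */),
    Sending { scid: SubChannelId, via: SubChannelId, via_chan: IpcSenderAndOrId },
    SendingViaIpcChannel { scid: SubChannelId, keepalive: KeepaliveReceiver },
    Received { scid: SubChannelId, via: SubChannelId, new_source: Uuid },
    ReceiveFailed { scid: SubChannelId, via: SubChannelId },
    Disconnect(SubChannelId, Uuid),
}

#[allow(dead_code)]
enum MultiResponse {
    SubReceiverDisconnected(SubChannelId),
}

fn main() {
    // A Data message is routed by its target SubChannelId.
    let msg = MultiMessage::Data(7, b"payload".to_vec(), vec![], vec![]);
    if let MultiMessage::Data(scid, payload, _, _) = msg {
        assert_eq!(scid, 7);
        assert_eq!(payload, b"payload");
    }
}
```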

§IPC Sender Deduplication (IpcSenderAndOrId)

Transmitting an IpcSender over an IPC channel consumes operating-system resources (e.g. file descriptors). To avoid sending the same underlying IPC sender repeatedly, the protocol uses IpcSenderAndOrId:

IpcSenderAndOrId::IpcSender(IpcSender<MultiMessage>, String)
IpcSenderAndOrId::IpcSenderId(String)

The String is the UUID of the IPC sender.

  • First transmission: The IPC sender has not been sent before, so IpcSender(sender, uuid) is sent. The receiver creates a response channel and sends Connect(response_sender, client_id) back over the IPC sender.
  • Subsequent transmissions: Only IpcSenderId(uuid) is sent. The receiver looks up the existing IPC sender by UUID. No new response channel or Connect message is needed.
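
The deduplication decision can be modelled as a small function over the set of UUIDs already transmitted on a connection. This is a hedged sketch: `encode_sender` and the string-typed "handle" are illustrative, not crate APIs.

```rust
use std::collections::HashSet;

#[derive(Debug)]
enum IpcSenderAndOrId {
    IpcSender(String /* sender handle */, String /* sender UUID */),
    IpcSenderId(String /* sender UUID */),
}

// Hypothetical helper: choose the wire form for an IPC sender, tracking
// which sender UUIDs have already crossed this connection.
fn encode_sender(handle: &str, uuid: &str, sent: &mut HashSet<String>) -> IpcSenderAndOrId {
    if sent.insert(uuid.to_string()) {
        // First transmission: send the real sender plus its UUID.
        IpcSenderAndOrId::IpcSender(handle.to_string(), uuid.to_string())
    } else {
        // Subsequent transmissions: the UUID alone suffices; the receiver
        // looks up the existing IPC sender.
        IpcSenderAndOrId::IpcSenderId(uuid.to_string())
    }
}

fn main() {
    let mut sent = HashSet::new();
    assert!(matches!(encode_sender("h", "u1", &mut sent), IpcSenderAndOrId::IpcSender(..)));
    assert!(matches!(encode_sender("h", "u1", &mut sent), IpcSenderAndOrId::IpcSenderId(_)));
}
```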

§API Operations and Their Protocol Messages

§In-Process Channel Setup (Channel::new / sub_channel)

Channel::new() creates a forward IPC channel and a response IPC channel. No protocol messages are sent. Calling sub_channel() creates a new SubChannelId and returns a SubSender<T> / SubReceiver<T> pair. No protocol messages are sent during subchannel creation either.

§Inter-Process Bootstrapping (SubOneShotServer / SubSender::connect)

Client side (SubSender::connect):

  1. Connects to the IPC one-shot server, creating an IpcSender<MultiMessage>.
  2. Creates a response channel.
  3. Generates a ClientId.
  4. Sends Connect(response_sender, client_id) to the server.
  5. Generates a SubChannelId for the new subchannel.
  6. Sends SubChannelId(subchannel_id, name) to the server.

Server side (server.accept()):

  1. Accepts the IPC connection.
  2. Receives Connect and registers the response sender.
  3. Receives SubChannelId(subchannel_id, name) and validates it against the server name.
  4. Receives the first Data message and returns (SubReceiver<T>, T).

§Sending a Message (SubSender::send)

  1. The sender checks whether the subchannel’s receiver is still connected by checking for any SubReceiverDisconnected messages on the response channel. If disconnected, returns MuxError::Disconnected.
  2. The user value is serialized. Any embedded SubSender values and SharedMemory values are extracted during serialization.
  3. For each embedded subsender, a Sending { scid, via, via_chan } message is sent to notify the receiver that the subsender is in flight.
  4. A Data(subchannel_id, payload, subsenders, shmems) message is sent over the forward IPC channel.

§Receiving a Message (SubReceiver::recv)

  1. When a Data message arrives, it is routed by SubChannelId to the correct subchannel.
  2. The payload is deserialized. Any embedded SubSender values are reconstructed, and a Received { scid, via, new_source } message is sent back for each to confirm receipt.
  3. The deserialized value is returned.

§Subsender Transmission Lifecycle

When a SubSender is sent inside a message on another subchannel, its lifecycle is tracked to ensure proper disconnection detection:

  1. Serialization: The SubSender’s IPC sender and subchannel ID are extracted (not serialized into the payload).
  2. Sending notification: A Sending { scid, via, via_chan } message is sent. The receiver registers the subsender as in flight via subchannel via.
  3. Data transmission: The Data message carries the subsender information alongside the payload.
  4. Receipt confirmation: When the payload is deserialized, a Received { scid, via, new_source } message is sent. The subsender transitions from in flight to connected from new_source.
  5. Disconnection: When the received subsender is dropped, a Disconnect(scid, source) message is sent. Once all sources are disconnected and no copies are in flight, the subchannel is fully disconnected.
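
The lifecycle above can be modelled as a small state machine: a subchannel is disconnected only when no connected sources remain and nothing is in flight. This is a hedged sketch of the bookkeeping, not the crate's actual data structures; `SubChannelState` and its method names are illustrative.

```rust
use std::collections::HashSet;

type Uuid = u64; // stands in for uuid::Uuid

#[derive(Default)]
struct SubChannelState {
    sources: HashSet<Uuid>, // connected sources holding a subsender copy
    in_flight: usize,       // subsenders currently in transit
}

impl SubChannelState {
    fn sending(&mut self) { self.in_flight += 1; }        // Sending received
    fn received(&mut self, new_source: Uuid) {            // Received received
        self.in_flight -= 1;
        self.sources.insert(new_source);
    }
    fn receive_failed(&mut self) { self.in_flight -= 1; } // ReceiveFailed received
    fn disconnect(&mut self, source: Uuid) {              // Disconnect received
        self.sources.remove(&source);
    }
    fn disconnected(&self) -> bool {
        self.sources.is_empty() && self.in_flight == 0
    }
}

fn main() {
    let mut st = SubChannelState::default();
    st.sources.insert(1);        // original source
    st.sending();                // subsender sent on another subchannel
    st.disconnect(1);            // original copy dropped while in flight
    assert!(!st.disconnected()); // in-flight entry defers disconnection
    st.received(2);              // receipt confirmed at a new source
    assert!(!st.disconnected());
    st.disconnect(2);            // remote copy dropped
    assert!(st.disconnected());  // now fully disconnected
    let _ = SubChannelState::receive_failed; // variant shown above for completeness
}
```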

§IpcChannelSubSender Lifecycle

When a SubSender is transported to another process via IpcChannelSubSender (a raw ipc-channel IPC channel), a different lifecycle applies:

  1. Wrap: IpcChannelSubSender::from(sub_tx) calls begin_ipc_channel_transport, which:
    • Creates a keepalive IPC channel (keepalive_tx, keepalive_rx).
    • Sends SendingViaIpcChannel { scid, keepalive: keepalive_rx } to the local demuxer to register the in-flight entry and install the probe.
    • Embeds keepalive_tx in the IpcChannelSubSender value.
    • The original SubSender is consumed; its disconnector fires, sending Disconnect(scid, ORIGIN). Because the in-flight entry was registered first, the state machine defers full disconnection.
  2. Transport: The IpcChannelSubSender (including keepalive_tx) is sent over a raw IPC channel to the remote process. The OS handle for keepalive_tx is duplicated into the remote process.
  3. Reconstruct: The remote process calls into_sub_sender(), which:
    • Extracts keepalive_tx and stores it in the reconstructed SubChannelSender so it is held for the sender’s lifetime.
    • Sends Received { scid, via: EMPTY_SUBCHANNEL_ID, new_source } to register the remote process as the new source.
  4. Disconnection: When the reconstructed SubSender is dropped in the remote process, keepalive_tx drops and Disconnect(scid, new_source) is sent as usual.
  5. Crash detection: If the remote process crashes (or drops the IpcChannelSubSender without calling into_sub_sender), keepalive_tx is closed by the OS. The next probe call finds IpcError on keepalive_rx and signals disconnection (see Probing below).

§Probing

When subsenders are in flight, the receiver periodically performs a non-blocking probe to detect whether the remote process carrying a subsender has crashed. Two probing mechanisms are used depending on how the subsender is being transported.

Sending (subsender sent inside a Data message on another subchannel)

The probe calls try_recv on the response channel associated with the carrying channel’s IPC sender:

  • If try_recv returns IpcError, the remote process has crashed and all in-flight entries for that channel are removed. If no sources remain, the subchannel is marked as disconnected.
  • If try_recv returns Empty, the channel is still alive.
  • Any SubReceiverDisconnected messages received are processed normally.

SendingViaIpcChannel (subsender transported via IpcChannelSubSender)

The probe calls try_recv on the keepalive IpcReceiver<()> that was delivered in the SendingViaIpcChannel message. The sender end (keepalive_tx) is held by the remote process inside the IpcChannelSubSender or the SubSender reconstructed from it:

  • If try_recv returns IpcError, the remote process has dropped the keepalive sender (crashed or cleanly dropped the subsender) and all in-flight entries are removed. If no sources remain, the subchannel is marked as disconnected.
  • If try_recv returns Empty or Ok, the sender is still alive.

If keepalive channel creation fails at transport time, the implementation falls back to sending a Sending message instead; crash detection is then unavailable for that transport.

Both mechanisms prevent indefinite waits when a process carrying a subsender in transit has crashed.
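
The decision both mechanisms make can be modelled in a few lines: try_recv is called without blocking, and only an IPC error signals that the remote end is gone. This is a hedged model, not crate code; for the Sending mechanism an Ok result carries a SubReceiverDisconnected message, which is processed normally rather than treated as a failure.

```rust
// Models the three outcomes of a non-blocking try_recv on the probed channel.
enum TryRecvOutcome {
    Ok,       // a message arrived (e.g. SubReceiverDisconnected); process it normally
    Empty,    // nothing pending: the remote end is still alive
    IpcError, // the channel is broken: the remote process crashed or dropped its end
}

/// Returns true when the probe should remove the in-flight entries and,
/// if no sources remain, mark the subchannel as disconnected.
fn probe_indicates_crash(outcome: &TryRecvOutcome) -> bool {
    matches!(outcome, TryRecvOutcome::IpcError)
}

fn main() {
    assert!(probe_indicates_crash(&TryRecvOutcome::IpcError));
    assert!(!probe_indicates_crash(&TryRecvOutcome::Empty));
    assert!(!probe_indicates_crash(&TryRecvOutcome::Ok));
}
```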

§Shared Memory

SharedMemory values are transported using a two-stage serialization model:

  1. Serialization: Each SharedMemory value extracts its IpcSharedMemory and serializes as just an index.
  2. Transport (ipc-channel): The collected IpcSharedMemory values are included in the Data message. The ipc-channel layer transports them efficiently using operating-system shared memory primitives.
  3. Deserialization: Each SharedMemory reads its index and retrieves the corresponding IpcSharedMemory from the Data message.

This avoids duplicating shared memory contents into the binary payload.
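
The two-stage model can be sketched as follows: during serialization each region is moved into a side table and only its index enters the binary payload; on the receiving side the index is looked up in the regions carried by the Data message. The function names and the use of Vec<u8> for a region are illustrative, not the crate's internals.

```rust
// Stage 1 (serialization): the region leaves the payload; only an index remains.
fn extract(region: Vec<u8>, side_table: &mut Vec<Vec<u8>>) -> usize {
    side_table.push(region);
    side_table.len() - 1 // this index is what gets serialized into the payload
}

// Stage 3 (deserialization): the index is resolved against the regions that
// travelled out-of-band in the Data message.
fn retrieve(index: usize, regions: &[Vec<u8>]) -> Vec<u8> {
    regions[index].clone()
}

fn main() {
    let mut side = Vec::new();
    let idx = extract(b"frame".to_vec(), &mut side);
    // `side` is transported by the ipc-channel layer using OS shared memory.
    assert_eq!(retrieve(idx, &side), b"frame".to_vec());
}
```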

§Error Flows

§Receiver Disconnection (SubReceiver Dropped)

  1. SubReceiverDisconnected(subchannel_id) is broadcast via the response channel to all connected clients.
  2. Before each send, the sender checks the response channel. If it finds SubReceiverDisconnected for the target subchannel, subsequent sends return MuxError::Disconnected.
  3. Any queued messages containing embedded subsenders are cleaned up by sending ReceiveFailed { scid, via } for each, so their lifecycles are properly resolved.

§Sender Disconnection (SubSender Dropped)

  1. When the last clone of a SubSender is dropped, Disconnect(subchannel_id, source) is sent over the forward channel.
  2. Once all sources are gone and no transmissions are in flight, the subchannel is fully disconnected and SubReceiver::recv returns MuxError::Disconnected.

§IPC Channel Failure

If the underlying IPC channel encounters an error (e.g. the remote process crashed), ipc-channel returns an IpcError. This is wrapped in MuxError::IpcError and propagated to the caller of send or recv.

§Migration guide from ipc-channel to ipc-channel-mux

This guide shows how to replace each ipc-channel API with its ipc-channel-mux equivalent.

§Quick reference

| ipc-channel | ipc-channel-mux | Notes |
|---|---|---|
| ipc::channel::<T>() | mux::Channel::new()?.sub_channel::<T>() | Two steps; reuse Channel for multiplexing |
| IpcSender<T> | SubSender<T> | |
| IpcReceiver<T> | SubReceiver<T> | Cannot be sent over subchannels |
| IpcSender::connect() | SubSender::connect() | |
| IpcSender::send() | SubSender::send() | |
| IpcSender::to_opaque() | SubSender::to_opaque() | |
| IpcReceiver::recv() | SubReceiver::recv() | |
| IpcReceiver::try_recv() | SubReceiver::try_recv() | |
| IpcReceiver::try_recv_timeout() | SubReceiver::try_recv_timeout() | |
| IpcReceiver::to_opaque() | SubReceiver::to_opaque() | |
| OpaqueIpcSender | OpaqueSubSender | |
| OpaqueIpcReceiver | OpaqueSubReceiver | Not serializable |
| IpcOneShotServer<T> | SubOneShotServer<T> | |
| IpcSharedMemory | SharedMemory | |
| IpcReceiverSet | No equivalent | Use subchannel_router instead |
| ipc::bytes_channel() | channel.bytes_sub_channel() | Two steps; reuse Channel for multiplexing |
| IpcBytesSender | BytesSubSender | |
| IpcBytesReceiver | BytesSubReceiver | |
| IpcError | MuxError | Different variant structure |
| ipc::TryRecvError | mux::TryRecvError | Different variant structure |
| router::ROUTER | subchannel_router::ROUTER | Different method signatures |
| router::RouterProxy | subchannel_router::RouterProxy | Different method signatures |

§Channel creation

Creating a subchannel requires a multiplexing Channel to be created first. Reusing the same Channel for multiple subchannels is what enables multiplexing.

Before:

use ipc_channel::ipc;

let (tx, rx) = ipc::channel::<String>()?;

After:

use ipc_channel_mux::mux;

let channel = mux::Channel::new()?;
let (tx, rx) = channel.sub_channel::<String>();

Key differences:

  • Channel::new() can fail (it creates the underlying IPC channel). sub_channel() never fails.
  • Create additional subchannels from the same Channel to benefit from multiplexing:
let (tx1, rx1) = channel.sub_channel::<String>();
let (tx2, rx2) = channel.sub_channel::<i32>();
// tx1/rx1 and tx2/rx2 share the same underlying IPC channel

§Sending and receiving

The send, recv, try_recv, and try_recv_timeout methods have the same signatures and semantics.

Before:

tx.send("hello".to_string())?;
let msg = rx.recv()?;

After:

tx.send("hello".to_string())?;
let msg = rx.recv()?;

§One-shot servers

The API for bootstrapping channels between processes is structurally identical.

Before:

use ipc_channel::ipc;

let (server, name) = ipc::IpcOneShotServer::<String>::new()?;

// In another process:
let tx = ipc::IpcSender::connect(name)?;
tx.send("hello".to_string())?;

// Back in the server process:
let (rx, first_msg) = server.accept()?;

After:

use ipc_channel_mux::mux;

let (server, name) = mux::SubOneShotServer::<String>::new()?;

// In another process:
let tx = mux::SubSender::connect(name)?;
tx.send("hello".to_string())?;

// Back in the server process:
let (rx, first_msg) = server.accept()?;

§Opaque (type-erased) senders and receivers

Before:

let opaque_tx: ipc::OpaqueIpcSender = tx.to_opaque();
let tx: ipc::IpcSender<String> = opaque_tx.to();

let opaque_rx: ipc::OpaqueIpcReceiver = rx.to_opaque();
let rx: ipc::IpcReceiver<String> = opaque_rx.to();

After:

let opaque_tx: mux::OpaqueSubSender = tx.to_opaque();
let tx: mux::SubSender<String> = opaque_tx.to();

let opaque_rx: mux::OpaqueSubReceiver = rx.to_opaque();
let rx: mux::SubReceiver<String> = opaque_rx.to();

Key difference: OpaqueIpcReceiver implements Serialize and Deserialize, but OpaqueSubReceiver does not (subreceivers cannot be sent over subchannels).

§Shared memory

Before:

use ipc_channel::ipc;

let (tx, rx) = ipc::channel::<ipc::IpcSharedMemory>()?;

let shmem = ipc::IpcSharedMemory::from_bytes(b"hello");
tx.send(shmem)?;

let received = rx.recv()?;
assert_eq!(&*received, b"hello");

After:

use ipc_channel_mux::mux;

let channel = mux::Channel::new()?;
let (tx, rx) = channel.sub_channel::<mux::SharedMemory>();

let shmem = mux::SharedMemory::from_bytes(b"hello");
tx.send(shmem)?;

let received = rx.recv()?;
assert_eq!(&*received, b"hello");

Both types support from_bytes, from_byte, deref_mut (unsafe), take, and Deref<Target=[u8]>.

SharedMemory also implements From<IpcSharedMemory> and Into<IpcSharedMemory> for conversion between the two types.

§Error handling

ipc-channel errors map to ipc-channel-mux errors as follows:

| ipc-channel | ipc-channel-mux |
|---|---|
| IpcError::Disconnected | MuxError::Disconnected |
| IpcError::SerializationError(_) | MuxError::IpcError(IpcError::SerializationError(_)) |
| IpcError::Io(_) | MuxError::IpcError(IpcError::Io(_)) |
| (no equivalent) | MuxError::InternalError(_) (new) |
| TryRecvError::IpcError(_) | TryRecvError::MuxError(_) |
| TryRecvError::Empty | TryRecvError::Empty |

Before:

match rx.try_recv() {
    Ok(msg) => { /* use msg */ }
    Err(ipc::TryRecvError::Empty) => { /* no message yet */ }
    Err(ipc::TryRecvError::IpcError(ipc::IpcError::Disconnected)) => { /* disconnected */ }
    Err(e) => { /* other error */ }
}

After:

match rx.try_recv() {
    Ok(msg) => { /* use msg */ }
    Err(mux::TryRecvError::Empty) => { /* no message yet */ }
    Err(mux::TryRecvError::MuxError(mux::MuxError::Disconnected)) => { /* disconnected */ }
    Err(e) => { /* other error */ }
}

§Sending senders over channels

Both APIs support sending senders over channels.

Before:

let (inner_tx, inner_rx) = ipc::channel::<i32>()?;
let (outer_tx, outer_rx) = ipc::channel::<ipc::IpcSender<i32>>()?;

outer_tx.send(inner_tx)?;
let received_tx = outer_rx.recv()?;

After:

let channel = mux::Channel::new()?;
let (inner_tx, inner_rx) = channel.sub_channel::<i32>();
let (outer_tx, outer_rx) = channel.sub_channel::<mux::SubSender<i32>>();

outer_tx.send(inner_tx)?;
let received_tx = outer_rx.recv()?;

Key difference: subreceivers cannot be sent over subchannels (this would break other subreceivers sharing the underlying IPC channel).

§Bytes channels

§bytes_channel()

Creating a bytes subchannel requires a multiplexing Channel to be created first, just like typed subchannels.

Before:

use ipc_channel::ipc;

let (tx, rx) = ipc::bytes_channel()?;
tx.send(b"hello")?;
let data: Vec<u8> = rx.recv()?;

After:

use ipc_channel_mux::mux;

let channel = mux::Channel::new()?;
let (tx, rx) = channel.bytes_sub_channel();
tx.send(b"hello")?;
let data: Vec<u8> = rx.recv()?;

BytesSubSender::send takes &[u8], matching IpcBytesSender::send. BytesSubReceiver supports recv, try_recv, and try_recv_timeout, matching IpcBytesReceiver.

BytesSubSender can be cloned and sent over subchannels, just like SubSender<T>.

§Router

The router APIs have different structures. In ipc-channel, routes are added for existing receivers. In ipc-channel-mux, a RouterChannel creates new subchannels that are automatically routed.

Before:

use ipc_channel::ipc;
use ipc_channel::router::ROUTER;

let (tx, rx) = ipc::channel::<i32>()?;
let crossbeam_rx = ROUTER.route_ipc_receiver_to_new_crossbeam_receiver(rx);

tx.send(42)?;
assert_eq!(crossbeam_rx.recv().unwrap(), 42);

After:

use ipc_channel_mux::mux;
use mux::subchannel_router::{ROUTER, RouterProxy};

let router_channel = RouterProxy::new_router_channel(&ROUTER)?;
let (tx, crossbeam_rx) = router_channel.route_to_new_crossbeam_receiver::<i32>()?;

tx.send(42)?;
assert_eq!(crossbeam_rx.recv().unwrap(), 42);

§Router method mapping

| ipc-channel ROUTER method | ipc-channel-mux RouterChannel method |
|---|---|
| route_ipc_receiver_to_new_crossbeam_receiver(rx) | route_to_new_crossbeam_receiver::<T>() |
| route_ipc_receiver_to_crossbeam_sender(rx, sender) | route_to_crossbeam_sender::<T>(sender) |
| add_typed_route(rx, callback) | add_typed_route::<T>(callback) |
| add_typed_one_shot_route(rx, callback) | No equivalent |

Key differences:

  • RouterChannel methods create the subchannel internally and return the SubSender<T>. In ipc-channel, you pass an existing IpcReceiver<T>.
  • Create RouterChannel via RouterProxy::new_router_channel(&ROUTER).
  • The callback type is Box<dyn FnMut(Result<T, MuxError>) + Send> (vs Box<dyn Fn(Result<T, SerDeError>) + Send + 'static> for ipc-channel’s multi handler).

§APIs with no equivalent

§IpcReceiverSet

ipc-channel-mux does not provide a receiver set. Use subchannel_router to route subreceivers to crossbeam channels, then use crossbeam’s select! macro for multi-channel waiting.

Before:

use ipc_channel::ipc;

let (tx1, rx1) = ipc::channel::<i32>()?;
let (tx2, rx2) = ipc::channel::<String>()?;

let mut set = ipc::IpcReceiverSet::new()?;
let id1 = set.add(rx1)?;
let id2 = set.add(rx2)?;

for result in set.select()? {
    match result {
        ipc::IpcSelectionResult::MessageReceived(id, msg) if id == id1 => {
            let val: i32 = msg.to().unwrap();
        }
        ipc::IpcSelectionResult::MessageReceived(id, msg) if id == id2 => {
            let val: String = msg.to().unwrap();
        }
        _ => {}
    }
}

After:

use ipc_channel_mux::mux;
use mux::subchannel_router::{ROUTER, RouterProxy};

let router_channel = RouterProxy::new_router_channel(&ROUTER)?;
let (tx1, cb_rx1) = router_channel.route_to_new_crossbeam_receiver::<i32>()?;
let (tx2, cb_rx2) = router_channel.route_to_new_crossbeam_receiver::<String>()?;

crossbeam_channel::select! {
    recv(cb_rx1) -> result => {
        let val: i32 = result.unwrap();
    }
    recv(cb_rx2) -> result => {
        let val: String = result.unwrap();
    }
}

§IpcReceiver serialization

IpcReceiver<T> implements Serialize and Deserialize, allowing receivers to be sent over IPC channels. SubReceiver<T> does not support this because sending a subreceiver would require sending the underlying IPC receiver, which would break other subreceivers sharing that IPC channel.

§add_typed_one_shot_route

ipc-channel’s router supports one-shot routes (callbacks invoked once then removed). ipc-channel-mux’s router does not have a direct equivalent. Use add_typed_route with a callback that handles a single message and ignores subsequent ones, or use route_to_new_crossbeam_receiver and call recv() once.
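
The "handle once, ignore the rest" workaround can be expressed as a generic wrapper around any callback. This is a hedged sketch of the pattern only; `once` is a hypothetical helper and the router API itself is not modelled here (add_typed_route takes a FnMut callback, so a captured flag suffices).

```rust
// Wraps a callback so that only the first invocation does anything;
// subsequent messages are silently ignored.
fn once<T>(mut f: impl FnMut(T)) -> impl FnMut(T) {
    let mut fired = false;
    move |msg| {
        if !fired {
            fired = true;
            f(msg);
        }
    }
}

fn main() {
    let mut seen = Vec::new();
    {
        let mut cb = once(|m: i32| seen.push(m));
        cb(1);
        cb(2); // ignored: the callback already fired
    }
    assert_eq!(seen, vec![1]);
}
```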

§Async/futures support

ipc-channel’s IpcReceiver::to_stream() (behind the "async" feature flag) converts a receiver into a futures::Stream. ipc-channel-mux does not currently provide async support.

§Incremental migration

When migrating a multi-process application, you may need ipc-channel and ipc-channel-mux to interoperate temporarily — some processes using raw IPC channels while others have been migrated to subchannels. The following bridge types support this:

| Type | Direction | Use case |
|---|---|---|
| mux::IpcSender<T> | IPC endpoint → subchannel | Pass a raw ipc::IpcSender<T> through a subchannel |
| mux::IpcReceiver<T> | IPC endpoint → subchannel | Pass a raw ipc::IpcReceiver<T> through a subchannel |
| mux::IpcChannelSubSender<T> | Subchannel sender → IPC channel | Pass a SubSender<T> through a raw IPC channel |

§Bridge types and file descriptor consumption

Unlike plain subsender transmission — which consumes no file descriptors — the bridge types all consume file descriptors when transmitted on Unix variants: mux::IpcSender<T> and mux::IpcReceiver<T> each consume one file descriptor in the receiving process, while mux::IpcChannelSubSender<T> consumes three in total (two in the receiving process and one in the sending process).

If bridge types are used at scale — for example, transmitting many wrapped senders or receivers in a loop — file descriptors can be exhausted just as they would be with raw ipc-channel usage, negating one of the key benefits of multiplexing.

Bridge types should therefore be used sparingly, only at the boundary between migrated and unmigrated code, and replaced with plain subsenders as soon as both sides of a connection have been migrated.

§Passing a raw IPC endpoint through a subchannel

If a migrated process needs to hand a raw IpcSender<T> or IpcReceiver<T> to another process via a subchannel, wrap it in mux::IpcSender<T> or mux::IpcReceiver<T> first:

use ipc_channel::ipc;
use ipc_channel_mux::mux;

// Un-migrated side: create a raw IPC channel.
let (raw_tx, raw_rx) = ipc::channel::<u32>().unwrap();

// Migrated side: pass the raw sender through a subchannel.
let channel = mux::Channel::new().unwrap();
let (tx, rx) = channel.sub_channel::<mux::IpcSender<u32>>();

tx.send(mux::IpcSender::from(raw_tx)).unwrap();

// Receiving side: unwrap back to the raw sender.
let wrapped: mux::IpcSender<u32> = rx.recv().unwrap();
let raw_tx: ipc::IpcSender<u32> = wrapped.into_inner();
raw_tx.send(42).unwrap();

assert_eq!(raw_rx.recv().unwrap(), 42);

mux::IpcReceiver<T> works the same way. Note that an IpcReceiver<T> may only be sent once; a second attempt returns a serialization error.

§Passing a subchannel sender through a raw IPC channel

If a process needs to bootstrap a subchannel connection before a mux channel is available, wrap the SubSender<T> in mux::IpcChannelSubSender<T> and send it over a raw IPC channel:

use ipc_channel::ipc;
use ipc_channel_mux::mux;

// The subchannel exists in the local process.
let channel = mux::Channel::new().unwrap();
let (tx, rx) = channel.sub_channel::<u32>();

// Wrap for raw IPC transport.
let transport = mux::IpcChannelSubSender::from(tx);

// Send over a raw IPC channel to the remote process.
let (raw_tx, raw_rx) = ipc::channel::<mux::IpcChannelSubSender<u32>>().unwrap();
raw_tx.send(transport).unwrap();

// Remote process: reconstruct the SubSender.
let received: mux::IpcChannelSubSender<u32> = raw_rx.recv().unwrap();
let tx: mux::SubSender<u32> = received.into_sub_sender().unwrap();

tx.send(42).unwrap();
assert_eq!(rx.recv().unwrap(), 42);

The reconstructed SubSender<T> is fully functional: it sends Disconnect when dropped and detects subreceiver disconnection via send returning Err(MuxError::Disconnected).

From<SubSender<T>> is consuming. Clone the SubSender first if the original is also needed after wrapping.


  1. The term mux is an abbreviation for multiplexer. 

  2. Tony Hoare conceived Communicating Sequential Processes (CSP) as a concurrent programming language. Stephen Brookes and A.W. Roscoe developed a sound mathematical basis for CSP as a process algebra. CSP can now be used to reason about concurrency and to verify concurrency properties using model checkers such as FDR4. Go channels were also inspired by CSP. 

  3. Creating a subchannel could exhaust the memory of a process, but memory allocation is treated as infallible in Rust as Handling memory exhaustion – State of the art? explores. Essentially, if memory allocation fails, the program will panic or, more likely (at least on Linux), be killed by the Out of Memory killer. 

  4. On Unix variants, each time an IPC sender is received from an IPC channel, a file descriptor is consumed, even when the same IPC sender is received multiple times. The file descriptor is reclaimed when the received IPC sender is dropped, so file descriptor exhaustion occurs when too many received IPC senders are retained. 

  5. An alternative would be to have the relevant Servo branch use a git dependency on ipc-channel-mux. 

  6. cargo test of ipc-channel-mux currently takes under 4 seconds whereas it used to take over 8 seconds before the multiplexing code was split out of the ipc-channel repo. 

Modules§

mux
This module multiplexes subchannels over IPC channels.