Implement a somewhat convenient and somewhat efficient way to perform RPC in an embedded context.

The approach is inspired by Go’s channels, with the restriction that there is a clear separation into a requester and a responder.

Requests may be canceled, which the responder should honour on a best-effort basis.

For each pair of Request and Response types, the macro interchange! generates a type that implements the Interchange trait.

The Requester and Responder types (to send/cancel requests, and to respond to such demands) are generic with only this one type parameter.

Example use cases

  • USB device interrupt handler performs low-level protocol details, hands off commands from the host to higher-level logic running in the idle thread. This higher-level logic need only understand clearly typed commands, as modeled by variants of a given Request enum.
  • trussed crypto service, responding to crypto requests from apps across the TrustZone for Cortex-M secure/non-secure boundary.
  • Request to blink a few lights and reply on button press
#[derive(Clone, Debug, PartialEq)]
pub enum Request {
    This(u8, u32),
    That(i64),
}

#[derive(Clone, Debug, PartialEq)]
pub enum Response {
    Here(u8, u8, u8),
    There(i16),
}

interchange::interchange! {
    ExampleInterchange: (Request, Response)
}
use interchange::{Interchange, State};

let (mut rq, mut rp) = ExampleInterchange::claim().unwrap();

assert!(rq.state() == State::Idle);

// happy path: no cancelation
let request = Request::This(1, 2);
assert!(rq.request(&request).is_ok());

let request = rp.take_request().unwrap();
println!("rp got request: {:?}", &request);

let response = Response::There(-1);
assert!(rp.respond(&response).is_ok());

let response = rq.take_response().unwrap();
println!("rq got response: {:?}", &response);

// early cancelation path
assert!(rq.request(&request).is_ok());

let request = rq.cancel().unwrap().unwrap();
println!("responder could cancel: {:?}", &request);

assert!(State::Idle == rq.state());

// late cancelation
assert!(rq.request(&request).is_ok());
let request = rp.take_request().unwrap();

println!("responder could cancel: {:?}", &rq.cancel().unwrap().is_none());
assert!(rp.is_canceled());
assert!(rp.acknowledge_cancel().is_ok());
assert!(State::Idle == rq.state());

// building into request buffer
impl Default for Request {
    fn default() -> Self {
        Request::This(0, 0)
    }
}

let request_mut = rq.request_mut().unwrap();
*request_mut = Request::This(1, 2);
assert!(rq.send_request().is_ok());
let request = rp.take_request().unwrap();
println!("rp got request: {:?}", &request);

// building into response buffer
impl Default for Response {
    fn default() -> Self {
        Response::Here(0, 0, 0)
    }
}

let response_mut = rp.response_mut().unwrap();
*response_mut = Response::Here(3, 2, 1);
assert!(rp.send_response().is_ok());
let response = rq.take_response().unwrap();
println!("rq got response: {:?}", &response);


It is assumed that all requests fit in a single Request enum, and that all responses fit in a single Response enum. The macro interchange! allocates a static buffer in which either a request or a response fits, and handles synchronization.

An alternative approach would be to use two heapless Queues of length one, one for requests and one for responses. The advantage of our construction is that only one static memory region is in use.
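As a rough sketch of this idea (not the macro's actual output — the names, the `u8` payloads, and the use of a tagged enum instead of an untagged union are illustrative simplifications), a single shared slot guarded by an atomic state tag might look like:

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicU8, Ordering};

// The single shared slot holds either a request or a response, never both,
// so only one static memory region is needed. (A real implementation would
// use an untagged union to avoid the tag; an enum keeps this sketch simple.)
enum Message {
    None,
    Request(u8),
    Response(u8),
}

const IDLE: u8 = 0;      // requester owns the slot
const REQUESTED: u8 = 1; // responder owns the slot
const RESPONDED: u8 = 2; // requester owns the slot again

struct Channel {
    state: AtomicU8,
    slot: UnsafeCell<Message>,
}

// Safety: with exactly one requester and one responder, the `state` atomic
// serializes all access to `slot` — each side only touches the slot in
// states that side exclusively owns.
unsafe impl Sync for Channel {}

static CHANNEL: Channel = Channel {
    state: AtomicU8::new(IDLE),
    slot: UnsafeCell::new(Message::None),
};

/// Requester side: place a request into the slot (Idle -> Requested).
fn request(value: u8) -> Result<(), ()> {
    if CHANNEL.state.load(Ordering::Acquire) != IDLE {
        return Err(());
    }
    unsafe { *CHANNEL.slot.get() = Message::Request(value) };
    CHANNEL.state.store(REQUESTED, Ordering::Release);
    Ok(())
}

/// Responder side: overwrite the request with a response in the *same*
/// memory region (Requested -> Responded).
fn respond(handler: impl Fn(u8) -> u8) -> Result<(), ()> {
    if CHANNEL.state.load(Ordering::Acquire) != REQUESTED {
        return Err(());
    }
    let reply = match unsafe { &*CHANNEL.slot.get() } {
        Message::Request(v) => handler(*v),
        _ => return Err(()),
    };
    unsafe { *CHANNEL.slot.get() = Message::Response(reply) };
    CHANNEL.state.store(RESPONDED, Ordering::Release);
    Ok(())
}

/// Requester side: take the response out of the slot (Responded -> Idle).
fn take_response() -> Option<u8> {
    if CHANNEL.state.load(Ordering::Acquire) != RESPONDED {
        return None;
    }
    let out = match unsafe { &*CHANNEL.slot.get() } {
        Message::Response(v) => Some(*v),
        _ => None,
    };
    CHANNEL.state.store(IDLE, Ordering::Release);
    out
}

fn main() {
    request(21).unwrap();
    respond(|v| v * 2).unwrap();
    assert_eq!(take_response(), Some(42));
    println!("round trip ok");
}
```

The acquire/release pairing on `state` is what makes the hand-off of the slot between the two sides sound; the heapless-queues alternative would need two such regions instead of one.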


It is possible that this implementation is currently not sound. To be determined!

Due to the macro construction, certain implementation details are more public than one would hope for: the macro needs to run in the code of users of this library. We take a somewhat Pythonic “we’re all adults here” approach, in that the user is expected to only use the publicly documented API (the ideally private details are hidden from documentation).


Macros

interchange
Use this macro to generate a pair of RPC pipes for any pair of Request/Response enums you wish to implement.

Structs

Requester
Requesting end of the RPC interchange.

Responder
Processing end of the RPC interchange.

Enums

State
State of the RPC interchange.

Traits

Interchange
Do NOT implement this yourself! Use the macro interchange!.