shared_mem_queue/lib.rs
// Copyright Open Logistics Foundation
//
// Licensed under the Open Logistics Foundation License 1.3.
// For details on the licensing terms, see the LICENSE file.
// SPDX-License-Identifier: OLFL-1.3
#![cfg_attr(not(test), no_std)]
//! This library implements simple single-writer single-reader queues, a
//! [`ByteQueue`](byte_queue::ByteQueue) for a byte-oriented streaming interface (like UART or TCP)
//! and a [`MsgQueue`](msg_queue::MsgQueue) for a message-/packet-/datagram-based interface (like
//! UDP). The queues can be used for
//! inter-processor communication over a shared memory region. It was initially developed as a
//! simple, low-overhead solution for communication between the Cortex-M4 and the Cortex-A7 on
//! STM32MP1 microprocessors, but it may also be useful in other scenarios.
//!
//! # Implementation details
//! The underlying byte queue operates on a shared memory region and keeps track of a write-pointer
//! and a read-pointer. So that both processors can access them, both pointers are stored in the
//! shared memory region itself; the capacity of the queue is therefore `2 * size_of::<usize>()`
//! bytes smaller than the memory region size.
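//!
//! For example (a small sketch of the capacity calculation described above, not part of the API):
//! ```
//! # use core::mem::size_of;
//! let region_len = 128usize; // total size of the shared memory region in bytes
//! let capacity = region_len - 2 * size_of::<usize>(); // space left for queued data
//! assert!(capacity < region_len); // e.g. 112 bytes on a 64-bit target (128 - 2 * 8)
//! ```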
//!
//! The main contract is:
//!
//! * Only the writer may write to the write-pointer, and only the reader may write to the
//!   read-pointer.
//! * The memory region in front of the write-pointer, up to the read-pointer, is owned by the
//!   writer.
//! * The memory region in front of the read-pointer, up to the write-pointer, is owned by the
//!   reader.
//!
//! For initialization, both pointers have to be set to 0. This breaks the contract because the
//! initializing processor needs to write both pointers. Therefore, initialization has to be done by
//! processor A while it is guaranteed that processor B does not yet access the queue, to prevent
//! race conditions.
//!
//! Because processor A has to initialize the byte queue and processor B should not reset the
//! write- and read-pointers, there are two methods for initialization: `create()`, which should be
//! called by the first processor and sets both pointers to 0, and `attach()`, which should be
//! called by the second one.
//!
//! The `ByteQueue` implements both the write- and the read-methods, but each processor should be
//! assigned either the writing side or the reading side and must not call the methods of the other
//! side. It would also have been possible to provide a `SharedMemWriter` and a `SharedMemReader`,
//! but this design was chosen so that the queue can also be used as a simple ring buffer on a
//! single processor.
//!
//! The `MsgQueue` abstraction builds on top of the `ByteQueue` and handles variable-sized messages
//! using the following frame format:
//!
//! | Field          | Size                                              |
//! |----------------|---------------------------------------------------|
//! | Message Prefix | Fixed size                                        |
//! | Data Size      | `size_of::<usize>()` bytes                        |
//! | Data           | Variable-sized, determined by the Data Size field |
//! | CRC            | 32 bits                                           |
//!
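//! For illustration, the total size of one frame can be computed from this layout (a sketch based
//! on the table above; the prefix shown is the one used in the example below):
//! ```
//! # use core::mem::size_of;
//! const PREFIX: &[u8] = b"DEFAULT_PREFIX: "; // 16-byte message prefix
//! let data = b"Hello, World!";
//! // prefix + data-size field + payload + 32-bit CRC
//! let frame_len = PREFIX.len() + size_of::<usize>() + data.len() + size_of::<u32>();
//! assert!(frame_len < 128); // comfortably fits the 128-byte queue used in the examples below
//! ```
//!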
//! # Usage Examples
//!
//! ## Single-Processor Byte Queue
//! ```
//! # use shared_mem_queue::byte_queue::ByteQueue;
//! let mut buffer = [0u8; 128];
//! let mut queue = unsafe { ByteQueue::create(buffer.as_mut_ptr(), 100) };
//! let tx = [1, 2, 3, 4];
//! queue.blocking_write(&tx);
//! let mut rx = [0u8; 4];
//! queue.blocking_read(&mut rx);
//! assert_eq!(&tx, &rx);
//! ```
//!
//! A more realistic example involves creating a reader and a writer separately; although not shown
//! here, each may be moved to a different thread:
//! ```
//! # use shared_mem_queue::byte_queue::ByteQueue;
//! let mut buffer = [0u8; 128];
//! let mut writer = unsafe { ByteQueue::create(buffer.as_mut_ptr(), 100) };
//! let mut reader = unsafe { ByteQueue::attach(buffer.as_mut_ptr(), 100) };
//! let tx = [1, 2, 3, 4];
//! writer.blocking_write(&tx);
//! let mut rx = [0u8; 4];
//! reader.blocking_read(&mut rx);
//! assert_eq!(&tx, &rx);
//! ```
//!
//! ## Single-Processor Message Queue
//! ```
//! # use shared_mem_queue::byte_queue::ByteQueue;
//! # use shared_mem_queue::msg_queue::MsgQueue;
//! const DEFAULT_PREFIX: &'static [u8] = b"DEFAULT_PREFIX: "; // 16 bytes long
//! let mut bq_buf = [0u8; 128];
//! let mut msg_queue = unsafe {
//!     MsgQueue::new(ByteQueue::create(bq_buf.as_mut_ptr(), 128), DEFAULT_PREFIX, [0u8; 128])
//! };
//!
//! let msg = b"Hello, World!";
//! let result = msg_queue.nb_write_msg(msg);
//! assert!(result.is_ok());
//! let read_msg = msg_queue.nb_read_msg().unwrap();
//! assert_eq!(read_msg, msg);
//! ```
//!
//! ## Shared-Memory Queue
//!
//! In general, an `mmap` call is required to access the queue from a Linux system. This can be
//! done with the [`memmap` crate](https://crates.io/crates/memmap). The following example will
//! probably panic when executed naively because access to `/dev/mem` requires root privileges.
//! Additionally, the example memory region is probably not usable for this queue on most systems:
//! ```no_run
//! # use shared_mem_queue::byte_queue::ByteQueue;
//! # use std::convert::TryInto;
//! let shared_mem_start = 0x10048000; // example
//! let shared_mem_len = 0x00008000; // region
//! let dev_mem = std::fs::OpenOptions::new()
//!     .read(true)
//!     .write(true)
//!     .open("/dev/mem")
//!     .expect("Could not open /dev/mem, do you have root privileges?");
//! let mut mmap = unsafe {
//!     memmap::MmapOptions::new()
//!         .len(shared_mem_len)
//!         .offset(shared_mem_start.try_into().unwrap())
//!         .map_mut(&dev_mem)
//!         .unwrap()
//! };
//! let mut channel = unsafe {
//!     ByteQueue::attach(mmap.as_mut_ptr(), shared_mem_len)
//! };
//! ```
//!
//! ## Bi-Directional Shared-Memory Communication
//!
//! In most inter-processor-communication scenarios, two queues will be required for bi-directional
//! communication. A single `mmap` call is sufficient, the memory region can be split manually
//! afterwards:
//! ```no_run
//! # use shared_mem_queue::byte_queue::ByteQueue;
//! # use std::convert::TryInto;
//! let shared_mem_start = 0x10048000; // example
//! let shared_mem_len = 0x00008000; // region
//! let dev_mem = std::fs::OpenOptions::new()
//!     .read(true)
//!     .write(true)
//!     .open("/dev/mem")
//!     .expect("Could not open /dev/mem, do you have root privileges?");
//! let mut mmap = unsafe {
//!     memmap::MmapOptions::new()
//!         .len(shared_mem_len)
//!         .offset(shared_mem_start.try_into().unwrap())
//!         .map_mut(&dev_mem)
//!         .unwrap()
//! };
//! let mut channel_write = unsafe {
//!     ByteQueue::attach(mmap.as_mut_ptr(), shared_mem_len / 2)
//! };
//! let mut channel_read = unsafe {
//!     ByteQueue::attach(mmap.as_mut_ptr().add(shared_mem_len / 2), shared_mem_len / 2)
//! };
//! ```
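//!
//! On the peer processor, the same two halves would be set up with the roles swapped, so that each
//! side writes to the half that the other side reads from.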
//!
//! # License
//!
//! Open Logistics Foundation License\
//! Version 1.3, January 2023
//!
//! See the LICENSE file in the top-level directory.
//!
//! # Contact
//!
//! Fraunhofer IML Embedded Rust Group - <embedded-rust@iml.fraunhofer.de>
pub mod byte_queue;
pub mod msg_queue;
mod crc;