Crate iceoryx2


§iceoryx2

iceoryx2 is a cutting-edge, service-oriented, zero-copy, lock-free inter-process communication middleware. Designed to support various MessagingPatterns, iceoryx2 empowers developers with the flexibility of:

  • Publish-Subscribe
  • Events
  • Request-Response
  • Blackboard (planned)
  • Pipeline (planned)

For a comprehensive list of all planned features, please refer to the GitHub Roadmap.

Services are uniquely identified by name and MessagingPattern. They can be instantiated with diverse quality-of-service settings and are envisioned to be deployable in a no_std and safety-critical environment in the future.

Moreover, iceoryx2 offers configuration options that enable multiple service setups to coexist on the same machine or even within the same process without interference. This versatility allows iceoryx2 to seamlessly integrate with other frameworks simultaneously.

iceoryx2 traces its lineage back to the Eclipse iceoryx project and addresses its major drawback: the central daemon. iceoryx2 embraces a fully decentralized architecture, eliminating the need for a central daemon entirely.

§Examples

Each service is uniquely identified by a ServiceName. Initiating communication requires the creation of a service, which serves as a port factory. With this factory, endpoints for the service can be created, enabling seamless communication.

For more detailed examples, explore the GitHub example folder.

§Publish-Subscribe

Explore a simple publish-subscribe setup where the subscriber continuously receives data from the publisher until the processes are gracefully terminated by the user with CTRL+C.

Subscriber (Process 1)

use core::time::Duration;
use iceoryx2::prelude::*;

const CYCLE_TIME: Duration = Duration::from_secs(1);

let node = NodeBuilder::new().create::<ipc::Service>()?;

// create our port factory by creating or opening the service
let service = node.service_builder(&"My/Funk/ServiceName".try_into()?)
    .publish_subscribe::<u64>()
    .open_or_create()?;

let subscriber = service.subscriber_builder().create()?;

while node.wait(CYCLE_TIME).is_ok() {
    while let Some(sample) = subscriber.receive()? {
        println!("received: {:?}", *sample);
    }
}

Publisher (Process 2)

use core::time::Duration;
use iceoryx2::prelude::*;

const CYCLE_TIME: Duration = Duration::from_secs(1);

let node = NodeBuilder::new().create::<ipc::Service>()?;

// create our port factory by creating or opening the service
let service = node.service_builder(&"My/Funk/ServiceName".try_into()?)
    .publish_subscribe::<u64>()
    .open_or_create()?;

let publisher = service.publisher_builder().create()?;

while node.wait(CYCLE_TIME).is_ok() {
    let sample = publisher.loan_uninit()?;
    let sample = sample.write_payload(1234);
    sample.send()?;
}

§Request-Response

This is a simple request-response example where a client sends a request and the server responds with multiple replies until the processes are gracefully terminated by the user with CTRL+C.

Client (Process 1)

use core::time::Duration;
use iceoryx2::prelude::*;

const CYCLE_TIME: Duration = Duration::from_secs(1);

let node = NodeBuilder::new().create::<ipc::Service>()?;

let service = node
    .service_builder(&"My/Funk/ServiceName".try_into()?)
    .request_response::<u64, u64>()
    .open_or_create()?;

let client = service.client_builder().create()?;

// send the first request using the slower, copy-based API
let mut pending_response = client.send_copy(1234)?;

while node.wait(CYCLE_TIME).is_ok() {
    // acquire all responses to our request from our buffer that were sent by the servers
    while let Some(response) = pending_response.receive()? {
        println!("  received response: {:?}", *response);
    }

// send all further requests using the zero-copy API
    let request = client.loan_uninit()?;
    let request = request.write_payload(5678);

    pending_response = request.send()?;
}

Server (Process 2)

use core::time::Duration;
use iceoryx2::prelude::*;

const CYCLE_TIME: Duration = Duration::from_millis(100);

let node = NodeBuilder::new().create::<ipc::Service>()?;

let service = node
    .service_builder(&"My/Funk/ServiceName".try_into()?)
    .request_response::<u64, u64>()
    .open_or_create()?;

let server = service.server_builder().create()?;

while node.wait(CYCLE_TIME).is_ok() {
    while let Some(active_request) = server.receive()? {
        println!("received request: {:?}", *active_request);

        // use zero copy API, send out some responses to demonstrate the streaming API
        for n in 0..4 {
            let response = active_request.loan_uninit()?;
            let response = response.write_payload(n as _);
            println!("  send response: {:?}", *response);
            response.send()?;
        }
    }
}

§Events

Explore a straightforward event setup where the listener patiently awaits events from the notifier. Listening continues until the user gracefully terminates the processes by pressing CTRL+C.

Listener (Process 1)

use core::time::Duration;
use iceoryx2::prelude::*;

const CYCLE_TIME: Duration = Duration::from_secs(1);

let node = NodeBuilder::new().create::<ipc::Service>()?;

let event = node.service_builder(&"MyEventName".try_into()?)
    .event()
    .open_or_create()?;

let mut listener = event.listener_builder().create()?;

while node.wait(Duration::ZERO).is_ok() {
    if let Ok(Some(event_id)) = listener.timed_wait_one(CYCLE_TIME) {
        println!("event was triggered with id: {:?}", event_id);
    }
}

Notifier (Process 2)

use core::time::Duration;
use iceoryx2::prelude::*;

const CYCLE_TIME: Duration = Duration::from_secs(1);

let node = NodeBuilder::new().create::<ipc::Service>()?;

let event = node.service_builder(&"MyEventName".try_into()?)
    .event()
    .open_or_create()?;

let notifier = event.notifier_builder().create()?;

let mut counter: usize = 0;
while node.wait(CYCLE_TIME).is_ok() {
    counter += 1;
    notifier.notify_with_custom_event_id(EventId::new(counter))?;

    println!("Trigger event with id {} ...", counter);
}

§Quality Of Services

Quality of service settings, or service settings, play a crucial role in determining memory allocation in a worst-case scenario. These settings can be configured during the creation of a service, immediately after defining the MessagingPattern. In cases where the service already exists, these settings are interpreted as minimum requirements, ensuring a flexible and dynamic approach to memory management.
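The minimum-requirement rule for opening an existing service can be illustrated in plain Rust. The types and the `satisfies` function below are hypothetical, not part of the iceoryx2 API; they only sketch the compatibility check an open call implies.

```rust
// Hypothetical illustration of the minimum-requirement rule; these
// types are NOT part of the iceoryx2 API.
#[derive(Clone, Copy)]
struct PubSubQos {
    history_size: usize,
    subscriber_max_buffer_size: usize,
    max_subscribers: usize,
}

// Opening succeeds only when the existing service meets or exceeds
// every value the opener requested.
fn satisfies(existing: &PubSubQos, requested: &PubSubQos) -> bool {
    existing.history_size >= requested.history_size
        && existing.subscriber_max_buffer_size >= requested.subscriber_max_buffer_size
        && existing.max_subscribers >= requested.max_subscribers
}

fn main() {
    let existing = PubSubQos { history_size: 3, subscriber_max_buffer_size: 4, max_subscribers: 5 };
    let modest = PubSubQos { history_size: 1, subscriber_max_buffer_size: 2, max_subscribers: 2 };
    let greedy = PubSubQos { history_size: 8, subscriber_max_buffer_size: 2, max_subscribers: 2 };

    assert!(satisfies(&existing, &modest)); // an open would succeed
    assert!(!satisfies(&existing, &greedy)); // an open would fail
}
```

In other words, creating a service fixes its worst-case memory layout, and later openers may only ask for the same or less.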

§Publish-Subscribe

For detailed documentation, see the publish_subscribe::Builder

use iceoryx2::prelude::*;

let node = NodeBuilder::new().create::<ipc::Service>()?;

let service = node.service_builder(&"PubSubQos".try_into()?)
    .publish_subscribe::<u64>()
    // when the subscriber buffer is full the oldest data is overridden with the newest
    .enable_safe_overflow(true)
    // how many samples a subscriber can borrow in parallel
    .subscriber_max_borrowed_samples(2)
    // the maximum history size a subscriber can request
    .history_size(3)
    // the maximum buffer size of a subscriber
    .subscriber_max_buffer_size(4)
    // the maximum amount of subscribers of this service
    .max_subscribers(5)
    // the maximum amount of publishers of this service
    .max_publishers(2)
    .create()?;

§Request-Response

For detailed documentation, see the request_response::Builder

use iceoryx2::prelude::*;

let node = NodeBuilder::new().create::<ipc::Service>()?;

let service = node.service_builder(&"ReqResQos".try_into()?)
    .request_response::<u64, u64>()
    // overrides the alignment of the request payload
    .request_payload_alignment(Alignment::new(128).unwrap())
    // overrides the alignment of the response payload
    .response_payload_alignment(Alignment::new(128).unwrap())
    // when the server buffer is full the oldest data is overridden with the newest
    .enable_safe_overflow_for_requests(true)
    // when the client buffer is full the oldest data is overridden with the newest
    .enable_safe_overflow_for_responses(true)
    // allows to send requests without expecting an answer
    .enable_fire_and_forget_requests(true)
    // how many requests can a client send in parallel
    .max_active_requests_per_client(2)
    // how many request payload objects can be loaned in parallel
    .max_loaned_requests(1)
    // the max buffer size for incoming responses per request
    .max_response_buffer_size(4)
    // the max number of servers
    .max_servers(2)
    // the max number of clients
    .max_clients(10)
    .create()?;

§Event

For detailed documentation, see the event::Builder

use core::time::Duration;
use iceoryx2::prelude::*;

let node = NodeBuilder::new().create::<ipc::Service>()?;

let event = node.service_builder(&"EventQos".try_into()?)
    .event()
    // the maximum amount of notifiers of this service
    .max_notifiers(2)
    // the maximum amount of listeners of this service
    .max_listeners(2)
    // defines the maximum supported event id value
    // WARNING: an increased value can have a significant performance impact on some
    //          configurations that use a bitset as event tracking mechanism
    .event_id_max_value(256)
    // optional event id that is emitted when a new notifier was created
    .notifier_created_event(EventId::new(999))
    // optional event id that is emitted when a notifier is dropped
    .notifier_dropped_event(EventId::new(0))
    // optional event id that is emitted when a notifier is identified as dead
    .notifier_dead_event(EventId::new(2000))
    // the deadline of the service defines how long a listener has to wait at most until
    // a signal will be received
    .deadline(Duration::from_secs(1))
    .create()?;
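The performance warning on event_id_max_value stems from bitset-based tracking: such a mechanism needs one bit per possible id and must scan all of them on every wake-up. A plain-Rust sketch of a bitset tracker (illustrative only, not the iceoryx2 implementation):

```rust
// Illustrative bitset event tracker with one bit per possible EventId
// value; NOT the iceoryx2 implementation.
struct BitsetTracker {
    words: Vec<u64>,
}

impl BitsetTracker {
    // Storage and scan cost grow linearly with the maximum id value.
    fn new(event_id_max_value: usize) -> Self {
        Self { words: vec![0; event_id_max_value / 64 + 1] }
    }

    fn notify(&mut self, id: usize) {
        self.words[id / 64] |= 1 << (id % 64);
    }

    // Collecting pending ids must scan every word, which is why a large
    // event_id_max_value can slow down every wait call.
    fn collect(&mut self) -> Vec<usize> {
        let mut ids = Vec::new();
        for (w, word) in self.words.iter_mut().enumerate() {
            while *word != 0 {
                let bit = word.trailing_zeros() as usize;
                ids.push(w * 64 + bit);
                *word &= *word - 1; // clear the lowest set bit
            }
        }
        ids
    }
}

fn main() {
    let mut tracker = BitsetTracker::new(256);
    tracker.notify(3);
    tracker.notify(200);
    assert_eq!(tracker.collect(), vec![3, 200]);
}
```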

§Port Behavior

Certain ports in iceoryx2 provide users with the flexibility to define custom behaviors in specific situations. Custom port behaviors can be specified during the creation of a port, utilizing the port factory or service, immediately following the specification of the port type. This feature enhances the adaptability of iceoryx2 to diverse use cases and scenarios.

use iceoryx2::prelude::*;

let node = NodeBuilder::new().create::<ipc::Service>()?;

let service = node.service_builder(&"My/Funk/ServiceName".try_into()?)
    .publish_subscribe::<u64>()
    .enable_safe_overflow(false)
    .open_or_create()?;

let publisher = service.publisher_builder()
    // the maximum amount of samples this publisher can loan in parallel
    .max_loaned_samples(2)
    // defines the behavior when a sample could not be delivered because the subscriber
    // buffer is full; only useful in a non-overflow scenario
    .unable_to_deliver_strategy(UnableToDeliverStrategy::DiscardSample)
    .create()?;
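The difference between safe overflow and the DiscardSample strategy on a full subscriber buffer can be sketched with a bounded queue in plain Rust. These helpers are hypothetical, not the iceoryx2 types:

```rust
use std::collections::VecDeque;

// Plain-Rust sketch contrasting safe overflow with DiscardSample on a
// full subscriber buffer; these functions are NOT the iceoryx2 types.
fn deliver_overflow(buffer: &mut VecDeque<u64>, capacity: usize, sample: u64) {
    if buffer.len() == capacity {
        buffer.pop_front(); // the oldest sample is overridden
    }
    buffer.push_back(sample);
}

fn deliver_discard(buffer: &mut VecDeque<u64>, capacity: usize, sample: u64) -> bool {
    if buffer.len() == capacity {
        return false; // the newest sample is discarded, buffer unchanged
    }
    buffer.push_back(sample);
    true
}

fn main() {
    let capacity = 2;

    let mut overflowing = VecDeque::from([1, 2]);
    deliver_overflow(&mut overflowing, capacity, 3);
    assert_eq!(overflowing, VecDeque::from([2, 3])); // oldest replaced

    let mut discarding = VecDeque::from([1, 2]);
    assert!(!deliver_discard(&mut discarding, capacity, 3));
    assert_eq!(discarding, VecDeque::from([1, 2])); // newest dropped
}
```

With overflow disabled, as in the example above, the publisher must decide what to do with undeliverable samples, which is exactly what UnableToDeliverStrategy configures.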

§Feature Flags

  • dev_permissions - The permissions of all resources will be set to read, write, and execute for everyone. This must not be used in production; it is meant for Docker environments with inconsistent user configurations.
  • logger_log - Uses the log crate as default log backend
  • logger_tracing - Uses the tracing crate as default log backend
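Enabling one of these flags happens in the consuming crate's Cargo.toml; a minimal sketch (the version number is illustrative, pick the current release):

```shell
# Append an iceoryx2 dependency with the tracing log backend enabled.
cat >> Cargo.toml <<'EOF'
iceoryx2 = { version = "0.6", features = ["logger_tracing"] }
EOF
```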

§Custom Configuration

iceoryx2 offers the flexibility to configure default quality of service settings, paths, and file suffixes through a custom configuration file.

For in-depth details and examples, please visit the GitHub config folder.

Modules§

active_request
Represents a “connection” to a Client that corresponds to a previously received RequestMut.
config
Handles iceoryx2's global configuration
node
The central entry point of iceoryx2. The Node owns all communication entities, can handle incoming events in an event loop, and provides additional memory to those entities to perform reference counting, amongst other things.
pending_response
Represents a “connection” to a Server that corresponds to a previously sent RequestMut.
port
The ports or communication endpoints of iceoryx2
prelude
Loads a meaningful subset that covers 90% of the iceoryx2 communication use cases.
request_mut
The payload that is sent by a Client to a Server.
request_mut_uninit
The uninitialized payload that is sent by a Client to a Server.
response
The answer a Client receives from a Server on a RequestMut.
response_mut
The answer a Server allocates to respond to a received RequestMut from a Client
response_mut_uninit
The uninitialized answer a Server allocates to respond to a received RequestMut from a Client
sample
The payload that is received by a Subscriber.
sample_mut
The payload that is sent by a Publisher.
sample_mut_uninit
The uninitialized payload that is sent by a Publisher.
service
The foundation of communication: the service with its MessagingPattern
signal_handling_mode
Defines how constructs like the Node or the WaitSet shall handle system signals.
waitset
Event handling mechanism to wait on multiple Listeners in one call, realizing the reactor pattern. The WaitSet is an event multiplexer (the Reactor of the reactor design pattern) that allows the user to attach notifications, deadlines, or intervals.