Expand description
A high-performance, discrete-event computation framework for system simulation.
NeXosim is a developer-friendly, yet highly optimized software simulator able to scale to very large simulations with complex time-driven state machines.
It promotes a component-oriented architecture that is familiar to system engineers and closely resembles flow-based programming: a model is essentially an isolated entity with a fixed set of typed inputs and outputs, communicating with other models through message passing via connections defined during bench assembly. Unlike in conventional flow-based programming, request-reply patterns are also possible.
NeXosim leverages asynchronous programming to perform auto-parallelization in a manner that is fully transparent to model authors and users, achieving high computational throughput on large simulation benches by means of a custom multi-threaded executor.
§A practical overview
Simulating a system typically involves three distinct activities:
- the design of simulation models for each node of the system,
- the assembly of a simulation bench from a set of models, performed by inter-connecting model ports,
- the execution of the simulation, managed through periodic increments of the simulation time and by exchange of messages with simulation models.
The following sections go through each of these activities in more detail.
§Authoring models
Models can contain four kinds of ports:
- output ports, which are instances of the Output type and can be used to broadcast a message,
- requestor ports, which are instances of the UniRequestor or Requestor types and can be used to send/broadcast a message and receive a single reply (UniRequestor) or an iterator over the replies of all connected replier ports (Requestor),
- input ports, which are synchronous or asynchronous methods that implement the InputFn trait and take an &mut self argument, a message argument and, optionally, &Context and &Model::Env arguments,
- replier ports, which are similar to input ports but implement the ReplierFn trait and return a reply.
Messages that are broadcast by an output port to an input port are referred to as events, while messages exchanged between requestor and replier ports are referred to as requests and replies.
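For illustration, a request-reply interaction between two models could look like the following sketch. The Probe and Sensor models, their ports and their method names are made up for this example, and the exact Requestor::send signature may differ slightly; the sketch only assumes the behavior described above, namely an awaitable call that returns an iterator over the replies of all connected replier ports.
use serde::{Deserialize, Serialize};
use nexosim::model::Model;
use nexosim::ports::Requestor;

// A hypothetical model that queries all connected repliers on demand.
#[derive(Default, Serialize, Deserialize)]
pub struct Probe {
    // Requestor port: sends a () request and collects f64 replies.
    pub sensors: Requestor<(), f64>,
}

#[Model]
impl Probe {
    // Input port: when triggered, request a reading from every connected
    // replier port and compute the average of the replies.
    pub async fn trigger(&mut self, _: ()) {
        let readings: Vec<f64> = self.sensors.send(()).await.collect();
        if !readings.is_empty() {
            let _average = readings.iter().sum::<f64>() / readings.len() as f64;
        }
    }
}

// A hypothetical model exposing a replier port.
#[derive(Default, Serialize, Deserialize)]
pub struct Sensor {
    pub value: f64,
}

#[Model]
impl Sensor {
    // Replier port: takes a request message and returns a reply.
    pub async fn read(&mut self, _: ()) -> f64 {
        self.value
    }
}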
Models must implement the Model trait, which is most
conveniently done by annotating the impl block of the model with the
#[Model] macro. This trait allows models to specify
a custom Model::init method that is guaranteed to
run exactly once when the simulation is initialized, i.e. after all models
have been connected but before the simulation starts.
More complex models can be built with the ProtoModel
trait. The ProtoModel::build method makes it
possible to:
- build the final Model from a builder (the model prototype),
- perform possibly blocking actions when the model is added to the simulation rather than when the simulation starts, such as establishing a network connection or configuring hardware devices,
- connect submodels and add them to the simulation.
In typical scenarios the Model trait can be implemented with the #[Model] proc-macro, applied to the main impl block of the model struct. A definition for the init method can be provided using the custom #[nexosim(init)] attribute.
Moreover, input methods can be decorated with the #[nexosim(schedulable)] attribute to allow convenient self-scheduling within the model.
§A simple model
Let us consider for illustration a simple model that forwards its input after multiplying it by 2. This model has only one input and one output port:
               ┌────────────┐
               │            │
Input ●───────►│ Multiplier ├───────► Output
  f64          │            │         f64
               └────────────┘
Multiplier could be implemented as follows:
use serde::{Deserialize, Serialize};
use nexosim::model::Model;
use nexosim::ports::Output;
#[derive(Default, Serialize, Deserialize)]
pub struct Multiplier {
    pub output: Output<f64>,
}

#[Model]
impl Multiplier {
    pub async fn input(&mut self, value: f64) {
        self.output.send(2.0 * value).await;
    }
}
§A model using the local context
Models frequently need to schedule actions at a future time or simply get access to the current simulation time. To do so, input and replier methods can take an optional argument that gives them access to a local context.
To show how the local context can be used in practice, let us implement a
Delay model which simply forwards its input after a 1s delay. Note as well
the use of the schedulable! macro which, together with the #[nexosim(schedulable)] attribute, makes it possible for a model to self-schedule its inputs.
use std::time::Duration;
use serde::{Deserialize, Serialize};
use nexosim::model::{Context, Model, schedulable};
use nexosim::ports::Output;
#[derive(Default, Serialize, Deserialize)]
pub struct Delay {
    pub output: Output<f64>,
}

#[Model]
impl Delay {
    pub fn input(&mut self, value: f64, cx: &Context<Self>) {
        cx.schedule_event(Duration::from_secs(1), schedulable!(Self::send), value).unwrap();
    }

    #[nexosim(schedulable)]
    async fn send(&mut self, value: f64) {
        self.output.send(value).await;
    }
}
§Assembling simulation benches
A simulation bench is a system of inter-connected models that have been migrated to a simulation.
The assembly process usually starts with the instantiation of models and the
creation of a Mailbox for each model. A mailbox is
essentially a fixed-capacity buffer for events and requests. While each
model has only one mailbox, it is possible to create an arbitrary number of
Addresses pointing to that mailbox. For
convenience, methods such as Output::connect
accept as the target either a &Mailbox reference
from which an address is created, or a pre-instantiated
Address.
Addresses are used, among other things, to connect models: each output or requestor
port has a connect method that takes as argument a function pointer to
the corresponding input or replier port method and the address of the
targeted model.
Once all models are connected, they are added to a
SimInit instance, which is a builder type for the
final Simulation.
The easiest way to understand the assembly step is with a short example. Say that we want to assemble the following system from the models implemented above:
                               ┌────────────┐
                               │            │
                           ┌──►│   Delay    ├──┐
          ┌────────────┐   │   │            │  │   ┌────────────┐
          │            │   │   └────────────┘  │   │            │
Input ●──►│ Multiplier ├───┤                   ├──►│   Delay    ├──► Output
          │            │   │   ┌────────────┐  │   │            │
          └────────────┘   │   │            │  │   └────────────┘
                           └──►│ Multiplier ├──┘
                               │            │
                               └────────────┘
Here is how this could be done:
use std::time::Duration;
use nexosim::ports::{EventSource, SinkState, event_slot};
use nexosim::simulation::{Mailbox, SimInit};
use nexosim::time::MonotonicTime;
use models::{Delay, Multiplier};
// Instantiate models.
let mut multiplier1 = Multiplier::default();
let mut multiplier2 = Multiplier::default();
let mut delay1 = Delay::default();
let mut delay2 = Delay::default();
// Instantiate mailboxes.
let multiplier1_mbox = Mailbox::new();
let multiplier2_mbox = Mailbox::new();
let delay1_mbox = Mailbox::new();
let delay2_mbox = Mailbox::new();
// Connect the models.
multiplier1.output.connect(Delay::input, &delay1_mbox);
multiplier1.output.connect(Multiplier::input, &multiplier2_mbox);
multiplier2.output.connect(Delay::input, &delay2_mbox);
delay1.output.connect(Delay::input, &delay2_mbox);
// Keep handles to the system input and output for the simulation.
let mut bench = SimInit::new();
let input = EventSource::new()
    .connect(Multiplier::input, &multiplier1_mbox)
    .register(&mut bench);
let (sink, mut output) = event_slot(SinkState::Enabled);
delay2.output.connect_sink(sink);
// Pick an arbitrary simulation start time and build the simulation.
let t0 = MonotonicTime::EPOCH;
let mut simu = bench
    .add_model(multiplier1, multiplier1_mbox, "multiplier1")
    .add_model(multiplier2, multiplier2_mbox, "multiplier2")
    .add_model(delay1, delay1_mbox, "delay1")
    .add_model(delay2, delay2_mbox, "delay2")
    .init(t0)?;
§Running simulations
The simulation can be controlled in several ways:
- by advancing time, either until the next scheduled event with Simulation::step, until a specific deadline with Simulation::step_until, or until there are no more scheduled events with Simulation::run,
- by sending events or queries without advancing simulation time, using Simulation::process_event or Simulation::send_query,
- by scheduling events with a Scheduler.
When initialized with the default clock, the simulation will run as fast as
possible, without regard for the actual wall clock time. Alternatively, the
simulation time can be synchronized to the wall clock time using
SimInit::with_clock and providing a
custom Clock type or a readily-available real-time clock
such as AutoSystemClock.
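For example, a real-time bench could be set up as follows. This is a minimal sketch that elides the add_model calls shown in the assembly example above and assumes AutoSystemClock provides a new constructor:
use nexosim::simulation::SimInit;
use nexosim::time::{AutoSystemClock, MonotonicTime};

// Synchronize the simulation time with the system's wall clock time.
// Models would be added with `add_model` as in the assembly example above.
let mut simu = SimInit::new()
    .with_clock(AutoSystemClock::new())
    .init(MonotonicTime::EPOCH)?;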
Simulation outputs can be monitored using
event_slots, event_queues, or
any implementer of the EventSinkWriter trait
connected to one or several model output ports.
This is an example of simulation that could be performed using the above bench assembly:
// Send a value to the first multiplier.
simu.process_event(&input, 21.0)?;
// The simulation is still at t0 so nothing is expected at the output of the
// second delay gate.
assert!(output.try_read().is_none());
// Advance simulation time until the next event and check the time and output.
simu.step()?;
assert_eq!(simu.time(), t0 + Duration::from_secs(1));
assert_eq!(output.try_read(), Some(84.0));
// Get the answer to the ultimate question of life, the universe & everything.
simu.step()?;
assert_eq!(simu.time(), t0 + Duration::from_secs(2));
assert_eq!(output.try_read(), Some(42.0));
§Message ordering guarantees
The NeXosim runtime is based on the actor model, meaning that every simulation model can be thought of as an isolated entity running in its own thread. While in practice the runtime will actually multiplex and migrate models over a fixed set of kernel threads, models will indeed run in parallel whenever possible.
Since NeXosim is a time-based simulator, the runtime will always execute tasks in chronological order, thus eliminating most ordering ambiguities that could result from parallel execution. Nevertheless, it is sometimes possible for events and queries generated in the same time slice to lead to ambiguous execution orders. In order to make it easier to reason about such situations, NeXosim provides a set of guarantees about message delivery order. Borrowing from the Pony programming language, we refer to this contract as causal messaging, a property that can be summarized by these two rules:
- one-to-one message ordering guarantee: if model A sends two events or queries M1 and then M2 to model B, then B will always process M1 before M2,
- transitivity guarantee: if A sends M1 to B and then M2 to C which in turn sends M3 to B, even though M1 and M2 may be processed in any order by B and C, it is guaranteed that B will process M1 before M3.
Both guarantees also extend to same-time events scheduled from the global
Scheduler, i.e. the relative ordering of events
scheduled for the same time is preserved and guarantees 1 and 2 above
accordingly hold (assuming model A stands for the scheduler). Likewise,
the relative order of same-time events self-scheduled by a model using its
Context is preserved.
§Cargo feature flags
§Tracing
The tracing feature flag provides support for the
tracing crate and can be
activated in Cargo.toml with:
[dependencies]
nexosim = { version = "1", features = ["tracing"] }See the tracing module for more information.
§Server
The server feature provides a gRPC server for remote control and
monitoring, e.g. from a Python client. It can be activated with:
[dependencies]
nexosim = { version = "1", features = ["server"] }See the endpoints and server modules for more information.
Front-end usage documentation will be added upon release of the NeXosim Python client.
§Other resources
§Other examples
Several more fleshed-out examples are available that demonstrate various capabilities of the simulation framework.
§Other features and advanced topics
While the above overview does cover most basic concepts, more information is available in the modules’ documentation:
- the model module provides more details about models, model prototypes and hierarchical models; be sure to check as well the documentation of model::Context for topics such as self-scheduling methods and event cancellation,
- the ports module discusses model ports and simulation endpoints in more detail, as well as the ability to modify and filter messages exchanged between ports; it also provides EventSource and QuerySource objects which can be connected to models just like Output and Requestor ports, but for use as simulation endpoints,
- the server module makes it possible to remotely manage a simulation bench via gRPC,
- the simulation module discusses mailbox capacity, deadlocks and custom clocks,
- the time module introduces MonotonicTime timestamps, Clocks and Tickers,
- the tracing module discusses time-stamping and filtering of tracing events.
Re-exports§
pub use nexosim_macros;
Modules§
- endpoints
- Registry for sinks and sources.
- model
- Model components.
- path
- Paths for model and endpoint identifiers.
- ports
- Ports for event and query broadcasting.
- server
- Simulation management through remote procedure calls.
- simulation
- Discrete-event simulation management.
- time
- Simulation time and clocks.
- tracing
- Support for structured logging.
Derive Macros§
- Message
- A helper macro that enables schema generation for the server endpoint data.