//! # Mosaik
//!
//! A Rust runtime for building self-organizing, leaderless distributed
//! systems. Mosaik handles peer discovery, connectivity, consensus, and
//! replicated state so you can focus on application logic.
//!
//! Built on [iroh](https://docs.rs/iroh) (QUIC-based peer-to-peer transport
//! with relay support), mosaik nodes find each other automatically through
//! gossip and DHT, form groups with Raft consensus, and synchronize data
//! through typed streams and replicated collections — all without a central
//! coordinator.
//!
//! For tutorials, architecture guides, and worked examples, see the
//! [Mosaik Book](https://docs.mosaik.world).
//!
//! # Getting started
//!
//! Every mosaik application starts by creating a [`Network`]. Nodes that
//! share the same [`NetworkId`] discover each other automatically:
//!
//! ```rust,ignore
//! use mosaik::*;
//!
//! let network = Network::new("my-network-id").await?;
//! ```
//!
//! # Subsystems
//!
//! Mosaik is organized into four subsystems, each accessible from a
//! [`Network`] instance:
//!
//! ## Discovery
//!
//! The [`discovery`] subsystem finds peers automatically through gossip
//! and a DHT. Nodes announce their presence, the streams they
//! produce, and the groups they belong to. Other nodes learn about them
//! without any manual configuration.
//!
//! ## Streams
//!
//! The [`streams`] subsystem provides typed, async pub/sub channels. A
//! [`Producer`](streams::Producer) publishes data, and any number of
//! [`Consumer`](streams::Consumer)s on the network can subscribe. Streams
//! implement [`futures::Sink`] and [`futures::Stream`], so they plug
//! directly into the async ecosystem:
//!
//! ```rust,ignore
//! // Open a producer for a stream of strings
//! let mut producer = network.streams().produce::<String>();
//!
//! // Wait until at least one consumer subscribes
//! producer.when().subscribed().await;
//!
//! // Send data
//! producer.send("hello".to_string()).await?;
//! ```
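//!
//! On the consuming side — a sketch only; the `consume` method name is
//! assumed here by symmetry with `produce`, not taken from the actual API:
//!
//! ```rust,ignore
//! use futures::StreamExt;
//!
//! // Subscribe to the same stream of strings (constructor name assumed)
//! let mut consumer = network.streams().consume::<String>();
//!
//! // Consumers implement [`futures::Stream`], so `next()` yields messages
//! while let Some(msg) = consumer.next().await {
//!     println!("received: {msg}");
//! }
//! ```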
//!
//! Use the [`stream!`](declare::stream) macro to declare streams at
//! compile time with baked-in configuration:
//!
//! ```rust,ignore
//! mosaik::stream!(pub Telemetry = SensorReading,
//!     online_when: subscribed().minimum_of(1),
//! );
//! ```
//!
//! ## Groups
//!
//! The [`groups`] subsystem provides consensus groups — clusters of
//! trusted nodes that coordinate through a modified Raft consensus
//! protocol. Groups elect a leader, replicate a command log, and apply
//! entries to a pluggable [`StateMachine`](groups::StateMachine):
//!
//! ```rust,ignore
//! let group = network.groups().with_key("my-group-key").join();
//!
//! group.when().leader_elected().await;
//! ```
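//!
//! Applying committed entries to custom state — an illustrative sketch;
//! the exact shape of the [`StateMachine`](groups::StateMachine) trait
//! (the `Command` associated type and `apply` method) is assumed, not the
//! real definition:
//!
//! ```rust,ignore
//! struct Counter(u64);
//!
//! impl groups::StateMachine for Counter {
//!     type Command = u64;
//!
//!     // Invoked for each committed log entry, in log order, on every member
//!     fn apply(&mut self, delta: u64) {
//!         self.0 += delta;
//!     }
//! }
//! ```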
//!
//! ## Collections
//!
//! The [`collections`] subsystem offers replicated data structures that
//! are built on top of groups. Each collection is backed by its own
//! Raft group, providing strong consistency for mutations:
//!
//! - [`Map<K,V>`](collections::Map) — key-value store
//! - [`Vec<T>`](collections::Vec) — ordered, append-friendly list
//! - [`Set<T>`](collections::Set) — unique-element set
//! - [`Cell<T>`](collections::Cell) — single replicated value
//! - [`Once<T>`](collections::Once) — write-once value
//! - [`PriorityQueue<P,K,V>`](collections::PriorityQueue) — priority queue
//!
//! Each collection has a **writer** (mutates via Raft) and a **reader**
//! (read-only replica):
//!
//! ```rust,ignore
//! let scores = collections::Map::<String, u64>::writer(&network, "leaderboard");
//!
//! scores.when().online().await;
//! scores.insert("alice".into(), 42).await?;
//! ```
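//!
//! The read side — a sketch; the `reader` constructor is assumed by
//! symmetry with `writer` above, and `get` is an assumed accessor name:
//!
//! ```rust,ignore
//! let view = collections::Map::<String, u64>::reader(&network, "leaderboard");
//!
//! // Wait until the read-only replica has joined and caught up
//! view.when().online().await;
//! let score = view.get(&"alice".to_string()).await;
//! ```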
//!
//! Use the [`collection!`](declare::collection) macro for compile-time
//! declarations:
//!
//! ```rust,ignore
//! mosaik::collection!(pub Leaderboard =
//!     collections::Map<String, u64>, "leaderboard");
//! ```
//!
//! # Trusted Execution Environments
//!
//! The optional [`tee`] module (enabled with the `tee` feature) adds
//! support for running mosaik nodes inside hardware-isolated enclaves.
//! Currently supported:
//!
//! - **Intel TDX** (`tdx` feature) — nodes running inside a TDX Trust Domain
//! can generate attestation quotes that prove their identity and code
//! integrity. These quotes are used as [`Ticket`]s so that streams and
//! collections can gate access to verified TEE peers only.
//!
//! The `tdx-builder-alpine` and `tdx-builder-ubuntu` features provide
//! build-time image builders for packaging Rust crates into bootable
//! TDX guest images.
//!
//! # Reactive conditions
//!
//! All major types expose a `.when()` builder that returns a future
//! resolving when a topology or consensus condition is met:
//!
//! ```rust,ignore
//! // Wait for a specific number of subscribers
//! producer.when().subscribed().minimum_of(3).await;
//!
//! // Wait for a collection mutation to replicate
//! let ver = scores.insert("bob".into(), 99).await?;
//! reader.when().reaches(ver).await;
//!
//! // Wait for group leader election
//! group.when().leader_elected().await;
//! ```
pub use ;
pub use ;
pub use futures;
pub use ;
pub use ;
pub use ;
pub use ;
pub use ;
pub use ;
/// Compile-time declaration macros for streams and collections.
///
/// Use [`stream!`] to declare typed pub/sub channels and
/// [`collection!`] to declare replicated data structures with
/// baked-in identifiers and configuration. See their respective
/// module docs for full syntax.
pub use ;