// Copyright (c) Sean Lawlor
//
// This source code is licensed under both the MIT license found in the
// LICENSE-MIT file in the root directory of this source tree.
//! # Support for remote nodes in a distributed cluster.
//!
//! A **node** is the same as [Erlang's definition](https://www.erlang.org/doc/reference_manual/distributed.html)
//! for distributed Erlang, in that it's a remote "hosting" process in the distributed pool of processes.
//!
//! In this realization, nodes are simply actors which handle an external connection to the other nodes in the pool.
//! When nodes connect and are authenticated, they spawn their remote-supporting local actors on the remote system
//! as `RemoteActor`s. They additionally handle synchronizing PG groups, so the groups can contain both local
//! and remote actors.
//!
//! We have chosen protobuf for our inter-node protocol; however, you can choose whatever medium you like
//! for binary serialization + deserialization. The "remote" actor will simply encode your message type and send it
//! over the wire for you.
//!
//! (Future) When nodes connect, they identify all of the nodes the remote node is also connected to and
//! connect to them as well.
//!
//! ## Important note on message serialization
//!
//! An important note on usage: when utilizing `ractor_cluster` and [ractor] in the cluster configuration
//! (i.e. `ractor/cluster`), you no longer receive the auto-implementation of [ractor::Message] for all types. This
//! is due to specialization (see: <https://github.com/rust-lang/rust/issues/31844>). Ideally we'd have the trait provide a
//! "default" non-serializable implementation for all types that could be messages, and specific implementations for
//! those that can be sent over the network. However, this is presently `+nightly`-only functionality and
//! has a soundness hole in its definition and usage. Therefore, as a workaround, when the `cluster` feature is enabled
//! on [ractor] the default implementation, specifically
//!
//! ```text
//! impl<T: std::any::Any + Send + Sized + 'static> ractor::Message for T {}
//! ```
//! is disabled.
//!
//! This means that you need to specify the implementation of the [ractor::Message] trait on all message types. When
//! they're not network-supported messages, this is just a default, empty implementation. When they **are** potentially
//! sent over the network in a dist protocol, you need to fill out the implementation details for how the message
//! serialization is handled. There is, however, a procedural macro in `ractor_cluster_derive` to facilitate this, which is
//! re-exposed in this crate under the same naming. Simply derive [RactorMessage] or [RactorClusterMessage] for local or
//! remote-supporting messages, respectively.
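//!
//! For example, an illustrative sketch of both derives (the `LocalOnly` and `OverTheWire` types
//! below are hypothetical message types, and `RpcReplyPort` comes from [ractor]):
//!
//! ```rust
//! use ractor::RpcReplyPort;
//! use ractor_cluster::{RactorClusterMessage, RactorMessage};
//!
//! // A purely local message type: the derive emits the empty,
//! // non-serializable `ractor::Message` implementation for you.
//! #[derive(RactorMessage)]
//! enum LocalOnly {
//!     DoSomething,
//! }
//!
//! // A network-capable message type: the derive generates the serialization
//! // glue, and `#[rpc]` marks variants that carry a reply port.
//! #[derive(RactorClusterMessage)]
//! enum OverTheWire {
//!     Cast(String),
//!     #[rpc]
//!     Call(String, RpcReplyPort<String>),
//! }
//! ```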
/// Nodes are represented by an integer id
pub type NodeId = u64;
// Satisfy dependencies transitively imported
use async_trait as _;
// ============== Re-exports ============== //
pub use ;
pub use ;
pub use node::client::connect as client_connect;
pub use node::client::connect_enc as client_connect_enc;
pub use node::client::connect_external as client_connect_external;
pub use node::client::ClientConnectErr;
pub use node::NodeEventSubscription;
pub use node::NodeServer;
pub use node::NodeServerMessage;
pub use node::NodeSession;
pub use node::NodeSessionMessage;
pub use *;
// Re-export the procedural macros so people don't need to reference them directly
pub use ractor_cluster_derive::RactorClusterMessage;
pub use ractor_cluster_derive::RactorMessage;