# axactor

A Tokio-based actor library with a local runtime and an optional distributed cluster runtime.

## Capabilities

- `#[actor]` / `#[msg]` macro-based API
- user/control mailbox split
- restart policy + restart mailbox policy (`Keep` / `DrainAndDrop`)
- link / monitor / trap-exit
- lifecycle stream (`Started`, `Restarted`, `Stopped`)
- keyed `Registry` with idle reaper
- distributed cluster:
  - shard lease ownership + epoch fencing
  - ensure/direct routing
  - monitor/demonitor/down protocol
  - handshake dedup + request/response auth + protocol negotiation
  - runtime session ids with authenticated same-logical session rotation
  - control/data plane separation
  - overlays: `InMemoryOverlay`, `TcpOverlay`
- ref API split:
  - `LocalRef<M>`
  - `RemoteRef<C: RefContract>`
  - `AnyRef<C>` / `Addr<C>`

## Install

```toml
[dependencies]
axactor = "0.2.1"
tokio = { version = "1", features = ["full"] }
```

Optional Redis lease backend:

```toml
axactor = { version = "0.2.1", features = ["redis-lease"] }
```

## Local Quick Example

```rust
use axactor::{actor, Context, SpawnConfig, System};
use std::sync::Arc;

pub struct Counter {
    n: i64,
}

#[actor]
impl Counter {
    pub fn new() -> Self {
        Self { n: 0 }
    }

    #[msg]
    fn inc(&mut self) {
        self.n += 1;
    }

    #[msg]
    fn get(&mut self) -> i64 {
        self.n
    }

    async fn on_start(&mut self, _ctx: &mut Context) {}
}

#[tokio::main]
async fn main() {
    let system = Arc::new(System::new());
    let cfg = SpawnConfig::new("counter", 256);
    let h = system.spawn_with(Counter::new, cfg).unwrap();
    let r = h.actor();

    r.inc().unwrap();
    assert_eq!(r.get().await.unwrap(), 1);
}
```
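Conceptually, `#[msg]` methods without a return value become fire-and-forget tells, while methods returning a value become asks that await a reply. A generic sketch of that tell/ask split over plain std channels (this is an illustration of the pattern, not the macro-generated code):

```rust
use std::sync::mpsc;

// Sketch: a "mailbox" of messages; asks carry a reply channel.
enum Msg {
    Inc,                     // tell: enqueue and return immediately
    Get(mpsc::Sender<i64>),  // ask: caller waits on the reply channel
}

// The actor loop drains its mailbox in order, so a Get sent after an Inc
// always observes the increment.
fn run_counter(rx: mpsc::Receiver<Msg>) {
    let mut n = 0i64;
    for msg in rx {
        match msg {
            Msg::Inc => n += 1,
            Msg::Get(reply) => {
                let _ = reply.send(n);
            }
        }
    }
}
```

Because the mailbox is processed sequentially, per-actor state needs no locking; that is the property the macro-generated refs rely on as well.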

## Registry

`Registry::get_or_spawn(...) -> Result<R, RegistryError>`

```rust
use axactor::{registry::Registry, SpawnConfig, System};
use std::sync::Arc;

let system = Arc::new(System::new());
let reg: Registry<String, CounterRef> = Registry::new(system.clone());
let cfg = SpawnConfig::new("counter", 256);
let c = reg.get_or_spawn("k1".to_string(), cfg, Counter::new).await?;
c.inc()?;
# Ok::<(), axactor::registry::RegistryError>(())
```
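The idle reaper mentioned under Capabilities can be illustrated generically: each keyed entry records its last access time, and a sweep drops entries that have been idle too long. This is a sketch of the idea in plain std Rust, with illustrative names, not axactor's implementation:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Keyed registry sketch: each value is stored with its last-access Instant.
struct IdleRegistry<K, V> {
    entries: HashMap<K, (V, Instant)>,
}

impl<K: std::hash::Hash + Eq, V> IdleRegistry<K, V> {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    // Return the existing value for `key`, or build one with `make`.
    // Either way, the access timestamp is refreshed.
    fn get_or_spawn(&mut self, key: K, make: impl FnOnce() -> V) -> &V {
        let entry = self
            .entries
            .entry(key)
            .or_insert_with(|| (make(), Instant::now()));
        entry.1 = Instant::now();
        &entry.0
    }

    // Drop entries idle longer than `max_idle`; returns how many were reaped.
    fn reap_idle(&mut self, max_idle: Duration) -> usize {
        let before = self.entries.len();
        self.entries.retain(|_, (_, last)| last.elapsed() <= max_idle);
        before - self.entries.len()
    }
}
```

In axactor the reaper would additionally stop the reaped actors; the sketch only shows the keyed bookkeeping.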

## Cluster Essentials

Secure mode is the default:

- `allow_insecure_auth = false`
- `auth_token` must be set (otherwise `ClusterNode::start(...)` panics)

```rust
use axactor::cluster::{
    ClusterConfig, ClusterNode, InMemoryOverlay, MemoryLeaseStore, NodeId,
};
use std::sync::Arc;

let store = Arc::new(MemoryLeaseStore::new());
let overlay = InMemoryOverlay::new();
let cfg = ClusterConfig {
    auth_token: Some("shared-token".to_string()),
    allow_insecure_auth: false,
    ..ClusterConfig::default()
};

let _node = ClusterNode::start(NodeId::new("node-a"), store, overlay, cfg);
```

`NodeId::new("node-a")` is the logical node name. `ClusterNode::start(...)`
derives a runtime wire id by appending `@boot-<hex>-session-<n>`, and only that
exact suffix shape is treated as a runtime session marker. This avoids
collapsing arbitrary logical names that merely contain `@boot-`.
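The "exact suffix shape" rule can be illustrated in plain Rust (this is an illustrative checker, not axactor's code; the assumption that `<hex>` is ASCII hex and `<n>` is decimal follows the description above):

```rust
// Split a wire id into (logical name, boot hex, session number) only when it
// ends in the exact "@boot-<hex>-session-<n>" shape; anything else is treated
// as a plain logical name.
fn split_runtime_suffix(wire_id: &str) -> Option<(&str, &str, u64)> {
    // Use the LAST "@boot-" so logical names containing "@boot-" earlier
    // are not mis-split.
    let (logical, rest) = wire_id.rsplit_once("@boot-")?;
    let (boot_hex, session) = rest.rsplit_once("-session-")?;
    if logical.is_empty() || boot_hex.is_empty() {
        return None;
    }
    if !boot_hex.chars().all(|c| c.is_ascii_hexdigit()) {
        return None;
    }
    let n: u64 = session.parse().ok()?;
    Some((logical, boot_hex, n))
}
```

A logical name like `"weird@boot-name"` has no `-session-` part, so it is left intact rather than collapsed, which is the behavior the paragraph above describes.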

Remote non-handshake traffic is trusted only after the peer's current runtime
session has completed the hello/auth flow. Session-rotated peers can continue to
resolve pending responses after a successful re-handshake, but unauthenticated
same-logical spoofing is rejected before host execution.
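Epoch fencing, listed under Capabilities, is the standard guard against a node that keeps acting on an expired shard lease. A generic sketch of the fencing check (illustrative names, not axactor's types):

```rust
// A shard owner holds a lease tagged with an epoch. Commands are stamped
// with the epoch of the lease they were issued under; a command carrying an
// older epoch than the shard's current one is rejected, so a node still
// operating on a lost lease cannot clobber the new owner's writes.
struct Shard {
    current_epoch: u64,
    value: i64,
}

impl Shard {
    fn apply(&mut self, epoch: u64, new_value: i64) -> Result<(), &'static str> {
        if epoch < self.current_epoch {
            return Err("fenced: stale epoch");
        }
        self.current_epoch = epoch; // a newer lease advances the fence
        self.value = new_value;
        Ok(())
    }
}
```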

Use `TcpOverlay` for multi-process transport:

```rust
use axactor::cluster::{TcpKeepaliveOptions, TcpOverlay};
use std::time::Duration;

let overlay = TcpOverlay::with_keepalive_options(
    address_book, // peer address book, constructed elsewhere
    1024 * 1024,
    Duration::from_secs(2),
    TcpKeepaliveOptions {
        enabled: true,
        idle: Some(Duration::from_secs(30)),
        interval: Some(Duration::from_secs(10)),
        probes: Some(5),
    },
);
```

## Ref API Split

`LocalRef<M>`: local hot path, no network/serialization branch.

```rust
let local: axactor::LocalRef<MyActorMsg> = my_actor_ref.as_local_ref();
local.try_send(MyActorMsg::Ping {})?;
```

`RemoteRef<C>`: wire-safe contract (`RemoteSafe` bounds on Tell/Ask/Reply).

```rust
use axactor::{RefContract, RemoteRef};

// ChatTell / ChatAsk / ChatReply are user-defined message types.
struct ChatContract;
impl RefContract for ChatContract {
    type Tell = ChatTell;
    type Ask = ChatAsk;
    type Reply = ChatReply;
}
```

`AnyRef<C>` / `Addr<C>`: optional location-transparent wrapper.

`AnyRef::monitor()` is supported.
For local contracts, attach a monitor adapter explicitly:

```rust
let local = axactor::LocalContractRef::<MyContract>::new(tell, send_wait, ask)
    .with_monitor_fn(|| Box::pin(async move {
        // return tokio::sync::mpsc::Receiver<axactor::cluster::DownEvent>
        todo!()
    }));
```
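The adapter's job is essentially to hand the watcher a channel that yields a down event when the monitored actor stops. A generic sketch of that shape over std channels (illustrative names, not axactor's `DownEvent` type):

```rust
use std::sync::mpsc;
use std::thread;

// A down notification with a reason, emitted once when the task stops.
#[derive(Debug, PartialEq)]
struct Down {
    reason: &'static str,
}

// The monitored task owns the Sender; the watcher blocks on the Receiver,
// which is the role the monitor adapter's receiver plays above.
fn spawn_monitored() -> mpsc::Receiver<Down> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // ... actor work would happen here ...
        let _ = tx.send(Down { reason: "normal" }); // emitted on stop
    });
    rx
}
```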

## Production Notes

- `InMemoryOverlay` + `MemoryLeaseStore` are for tests/dev only.
- For production, use the TCP overlay plus an external lease store (Redis or an etcd-equivalent).
- Keep `allow_insecure_auth = false` and set a shared auth token (or stronger transport auth).
- Validate the default, reduced, and full feature sets if you maintain trybuild snapshots:
  - `cargo test -p axactor`
  - `cargo test -p axactor --no-default-features`
  - `cargo test -p axactor --all-features`

See workspace docs:

- [`docs/SLC_V1_DESIGN.md`](../docs/SLC_V1_DESIGN.md)
- [`docs/REF_API_MIGRATION.md`](../docs/REF_API_MIGRATION.md)

## License

MIT