//! Multi-agent coordination for Common Agent Runtime.
//!
//! Provides six coordination patterns:
//!
//! | Pattern | Description |
//! |---------|-------------|
//! | **Swarm** | N agents on the same problem (parallel, sequential, or debate) |
//! | **Pipeline** | Linear chain — each agent's output feeds the next |
//! | **Supervisor** | One agent reviews workers, iterates until approval |
//! | **Delegator** | Main agent spawns specialists mid-run via a tool |
//! | **MapReduce** | Fan-out to N mappers, reduce into a single result |
//! | **Vote** | N agents answer independently, majority wins |
//!
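//! For a feel of the **Vote** pattern, the majority step can be sketched in
//! plain Rust. This is an illustrative sketch only; `majority` here is a
//! hypothetical helper, not part of this crate's API:
//!
//! ```rust
//! use std::collections::HashMap;
//!
//! // Pick the answer given by the most agents (ties broken arbitrarily).
//! fn majority(answers: &[&str]) -> Option<String> {
//!     let mut counts: HashMap<&str, usize> = HashMap::new();
//!     for a in answers {
//!         *counts.entry(a).or_insert(0) += 1;
//!     }
//!     counts
//!         .into_iter()
//!         .max_by_key(|&(_, n)| n)
//!         .map(|(a, _)| a.to_string())
//! }
//!
//! assert_eq!(majority(&["yes", "no", "yes"]), Some("yes".to_string()));
//! ```
//!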
//! ## How agents communicate
//!
//! Agents communicate through three mechanisms:
//!
//! 1. **Shared state** — all agents in a coordination group share the same
//!    `Arc<StateStore>` and `Arc<EventLog>` via `SharedInfra`. Agents can read
//!    each other's state writes.
//!
//! 2. **Task enrichment** — orchestrators pass prior agents' outputs into the
//!    next agent's task prompt (e.g., sequential swarm, supervisor feedback).
//!
//! 3. **Mailbox** — async channel-based messaging for real-time inter-agent
//!    communication during execution.
//!
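//! As a rough analogy for the mailbox, each agent owns the receiving end of a
//! channel while peers hold senders. A minimal sketch with blocking
//! `std::sync::mpsc` channels (the real `Mailbox` is async; this only
//! illustrates the message-passing shape):
//!
//! ```rust
//! use std::sync::mpsc;
//! use std::thread;
//!
//! // The "mailbox": this agent receives, a peer agent sends.
//! let (tx, rx) = mpsc::channel::<String>();
//!
//! let peer = thread::spawn(move || {
//!     tx.send("reviewer: looks good".to_string()).unwrap();
//! });
//! peer.join().unwrap();
//!
//! assert_eq!(rx.recv().unwrap(), "reviewer: looks good");
//! ```
//!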
//! ## The AgentRunner trait
//!
//! Since the runtime doesn't own the model, the caller implements `AgentRunner`
//! to drive the model loop. `car-multi` orchestrates *when* and *how* agents run;
//! the caller decides *what* each agent does.
//!
//! ```rust,ignore
//! use car_multi::{AgentRunner, AgentSpec, AgentOutput, Mailbox, MultiError};
//! use car_engine::Runtime;
//!
//! struct MyRunner { /* OpenAI client, etc. */ }
//!
//! #[async_trait::async_trait]
//! impl AgentRunner for MyRunner {
//!     async fn run(
//!         &self,
//!         spec: &AgentSpec,
//!         task: &str,
//!         runtime: &Runtime,
//!         mailbox: &Mailbox,
//!     ) -> Result<AgentOutput, MultiError> {
//!         // 1. Call your LLM with spec.system_prompt + task
//!         // 2. Parse the response into an ActionProposal
//!         // 3. Execute it: runtime.execute(&proposal).await
//!         // 4. Return an AgentOutput summarizing the result
//!         todo!()
//!     }
//! }
//! ```
// Re-exports for convenience
pub use MultiError;
pub use Mailbox;
pub use AgentRunner;
pub use SharedInfra;