pub struct Magi { /* private fields */ }
Main entry point for the MAGI multi-perspective analysis system.
Composes agents, validation, consensus, and reporting into a single
orchestration flow. The analyze method launches three
agents in parallel, parses and validates their responses, computes consensus,
and generates a formatted report.
§Examples
let magi = Magi::new(provider);
let report = magi.analyze(&Mode::CodeReview, content).await?;
Implementations§
impl Magi
pub fn new(provider: Arc<dyn LlmProvider>) -> Self
Creates a MAGI orchestrator with a single provider and all defaults.
Equivalent to MagiBuilder::new(provider).build().unwrap().
This cannot fail because all defaults are valid.
§Parameters
provider: The LLM provider shared by all three agents.
pub fn builder(provider: Arc<dyn LlmProvider>) -> MagiBuilder
Returns a builder for configuring a MAGI orchestrator.
§Parameters
provider: The default LLM provider.
pub async fn analyze(
    &self,
    mode: &Mode,
    content: &str,
) -> Result<MagiReport, MagiError>
Runs a full multi-perspective analysis.
Launches three agents in parallel, parses their JSON responses, validates outputs, computes consensus, and generates a formatted report.
§Parameters
mode: The analysis mode (CodeReview, Design, or Analysis).
content: The content to analyze.
§Errors
MagiError::InputTooLarge if content.len() exceeds max_input_len.
MagiError::InsufficientAgents if fewer than 2 agents succeed.
MagiError::InvalidInput if a nonce collision is detected (probability ~2^-64 per call; fastrand's effective state is ~64 bits; see ADR 001 §Decision: Nonce RNG choice).
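A caller can branch on these cases. The sketch below uses stand-in definitions (the crate's real MagiError variants likely carry payloads such as lengths or messages, and the describe helper is hypothetical); only the variant names come from the list above.

```rust
// Stand-in enum for illustration; not the crate's actual definition.
#[derive(Debug)]
enum MagiError {
    InputTooLarge,
    InsufficientAgents,
    InvalidInput,
}

// Hypothetical helper mapping each documented error case to a message.
fn describe(err: &MagiError) -> &'static str {
    match err {
        MagiError::InputTooLarge => "content exceeds max_input_len",
        MagiError::InsufficientAgents => "fewer than 2 agents succeeded",
        MagiError::InvalidInput => "nonce collision detected",
    }
}
```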
§Concurrency
The internal rng_source is guarded by a std::sync::Mutex, so concurrent
calls to analyze() from multiple tasks serialize on nonce generation. In
practice nonce generation is a single u128 read (~nanoseconds), so
contention is negligible under typical workloads. If profiling shows this
becomes a bottleneck in a multi-tenant deployment, consider wrapping Magi
in a pool of instances (one per tenant), or wait for v0.4, which may expose
with_rng_source publicly to allow a thread-local RNG strategy.