swarm-engine 0.1.6

//! # SwarmEngine
//!
//! High-throughput, low-latency Agent Swarm orchestration framework.
//!
//! SwarmEngine provides a framework for orchestrating multiple AI agents working
//! together on complex tasks. It supports various LLM backends and includes
//! built-in evaluation capabilities.
//!
//! ## Quick Start
//!
//! ```rust,ignore
//! use swarm_engine::prelude::*;
//!
//! // `env` and `manager` are constructed elsewhere (see the core crate docs);
//! // wire them into an orchestrator:
//! let config = SwarmConfig::default();
//! let orchestrator = OrchestratorBuilder::new(config)
//!     .environment(env)
//!     .manager(manager)
//!     .build()?;
//!
//! // Run the swarm
//! orchestrator.run().await;
//! ```
//!
//! ## Feature Flags
//!
//! - `eval` - Enable evaluation framework
//! - `ollama` - Enable Ollama backend
//! - `llama-server` - Enable llama.cpp server backend
//! - `llama-cpp` - Enable llama.cpp native backend
//! - `cuda` - Enable CUDA support
//! - `metal` - Enable Metal support (Apple Silicon)
//! - `full` - Enable eval + all HTTP-based LLM backends
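//!
//! Features are opt-in at the Cargo level. A dependency entry enabling the
//! evaluation framework and the Ollama backend might look like this
//! (a sketch; adjust the version and feature list to your needs):
//!
//! ```toml
//! [dependencies]
//! swarm-engine = { version = "0.1.6", features = ["eval", "ollama"] }
//! ```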

// Re-export core types
pub use swarm_engine_core::*;

// Re-export LLM types
pub use swarm_engine_llm as llm;

// Re-export eval types when enabled
#[cfg(feature = "eval")]
pub use swarm_engine_eval as eval;

/// Prelude module for convenient imports
pub mod prelude {
    // Re-export core prelude
    pub use swarm_engine_core::prelude::*;

    // LLM types
    pub use swarm_engine_llm::{
        LlamaCppServerConfig, LlamaCppServerDecider, LlmBatchInvoker, LlmDecider, LlmDeciderConfig,
        LoraConfig, OllamaConfig, OllamaDecider,
    };

    // Eval types (when enabled)
    #[cfg(feature = "eval")]
    pub use swarm_engine_eval::prelude::*;
}
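
// The module above follows the common Rust "prelude pattern": the library
// gathers its most-used items into one module that downstream code can
// glob-import instead of writing several item-by-item `use` lines. A minimal
// self-contained sketch of the pattern (the names here are illustrative,
// not part of swarm-engine's API):
//
// ```rust
// mod engine {
//     pub struct Orchestrator {
//         pub agents: usize,
//     }
//
//     impl Orchestrator {
//         pub fn new(agents: usize) -> Self {
//             Orchestrator { agents }
//         }
//     }
//
//     // The prelude re-exports the common surface in one place.
//     pub mod prelude {
//         pub use super::Orchestrator;
//     }
// }
//
// // One glob import brings the whole common surface into scope.
// use engine::prelude::*;
//
// fn main() {
//     let orch = Orchestrator::new(3);
//     assert_eq!(orch.agents, 3);
// }
// ```
//
// Keeping feature-gated items (like the eval prelude) behind `#[cfg(feature = ...)]`
// inside the prelude means a single `use swarm_engine::prelude::*;` works
// regardless of which features are enabled.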