Crate swarm_engine_llm


LLM Integration Module

Action selection and batch inference using lightweight LLMs

§Module Structure

  • decider: LLM Decider trait and related types
  • ollama: Ollama backend (HTTP API)
  • invoker: BatchInvoker implementation (LLM batch calls)
  • prompt_builder: ResolvedContext → prompt generation
  • registry: Model registry
  • response_parser: LLM response parsing and repair (shared module)

§Design Approach

§BatchInvoker System (for Manager)

ManagerAgent batch processing is implemented in the following layers:

Core Layer
├── ManagerAgent trait (prepare / finalize)
├── DefaultBatchManagerAgent   ← Core layer default implementation
├── ContextStore / ContextView ← Normalized context
└── ContextResolver            ← Scope resolution

LLM Layer
├── PromptBuilder              ← ResolvedContext → prompt
└── BatchInvoker implementations ← LLM batch calls
    └── LlmBatchInvoker

The ManagerAgent implementation uses the Core layer’s DefaultBatchManagerAgent; the LLM layer supplies PromptBuilder (prompt generation) and the BatchInvoker implementations (batched LLM calls).
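The layering above can be sketched in miniature. This is an illustrative stand-in, not the crate’s real API: the `ResolvedContext`, `PromptBuilder`, and `BatchInvoker` shapes below are simplified assumptions about how the Core-layer trait and the LLM-layer implementation fit together, and the real signatures (async, error types, request/response structs) will differ.

```rust
/// Simplified stand-in for a Core-layer resolved context for one worker.
struct ResolvedContext {
    worker_id: u32,
    summary: String,
}

/// LLM-layer prompt generation (stand-in for `prompt_builder::PromptBuilder`).
struct PromptBuilder;

impl PromptBuilder {
    fn build(&self, ctx: &ResolvedContext) -> String {
        format!("worker {}: decide an action given: {}", ctx.worker_id, ctx.summary)
    }
}

/// Core-layer trait implemented by the LLM layer (stand-in for `BatchInvoker`).
trait BatchInvoker {
    fn invoke_batch(&self, prompts: &[String]) -> Vec<String>;
}

/// Stand-in for `invoker::LlmBatchInvoker`.
struct LlmBatchInvoker;

impl BatchInvoker for LlmBatchInvoker {
    fn invoke_batch(&self, prompts: &[String]) -> Vec<String> {
        // A real implementation would send the batch to an LLM backend
        // (Ollama / llama-server over HTTP) and parse the responses;
        // here we echo a canned decision per prompt.
        prompts.iter().map(|p| format!("decision for [{p}]")).collect()
    }
}

fn main() {
    let builder = PromptBuilder;
    let contexts = vec![
        ResolvedContext { worker_id: 1, summary: "low battery".into() },
        ResolvedContext { worker_id: 2, summary: "idle".into() },
    ];
    // prepare (Core) -> prompts (LLM layer) -> batch call -> finalize (Core)
    let prompts: Vec<String> = contexts.iter().map(|c| builder.build(c)).collect();
    let decisions = LlmBatchInvoker.invoke_batch(&prompts);
    assert_eq!(decisions.len(), contexts.len());
    println!("{}", decisions[0]);
}
```

The point of the split is that the Core layer only sees the `BatchInvoker` trait, so the LLM backend can be swapped (or mocked, as here) without touching ManagerAgent logic.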

Re-exports§

pub use llama_cpp_server::ChatTemplate;
pub use llama_cpp_server::LlamaCppServerConfig;
pub use llama_cpp_server::LlamaCppServerDecider;
pub use batch_processor::BatchProcessError;
pub use batch_processor::BatchProcessResult;
pub use batch_processor::BatchProcessor;
pub use batch_processor::LlmBatchProcessor;
pub use batch_processor::LlmBatchProcessorConfig;
pub use debug_channel::LlmDebugChannel;
pub use debug_channel::LlmDebugEvent;
pub use debug_channel::StderrLlmSubscriber;
pub use decider::LlmDecider;
pub use decider::LlmDeciderConfig;
pub use decider::LlmError;
pub use invoker::create_llm_invoker;
pub use invoker::LlmBatchInvoker;
pub use ollama::OllamaConfig;
pub use ollama::OllamaDecider;
pub use prompt_builder::PromptBuilder;
pub use registry::ModelInfo;
pub use registry::ModelRegistry;
pub use registry::RegistryError;
pub use strategy_advisor::parse_selection_kind_fuzzy;
pub use strategy_advisor::LlmStrategyAdvisor;
pub use strategy_advisor::StrategyPromptBuilder;
pub use strategy_advisor::StrategyResponseParser;

Modules§

batch_processor
Batch Processor - batch LLM processing for the ManagerAgent
debug_channel
LLM Debug Channel - debug output for LLM calls
decider
LLM Decider - LLM abstraction for Action selection
invoker
BatchInvoker implementation - implements Core’s BatchInvoker trait with an LLM
json_prompt
Prompt templates dedicated to JSON output
llama_cpp_server
llama-server Decider - HTTP API integration
ollama
Ollama Decider - Ollama HTTP API integration
prompt_builder
PromptBuilder - generates prompts from a ResolvedContext
registry
Model Registry - dynamic discovery and management of Ollama models
response_parser
LLM response parsing and repair (shared module)
strategy_advisor
Strategy Advisor - LLM-driven exploration strategy advice

Structs§

DecisionResponse
Decision response from the LLM
LoraConfig
LoRA adapter configuration
StrategyAdvice
Result of strategy advice
StrategyContext
Context required for a strategy decision
WorkerDecisionRequest
Decision request for an individual Worker

Enums§

SelectionKind
Kind of Selection algorithm
StrategyAdviceError
Strategy advice error

Traits§

StrategyAdvisor
Strategy advisor trait