LLM Integration Module
Action selection and batch inference using lightweight LLMs
§Module Structure
- decider: LLM Decider trait and related types
- ollama: Ollama backend (HTTP API)
- invoker: BatchInvoker implementation (LLM batch calls)
- prompt_builder: ResolvedContext → prompt generation
- registry: Model registry
- response_parser: LLM response parsing and repair (shared module)
§Design Approach
§BatchInvoker System (for Manager)
ManagerAgent batch processing is implemented in the following layers:
Core Layer
├── ManagerAgent trait (prepare / finalize)
├── DefaultBatchManagerAgent ← Core layer default implementation
├── ContextStore / ContextView ← Normalized context
└── ContextResolver ← Scope resolution
LLM Layer
├── PromptBuilder ← ResolvedContext → prompt
└── BatchInvoker implementations ← LLM batch calls
    └── LlmBatchInvoker
The ManagerAgent implementation uses the Core layer’s DefaultBatchManagerAgent; the LLM layer contributes PromptBuilder (prompt generation) and BatchInvoker (LLM calls).
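The layering above can be sketched in miniature. This is a hypothetical, simplified illustration: the type names mirror the module's (ResolvedContext, PromptBuilder, BatchInvoker), but every signature and field here is an assumption for demonstration, not the module's actual API.

```rust
/// Normalized context, as produced by the Core layer's ContextResolver
/// (fields are illustrative only).
struct ResolvedContext {
    worker_id: u32,
    summary: String,
}

/// LLM layer: turns a ResolvedContext into a prompt string.
struct PromptBuilder;

impl PromptBuilder {
    fn build(&self, ctx: &ResolvedContext) -> String {
        format!("[worker {}] {}", ctx.worker_id, ctx.summary)
    }
}

/// LLM layer: batches many prompts into a single backend call.
trait BatchInvoker {
    fn invoke_batch(&self, prompts: &[String]) -> Vec<String>;
}

/// A stub standing in for a real backend (Ollama / llama-server).
struct EchoInvoker;

impl BatchInvoker for EchoInvoker {
    fn invoke_batch(&self, prompts: &[String]) -> Vec<String> {
        prompts.iter().map(|p| format!("decision for {p}")).collect()
    }
}

fn main() {
    let contexts = vec![
        ResolvedContext { worker_id: 1, summary: "explore left".into() },
        ResolvedContext { worker_id: 2, summary: "explore right".into() },
    ];
    // Core layer resolves contexts; LLM layer builds prompts and invokes in batch.
    let builder = PromptBuilder;
    let prompts: Vec<String> = contexts.iter().map(|c| builder.build(c)).collect();
    let decisions = EchoInvoker.invoke_batch(&prompts);
    assert_eq!(decisions.len(), contexts.len());
}
```

The point of the split is that the Core layer owns context normalization and batching policy, while the LLM layer only maps contexts to prompts and prompts to responses, so backends can be swapped behind the BatchInvoker trait.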
Re-exports§
pub use llama_cpp_server::ChatTemplate;
pub use llama_cpp_server::LlamaCppServerConfig;
pub use llama_cpp_server::LlamaCppServerDecider;
pub use batch_processor::BatchProcessError;
pub use batch_processor::BatchProcessResult;
pub use batch_processor::BatchProcessor;
pub use batch_processor::LlmBatchProcessor;
pub use batch_processor::LlmBatchProcessorConfig;
pub use debug_channel::LlmDebugChannel;
pub use debug_channel::LlmDebugEvent;
pub use debug_channel::StderrLlmSubscriber;
pub use decider::LlmDecider;
pub use decider::LlmDeciderConfig;
pub use decider::LlmError;
pub use invoker::create_llm_invoker;
pub use invoker::LlmBatchInvoker;
pub use ollama::OllamaConfig;
pub use ollama::OllamaDecider;
pub use prompt_builder::PromptBuilder;
pub use registry::ModelInfo;
pub use registry::ModelRegistry;
pub use registry::RegistryError;
pub use strategy_advisor::parse_selection_kind_fuzzy;
pub use strategy_advisor::LlmStrategyAdvisor;
pub use strategy_advisor::StrategyPromptBuilder;
pub use strategy_advisor::StrategyResponseParser;
Modules§
- batch_processor - Batch Processor - batch LLM processing for ManagerAgent
- debug_channel - LLM Debug Channel - debug output for LLM calls
- decider - LLM Decider - LLM abstraction for Action selection
- invoker - BatchInvoker implementations - implements the Core BatchInvoker trait with an LLM
- json_prompt - Prompt templates dedicated to JSON output
- llama_cpp_server - llama-server Decider - HTTP API integration
- ollama - Ollama Decider - Ollama HTTP API integration
- prompt_builder - PromptBuilder - generates prompts from ResolvedContext
- registry - Model Registry - dynamic discovery and management of Ollama models
- response_parser - Parsing of LLM responses (shared module)
- strategy_advisor - Strategy Advisor - LLM-driven exploration-strategy advice
Structs§
- DecisionResponse - Decision response from the LLM
- LoraConfig - LoRA adapter configuration
- StrategyAdvice - Result of strategy advice
- StrategyContext - Context required for strategy decisions
- WorkerDecisionRequest - Decision request for an individual Worker
Enums§
- SelectionKind - Kind of Selection algorithm
- StrategyAdviceError - Strategy advice error
Traits§
- StrategyAdvisor - Strategy advisor trait