pub struct AppState {
pub app_data_dir: PathBuf,
pub config: Arc<RwLock<Config>>,
pub provider: Arc<RwLock<Arc<dyn LLMProvider>>>,
pub sessions: Arc<RwLock<HashMap<String, Session>>>,
pub storage: JsonlStorage,
pub llm: Arc<dyn LLMProvider>,
pub tools: Arc<dyn ToolExecutor>,
pub cancel_tokens: Arc<RwLock<HashMap<String, CancellationToken>>>,
pub skill_manager: Arc<SkillManager>,
pub mcp_manager: Arc<McpServerManager>,
pub metrics_service: Arc<MetricsService>,
pub model_name: String,
pub agent_runners: Arc<RwLock<HashMap<String, AgentRunner>>>,
pub process_registry: Arc<ProcessRegistry>,
pub claude_cli_path: Option<String>,
pub claude_runners: Arc<RwLock<HashMap<String, AgentRunner>>>,
pub claude_session_aliases: Arc<RwLock<HashMap<String, String>>>,
pub metrics_bus: Option<MetricsBus>,
}
Unified application state consolidating web_service and agent/server state
This struct holds all the state needed to run the Bamboo server, including configuration, LLM providers, sessions, storage, tools, skills, and metrics.
§Design Goals
- Direct access: Components are directly accessible without HTTP proxies
- Hot reload: Configuration and providers can be reloaded at runtime
- Thread safety: Uses Arc and RwLock for concurrent access
- Persistence: Integrates with JsonlStorage for session persistence
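The thread-safety and hot-reload goals can be sketched with a minimal std-only example. Note the real AppState uses tokio's async RwLock, and `SharedConfig`, `hot_reload`, and `reload_from_other_thread` here are illustrative stand-ins, not Bamboo APIs:

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Illustrative stand-in for the real Config type.
#[derive(Clone, Debug, PartialEq)]
pub struct SharedConfig {
    pub model_name: String,
}

// Replace the value under the write lock, as reload_config() would.
pub fn hot_reload(config: &Arc<RwLock<SharedConfig>>, new_name: &str) {
    config.write().unwrap().model_name = new_name.to_string();
}

// An update made on another thread becomes visible to every clone of
// the Arc as soon as the write lock is released.
pub fn reload_from_other_thread(config: &Arc<RwLock<SharedConfig>>, new_name: &str) {
    let writer = Arc::clone(config);
    let name = new_name.to_string();
    thread::spawn(move || hot_reload(&writer, &name))
        .join()
        .unwrap();
}
```

Every `Arc<RwLock<...>>` field in the struct follows this pattern: clone the Arc to share, take the lock briefly to read or update.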
§Component Overview
| Component | Purpose | Thread-Safe |
|---|---|---|
| config | Application configuration | Yes (RwLock) |
| provider | Hot-reloadable LLM provider | Yes (RwLock) |
| sessions | Active conversation sessions | Yes (RwLock) |
| storage | Persistent session storage | Yes (Arc) |
| tools | Tool execution (builtin + MCP) | Yes (Arc) |
| skill_manager | Skill registry and execution | Yes (Arc) |
| mcp_manager | MCP server lifecycle | Yes (Arc) |
| metrics_service | Usage metrics collection | Yes (Arc) |
| agent_runners | Active agent executions | Yes (RwLock) |
§Fields

app_data_dir: PathBuf
Application data directory (typically ~/.bamboo)

config: Arc<RwLock<Config>>
Hot-reloadable application configuration
Can be reloaded from disk at runtime using reload_config().

provider: Arc<RwLock<Arc<dyn LLMProvider>>>
Hot-reloadable LLM provider with direct access
This eliminates the proxy pattern where we created an AgentAppState that called back to web_service via HTTP. Now we have direct provider access.
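The nested `Arc<RwLock<Arc<dyn LLMProvider>>>` shape lets readers clone the inner Arc out and release the lock before making a (possibly slow) provider call, so a hot-swap never blocks behind an in-flight request. A std-only sketch with a toy trait (`Provider`, `ProviderA`, `ProviderB`, and the two helper functions are illustrative, not Bamboo types):

```rust
use std::sync::{Arc, RwLock};

// Toy stand-in for the LLMProvider trait.
pub trait Provider: Send + Sync {
    fn name(&self) -> &'static str;
}

pub struct ProviderA;
impl Provider for ProviderA {
    fn name(&self) -> &'static str { "provider-a" }
}
pub struct ProviderB;
impl Provider for ProviderB {
    fn name(&self) -> &'static str { "provider-b" }
}

// Clone the inner Arc out and release the lock immediately, so the
// long-running provider call never holds the RwLock.
pub fn current_provider(slot: &Arc<RwLock<Arc<dyn Provider>>>) -> Arc<dyn Provider> {
    slot.read().unwrap().clone()
}

// Hot-swap: replace the inner Arc. In-flight requests keep using the
// old provider until their own Arc clones are dropped.
pub fn swap_provider(slot: &Arc<RwLock<Arc<dyn Provider>>>, new: Arc<dyn Provider>) {
    *slot.write().unwrap() = new;
}
```

This is why requests that started before a reload finish against the old provider while new requests immediately see the new one.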

sessions: Arc<RwLock<HashMap<String, Session>>>
Active conversation sessions (in-memory cache)
Maps session IDs to Session objects. Persisted to storage
via the storage field.

storage: JsonlStorage
Persistent storage backend for sessions
Uses JSONL format for append-only event logging.
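The append-only JSONL discipline can be sketched with std file APIs. `append_event`, `read_events`, and the hand-rolled JSON string are illustrative only; the real JsonlStorage presumably serializes full event structs with a proper serializer:

```rust
use std::fs::OpenOptions;
use std::io::{BufRead, BufReader, Write};

// Append one event per line. The file is only ever appended to, so a
// crash can at worst truncate the final line, never corrupt history.
pub fn append_event(path: &str, session_id: &str, kind: &str) -> std::io::Result<()> {
    let mut file = OpenOptions::new().create(true).append(true).open(path)?;
    // Hand-rolled JSON for the sketch; real code would use a serializer.
    writeln!(file, "{{\"session_id\":\"{}\",\"kind\":\"{}\"}}", session_id, kind)
}

// Replaying a session is a line-by-line read of the same file.
pub fn read_events(path: &str) -> std::io::Result<Vec<String>> {
    let file = std::fs::File::open(path)?;
    BufReader::new(file).lines().collect()
}
```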

llm: Arc<dyn LLMProvider>
Direct LLM provider reference
This is equivalent to provider.read().await.clone(), but stored
separately for convenience and to avoid lock overhead.

tools: Arc<dyn ToolExecutor>
Composite tool executor (builtin + MCP tools)
Combines built-in tools (file ops, code execution) with MCP-provided tools from configured servers.

cancel_tokens: Arc<RwLock<HashMap<String, CancellationToken>>>
Cancellation tokens for in-flight requests
Maps request/session IDs to their cancellation tokens, allowing graceful shutdown of long-running operations.
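The lookup-and-cancel flow can be modeled with a std-only stand-in for tokio_util's CancellationToken. `CancelFlag` and `cancel_request` are illustrative names, not Bamboo APIs:

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// std-only stand-in for tokio_util::sync::CancellationToken: cloning
// shares the same flag, so the worker holding a clone sees the cancel.
#[derive(Clone, Default)]
pub struct CancelFlag(Arc<AtomicBool>);

impl CancelFlag {
    pub fn cancel(&self) {
        self.0.store(true, Ordering::SeqCst);
    }
    pub fn is_cancelled(&self) -> bool {
        self.0.load(Ordering::SeqCst)
    }
}

// Mirror of the cancel_tokens map: look up the token for a request id,
// trip it, and drop the entry. Returns false for unknown ids.
pub fn cancel_request(tokens: &mut HashMap<String, CancelFlag>, id: &str) -> bool {
    match tokens.remove(id) {
        Some(flag) => {
            flag.cancel();
            true
        }
        None => false,
    }
}
```

A long-running operation would periodically check `is_cancelled()` (or, with the real token, await `cancelled()`) and bail out cleanly.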

skill_manager: Arc<SkillManager>
Skill manager for prompt-based skill execution
Manages the skill registry and handles skill lookup, validation, and execution.

mcp_manager: Arc<McpServerManager>
MCP server manager for external tool servers
Handles lifecycle of Model Context Protocol servers, including initialization, tool discovery, and shutdown.

metrics_service: Arc<MetricsService>
Metrics collection and persistence service
Tracks token usage, costs, and performance metrics across all sessions.

model_name: String
Default model name for LLM requests
Read from configuration, used as fallback when not specified in individual requests.

agent_runners: Arc<RwLock<HashMap<String, AgentRunner>>>
Active agent runners indexed by session ID
Each runner manages event broadcasting and cancellation for an active agent execution.

process_registry: Arc<ProcessRegistry>
Registry for tracking external processes (e.g., Claude Code CLI sessions)

claude_cli_path: Option<String>
Discovered Claude Code CLI binary path (if installed)

claude_runners: Arc<RwLock<HashMap<String, AgentRunner>>>
Active Claude Code CLI runners indexed by Claude session ID
These are streamed to clients via SSE under the /v1/agent/... endpoints.

claude_session_aliases: Arc<RwLock<HashMap<String, String>>>
Maps client-provided session ids (aliases) to real Claude UUID session ids.
Claude Code requires session ids to be UUIDs, but some clients/tests use human-readable strings. We accept those as aliases and generate a UUID.
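The alias scheme can be sketched as follows. `resolve_session_id`, the shape-based UUID check, and the counter-based generator are all illustrative; the real code would generate proper v4 UUIDs (e.g. with the uuid crate):

```rust
use std::collections::HashMap;

// Hypothetical UUID generator for the sketch; real code would use
// something like uuid::Uuid::new_v4().
pub fn new_session_uuid(counter: &mut u32) -> String {
    *counter += 1;
    format!("00000000-0000-4000-8000-{:012}", *counter)
}

// Resolve a client-provided id: ids already shaped like a UUID pass
// through; anything else is treated as an alias and mapped (stably)
// to a generated UUID.
pub fn resolve_session_id(
    aliases: &mut HashMap<String, String>,
    counter: &mut u32,
    client_id: &str,
) -> String {
    let looks_like_uuid =
        client_id.len() == 36 && client_id.chars().filter(|c| *c == '-').count() == 4;
    if looks_like_uuid {
        return client_id.to_string();
    }
    aliases
        .entry(client_id.to_string())
        .or_insert_with(|| new_session_uuid(counter))
        .clone()
}
```

The important property is stability: resolving the same alias twice returns the same UUID, so a client can keep using its human-readable id across requests.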

metrics_bus: Option<MetricsBus>
Optional metrics bus for event streaming
When enabled, allows subscribing to metrics events in real-time.
§Implementations

impl AppState

pub async fn new(bamboo_home_dir: PathBuf) -> Self
Create unified app state with direct provider access
This eliminates the proxy pattern where we created an AgentAppState that called back to web_service via HTTP. Now we have direct provider access.
§Arguments
bamboo_home_dir - Bamboo home directory containing all application data. This is the root directory (e.g., ~/.bamboo) that contains:
- config.json: Configuration file
- sessions/: Conversation history
- skills/: Skill definitions
- workflows/: Workflow definitions
- cache/: Cached data
- runtime/: Runtime files
§Returns
A fully initialized AppState with all components ready for use.
§Panics
Panics if storage initialization fails (critical error).
§Example
use bamboo_agent::server::app_state::AppState;
use std::path::PathBuf;
#[tokio::main]
async fn main() {
let state = AppState::new(PathBuf::from("/path/to/.bamboo")).await;
println!("Initialized with model: {}", state.model_name);
}

pub async fn new_with_provider(
    bamboo_home_dir: PathBuf,
    config: Config,
    provider: Arc<dyn LLMProvider>,
) -> Self
Create unified app state with a specific provider
Allows injecting a custom LLM provider instead of creating one from configuration. Useful for testing and custom deployments.
§Arguments
bamboo_home_dir - Bamboo home directory containing all application data
config - Application configuration
provider - Pre-configured LLM provider implementation
§Returns
A fully initialized AppState with the provided provider.
§Initialization Steps
- Initialize JSONL storage in {bamboo_home_dir}/sessions
- Load built-in tools
- Initialize MCP manager and load configured servers
- Create composite tool executor (builtin + MCP)
- Initialize skill manager
- Initialize metrics service with SQLite backend
- Start runner cleanup task (removes completed runners after 5 minutes)
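The final cleanup step can be sketched as a retain pass over the runner map. `Runner` and `sweep_runners` are illustrative names, and the real task presumably runs on a tokio timer rather than being called manually:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Minimal stand-in for a runner that records when it finished;
// None means the agent execution is still in progress.
pub struct Runner {
    pub finished_at: Option<Instant>,
}

// One pass of the cleanup task: drop runners that completed more than
// `ttl` ago (5 minutes in the description above), keep everything
// still running or recently finished.
pub fn sweep_runners(runners: &mut HashMap<String, Runner>, ttl: Duration, now: Instant) {
    runners.retain(|_, r| match r.finished_at {
        Some(done) => now.duration_since(done) < ttl,
        None => true,
    });
}
```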
§Panics
Panics if storage or metrics initialization fails.
pub async fn reload_provider(&self) -> Result<(), LLMError>
Reload the provider based on current configuration
Re-reads the configuration and creates a new LLM provider instance, allowing runtime switching of providers or models.
§Returns
Ok(()) if the provider was successfully reloaded.
§Errors
Returns an error if:
- Configuration cannot be read
- Provider initialization fails (e.g., invalid API key)
§Example
use bamboo_agent::server::app_state::AppState;
use std::path::PathBuf;
#[tokio::main]
async fn main() {
let state = AppState::new(PathBuf::from("/path/to/.bamboo")).await;
// User updated config file...
state.reload_provider().await.expect("Provider reload failed");
}

pub async fn reload_config(&self) -> Config
Reload the configuration from file
Reads the configuration file again and updates the in-memory
config. Note: This does NOT automatically reload the provider;
call reload_provider() afterwards if needed.
§Returns
The newly loaded configuration.
§Example
use bamboo_agent::server::app_state::AppState;
use std::path::PathBuf;
#[tokio::main]
async fn main() {
let state = AppState::new(PathBuf::from("/path/to/.bamboo")).await;
// Reload config from disk
let new_config = state.reload_config().await;
// Optionally reload provider with new config
state.reload_provider().await.ok();
}

pub async fn persist_config(&self) -> Result<()>
Persist the current in-memory config to disk ({app_data_dir}/config.json).
This is the single “exit” for configuration writes in the server runtime.
pub async fn get_provider(&self) -> Arc<dyn LLMProvider>
Get a clone of the current provider
Returns a thread-safe reference to the current LLM provider. This is the preferred way to access the provider for making requests.
§Returns
An Arc reference to the current provider implementation.
§Example
use bamboo_agent::server::app_state::AppState;
use std::path::PathBuf;
#[tokio::main]
async fn main() {
let state = AppState::new(PathBuf::from("/path/to/.bamboo")).await;
let provider = state.get_provider().await;
// Use provider to make LLM requests...
}

pub async fn shutdown(&self)
Shutdown all MCP servers gracefully
Sends shutdown signals to all running MCP server processes and waits for them to terminate cleanly.
This should be called during application shutdown to ensure MCP servers are not left running as orphaned processes.
pub async fn save_event(&self, session_id: &str, event: &AgentEvent)
Save an agent event to persistent storage
Appends the event to the session’s event log in JSONL format.
§Arguments
session_id - Session identifier
event - Event to save
pub async fn save_session(&self, session: &Session)
Save a complete session to persistent storage
Writes the session metadata and all events to the storage backend.
§Arguments
session - Session object to save
pub fn get_all_tool_schemas(&self) -> Vec<ToolSchema>
Get all tool schemas from the composite tool executor
Returns schemas for both built-in tools and MCP-provided tools. These schemas are used to inform the LLM about available tools.
§Returns
Vector of tool schemas in Anthropic’s tool definition format.
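Conceptually, the composite executor's schema list is just the concatenation of builtin and MCP-provided schemas. A sketch with a simplified schema type (`ToolSchemaLite` and `all_tool_schemas` are illustrative; the real ToolSchema follows Anthropic's tool definition format with name, description, and an input schema):

```rust
// Illustrative, cut-down schema shape for the sketch.
#[derive(Clone, Debug, PartialEq)]
pub struct ToolSchemaLite {
    pub name: String,
    pub description: String,
}

// The composite list the LLM sees: builtin tools first, then whatever
// the configured MCP servers advertised at discovery time.
pub fn all_tool_schemas(builtin: &[ToolSchemaLite], mcp: &[ToolSchemaLite]) -> Vec<ToolSchemaLite> {
    builtin.iter().cloned().chain(mcp.iter().cloned()).collect()
}
```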