# bob-cli (package: cli-agent)
CLI agent for the [Bob Agent Framework](https://github.com/longcipher/bob).
## Overview
`bob-cli` is a command-line interface for Bob, a general-purpose AI agent framework. It provides:
- **Interactive REPL**: Chat through `AgentLoop` with built-in slash commands and session reset shortcuts
- **Multi-Model Support**: Works with OpenAI, Anthropic, Google, and other LLM providers
- **Tool Integration**: Connect to MCP servers for file operations, shell commands, and more
- **Skill System**: Load and apply predefined skills for specialized tasks
## Installation
### From Source
```bash
# Clone the repository
git clone https://github.com/longcipher/bob
cd bob
# Build and run
cargo run --bin bob-cli -- --config agent.toml
```
### Binary Release
```bash
# Download from GitHub releases
# (Coming soon)
```
## Configuration
Create an `agent.toml` file in the project root:
```toml
[runtime]
default_model = "openai:gpt-4o-mini"
max_steps = 12
turn_timeout_ms = 90000
dispatch_mode = "native_preferred"
# Optional: Configure MCP servers
[mcp]
[[mcp.servers]]
id = "filesystem"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
tool_timeout_ms = 15000
# Optional: Configure skills
[skills]
max_selected = 3
token_budget_ratio = 0.1
[[skills.sources]]
type = "directory"
path = "./skills"
recursive = false
# Optional: Persist session state/checkpoints/artifacts across restarts
[store]
path = "./.bob/sessions"
# Optional: Configure policies
[policy]
deny_tools = ["local/shell_exec"]
allow_tools = ["local/read_file", "local/write_file"]
default_deny = false
# Optional: Configure approval guardrails
[approval]
mode = "allow_all"
deny_tools = ["local/shell_exec"]
# Optional: Configure per-session token budget
[cost]
session_token_budget = 10000
```
When `[store]` is set, budget accounting is also persisted, so restarting the
CLI does not reset per-session token limits.
`dispatch_mode` controls how tool calls are dispatched and supports `native_preferred` and `prompt_guided`.
### Environment Variables
Set your LLM provider API key:
```bash
# For OpenAI
export OPENAI_API_KEY="sk-..."
# For Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
# For Google
export GEMINI_API_KEY="..."
```
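A quick pre-flight check catches a missing key before the first model request fails inside the REPL. This is a minimal sketch: it checks `OPENAI_API_KEY` to match the `default_model` example above; substitute the variable for whichever provider you configured.

```shell
# Export the key for the provider named in default_model.
export OPENAI_API_KEY="sk-placeholder"  # replace with your real key

# Fail fast if the key is empty instead of letting the first
# model request error out inside the REPL.
if [ -n "$OPENAI_API_KEY" ]; then
  echo "OPENAI_API_KEY is set"
else
  echo "OPENAI_API_KEY is missing" >&2
  exit 1
fi
```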
## Usage
### Starting the Agent
```bash
cargo run --bin bob-cli
```
The agent will start an interactive REPL:
```text
Bob agent ready (model: openai:gpt-4o-mini)
Type a message and press Enter. /help for commands.
> Summarize the latest meeting notes in this folder
```
### REPL Commands
- Type your message and press **Enter** to send
- `/quit` or `/exit` to exit the agent
- `/help` or `/h` to show available commands
- `/usage` to inspect cumulative session token usage
- `/tools` to list all available tools
- `/tool.describe <tool-name>` to print a tool schema
- `/tape.info`, `/tape.search <query>`, `/anchors`, and `/handoff [name]` for session tape inspection
- `/new` or `/reset` to start a fresh session context
### Example Session
```text
> Read docs/design.md and explain it in simple terms
I'll read docs/design.md for you...
[uses filesystem tool to read the document]
The design document describes...
> Translate this explanation into Chinese
[agent produces translated output]
Here is the translated explanation...
```
## Features
### Multi-Model Support
Works with any LLM provider supported by `liter-llm`:
- OpenAI: `openai:gpt-4o`, `openai:gpt-4o-mini`
- Anthropic: `anthropic:claude-3-5-sonnet-20241022`
- Google: `google:gemini-2.0-flash-exp`
- Groq: `groq:llama-3.3-70b-versatile`
- And more...
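Switching providers is just a matter of editing `default_model` in `agent.toml`; for example, using one of the Anthropic model IDs listed above:

```toml
[runtime]
# Any provider:model pair supported by liter-llm works here.
default_model = "anthropic:claude-3-5-sonnet-20241022"
```

Remember to export the matching provider API key before restarting the agent.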
### Tool Integration
Connect to MCP servers for extended capabilities:
- **Filesystem**: Read, write, and manage files
- **Shell**: Execute shell commands
- **Database**: Query databases
- **Custom**: Build your own MCP servers
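Additional servers are added as extra `[[mcp.servers]]` entries in `agent.toml`. The entry below is illustrative; the `memory` server package name is an assumption, so substitute any MCP server binary you actually have installed:

```toml
# Hypothetical second server entry alongside the filesystem server.
[[mcp.servers]]
id = "memory"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-memory"]
tool_timeout_ms = 15000
```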
### Skill System
Apply predefined skills for specialized tasks:
- Knowledge extraction
- Summarization and transformation
- Workflow automation
- Domain-specific orchestration
### Session Persistence
Sessions are held in memory by default (suitable for development). Configure `[store]` in `agent.toml` to persist session state, checkpoints, and artifacts across restarts.
## Development
```bash
# Run in development mode
cargo run --bin bob-cli
# Build release binary
cargo build --bin bob-cli --release
# Run tests
cargo test -p cli-agent
```
## Architecture
The CLI agent is the composition root that:
1. Loads configuration from `agent.toml`
2. Wires up adapters (LLM, tools, storage, events)
3. Creates the runtime
4. Runs the REPL loop
See [main.rs](src/main.rs) for the implementation.
## Related Crates
- **[bob-core](https://crates.io/crates/bob-core)** - Domain types and ports
- **[bob-runtime](https://crates.io/crates/bob-runtime)** - Runtime orchestration
- **[bob-adapters](https://crates.io/crates/bob-adapters)** - Adapter implementations
## License
Licensed under the Apache License, Version 2.0. See [LICENSE](../../LICENSE.md) for details.