Wasmind CLI
A command-line interface and terminal application for the Wasmind library
This CLI provides an interactive terminal user interface for running and managing Wasmind actor configurations. It serves as a general-purpose interface to wasmind's actor-based AI coordination capabilities, allowing you to run any actor setup through an intuitive TUI.
Note: This is a reference implementation showing how to build user interfaces with wasmind. You can run any wasmind actor configuration - we've included some sample configurations to get you started.
What You Can Build
The wasmind_cli provides a flexible TUI for running any wasmind actor configuration. You can:
🖥️ Interactive Terminal Interface
- Chat view - communicate directly with AI agents in your configuration
- Dashboard - system overview and controls for your actor setup
- Graph view - visualize agent relationships and message flow in real-time
- Configuration management - easily switch between different actor setups
⚙️ Any Actor Configuration
- Multi-agent workflows - coordinate any number of specialized AI agents
- Custom tool integration - run actors with file interaction, bash execution, or custom capabilities
- Hierarchical systems - build manager/worker patterns or delegation networks
- Custom actors - create your own specialized actors for domain-specific tasks
- Model-agnostic - works with any LLM provider through LiteLLM proxy
Included Example Configurations
We've included sample configurations to help you get started:
💬 Basic Assistant (example_configs/assistant.toml)
A simple AI assistant configuration - perfect for getting started with Wasmind:
- Single assistant actor - minimal setup with just one AI assistant
- Ready to use - pre-configured with sensible defaults
- Easy to extend - add more actors and tools as needed
- Great starting point - understand the basics before moving to complex multi-agent systems
🔍 Code Edit Approval Workflow (example_configs/code_with_experts.toml)
A collaborative code editing system where any code edit request triggers validation by configurable expert agents:
- Type checking expert - validates Python typing standards
- Best practices expert - validates PEP 8 and Python idioms
- Architecture expert - validates code organization and structure
- Multi-agent approval - code edits are applied only if all expert agents approve the changes
🏗️ Delegation Network (example_configs/delegation_network.toml)
A hierarchical agent coordination system demonstrating:
- Dynamic task delegation - managers spawn and coordinate specialized workers
- Multi-level communication - manager → sub-manager → worker message patterns
- Health monitoring - system-wide agent status and coordination
- Scalable architecture - easily spawn additional agents as needed
Quick Start
Prerequisites
- Rust/Cargo - Required to build and install the CLI
- Docker - Required to run the LiteLLM model proxy for AI model routing
- cargo-component - Required to build WASM actor components (`cargo install cargo-component`)
- wasm32-wasip1 target - Required for building WebAssembly actors (`rustup target add wasm32-wasip1`)
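Both Rust-toolchain prerequisites can be installed with the commands noted above, collected here as a copy-pasteable block:

```bash
# Install the cargo subcommand used to build WASM actor components
cargo install cargo-component

# Add the WebAssembly target the actors are built for
rustup target add wasm32-wasip1
```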
Installation
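The CLI is built and installed with Cargo (see Prerequisites). A minimal sketch of installing it straight from the repository is shown below; the package name passed to `cargo install` is an assumption, so adjust it if the crate is laid out differently:

```bash
# Install wasmind_cli from the Wasmind git repository
# (package name `wasmind_cli` is an assumption; if a release is published on
#  crates.io, `cargo install wasmind_cli` would work as well)
cargo install --git https://github.com/silasmarvin/wasmind wasmind_cli
```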
Run Example Configurations
Each of the sample configurations - the basic assistant (great for getting started!), the code edit approval workflow, and the delegation network - can be passed straight to the CLI as shown below, and you can point it at your own configuration file the same way.
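A hedged sketch of the invocations, assuming the configuration path is passed via a `--config` flag (it may also be accepted positionally; run the CLI with `--help` to confirm the actual interface):

```bash
# Basic assistant (great for getting started!)
wasmind_cli --config example_configs/assistant.toml

# Code edit approval workflow
wasmind_cli --config example_configs/code_with_experts.toml

# Delegation network
wasmind_cli --config example_configs/delegation_network.toml

# Or use your own configuration
wasmind_cli --config path/to/your_config.toml
```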
Create Your Own Actor Configurations
- Study `example_configs/` - Ready-to-run sample configurations
- Explore the actors directory - Available actor implementations you can use
- Build custom actors - see Creating Actors Guide
- See the Configuration Guide for creating custom setups
Debugging Configurations
Use the `check` command to validate and debug configuration files before running them.
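For example (the `check` subcommand name comes from the description above; passing the config path as a positional argument is an assumption):

```bash
# Validate a configuration file without starting the TUI
wasmind_cli check example_configs/delegation_network.toml
```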
This will:
- Validate TOML syntax and structure
- Verify actor paths and dependencies
- Check for missing or circular dependencies
- Display resolved configuration with all defaults applied
- Show any configuration errors or warnings
Debug Message Flow:
To see all messages being sent through the actor system, run with debug logging enabled by setting `WASMIND_LOG=debug`.
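For example (using the same assumed `--config` flag as in the quick-start commands above):

```bash
# Debug-level logging prints every message routed between actors
WASMIND_LOG=debug wasmind_cli --config example_configs/assistant.toml
```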
This is especially helpful when:
- Debugging actor communication issues
- Understanding message routing between agents
- Troubleshooting why actors aren't responding as expected
Commands & Options
Interactive Mode (default): running the CLI with a configuration file launches the interactive TUI described above.
Utility Commands (run the CLI with `--help` for the exact subcommand names and flags):
- Show the default config location, cache paths, and system information
- Clean the actor cache (removes compiled WASM components); actors are compiled and cached on first use for faster subsequent loads - see [wasmind_actor_loader](https://github.com/silasmarvin/wasmind/tree/main/crates/wasmind_actor_loader/) for details on caching
- Validate and debug configuration files (the `check` command covered above)
Environment Variables:
- `WASMIND_LOG` - Set the log level (`error`, `warn`, `info`, `debug`, `trace`), e.g. `WASMIND_LOG=debug` or `WASMIND_LOG=info`
Default Key Bindings (in TUI):
- `Ctrl+a` - Assist (send message to agents)
- `Ctrl+t` - Toggle expanded tool displays
- `esc` - Cancel the Agent's current action and force it to wait for your input
- `Ctrl+c` - Exit
- `Shift+Up/Down` - Navigate graph view
NOTE: The cancel feature is a work in progress; if the Agent is in the middle of making a request, that request will finish before the cancellation takes effect.
Configuration
The CLI uses TOML configuration files to define your actor setup. Configurations specify:
- Which actors to load and their settings
- TUI key bindings and interface options
- Actor-specific overrides
- LLM provider configuration via LiteLLM
The example configurations show different patterns you can use, but you're free to create any actor configuration that suits your needs. See the Configuration Guide for detailed reference.
Links
- 📚 wasmind Book - Complete user guides and concepts
- ⚙️ Configuration Guide - Detailed configuration reference
- 🎭 Actor Examples - Available actors and their capabilities