VT Code
Install via `cargo install vtcode`, `brew install vinhnx/tap/vtcode` (macOS), or `npm install -g vtcode`.
VT Code is a Rust-based terminal coding agent with semantic code intelligence via Tree-sitter (parsers for Rust, Python, JavaScript/TypeScript, Go, Java) and ast-grep (structural pattern matching and refactoring).
It supports multiple LLM providers: OpenAI, Anthropic, xAI, DeepSeek, Gemini, OpenRouter, Z.AI, and Moonshot AI, all with automatic failover, prompt caching, and token-efficient context management. Configuration occurs entirely through `vtcode.toml`, sourcing constants from `vtcode-core/src/config/constants.rs` and model IDs from `docs/models.json` to ensure reproducibility and avoid hardcoding.
Table of Contents
- Quick Start
- Key Features
- Installation
- Configuration
- Zed IDE Integration
- Command Line Interface
- System Architecture
- Development
- References
Quick Start
```shell
# Install VT Code
cargo install vtcode

# Set up your API key (example: OpenAI)
export OPENAI_API_KEY="sk-..."

# Launch VT Code
vtcode

# Or run a single query (subcommand name assumed; see `vtcode --help`)
vtcode ask "Explain this codebase"
```
Key Features
- Multi-Provider AI: Support for OpenAI, Anthropic, Gemini, xAI, DeepSeek, Z.AI, Moonshot AI, and OpenRouter
- Smart Tools: Built-in code analysis, file operations, terminal commands, and refactoring tools
- Code Intelligence: Tree-sitter parsers for Rust, Python, JavaScript/TypeScript, Go, Java
- High Performance: Rust-based with async/await, multi-threading, and efficient context management
- Editor Integration: Native support for Zed IDE via Agent Client Protocol (ACP)
- Security First: Sandboxed execution, path validation, and configurable safety policies
- Context Engineering: Advanced token management, conversation summarization, and phase-aware curation
Technical Motivation
VT Code addresses limitations in existing coding agents by prioritizing Rust's type safety, zero-cost abstractions, and async ecosystem for reliable, high-performance execution. Motivated by agentic AI research (e.g., Anthropic's context engineering principles), it integrates Tree-sitter for precise parsing and MCP for extensible tooling. This enables long-running sessions with maintained context integrity, error resilience, and minimal token overhead. The project builds on foundational work like perg, incorporates lessons from Anthropic's "Context Engineering" and "Building Effective Agents" guides, and draws inspiration from OpenAI's codex-cli.
System Architecture
The architecture divides into `vtcode-core` (reusable library) and `src/` (CLI executable), leveraging Tokio for a multi-threaded async runtime (`#[tokio::main(flavor = "multi_thread")]` for CPU-intensive tasks), anyhow for contextual error propagation, and clap for derive-based CLI parsing. Key design tenets include atomic operations, metadata-driven tool calls (to optimize context tokens), and phase-aware context curation.
Core Components (`vtcode-core/`)
- LLM Abstractions (`llm/`): provider traits expose a uniform async interface. Features: streaming responses and model-specific optimizations (e.g., Anthropic's `cache_control: { ttl: "5m" }` for a 5-minute cache TTL; OpenAI's `prompt_tokens_details.cached_tokens` reporting roughly 40% savings). Tokenization via `tiktoken-rs` ensures accurate budgeting across models.
- Modular Tools (`tools/`): trait-based extensibility. Built-ins include `read_file` (chunked at 2000 lines, metadata-first), `ast_grep_search` (search/transform/lint/refactor operations with `preview_only=true`), and `run_terminal_cmd` (terminal/pty/streaming modes; 30s default timeout). Git-aware file listing: `list_files` uses `walkdir` with the `ignore` crate for .gitignore-aware traversal and `nucleo-matcher` for fuzzy scoring.
- Configuration Engine (`config/`): deserializes `vtcode.toml` into validated structs (e.g., enforcing per-provider token limits and auto-classifying conversation phases such as exploration or implementation). Sections cover agents, tools (allow/deny), MCP (provider URLs), caching (`quality_threshold = 0.7`), and safety (`workspace_paths`, `max_file_size = 1MB`).
- Context Engineering System: implements iterative, per-turn curation based on conversation-phase detection (e.g., exploration prioritizes search tools). Token budgeting: real-time tracking with `tiktoken-rs` (~10μs/message), thresholds (0.75 warn / 0.85 compact), and automatic LLM-driven summarization that preserves the decision ledger and errors (targets a 30% compression ratio, saving ~29% of tokens per turn). Decision ledger: a structured audit trail (`Vec<DecisionEntry>` with status pending/in_progress/completed and confidence 0-1). Error recovery: pattern matching (e.g., on parse failures) with fallback strategies and context preservation.
- Code Intelligence: Tree-sitter integration for AST traversal (e.g., symbol resolution in `ast_grep_search`); ast-grep for rule-based transforms, for example:

  ```yaml
  # Example pattern in a tool call
  pattern: "fn $NAME($PARAMS) { $BODY }"
  replacement: "async fn $NAME($PARAMS) -> Result<()> { $BODY.await }"
  ```

  Preview mode avoids destructive applies.
- MCP Integration: the client uses the official Rust SDK for protocol-compliant calls; a client is constructed once and tool calls are awaited asynchronously. Tools are discovered dynamically (e.g., `mcp_resolve-library-id` for Context7 IDs, `mcp_sequentialthinking` for chain-of-thought reasoning with branch/revision support, `mcp_get_current_time` for timezone-aware operations). Connection pooling and failover support multi-provider setups.
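The trait-based tool system described above can be sketched as follows. This is an illustrative pattern only: the trait, struct, and method names here are hypothetical, and VT Code's real `tools/` traits are async and considerably richer.

```rust
use std::collections::HashMap;

// Hypothetical tool trait illustrating the registration/dispatch pattern.
trait Tool {
    fn name(&self) -> &'static str;
    fn run(&self, args: &str) -> Result<String, String>;
}

// A stand-in for a built-in like `read_file` (the real tool chunks files
// at 2000 lines and returns metadata first).
struct ReadFile;

impl Tool for ReadFile {
    fn name(&self) -> &'static str {
        "read_file"
    }
    fn run(&self, args: &str) -> Result<String, String> {
        Ok(format!("read: {args}"))
    }
}

// Registry that dispatches tool calls by name.
struct Registry {
    tools: HashMap<&'static str, Box<dyn Tool>>,
}

impl Registry {
    fn new() -> Self {
        Self { tools: HashMap::new() }
    }
    fn register(&mut self, tool: Box<dyn Tool>) {
        let name = tool.name();
        self.tools.insert(name, tool);
    }
    fn call(&self, name: &str, args: &str) -> Result<String, String> {
        self.tools
            .get(name)
            .ok_or_else(|| format!("unknown tool: {name}"))?
            .run(args)
    }
}
```

New tools plug in by implementing the trait and registering themselves; the dispatcher stays unchanged.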
CLI Execution (`src/`)
- User Interface: Ratatui for a reactive TUI (mouse-enabled, ANSI escape sequences for colors, e.g., `\x1b[34m` for blue tool banners). Real-time PTY via the `vte` crate for command streaming; slash commands are parsed with fuzzy matching.
- Runtime: the Tokio executor handles concurrent tool calls; human-in-the-loop confirmation prompts guard high-risk operations (e.g., `rm -rf` denials).
- Observability: structured logs to file and console; metrics (e.g., cache hit rate, token usage) exposed via debug flags.
Performance notes: Multi-threaded Tokio reduces latency for I/O-bound tasks (~20% faster than single-thread); context compression yields 50-80% token savings in long sessions. See docs/ARCHITECTURE.md for dependency graph and profiling data.
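The 0.75/0.85 budgeting thresholds mentioned above amount to a simple policy check per turn; a minimal sketch (the enum and function names are hypothetical, not VT Code's API):

```rust
// Warn at 75% of the token budget, compact (summarize) at 85%.
#[derive(Debug, PartialEq)]
enum BudgetAction {
    Proceed,
    Warn,
    Compact,
}

fn budget_action(used_tokens: usize, max_tokens: usize) -> BudgetAction {
    let ratio = used_tokens as f64 / max_tokens as f64;
    if ratio >= 0.85 {
        BudgetAction::Compact
    } else if ratio >= 0.75 {
        BudgetAction::Warn
    } else {
        BudgetAction::Proceed
    }
}
```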
Key Capabilities
- LLM Orchestration: failover logic (e.g., Gemini primary, OpenAI fallback); reasoning control (low/medium/high effort via provider params); caching with quality gating (cache only above 70% confidence, TTL = 30 days). Supports recent models including GPT-5, Claude Sonnet 4.5, Grok 4, GLM 4.6, Kimi K2, Qwen3, and DeepSeek V3.2.
- Code Analysis & Editing: semantic search (ast-grep similarity mode, threshold 0.7); targeted edits (exact string matching in `edit_file`, preserving whitespace); multi-file patches via `apply_patch`.
- Context & Session Management: phase-adaptive tool selection (e.g., the validation phase favors `run_terminal_cmd` with `cargo test`); ledger injection for coherence (max 12 entries); summarization triggers at 20 turns or 85% of the budget.
- Extensibility: custom tools via trait implementations; MCP for domain-specific extensions (e.g., library-docs resolution: `resolve-library-id` → `get-library-docs` with `max_tokens=5000`).
- Security Posture: path validation (no escapes outside `WORKSPACE_DIR`); sandboxed network access (HTTPS-only curl, no localhost); command allowlists (e.g., deny `rm`, permit `cargo`); secrets via environment variables (no file storage).
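The "Gemini primary, OpenAI fallback" orchestration described above boils down to trying providers in priority order. A minimal synchronous sketch (the real orchestration is async, and the function name here is hypothetical):

```rust
// Try each provider in order; return the first success, or the last error.
fn complete_with_failover<F>(providers: &[&str], mut call: F) -> Result<String, String>
where
    F: FnMut(&str) -> Result<String, String>,
{
    let mut last_err = String::from("no providers configured");
    for provider in providers {
        match call(provider) {
            Ok(response) => return Ok(response),
            Err(e) => last_err = format!("{provider}: {e}"),
        }
    }
    Err(last_err)
}
```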
Installation
Download Binaries
Pre-built binaries are available for:
- macOS: aarch64/x86_64-apple-darwin
- Linux: x86_64/aarch64-unknown-linux-gnu
- Windows: x86_64-pc-windows-msvc
Download from GitHub Releases
Package Managers
```shell
# Cargo (recommended)
cargo install vtcode

# Homebrew (macOS)
brew install vinhnx/tap/vtcode

# NPM
npm install -g vtcode
```
Configuration
Environment Setup
```shell
# Set your API key (choose your provider)

# OpenAI
export OPENAI_API_KEY="sk-..."

# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."

# Google Gemini
export GEMINI_API_KEY="AIza..."

# xAI (variable name assumed; check the provider docs)
export XAI_API_KEY="..."

# DeepSeek (variable name assumed)
export DEEPSEEK_API_KEY="..."

# Z.AI (variable name assumed)
export ZAI_API_KEY="..."

# Moonshot AI (variable name assumed)
export MOONSHOT_API_KEY="..."

# OpenRouter
export OPENROUTER_API_KEY="..."

# Optional: place the same variables in a .env file in your project root
```
Configuration File
Create `vtcode.toml` in your project root:
```toml
# Key names shown here may differ slightly between releases; see the
# Configuration docs for the exact schema.
[agent]
provider = "openai"            # Choose your provider
default_model = "gpt-5"        # Latest model
api_key_env = "OPENAI_API_KEY" # Environment variable holding the key

[tools]
default_policy = "prompt"      # Safety: "allow", "prompt", or "deny"

[tools.policies]
read_file = "allow"            # Always allow file reading
edit_file = "prompt"           # Prompt before modifications
run_terminal_cmd = "prompt"    # Prompt before commands
```
Usage Examples
```shell
# Launch interactive TUI
vtcode

# Single query with a specific provider/model (flags assumed; see `vtcode --help`)
vtcode --provider openai --model gpt-5 ask "Summarize this project"

# Debug mode
vtcode --debug
```
Agent Client Protocol
VT Code is a fully functional ACP agent: it works seamlessly with Zed through the Agent Client Protocol (ACP), a standardized protocol for communication between code editors and AI coding agents. This integration provides:
- Native Editor Integration: Access VT Code's AI assistant without leaving your editor
- File Context Awareness: AI can read and analyze files directly from your workspace using ACP's file system capabilities
- Tool Integration: Leverage VT Code's extensive tool ecosystem (code analysis, terminal commands, etc.) via ACP tool calls
- Multi-Provider Support: Use any supported LLM provider (OpenAI, Anthropic, Gemini, etc.) within Zed
- Standardized Communication: JSON-RPC over stdio ensures reliable, cross-platform compatibility
The ACP protocol decouples agents from editors, similar to how the Language Server Protocol (LSP) standardized language server integration, allowing both sides to innovate independently while giving developers the freedom to choose the best tools for their workflow.
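Concretely, "JSON-RPC over stdio" means the editor and the agent exchange JSON-RPC 2.0 messages over the agent subprocess's stdin/stdout. A prompt request might look roughly like this (the method and field names below illustrate the shape of ACP traffic, not an exact transcript):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "session/prompt",
  "params": {
    "sessionId": "abc123",
    "prompt": [{ "type": "text", "text": "Explain src/main.rs" }]
  }
}
```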
Prerequisites
- Zed Editor: version `v0.201` or newer with the Agent Client Protocol enabled
- VT Code Binary: installed and accessible via `PATH` or an absolute path
- Configuration: a properly configured `vtcode.toml` with provider credentials
ACP Protocol Compliance
VT Code implements the Agent Client Protocol specification, providing:
- JSON-RPC Communication: Standardized message format over stdio transport
- Session Management: Proper initialization and session setup as per ACP spec
- Tool Call Support: Full implementation of ACP tool calling mechanisms
- File System Integration: ACP-compliant file reading and workspace access
- Content Formatting: Markdown-based content rendering for rich formatting
- Extensibility: Support for custom ACP extensions and slash commands
The protocol assumes the user is primarily in their editor and wants to use agents for specific tasks, with agents running as sub-processes of the code editor.
Quick Setup Overview
- Install VT Code (if not already installed)
- Configure ACP in your `vtcode.toml`
- Register the agent in Zed's settings
- Launch & verify the integration
Step 1: Install VT Code
Choose your preferred installation method:
```shell
# Via Cargo (recommended for development)
cargo install vtcode

# Via Homebrew (macOS)
brew install vinhnx/tap/vtcode

# Via NPM
npm install -g vtcode
```

Verify installation:

```shell
vtcode --version
```
Step 2: Configure ACP in vtcode.toml
Create or update your `vtcode.toml` configuration:
```toml
# Basic ACP configuration
[acp]
enabled = true

# Zed-specific ACP settings
[acp.zed]
enabled = true
transport = "stdio"  # Communication method

# Tool permissions for Zed integration
[acp.zed.tools]
read_file = true   # Allow reading files from workspace
list_files = true  # Allow listing directory contents
# Additional tools can be configured in [tools.policies]

# Optional: custom provider/model for the Zed integration
# (key names may vary slightly between releases; see the Configuration docs)
[agent]
provider = "openai"             # OpenAI provider
default_model = "gpt-5-codex"   # GPT-5 Codex model for coding tasks
api_key_env = "OPENAI_API_KEY"  # Environment variable
reasoning_effort = "high"       # High reasoning for complex coding tasks

# Optional: tool policies for ACP
[tools.policies]
read_file = "allow"         # Always allow file reading
edit_file = "prompt"        # Prompt before file modifications
run_terminal_cmd = "prompt" # Prompt before terminal commands
```
Step 3: Register VT Code Agent in Zed
Add VT Code as a custom agent in Zed's settings (`~/.config/zed/settings.json`):
```json
{
  "agent_servers": {
    "vtcode": {
      "command": "vtcode", // If vtcode is in PATH
      "args": ["acp"],
      "env": {
        "GEMINI_API_KEY": "your_api_key_here" // Optional: override env vars
      }
    }
  }
}
```
Alternative configurations:
```json
// Using an absolute path (recommended for reliability)
{
  "agent_servers": {
    "vtcode": {
      "command": "/home/user/.cargo/bin/vtcode",
      "args": ["acp", "--config", "/path/to/custom/vtcode.toml"]
    }
  }
}
```

```json
// With custom environment variables
{
  "agent_servers": {
    "vtcode": {
      "command": "vtcode",
      "args": ["acp"],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "ANTHROPIC_API_KEY": "sk-ant-...",
        "GEMINI_API_KEY": "AIza..."
      }
    }
  }
}
```
Find your vtcode path:
```shell
which vtcode

# Common locations:
# - $HOME/.cargo/bin/vtcode (Cargo install)
# - /usr/local/bin/vtcode (Homebrew)
# - /usr/bin/vtcode (System package)
```
Step 4: Launch and Use VT Code in Zed
- Open Agent Panel: press `Cmd-?` (macOS) or `Ctrl-?` (Linux/Windows)
- Create External Agent: click "Create External Agent" or use the `+` button
- Select VT Code: choose the `vtcode` entry from your configured agents
- Start Chatting: the ACP bridge spawns automatically
Usage Examples:
```text
# Reference files in your workspace
@src/main.rs Can you explain this function?

# Attach current buffer
[Attach current file] How can I optimize this code?

# Ask about project structure
What files should I modify to add authentication?

# Request code analysis
Analyze the performance bottlenecks in this codebase
```
Available Features:
- File Reading: AI can read and analyze any file in your workspace
- Code Analysis: Leverage VT Code's tree-sitter parsers for semantic understanding
- Tool Integration: Access to terminal commands, code search, and refactoring tools
- Multi-Provider: Switch between different LLM providers without restarting
Advanced Configuration
Custom Models per Project:
```toml
# In your project's vtcode.toml
# (key names may vary between releases; see the Configuration docs)
[agent]
provider = "anthropic"
default_model = "claude-sonnet-4-5"
api_key_env = "ANTHROPIC_API_KEY"
```

```toml
# Or use OpenRouter for access to multiple providers
[agent]
provider = "openrouter"
default_model = "anthropic/claude-sonnet-4.5"
api_key_env = "OPENROUTER_API_KEY"
```
Tool-Specific Configuration:
```toml
# Enable specific tools for the Zed integration (key names may vary)
[acp.zed.tools]
read_file = true
list_files = true
ast_grep_search = true
run_terminal_cmd = false  # Disable terminal access for security

# Configure tool policies
[tools.policies]
read_file = "allow"
edit_file = "deny"        # Prevent accidental file modifications
run_terminal_cmd = "deny" # Block terminal commands in Zed
```
Troubleshooting
Common Issues:
- Agent Not Appearing:
  - Verify `vtcode` is in your PATH: `which vtcode`
  - Check Zed settings syntax in `settings.json`
  - Restart Zed after configuration changes
- Connection Errors:
  - Ensure ACP is enabled: `[acp] enabled = true` in `vtcode.toml`
  - Check the transport method: `transport = "stdio"`
  - Verify API keys are set in environment variables
- Tool Access Issues:
  - Review the `[acp.zed.tools]` configuration
  - Check `[tools.policies]` for permission settings
  - Enable debug logging: `vtcode --debug acp`
Debug Commands:
```shell
# Test the ACP connection with debug logging
vtcode --debug acp

# Check configuration (validation runs automatically on load)

# View ACP logs in Zed:
# Command Palette → "dev: open acp logs"
```
Verification Steps:
- ACP logs should show successful connection
- Agent panel should display "VT Code" as available
- File references (`@filename`) should work in chat
- Tool calls should appear in the ACP logs
Additional Resources:
- Agent Client Protocol Documentation - Official ACP specification and guides
- Zed ACP Integration Guide - Detailed VT Code + Zed setup
- ACP Schema Reference - Complete protocol schema documentation
Configuration validation: on load, VT Code checks the TOML against its schema (e.g., that the model exists in `docs/models.json`) and logs warnings for deprecated keys.
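The validation described above is essentially a membership check against the model catalog; a minimal sketch (the function name and catalog shape are hypothetical):

```rust
// Reject model IDs that are not present in the docs/models.json catalog
// (represented here as a slice of known IDs).
fn validate_model(model: &str, catalog: &[&str]) -> Result<(), String> {
    if catalog.contains(&model) {
        Ok(())
    } else {
        Err(format!("unknown model '{model}'; see docs/models.json"))
    }
}
```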
Command Line Interface
Basic Usage
```shell
# Interactive mode (TUI)
vtcode

# Single query mode (subcommand name assumed; see `vtcode --help`)
vtcode ask "What does this project do?"

# With a specific provider and model (flags assumed)
vtcode --provider openai --model gpt-5
```
Command Options
Flags and subcommands referenced elsewhere in this README (run `vtcode --help` for the authoritative list):
- `--debug`: enable verbose logging
- `--config <path>`: use an alternate `vtcode.toml`
- `acp`: run as an ACP agent (used by the Zed integration)
Development Commands
```shell
# Release build and run
cargo run --release

# Debug build and run
cargo run

# Quick compilation check
cargo check

# Run tests
cargo test
```
Development
Getting Started
```shell
# Clone the repository
git clone https://github.com/vinhnx/vtcode.git
cd vtcode

# Build the project
cargo build

# Run tests
cargo test

# Check code quality
cargo clippy && cargo fmt --check
```
Development Practices
- Code Quality: use `cargo clippy` for linting and `cargo fmt` for formatting
- Testing: unit tests in `#[cfg(test)]` modules, integration tests in `tests/`
- Error Handling: uniform `anyhow::Result<T>` with descriptive context
- Documentation: Rustdoc for public APIs, Markdown documentation in `./docs/`
Contributing
We welcome contributions! Please read CONTRIBUTING.md for guidelines.
Architecture Overview
The project is organized into two main components:
- `vtcode-core/`: reusable library with LLM providers, tools, and configuration
- `src/`: CLI executable with TUI interface and command parsing
Key design principles:
- Type Safety: Leverage Rust's type system for reliability
- Async Performance: Tokio runtime for concurrent operations
- Modular Design: Trait-based extensibility for tools and providers
- Configuration-Driven: TOML-based configuration with validation
References
- User Guides: Getting Started, Configuration.
- Technical Docs: Context Engineering, MCP Setup, Prompt Caching.
- API: vtcode-core (full crate docs).
- Changelog: CHANGELOG.
License
MIT License - LICENSE for full terms.