docs.rs failed to build vtcode-core-0.100.1
Please check the build logs for more information.
See Builds for ideas on how to fix a failed build, or Metadata for how to configure docs.rs builds.
If you believe this is docs.rs' fault, open an issue.
Visit the last successful build:
vtcode-core-0.21.8
# vtcode-core

Core library for VT Code — a Rust-based terminal coding agent.
vtcode-core powers the VT Code agent runtime. It provides the reusable
building blocks for multi-provider LLM orchestration, tool execution,
semantic code analysis, and configurable safety policies.
## Highlights

- Provider Abstraction — unified LLM interface with adapters for OpenAI, Anthropic, xAI, DeepSeek, Gemini, OpenRouter, and Ollama, including automatic failover and spend controls.
- Prompt Caching — cross-provider caching system that leverages provider-specific mechanisms (OpenAI automatic, Anthropic `cache_control`, Gemini implicit/explicit) to reduce cost and latency.
- Semantic Workspace Model — LLM-native code analysis and navigation across modern programming languages.
- Bash Shell Safety — `tree-sitter-bash` integration for critical command validation and security enforcement.
- Tool System — trait-driven registry for shell execution, file IO, search, and custom commands, with Tokio-powered concurrency and PTY streaming.
- Configuration-First — everything is driven by `vtcode.toml`, with model, safety, and automation constants centralized in `config::constants`.
- Safety & Observability — workspace boundary enforcement, command allow/deny lists, human-in-the-loop confirmation, and structured event logging.
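As a sketch of what configuration-first means in practice, a hypothetical `vtcode.toml` might look like the following. The section and key names here are illustrative assumptions, not the crate's documented schema — consult the `config/` module for the real layout:

```toml
# Hypothetical vtcode.toml sketch — key names are assumptions for illustration.
[agent]
provider = "openai"       # one of the supported provider adapters
model = "gpt-4o-mini"     # model identifier passed through to the provider client

[safety]
workspace_root = "."              # workspace boundary enforcement
allow_commands = ["cargo", "git"] # command allow list
deny_commands = ["rm"]            # command deny list
human_in_the_loop = true          # require confirmation before risky actions
```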
## Modules

| Module | Purpose |
|---|---|
| `config/` | Configuration loader, defaults, schema validation |
| `llm/` | Provider clients, request shaping, response handling |
| `tools/` | Built-in tool implementations and registration utilities |
| `context/` | Conversation management and memory |
| `exec/` | Async orchestration for tool invocations and streaming output |
| `core/prompt_caching` | Cross-provider prompt caching system |
| `mcp/` | Model Context Protocol client support |
| `safety/` | Workspace boundary enforcement and command safety |
## Public entrypoints

- `Agent`/`AgentRunner` — main agent runtime
- `VTCodeConfig` — configuration loader (`vtcode.toml` + environment overrides)
- `ToolRegistry`/`OptimizedToolRegistry` — tool registration and execution
- `AnyClient`/`make_client` — provider-agnostic LLM client factory
- `PromptCache`/`PromptOptimizer` — prompt caching primitives
- `ThreadManager` — thread lifecycle and event recording
## Usage

A minimal sketch — item paths, constructor names, and signatures here are illustrative assumptions; see the API reference for the exact interface:

```rust
// Illustrative only: exact paths and signatures may differ from the real crate API.
use vtcode_core::{ToolRegistry, VTCodeConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load vtcode.toml plus environment overrides (hypothetical constructor).
    let config = VTCodeConfig::load("vtcode.toml")?;

    // Register the built-in tools (hypothetical API).
    let tools = ToolRegistry::default();

    // ... drive Agent/AgentRunner with `config` and `tools` ...
    Ok(())
}
```
## Feature flags

| Flag | Description |
|---|---|
| `tui` (default) | Terminal UI via `crossterm` |
| `schema` | JSON Schema generation via `schemars` |
| `a2a-server` | Agent2Agent Protocol HTTP server |
| `anthropic-api` | Anthropic-compatible API server |
| `desktop-notifications` | Desktop notification support |
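Optional features are selected through Cargo in the usual way. For example, a `Cargo.toml` fragment that drops the default `tui` feature and opts into `schema` (the version number is illustrative):

```toml
[dependencies]
vtcode-core = { version = "0.21", default-features = false, features = ["schema"] }
```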
## API reference

See [docs.rs/vtcode-core](https://docs.rs/vtcode-core).