# cnctd_ai
A Rust abstraction layer for AI/LLM providers (Anthropic Claude, OpenAI) with integrated MCP (Model Context Protocol) support and autonomous agent framework.
## Features
- **Multi-Provider Support**: Unified interface for Anthropic Claude and OpenAI
- **Streaming & Non-Streaming**: Support for both regular completions and streaming responses
- **Tool Calling**: Full support for function/tool calling with both providers
- **Agent Framework**: Autonomous task execution with tool calling loops
- **MCP Integration**: Native support for MCP servers (stdio and HTTP gateway)
- **Error Handling**: Comprehensive error types with provider-specific handling
- **Type Safety**: Strong typing with proper error handling throughout
## Installation

Add to your `Cargo.toml`:

```toml
[dependencies]
cnctd_ai = "0.1.5"
```
## Quick Start

### Agent Framework (NEW!)
The easiest way to build autonomous AI applications (type names and arguments in this sketch are reconstructed and may differ slightly from the crate's API):

```rust
use cnctd_ai::{Agent, AiClient, McpGateway};

// Setup client and gateway
let client = AiClient::anthropic(api_key)?;
let gateway = McpGateway::new(gateway_url, gateway_token);

// Create agent with default settings
let agent = Agent::new(client).with_gateway(gateway);

// Run autonomous task - agent will use tools as needed
let trace = agent.run_simple("Summarize the open issues in this repo").await?;

// View results
trace.print_summary();
```
For advanced configuration:

```rust
// Builder values shown here are illustrative
let agent = Agent::builder(client)
    .max_iterations(10)
    .max_duration(Duration::from_secs(300))
    .system_prompt("You are a helpful research assistant.")
    .gateway(gateway)
    .build();
```
See Agent Framework Documentation for more details.
### Basic Completion

```rust
use cnctd_ai::{AiClient, CompletionRequest, Message};

let client = AiClient::anthropic(api_key)?;

// Field names and model id are illustrative
let request = CompletionRequest {
    model: "claude-sonnet-4".into(),
    messages: vec![Message::user("Hello!")],
    ..Default::default()
};

let response = client.complete(request).await?;
println!("{:?}", response);
```
### Streaming

```rust
use futures::StreamExt;

let mut stream = client.complete_stream(request).await?;

while let Some(chunk) = stream.next().await {
    // Handle each streamed chunk as it arrives (illustrative)
    println!("{:?}", chunk?);
}
```
### Tool Calling

```rust
use cnctd_ai::{create_tool, CompletionRequest};
use serde_json::json;

// Create a tool using the helper function
// (name, description, and schema arguments are illustrative)
let weather_tool = create_tool(
    "get_weather".to_string(),
    "Get the current weather for a location".to_string(),
    json!({
        "type": "object",
        "properties": {
            "location": { "type": "string" }
        },
        "required": ["location"]
    }),
)?;

let mut request = CompletionRequest {
    // model, messages, ...
    ..Default::default()
};
request.add_tool(weather_tool);

let response = client.complete(request).await?;

// Check if model wants to use a tool
if let Some(tool_use) = response.tool_use {
    // Execute the tool and send the result back in a follow-up message
}
```
MCP Gateway Integration
use McpGateway;
let gateway = new;
// List available servers
let servers = gateway.list_servers.await?;
// List tools from a specific server
let tools = gateway.list_tools.await?;
// Execute a tool
let result = gateway.call_tool.await?;
## Examples

The repository includes several examples:

**Agent Framework:**

- `agent_simple.rs` - Minimal agent setup
- `agent_basic.rs` - Full-featured agent with configuration

**Core Functionality:**

- `basic_completion.rs` - Simple completion example
- `streaming.rs` - Streaming responses
- `tool_calling.rs` - Function/tool calling
- `tool_calling_streaming.rs` - Tool calling with streaming
- `conversation.rs` - Multi-turn conversations
- `error_handling.rs` - Error handling patterns
- `mcp_gateway.rs` - MCP gateway integration
Run examples with:

```sh
cargo run --example agent_simple
```
## Environment Variables

Set these for the examples:

```sh
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
GATEWAY_URL=https://mcp.cnctd.world  # Optional
GATEWAY_TOKEN=your-token             # Optional
```
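In code, these can be resolved with `std::env`; the sketch below shows one way to handle the required/optional split (the helper functions are illustrative, not the examples' exact code):

```rust
use std::env;

// Resolve a required variable, or report what is missing.
fn required(name: &str) -> Result<String, String> {
    env::var(name).map_err(|_| format!("{name} must be set"))
}

// Resolve an optional variable with a fallback default.
fn optional(name: &str, default: &str) -> String {
    env::var(name).unwrap_or_else(|_| default.to_string())
}

fn main() {
    // GATEWAY_URL is optional; a missing value falls back to a default.
    let gateway_url = optional("GATEWAY_URL", "https://mcp.cnctd.world");
    println!("gateway: {gateway_url}");

    // ANTHROPIC_API_KEY is required; report a clear error when unset.
    match required("ANTHROPIC_API_KEY") {
        Ok(_key) => println!("anthropic key configured"),
        Err(msg) => eprintln!("{msg}"),
    }
}
```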
## Tool Creation Helpers

The library provides helper functions for easier tool creation:

```rust
use cnctd_ai::{create_tool, create_tool_borrowed};

// For owned strings (runtime data); argument names are illustrative
let tool = create_tool(name, description, schema)?;

// For static strings (compile-time constants)
let tool = create_tool_borrowed("get_weather", "Get the current weather", schema)?;
```
## OpenAI Responses API & Multi-Turn Tool Calls

cnctd_ai uses OpenAI's newer Responses API (`/v1/responses`) for GPT-4, GPT-4.1, GPT-5, and reasoning models (o1, o3). This provides better tool calling support but has specific requirements for multi-turn conversations.
### Key Concepts

- **`call_id`**: OpenAI uses `call_id` (format: `call_...`) to match `function_call` items with their `function_call_output` responses
- **`ToolUse.call_id`**: The library captures this from API responses and stores it in `ToolUse.call_id`
- **`ToolResult.effective_call_id()`**: Returns the correct ID to use when sending tool results back
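The pairing requirement can be made concrete with a small sketch. The struct shapes below are hypothetical simplifications for illustration, not the crate's actual types:

```rust
use std::collections::HashSet;

// Hypothetical minimal shapes; the crate's real types carry more fields.
struct FunctionCall { call_id: String }
struct FunctionCallOutput { call_id: String }

/// Returns true when every function_call has exactly one matching
/// function_call_output, keyed by call_id.
fn calls_are_matched(calls: &[FunctionCall], outputs: &[FunctionCallOutput]) -> bool {
    let call_ids: HashSet<&str> = calls.iter().map(|c| c.call_id.as_str()).collect();
    let output_ids: HashSet<&str> = outputs.iter().map(|o| o.call_id.as_str()).collect();
    // Reject duplicate ids on either side, then require the sets to be equal.
    call_ids.len() == calls.len()
        && output_ids.len() == outputs.len()
        && call_ids == output_ids
}

fn main() {
    let calls = vec![FunctionCall { call_id: "call_abc123".into() }];
    let outputs = vec![FunctionCallOutput { call_id: "call_abc123".into() }];
    assert!(calls_are_matched(&calls, &outputs));

    // A call without a matching output breaks the conversation.
    assert!(!calls_are_matched(&calls, &[]));
}
```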
### Reasoning Models (GPT-5.2-pro, o1, o3)

Reasoning models require special handling for multi-turn tool calls:

- **Encrypted reasoning content**: The library automatically requests `reasoning.encrypted_content` for reasoning models
- **Reasoning items**: Must be echoed back in continuation requests via `Message.reasoning_items`
The library handles this automatically - just ensure you preserve `reasoning_items` when building continuation messages:

```rust
// After getting a response with tool calls
let response = client.complete(request).await?;

// The response message includes reasoning_items if present;
// when building the next request, include the full message
messages.push(response.message.clone());

// Add tool results (constructor arguments shown are illustrative)
let tool_result = ToolResult::new(tool_use.id, tool_use.call_id, output);
messages.push(tool_result.into());
```
### Application Considerations

When persisting and reconstructing conversations from a database:

- **Store `call_id`**: Save both `tool_use_id` and `call_id` from tool calls
- **Match 1:1**: Every `function_call` must have a matching `function_call_output`
- **Preserve reasoning**: Store and restore `reasoning_items` for reasoning models
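One way to sketch a storage shape that satisfies all three rules; these structs are hypothetical illustrations, not part of the crate's API:

```rust
// Hypothetical persistence shapes - the fields mirror the three rules above.
struct StoredToolCall {
    tool_use_id: String,     // provider-agnostic id
    call_id: Option<String>, // OpenAI Responses `call_...` id, when present
    output: Option<String>,  // the matching function_call_output, if recorded
}

struct StoredTurn {
    reasoning_items: Vec<String>, // opaque reasoning content, echoed back verbatim
    tool_calls: Vec<StoredToolCall>,
}

/// A stored turn is safe to replay only when every tool call
/// has its output recorded (the 1:1 rule above).
fn ready_to_replay(turn: &StoredTurn) -> bool {
    turn.tool_calls.iter().all(|call| call.output.is_some())
}

fn main() {
    let complete_turn = StoredTurn {
        reasoning_items: vec!["<encrypted reasoning>".into()],
        tool_calls: vec![StoredToolCall {
            tool_use_id: "toolu_01".into(),
            call_id: Some("call_01".into()),
            output: Some("{\"temp_c\": 20}".into()),
        }],
    };
    assert!(ready_to_replay(&complete_turn));

    let dangling_turn = StoredTurn {
        reasoning_items: vec![],
        tool_calls: vec![StoredToolCall {
            tool_use_id: "toolu_02".into(),
            call_id: Some("call_02".into()),
            output: None, // missing function_call_output - do not replay yet
        }],
    };
    assert!(!ready_to_replay(&dangling_turn));
}
```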
## Error Handling

The library provides comprehensive error types:

```rust
use cnctd_ai::Error;

// Match on specific Error variants for provider-specific handling
match client.complete(request).await {
    Ok(response) => println!("{:?}", response),
    Err(e) => eprintln!("completion failed: {e}"),
}
```
## Documentation
- Agent Framework Guide - Complete agent framework documentation
- API Documentation - Full API reference (coming soon)
## License
MIT License - see LICENSE file for details.