Universal LLM provider abstraction with API-specific role handling
This module provides a unified interface for different LLM providers (OpenAI, Anthropic, Gemini) while properly handling their specific requirements for message roles and tool calling.
§Message Role Mapping
Different LLM providers have varying support for message roles, especially for tool calling:
§OpenAI API
- Full Support: `system`, `user`, `assistant`, `tool`
- Tool Messages: Must include `tool_call_id` to reference the original tool call
- Tool Calls: Only `assistant` messages can contain `tool_calls`
§Anthropic API
- Standard Roles: `user`, `assistant`
- System Messages: Can be hoisted to the system parameter or treated as user messages
- Tool Responses: Converted to `user` messages (no separate tool role)
- Tool Choice: Supports `auto`, `any`, `tool`, `none` modes
§Gemini API
- Conversation Roles: Only `user` and `model` (not `assistant`)
- System Messages: Handled separately via the `systemInstruction` parameter
- Tool Responses: Converted to `user` messages with `functionResponse` format
- Function Calls: Uses `functionCall` in `model` messages
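The role mappings above can be sketched as a single dispatch table. This is a hypothetical, self-contained illustration (the `Role` enum and `wire_role` function are assumptions for the example, not the crate's actual types):

```rust
/// Hypothetical universal role, standing in for this module's `MessageRole`.
#[derive(Clone, Copy, Debug)]
enum Role {
    System,
    User,
    Assistant,
    Tool,
}

/// Map a universal role to the wire-level role string each provider expects.
fn wire_role(role: Role, provider: &str) -> &'static str {
    match (provider, role) {
        // OpenAI supports all four roles directly.
        ("openai", Role::System) => "system",
        ("openai", Role::User) => "user",
        ("openai", Role::Assistant) => "assistant",
        ("openai", Role::Tool) => "tool",
        // Anthropic: system content is hoisted out of the message list,
        // and tool responses are sent as user messages.
        ("anthropic", Role::Assistant) => "assistant",
        ("anthropic", _) => "user",
        // Gemini: "model" replaces "assistant"; system content goes to
        // systemInstruction, and tool responses become user messages.
        ("gemini", Role::Assistant) => "model",
        ("gemini", _) => "user",
        // Unknown providers: fall back to the most permissive role.
        _ => "user",
    }
}

fn main() {
    assert_eq!(wire_role(Role::Assistant, "gemini"), "model");
    assert_eq!(wire_role(Role::Tool, "anthropic"), "user");
    assert_eq!(wire_role(Role::Tool, "openai"), "tool");
}
```

The table makes the asymmetry explicit: only OpenAI keeps a distinct `tool` role on the wire, so the other two providers must fold tool output back into `user` turns.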
§Best Practices
- Always use the `MessageRole::tool_response()` constructor for tool responses
- Validate messages using `validate_for_provider()` before sending
- Use the appropriate role mapping methods for each provider
- Handle provider-specific constraints (e.g., Gemini’s system instruction requirement)
§Example Usage
```rust
use vtcode_core::llm::provider::{Message, MessageRole};

// Create a proper tool response message
let tool_response = Message::tool_response(
    "call_123".to_string(),
    "Tool execution completed successfully".to_string(),
);

// Validate for a specific provider
tool_response.validate_for_provider("openai").unwrap();
```
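To make the validation step concrete, here is a sketch of the kind of per-provider check `validate_for_provider()` might perform. The `Msg` struct and the rules shown are assumptions for illustration, not the crate's actual implementation:

```rust
/// Hypothetical simplified message, standing in for this module's `Message`.
struct Msg {
    role: &'static str,
    tool_call_id: Option<String>,
}

/// Sketch of provider-specific validation (assumed rules, not vtcode_core's).
fn validate_for_provider(msg: &Msg, provider: &str) -> Result<(), String> {
    match provider {
        // OpenAI requires tool messages to reference the originating call.
        "openai" if msg.role == "tool" && msg.tool_call_id.is_none() => {
            Err("tool message missing tool_call_id".into())
        }
        // Gemini conversations use "model", never "assistant".
        "gemini" if msg.role == "assistant" => {
            Err("Gemini expects role \"model\", not \"assistant\"".into())
        }
        _ => Ok(()),
    }
}

fn main() {
    let ok = Msg { role: "tool", tool_call_id: Some("call_123".into()) };
    assert!(validate_for_provider(&ok, "openai").is_ok());

    let bad = Msg { role: "tool", tool_call_id: None };
    assert!(validate_for_provider(&bad, "openai").is_err());
}
```

Validating before sending surfaces these constraint violations locally instead of as opaque HTTP 400 responses from the provider.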
Structs§
- `FunctionCall`: Function call within a tool call
- `FunctionDefinition`: Function definition within a tool
- `LLMRequest`: Universal LLM request structure
- `LLMResponse`: Universal LLM response
- `Message`: Universal message structure
- `ParallelToolConfig`: Configuration for parallel tool use behavior, based on Anthropic’s parallel tool use guidelines
- `SpecificFunctionChoice`: Specific function choice details
- `SpecificToolChoice`: Specific tool choice for forcing a particular function call
- `ToolCall`: Universal tool call that matches the exact structure from the OpenAI API, based on OpenAI Cookbook examples and official documentation
- `ToolDefinition`: Universal tool definition that matches the OpenAI/Anthropic/Gemini specifications, based on official API documentation from Context7
- `Usage`
Enums§
- `FinishReason`
- `LLMError`
- `LLMStreamEvent`
- `MessageRole`
- `ToolChoice`: Tool choice configuration that works across different providers, based on the OpenAI, Anthropic, and Gemini API specifications; follows Anthropic’s tool use best practices for optimal performance
Traits§
- `LLMProvider`: Universal LLM provider trait