Module provider

Universal LLM provider abstraction with API-specific role handling

This module provides a unified interface for different LLM providers (OpenAI, Anthropic, Gemini) while properly handling their specific requirements for message roles and tool calling.

§Message Role Mapping

Different LLM providers have varying support for message roles, especially for tool calling:

§OpenAI API

  • Full Support: system, user, assistant, tool
  • Tool Messages: Must include tool_call_id to reference the original tool call
  • Tool Calls: Only assistant messages can contain tool_calls
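The two OpenAI constraints above can be sketched in a few lines. The `Msg` type and `validate_openai` function here are hypothetical illustrations only, not this crate's actual `Message`/`validate_for_provider` API:

```rust
use std::collections::HashSet;

// Hypothetical types for illustration (not vtcode_core's actual structures).
pub enum Role { System, User, Assistant, Tool }

pub struct Msg {
    pub role: Role,
    pub tool_call_id: Option<String>, // required when role == Tool
    pub tool_calls: Vec<String>,      // call ids; assistant messages only
}

/// Enforce the OpenAI pairing rules: only assistant messages carry
/// tool_calls, and every tool message references a known tool_call_id.
pub fn validate_openai(messages: &[Msg]) -> Result<(), String> {
    let mut known_ids: HashSet<&str> = HashSet::new();
    for m in messages {
        match m.role {
            Role::Assistant => {
                known_ids.extend(m.tool_calls.iter().map(String::as_str));
            }
            Role::Tool => {
                let id = m
                    .tool_call_id
                    .as_deref()
                    .ok_or("tool message missing tool_call_id")?;
                if !known_ids.contains(id) {
                    return Err(format!("unknown tool_call_id: {id}"));
                }
            }
            _ if !m.tool_calls.is_empty() => {
                return Err("only assistant messages may carry tool_calls".into());
            }
            _ => {}
        }
    }
    Ok(())
}
```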

§Anthropic API

  • Standard Roles: user, assistant
  • System Messages: Hoisted into the top-level system parameter or treated as user messages
  • Tool Responses: Converted to user messages (no separate tool role)
  • Tool Choice: Supports auto, any, tool, none modes
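The system-hoisting and tool-response rules above can be sketched as a standalone conversion. The `Msg` type and `to_anthropic` function are hypothetical illustrations of the mapping, not the crate's real conversion code:

```rust
// Hypothetical sketch of the Anthropic mapping (not the actual crate API).
pub enum Role { System, User, Assistant, ToolResponse }

pub struct Msg { pub role: Role, pub content: String }

/// Split messages into Anthropic's top-level `system` parameter plus
/// user/assistant turns; tool responses become user turns because the
/// API has no separate tool role.
pub fn to_anthropic(messages: &[Msg]) -> (Option<String>, Vec<(&'static str, String)>) {
    let mut system = None;
    let mut turns = Vec::new();
    for m in messages {
        match m.role {
            Role::System => system = Some(m.content.clone()),
            Role::User | Role::ToolResponse => turns.push(("user", m.content.clone())),
            Role::Assistant => turns.push(("assistant", m.content.clone())),
        }
    }
    (system, turns)
}
```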

§Gemini API

  • Conversation Roles: Only user and model (not assistant)
  • System Messages: Handled separately as systemInstruction parameter
  • Tool Responses: Converted to user messages with functionResponse format
  • Function Calls: Uses functionCall in model messages
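The Gemini role rules above reduce to a small lookup; this `gemini_role` helper is a hypothetical sketch, not a function exported by this module:

```rust
/// Hypothetical Gemini role mapper: `assistant` becomes `model`, tool
/// responses travel as `user` turns carrying functionResponse parts, and
/// `system` has no conversation role at all (it moves to the separate
/// systemInstruction parameter), so it maps to None.
pub fn gemini_role(role: &str) -> Option<&'static str> {
    match role {
        "user" | "tool" => Some("user"),
        "assistant" => Some("model"),
        _ => None, // "system" and unknown roles have no conversation slot
    }
}
```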

§Best Practices

  1. Always use the Message::tool_response() constructor for tool responses
  2. Validate messages using validate_for_provider() before sending
  3. Use appropriate role mapping methods for each provider
  4. Handle provider-specific constraints (e.g., Gemini’s system instruction requirement)

§Example Usage

use vtcode_core::llm::provider::{Message, MessageRole};

// Create a proper tool response message
let tool_response = Message::tool_response(
    "call_123".to_string(),
    "Tool execution completed successfully".to_string()
);

// Validate for specific provider
tool_response.validate_for_provider("openai").unwrap();

Structs§

FunctionCall
Function call within a tool call
FunctionDefinition
Function definition within a tool
LLMRequest
Universal LLM request structure
LLMResponse
Universal LLM response
Message
Universal message structure
ParallelToolConfig
Configuration for parallel tool use behavior. Based on Anthropic’s parallel tool use guidelines
SpecificFunctionChoice
Specific function choice details
SpecificToolChoice
Specific tool choice for forcing a particular function call
ToolCall
Universal tool call that matches the exact structure of the OpenAI API. Based on OpenAI Cookbook examples and official documentation
ToolDefinition
Universal tool definition that matches the OpenAI/Anthropic/Gemini specifications. Based on official API documentation from Context7
Usage

Enums§

FinishReason
LLMError
LLMStreamEvent
MessageRole
ToolChoice
Tool choice configuration that works across different providers. Based on the OpenAI, Anthropic, and Gemini API specifications. Follows Anthropic’s tool use best practices for optimal performance

Traits§

LLMProvider
Universal LLM provider trait

Type Aliases§

LLMStream