Unified message architecture for LLM interactions.
This module provides the core UnifiedMessage type that works across all LLM providers.
It’s the primary abstraction that makes multi-llm provider-agnostic.
§Overview
The unified message system provides:
- Provider-agnostic messages: Same format works with OpenAI, Anthropic, Ollama, and LM Studio
- Caching hints: Native support for Anthropic prompt caching via MessageAttributes
- Priority ordering: Control message ordering with priority-based sorting
- Rich content types: Text, JSON, tool calls, and tool results via MessageContent
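Priority-based ordering can be pictured with a stable sort on a per-message priority field. This is only a sketch: `SketchMessage` and its `priority` numbering (0 = highest, sorted first) are assumptions for illustration, not the crate's actual `MessageAttributes` layout.

```rust
// Sketch of priority-based message ordering; field names are hypothetical.
#[derive(Debug, Clone, PartialEq)]
struct SketchMessage {
    text: String,
    priority: u8, // assumed: 0 = highest priority, larger = later in the prompt
}

fn order_by_priority(mut messages: Vec<SketchMessage>) -> Vec<SketchMessage> {
    // A stable sort keeps insertion order within the same priority tier.
    messages.sort_by_key(|m| m.priority);
    messages
}

fn main() {
    let ordered = order_by_priority(vec![
        SketchMessage { text: "current user input".into(), priority: 2 },
        SketchMessage { text: "system instruction".into(), priority: 0 },
        SketchMessage { text: "context".into(), priority: 1 },
    ]);
    assert_eq!(ordered[0].text, "system instruction");
    assert_eq!(ordered[2].text, "current user input");
    println!("{:?}", ordered.iter().map(|m| &m.text).collect::<Vec<_>>());
}
```

A stable sort matters here: two context messages with equal priority stay in the order they were added.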
§Quick Start
```rust
use multi_llm::{UnifiedMessage, MessageRole};

// Simple messages using convenience constructors
let user_msg = UnifiedMessage::user("Hello, how are you?");
let system_msg = UnifiedMessage::system("You are a helpful assistant.");
let assistant_msg = UnifiedMessage::assistant("I'm doing well, thank you!");

// Build a conversation
let messages = vec![system_msg, user_msg, assistant_msg];
```
§Caching Support
For Anthropic’s prompt caching (90% cost savings on cache reads):
```rust
use multi_llm::UnifiedMessage;

// Mark a system prompt for caching (5-minute TTL)
let cached_system = UnifiedMessage::system("You are a helpful assistant.")
    .with_ephemeral_cache();

// For longer sessions, use extended caching (1-hour TTL)
let long_context = UnifiedMessage::system("Large context here...")
    .with_extended_cache();
```
§Message Categories
Use semantic constructors to get appropriate caching and priority defaults:
```rust
use multi_llm::UnifiedMessage;

// System instructions (cacheable, highest priority)
let system = UnifiedMessage::system_instruction(
    "You are a helpful assistant.".to_string(),
    Some("system-v1".to_string()),
);

// Context information (cacheable, medium priority)
let context = UnifiedMessage::context(
    "User preferences: dark mode, verbose output".to_string(),
    None,
);

// Current user input (not cached, lowest priority)
let current = UnifiedMessage::current_user("What's the weather?".to_string());
```
Structs§
- MessageAttributes: Attributes that guide how providers handle a message.
- UnifiedLLMRequest: A complete request to an LLM provider.
- UnifiedMessage: A provider-agnostic message for LLM interactions.
Enums§
- CacheType: Cache type for prompt caching (Anthropic-specific feature).
- MessageCategory: Semantic category of a message for provider-specific handling.
- MessageContent: Content of a message, supporting text, JSON, and tool interactions.
- MessageRole: Role of a message in an LLM conversation.
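As a rough mental model of how `MessageCategory` and `CacheType` interact, the sketch below maps categories to the caching defaults described above (system instructions and context are cacheable; the current user turn is not). The enum definitions and the `default_cache` helper are illustrative assumptions, not the crate's actual API; in particular, whether cacheable categories default to the ephemeral or extended cache is a guess.

```rust
// Hypothetical sketch; the real multi_llm definitions may differ.
#[derive(Debug, Clone, Copy, PartialEq)]
enum CacheType {
    Ephemeral, // 5-minute TTL (Anthropic prompt caching)
    Extended,  // 1-hour TTL
}

#[derive(Debug, Clone, Copy, PartialEq)]
enum MessageCategory {
    SystemInstruction,
    Context,
    CurrentUser,
}

// Defaults implied by the text above: system instructions and context are
// cacheable, the current user input is not. Ephemeral is assumed here.
fn default_cache(category: MessageCategory) -> Option<CacheType> {
    match category {
        MessageCategory::SystemInstruction => Some(CacheType::Ephemeral),
        MessageCategory::Context => Some(CacheType::Ephemeral),
        MessageCategory::CurrentUser => None,
    }
}

fn main() {
    assert_eq!(default_cache(MessageCategory::CurrentUser), None);
    assert!(default_cache(MessageCategory::SystemInstruction).is_some());
    assert!(default_cache(MessageCategory::Context).is_some());
}
```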