#[non_exhaustive]
pub enum LlmError {
    UnsupportedProvider {
        provider: String,
    },
    ConfigurationError {
        message: String,
    },
    RequestFailed {
        message: String,
        source: Option<Box<dyn Error + Send + Sync>>,
    },
    ResponseParsingError {
        message: String,
    },
    RateLimitExceeded {
        retry_after_seconds: u64,
    },
    Timeout {
        timeout_seconds: u64,
    },
    AuthenticationFailed {
        message: String,
    },
    TokenLimitExceeded {
        current: usize,
        max: usize,
    },
    ToolExecutionFailed {
        tool_name: String,
        message: String,
    },
    SchemaValidationFailed {
        message: String,
    },
}
Errors that can occur during LLM operations.
This enum covers all error conditions you might encounter when using multi-llm. Each variant includes relevant context and can be:

- Categorized via category()
- Assessed for severity via severity()
- Checked for retryability via is_retryable()
- Converted to user-friendly messages via user_message()
Creating Errors
Use the constructor methods, which log the error automatically:

use multi_llm::LlmError;

// These methods log automatically
let err = LlmError::configuration_error("Missing API key");
let err = LlmError::rate_limit_exceeded(60);
let err = LlmError::timeout(30);

Error Categories
| Variant | Category | Retryable |
|---|---|---|
| UnsupportedProvider | Client | No |
| ConfigurationError | Client | No |
| RequestFailed | External | Yes |
| ResponseParsingError | External | No |
| RateLimitExceeded | Transient | Yes |
| Timeout | Transient | Yes |
| AuthenticationFailed | Client | No |
| TokenLimitExceeded | Client | No |
| ToolExecutionFailed | External | No |
| SchemaValidationFailed | Client | No |
Variants (Non-exhaustive)

This enum is marked as non-exhaustive.
UnsupportedProvider
The specified provider is not supported.
Supported providers: “anthropic”, “openai”, “ollama”, “lmstudio”
ConfigurationError
Provider configuration is invalid or incomplete.
Common causes:
- Missing API key for providers that require one
- Invalid base URL format
- Incompatible configuration values
RequestFailed
The HTTP request to the provider failed.
This is a general failure that may be retryable. Check the source error for more details about the underlying cause.
ResponseParsingError
Failed to parse the provider’s response.
The provider returned a response, but it couldn’t be parsed. This might indicate a provider API change or malformed response.
RateLimitExceeded
Provider rate limit exceeded.
The provider is throttling requests. Wait the indicated time before retrying. Consider implementing exponential backoff.
Timeout
Request timed out.
The provider didn’t respond within the configured timeout. This is usually retryable but may indicate an overloaded provider.
AuthenticationFailed
Authentication with the provider failed.
Check your API key or credentials. This is not retryable without fixing the authentication.
TokenLimitExceeded
Request exceeds the model’s token limit.
The combined input (messages + tools) is too large for the model’s context window. Reduce the input size or use a model with larger context.
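One way to recover from this error is to drop the oldest messages until the input fits. The sketch below is not part of multi-llm: it uses a crude characters-per-token heuristic (real providers use their own tokenizers) over a plain `Vec<String>` history, and both helper names are illustrative.

```rust
/// Crude token estimate: roughly 4 characters per token. This is a
/// heuristic for illustration, not any provider's actual tokenizer.
fn estimate_tokens(text: &str) -> usize {
    text.chars().count() / 4 + 1
}

/// Drop the oldest messages until the estimated total fits under `max`.
/// The most recent messages are preserved.
fn trim_to_fit(messages: &[String], max: usize) -> Vec<String> {
    let mut kept: Vec<String> = Vec::new();
    let mut total = 0;
    // Walk from newest to oldest, keeping messages while they still fit.
    for msg in messages.iter().rev() {
        let cost = estimate_tokens(msg);
        if total + cost > max {
            break;
        }
        total += cost;
        kept.push(msg.clone());
    }
    kept.reverse();
    kept
}

fn main() {
    let history = vec![
        "a".repeat(400), // ~101 estimated tokens
        "b".repeat(40),  // ~11 estimated tokens
        "c".repeat(40),  // ~11 estimated tokens
    ];
    let trimmed = trim_to_fit(&history, 30);
    // Only the two most recent short messages fit under the budget.
    assert_eq!(trimmed.len(), 2);
}
```

After trimming, retry the request; if it still exceeds the limit reported in `current`/`max`, switch to a model with a larger context window.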
ToolExecutionFailed
A tool execution failed.
The tool was called but couldn’t complete successfully. Check the message for details about why the tool failed.
SchemaValidationFailed
Response doesn’t match the requested JSON schema.
When using structured output, the model’s response didn’t conform to the provided JSON schema. May require a clearer prompt or different schema design.
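Because the enum is non-exhaustive, code in other crates must include a wildcard arm when matching on it. The sketch below uses a local stand-in enum, since `#[non_exhaustive]` only takes effect across crate boundaries; it merely illustrates the match shape required for the real `LlmError`.

```rust
// Stand-in for LlmError; names mirror two of the documented variants.
#[non_exhaustive]
#[derive(Debug)]
enum MockError {
    RateLimitExceeded { retry_after_seconds: u64 },
    Timeout { timeout_seconds: u64 },
}

// In the defining crate the wildcard is redundant (hence the allow);
// for a non-exhaustive enum from another crate it is mandatory.
#[allow(unreachable_patterns)]
fn describe(err: &MockError) -> String {
    match err {
        MockError::RateLimitExceeded { retry_after_seconds } => {
            format!("rate limited for {retry_after_seconds}s")
        }
        MockError::Timeout { timeout_seconds } => {
            format!("timed out after {timeout_seconds}s")
        }
        // Future variants added upstream fall through here.
        _ => "unknown error".to_string(),
    }
}

fn main() {
    let err = MockError::RateLimitExceeded { retry_after_seconds: 60 };
    println!("{}", describe(&err));
}
```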
Implementations

impl LlmError

pub fn category(&self) -> ErrorCategory
Get the error category for routing and handling decisions.
Use this to determine how to handle different types of errors:
- Client: Fix the request (invalid input, auth, config)
- External: Provider issue, may need ops attention
- Transient: Retry with backoff
Example

use multi_llm::{LlmError, error::ErrorCategory};

fn handle(err: LlmError) {
    match err.category() {
        ErrorCategory::Transient => {
            // Implement retry logic
        }
        ErrorCategory::Client => {
            // User can fix this, show helpful message
        }
        _ => {
            // Log for investigation
        }
    }
}

pub fn severity(&self) -> ErrorSeverity
Get the error severity for logging and alerting.
Use this to determine logging level and whether to alert on-call.
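A typical routing might look like the sketch below. The crate's actual ErrorSeverity variants are not listed on this page, so the enum here is a stand-in with illustrative names.

```rust
// Stand-in severity levels; the real ErrorSeverity variants are not
// shown on this page, so these names are assumptions for illustration.
#[derive(Debug, PartialEq)]
enum MockSeverity {
    Warning,
    Error,
    Critical,
}

/// Map a severity to a log level name and an "alert on-call?" flag.
fn routing(severity: &MockSeverity) -> (&'static str, bool) {
    match severity {
        MockSeverity::Warning => ("warn", false),
        MockSeverity::Error => ("error", false),
        MockSeverity::Critical => ("error", true), // page someone
    }
}

fn main() {
    let (level, alert) = routing(&MockSeverity::Critical);
    println!("log at {level}, alert on-call: {alert}");
}
```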
pub fn is_retryable(&self) -> bool
Whether this error is transient and should trigger a retry.
Returns true for:
- Rate limit exceeded
- Timeouts
- General request failures (may be network issues)
Implement exponential backoff when retrying these errors.
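A minimal retry loop with exponential backoff could look like the sketch below. It is not part of multi-llm: the retryability check is passed in as a predicate where real code would call `LlmError::is_retryable`, and the operation is a plain closure so the example stays self-contained.

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry `op` up to `max_attempts` times, sleeping base_ms * 2^attempt
/// between tries, but only while `retryable(&err)` holds.
fn retry_with_backoff<T, E, F, P>(
    mut op: F,
    retryable: P,
    max_attempts: u32,
    base_ms: u64,
) -> Result<T, E>
where
    F: FnMut() -> Result<T, E>,
    P: Fn(&E) -> bool,
{
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if attempt + 1 < max_attempts && retryable(&e) => {
                // Exponential backoff: base, 2*base, 4*base, ...
                sleep(Duration::from_millis(base_ms << attempt));
                attempt += 1;
            }
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    // A flaky operation that fails twice, then succeeds.
    let mut calls = 0;
    let result = retry_with_backoff(
        || {
            calls += 1;
            if calls < 3 { Err("transient") } else { Ok(calls) }
        },
        |_| true, // stand-in for err.is_retryable()
        5,
        1, // 1 ms base so the example runs quickly
    );
    assert_eq!(result, Ok(3));
}
```

For RateLimitExceeded, prefer waiting the `retry_after_seconds` reported by the provider over a computed backoff delay.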
pub fn user_message(&self) -> String
Convert to a user-friendly message suitable for display.
Returns a message that’s safe to show to end users - technical details and internal information are stripped or generalized.
Example
use multi_llm::LlmError;
let err = LlmError::rate_limit_exceeded(60);
let msg = err.user_message();
// "Service is busy. Please wait 60 seconds and try again"Sourcepub fn unsupported_provider(provider: impl Into<String>) -> Self
pub fn unsupported_provider(provider: impl Into<String>) -> Self
Create an unsupported provider error (logs at ERROR level).
pub fn configuration_error(message: impl Into<String>) -> Self
pub fn request_failed(message: impl Into<String>, source: Option<Box<dyn Error + Send + Sync>>) -> Self
pub fn response_parsing_error(message: impl Into<String>) -> Self
pub fn rate_limit_exceeded(retry_after_seconds: u64) -> Self
pub fn timeout(timeout_seconds: u64) -> Self
pub fn authentication_failed(message: impl Into<String>) -> Self
pub fn token_limit_exceeded(current: usize, max: usize) -> Self
pub fn tool_execution_failed(tool_name: impl Into<String>, message: impl Into<String>) -> Self
pub fn schema_validation_failed(message: impl Into<String>) -> Self
Trait Implementations

impl Error for LlmError

fn source(&self) -> Option<&(dyn Error + 'static)>
fn description(&self) -> &str

Auto Trait Implementations
impl Freeze for LlmError
impl !RefUnwindSafe for LlmError
impl Send for LlmError
impl Sync for LlmError
impl Unpin for LlmError
impl !UnwindSafe for LlmError
Blanket Implementations

impl<T> BorrowMut<T> for T where T: ?Sized
    fn borrow_mut(&mut self) -> &mut T

impl<T> Instrument for T
    fn instrument(self, span: Span) -> Instrumented<Self>
    fn in_current_span(self) -> Instrumented<Self>

impl<T> PolicyExt for T where T: ?Sized

impl<T> ToStringFallible for T where T: Display
    fn try_to_string(&self) -> Result<String, TryReserveError>
        ToString::to_string, but without panic on OOM.