Astrid LLM - LLM provider abstraction with streaming support.
This crate provides:
- LLM provider trait for abstraction
- Claude (Anthropic) implementation
- OpenAI-compatible implementation (LM Studio, OpenAI, vLLM, etc.)
- Streaming response support
- Tool use support
§Example with Claude
use astrid_llm::{ClaudeProvider, LlmProvider, Message, ProviderConfig};
// Create provider
let config = ProviderConfig::new("your-api-key", "claude-sonnet-4-20250514");
let provider = ClaudeProvider::new(config);
// Simple completion
let response = provider.complete_simple("What is 2+2?").await?;
println!("Response: {}", response);
§Example with LM Studio
use astrid_llm::{OpenAiCompatProvider, LlmProvider, Message};
// Connect to LM Studio running locally
let provider = OpenAiCompatProvider::lm_studio();
// Or with a specific model
let provider = OpenAiCompatProvider::lm_studio_with_model("llama-3.1-8b");
let response = provider.complete_simple("Hello!").await?;
println!("Response: {}", response);
§Streaming
use astrid_llm::{ClaudeProvider, LlmProvider, Message, ProviderConfig, StreamEvent};
use futures::StreamExt;
let provider = ClaudeProvider::new(ProviderConfig::new("api-key", "claude-sonnet-4-20250514"));
let messages = vec![Message::user("Tell me a story")];
let mut stream = provider.stream(&messages, &[], "").await?;
while let Some(event) = stream.next().await {
match event? {
StreamEvent::TextDelta(text) => print!("{}", text),
StreamEvent::Done => println!("\n[Done]"),
_ => {}
}
}
Modules§
- prelude
- Prelude module - commonly used types for convenient import.
Structs§
- ClaudeProvider
- Claude LLM provider.
- LlmResponse
- LLM response (non-streaming).
- LlmToolDefinition
- Tool definition for the LLM.
- Message
- A message in the conversation.
- OpenAiCompatProvider
- OpenAI-compatible LLM provider.
- ProviderConfig
- Configuration for LLM providers.
- ToolCall
- A tool call from the assistant.
- ToolCallResult
- Result of a tool call.
- Usage
- Token usage information.
- ZaiProvider
- Z.AI (Zhipu AI) LLM provider.
Enums§
- ContentPart
- A part of multi-part content.
- LlmError
- Errors that can occur with LLM operations.
- MessageContent
- Message content.
- MessageRole
- Message role.
- StopReason
- Reason the model stopped generating.
- StreamEvent
- Streaming event from the LLM.
Traits§
- LlmProvider
- LLM provider trait.