§LLM Client - Shared Logic
Runtime-agnostic LLM client logic for building requests and parsing responses. This crate has NO runtime dependencies (no async, no HTTP client).
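Because the crate only builds request descriptions, the host runtime owns all I/O. A minimal sketch of that split, assuming an `HttpRequest` with `url`/`headers`/`body` fields — the field names and the `send` helper here are illustrative stand-ins, not this crate's API:

```rust
/// Illustrative stand-in mirroring the shape of the crate's `HttpRequest`;
/// the exact field names are assumptions.
#[derive(Debug)]
struct HttpRequest {
    url: String,
    headers: Vec<(String, String)>,
    body: String,
}

/// Stand-in for whatever HTTP client your runtime provides
/// (reqwest, ureq, a WASM fetch binding, ...). Here it only reports
/// what it would have sent.
fn send(req: &HttpRequest) -> String {
    format!(
        "POST {} ({} header(s), {} body bytes)",
        req.url,
        req.headers.len(),
        req.body.len()
    )
}

fn main() {
    let req = HttpRequest {
        url: "https://api.openai.com/v1/chat/completions".to_string(),
        headers: vec![("authorization".to_string(), "Bearer sk-...".to_string())],
        body: r#"{"model":"gpt-4o-mini"}"#.to_string(),
    };
    println!("{}", send(&req));
}
```

The point of the pattern is that the same request-building and response-parsing code can back a tokio service, a blocking CLI, or a WASM target without pulling any of those runtimes into this crate.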
§Supported Providers
- OpenAI (GPT-4, GPT-3.5, etc.)
- Anthropic (Claude 3, etc.)
- OpenRouter (100+ models)
§Usage
```rust
use cortexai_llm_client::{
    Provider, Message, RequestBuilder, ResponseParser,
};

// Build a request
let messages = vec![
    Message::system("You are a helpful assistant."),
    Message::user("Hello!"),
];
let request = RequestBuilder::new(Provider::OpenAI)
    .model("gpt-4o-mini")
    .messages(&messages)
    .api_key("sk-...")
    .temperature(0.7)
    .max_tokens(1024)
    .stream(false)
    .build()
    .unwrap();

// Use your runtime's HTTP client to send request.url, request.headers, request.body.
// Then parse the response:
let response_json = r#"{"choices":[{"message":{"content":"Hello!"}}]}"#;
let response = ResponseParser::parse(Provider::OpenAI, response_json).unwrap();
println!("{}", response.content);
```

§Structs
- HttpRequest - An HTTP request ready to be sent.
- LlmResponse - Parsed LLM response.
- Message - A message in a conversation.
- RequestBuilder - Builder for constructing LLM API requests.
- ResponseParser - Response parser for different providers.
- StreamChunk - Streaming chunk from SSE response.
- ToolCall - Tool call requested by the model.
- ToolCallChunk - Partial tool call information from streaming.
- Usage - Token usage information.
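`StreamChunk` implies the streaming path consumes Server-Sent Events. Independent of this crate's API, a raw SSE body can be split into the JSON payloads a chunk parser would consume; the `sse_data_lines` helper below is hypothetical, not part of the crate (OpenAI-style streams also end with a literal `[DONE]` sentinel, which is filtered out):

```rust
/// Hypothetical helper (not part of this crate): pull the JSON payloads
/// out of a raw SSE body. SSE frames carry their data on `data: ` lines,
/// and OpenAI-style streams terminate with a literal `[DONE]` sentinel.
fn sse_data_lines(raw: &str) -> Vec<&str> {
    raw.lines()
        .filter_map(|line| line.strip_prefix("data: "))
        .filter(|payload| *payload != "[DONE]")
        .collect()
}

fn main() {
    let raw = "data: {\"delta\":\"Hel\"}\n\ndata: {\"delta\":\"lo\"}\n\ndata: [DONE]\n";
    // Each payload would then be handed to the streaming parser.
    assert_eq!(
        sse_data_lines(raw),
        vec![r#"{"delta":"Hel"}"#, r#"{"delta":"lo"}"#]
    );
    println!("{} payloads", sse_data_lines(raw).len()); // 2 payloads
}
```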
§Enums
- LlmClientError - Errors that can occur in the LLM client.
- Provider - Supported LLM providers.
- Role - Role in a conversation.
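`Role` presumably maps onto the role strings chat-completion APIs expect on the wire. A sketch of that conventional mapping — the variant set mirrors the usage example above, but whether the crate's `Role` has exactly these variants, and the `as_str` helper itself, are assumptions:

```rust
/// Mirrors the conventional chat roles; whether the crate's `Role`
/// has exactly these variants (or e.g. a tool role) is an assumption.
#[derive(Debug, PartialEq)]
enum Role {
    System,
    User,
    Assistant,
}

/// Hypothetical helper showing the wire strings OpenAI-style APIs expect.
fn as_str(role: &Role) -> &'static str {
    match role {
        Role::System => "system",
        Role::User => "user",
        Role::Assistant => "assistant",
    }
}

fn main() {
    assert_eq!(as_str(&Role::User), "user");
    println!("{}", as_str(&Role::System)); // prints "system"
}
```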
§Type Aliases
- Result - Result type for LLM client operations.
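The alias presumably follows the standard crate-local `Result` pattern over `LlmClientError`. A sketch under that assumption, with a placeholder error enum whose variant is invented for illustration:

```rust
/// Placeholder standing in for the crate's `LlmClientError`;
/// this variant is invented for illustration.
#[derive(Debug)]
enum LlmClientError {
    MissingField(&'static str),
}

/// The conventional crate-local alias the docs describe (exact definition assumed).
type Result<T> = std::result::Result<T, LlmClientError>;

/// Hypothetical validation step showing the alias in a signature.
fn require_api_key(key: Option<&str>) -> Result<String> {
    key.map(str::to_string)
        .ok_or(LlmClientError::MissingField("api_key"))
}

fn main() {
    assert_eq!(require_api_key(Some("sk-...")).unwrap(), "sk-...");
    assert!(require_api_key(None).is_err());
    println!("ok");
}
```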