
Crate cortexai_llm_client


§LLM Client - Shared Logic

Runtime-agnostic LLM client logic for building requests and parsing responses. This crate has no runtime dependencies (no async runtime, no HTTP client); you bring your own transport.

§Supported Providers

  • OpenAI (GPT-4, GPT-3.5, etc.)
  • Anthropic (Claude 3, etc.)
  • OpenRouter (100+ models)
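The same builder flow works across providers by swapping the `Provider` variant. A minimal sketch for Anthropic, assuming the builder accepts the same chain of calls as the OpenAI example in §Usage (the model name and API-key format here are illustrative, not verified):

```rust
use cortexai_llm_client::{Provider, Message, RequestBuilder};

// Sketch only: model string and key prefix are illustrative.
let messages = vec![
    Message::system("You are a helpful assistant."),
    Message::user("Hello!"),
];

let request = RequestBuilder::new(Provider::Anthropic)
    .model("claude-3-haiku-20240307")
    .messages(&messages)
    .api_key("sk-ant-...")
    .max_tokens(1024)
    .build()
    .unwrap();
```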

§Usage

use cortexai_llm_client::{
    Provider, Message, RequestBuilder, ResponseParser,
};

// Build a request
let messages = vec![
    Message::system("You are a helpful assistant."),
    Message::user("Hello!"),
];

let request = RequestBuilder::new(Provider::OpenAI)
    .model("gpt-4o-mini")
    .messages(&messages)
    .api_key("sk-...")
    .temperature(0.7)
    .max_tokens(1024)
    .stream(false)
    .build()
    .unwrap();

// Use your runtime's HTTP client to send request.url, request.headers, request.body
// Then parse the response:

let response_json = r#"{"choices":[{"message":{"content":"Hello!"}}]}"#;
let response = ResponseParser::parse(Provider::OpenAI, response_json).unwrap();
println!("{}", response.content);
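The built `HttpRequest` carries everything needed to dispatch the call. A sketch of sending it with a blocking HTTP client (`ureq` here is purely illustrative; any runtime works, which is the point of the crate). The exact shapes of the `url`, `headers`, and `body` fields are assumptions beyond their names:

```rust
// Assumption: headers iterates as (name, value) string pairs.
fn send(request: &cortexai_llm_client::HttpRequest) -> std::io::Result<String> {
    let mut call = ureq::post(&request.url);
    for (name, value) in &request.headers {
        call = call.set(name, value);
    }
    let resp = call
        .send_string(&request.body)
        .map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))?;
    resp.into_string() // feed this string to ResponseParser::parse
}
```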

Structs§

HttpRequest
An HTTP request ready to be sent.
LlmResponse
A parsed LLM response.
Message
A message in a conversation.
RequestBuilder
Builder for constructing LLM API requests.
ResponseParser
Response parser for the supported providers.
StreamChunk
A streaming chunk from an SSE response.
ToolCall
A tool call requested by the model.
ToolCallChunk
Partial tool-call information from streaming.
Usage
Token usage information.
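For streaming responses, the items above suggest SSE data lines are parsed into `StreamChunk` values. A hypothetical consumption loop; the `parse_stream_chunk` entry point and the `content` field on `StreamChunk` are assumed names, not confirmed by this page:

```rust
// Hypothetical sketch: parse_stream_chunk and chunk.content are
// assumptions; sse_lines comes from your runtime's HTTP client.
for line in sse_lines {
    let Some(data) = line.strip_prefix("data: ") else { continue };
    if data == "[DONE]" {
        break;
    }
    let chunk = ResponseParser::parse_stream_chunk(Provider::OpenAI, data).unwrap();
    if let Some(delta) = chunk.content {
        print!("{delta}");
    }
}
```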

Enums§

LlmClientError
Errors that can occur in the LLM client.
Provider
Supported LLM providers.
Role
Role in a conversation.

Type Aliases§

Result
Result type for LLM client operations.