cortexai-llm-client 0.1.0

Shared LLM client logic for multiple Cortex runtimes: browser, edge, and server

LLM Client - Shared Logic

Runtime-agnostic LLM client logic for building requests and parsing responses. This crate has no runtime dependencies: no async executor and no HTTP client, so the same code can back browser, edge, and server runtimes.

Supported Providers

  • OpenAI (GPT-4, GPT-3.5, etc.)
  • Anthropic (Claude 3, etc.)
  • OpenRouter (100+ models)
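Each of these providers documents a different authentication scheme, which is the kind of detail a runtime-agnostic builder has to encode. The sketch below is illustrative only, it is not this crate's API: the `Provider` enum and `auth_headers` function are stand-ins showing the publicly documented headers (OpenAI and OpenRouter use a Bearer token; Anthropic uses `x-api-key` plus a required `anthropic-version` header).

```rust
// Illustrative sketch: NOT this crate's API. Shows the auth headers each
// provider's HTTP API publicly documents.
#[derive(Clone, Copy)]
enum Provider {
    OpenAI,
    Anthropic,
    OpenRouter,
}

fn auth_headers(provider: Provider, api_key: &str) -> Vec<(String, String)> {
    match provider {
        // OpenAI and OpenRouter both authenticate with a Bearer token.
        Provider::OpenAI | Provider::OpenRouter => {
            vec![("Authorization".into(), format!("Bearer {}", api_key))]
        }
        // Anthropic uses x-api-key plus a pinned API version header.
        Provider::Anthropic => vec![
            ("x-api-key".into(), api_key.into()),
            ("anthropic-version".into(), "2023-06-01".into()),
        ],
    }
}

fn main() {
    let h = auth_headers(Provider::Anthropic, "sk-ant-...");
    assert_eq!(h[0].0, "x-api-key");
    assert_eq!(h[1].0, "anthropic-version");

    let h = auth_headers(Provider::OpenAI, "sk-...");
    assert_eq!(h[0].1, "Bearer sk-...");
    println!("ok");
}
```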

Usage

use cortexai_llm_client::{
    Provider, Message, RequestBuilder, ResponseParser,
};

// Build a request
let messages = vec![
    Message::system("You are a helpful assistant."),
    Message::user("Hello!"),
];

let request = RequestBuilder::new(Provider::OpenAI)
    .model("gpt-4o-mini")
    .messages(&messages)
    .api_key("sk-...")
    .temperature(0.7)
    .max_tokens(1024)
    .stream(false)
    .build()
    .unwrap();

// Use your runtime's HTTP client to send request.url, request.headers, request.body
// Then parse the response:

let response_json = r#"{"choices":[{"message":{"content":"Hello!"}}]}"#;
let response = ResponseParser::parse(Provider::OpenAI, response_json).unwrap();
println!("{}", response.content);
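Because the built request exposes plain url, headers, and body fields (per the comment above), any transport works: fetch in the browser, reqwest on the server, or even a raw socket. As a runtime-agnostic illustration, the sketch below uses a hypothetical stand-in struct (its fields mirror the comment above but are not this crate's types) and renders it as a raw HTTP/1.1 request, which is ultimately what every runtime's client sends.

```rust
// Illustrative stand-in mirroring the url/headers/body fields mentioned
// above; NOT this crate's types. A real runtime would hand these fields
// to its own HTTP client instead of formatting them by hand.
struct BuiltRequest {
    url: String,
    headers: Vec<(String, String)>,
    body: String,
}

// Render the request as raw HTTP/1.1 text (sketch: naive URL split,
// no percent-encoding or TLS concerns).
fn to_http1(req: &BuiltRequest) -> String {
    let rest = req.url.splitn(2, "://").nth(1).unwrap_or(&req.url);
    let (host, path) = rest.split_once('/').unwrap_or((rest, ""));
    let mut out = format!("POST /{} HTTP/1.1\r\nHost: {}\r\n", path, host);
    for (name, value) in &req.headers {
        out.push_str(&format!("{}: {}\r\n", name, value));
    }
    out.push_str(&format!("Content-Length: {}\r\n\r\n{}", req.body.len(), req.body));
    out
}

fn main() {
    let req = BuiltRequest {
        url: "https://api.openai.com/v1/chat/completions".into(),
        headers: vec![
            ("Authorization".into(), "Bearer sk-...".into()),
            ("Content-Type".into(), "application/json".into()),
        ],
        body: r#"{"model":"gpt-4o-mini","messages":[]}"#.into(),
    };
    let raw = to_http1(&req);
    assert!(raw.starts_with("POST /v1/chat/completions HTTP/1.1"));
    assert!(raw.contains("Host: api.openai.com"));
    println!("{}", raw);
}
```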