OpenAI GPT SDK for Miyabi LLM
This crate provides OpenAI GPT API integration for the Miyabi LLM framework.
It implements the LlmClient and LlmStreamingClient traits from miyabi-llm-core.
§Features
- GPT-4o, GPT-4o-mini, GPT-4 Turbo support
- Tool/function calling
- Streaming responses via SSE
- Environment variable configuration
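The SSE streaming support listed above could look roughly like the sketch below. This is an illustration, not a confirmed API: the method name `chat_stream`, the `StreamEvent::Delta` variant, and the use of `futures::StreamExt` are all assumptions layered on the `LlmStreamingClient`, `StreamEvent`, and `StreamResponse` items this crate exposes.

```rust
use miyabi_llm_core::{LlmStreamingClient, Message, StreamEvent};
use miyabi_llm_openai::OpenAIClient;
use futures::StreamExt; // assumed: the stream is a `futures::Stream`

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Reads the API key from the environment, as in the basic example.
    let client = OpenAIClient::from_env()?;
    let messages = vec![Message::user("Tell me a short story.")];

    // `chat_stream` is a guessed method name on LlmStreamingClient; it is
    // assumed to return a StreamResponse yielding StreamEvent items.
    let mut stream = client.chat_stream(messages).await?;
    while let Some(event) = stream.next().await {
        match event? {
            // Variant name is illustrative only; check StreamEvent's docs.
            StreamEvent::Delta(text) => print!("{text}"),
            _ => {}
        }
    }
    Ok(())
}
```

Check the `LlmStreamingClient` trait documentation for the actual method signature before relying on this shape.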
§Example
use miyabi_llm_openai::OpenAIClient;
use miyabi_llm_core::{LlmClient, Message};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = OpenAIClient::from_env()?;
let messages = vec![Message::user("Hello!")];
let response = client.chat(messages).await?;
println!("Response: {}", response);
Ok(())
}

Structs§
- Message - Message in a conversation
- OpenAIChoice - Choice in an OpenAI response
- OpenAIClient - OpenAI GPT client
- OpenAIFunction - OpenAI function definition
- OpenAIFunctionCall - Function call in an OpenAI tool call
- OpenAIMessage - OpenAI message format
- OpenAIResponse - OpenAI API response
- OpenAIResponseMessage - Response message in an OpenAI choice
- OpenAITool - OpenAI tool definition
- OpenAIToolCall - Tool call in an OpenAI response
- OpenAIUsage - Token usage information
- ToolCall - Tool call from the LLM
- ToolDefinition - Tool definition for LLM function calling
Enums§
- LlmError - Core LLM error type
- Role - Message role
- StreamEvent - Stream event type for more detailed streaming information
- ToolCallResponse - Response from the LLM with tool-calling support
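Given the `ToolDefinition` struct and the `ToolCallResponse` enum listed on this page, a tool-calling round trip might be sketched as follows. Everything beyond the item names shown above is assumed: the `ToolDefinition::new` constructor, the `chat_with_tools` method, and the `tool_calls` field are hypothetical placeholders to show the intended flow.

```rust
use miyabi_llm_core::{LlmClient, Message, ToolDefinition};
use miyabi_llm_openai::OpenAIClient;
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = OpenAIClient::from_env()?;

    // `ToolDefinition::new` and the JSON-schema parameter argument are
    // assumptions; consult the struct's docs for the real constructor.
    let weather_tool = ToolDefinition::new(
        "get_weather",
        "Look up the current weather for a city",
        json!({
            "type": "object",
            "properties": { "city": { "type": "string" } },
            "required": ["city"]
        }),
    );

    // `chat_with_tools` is a guessed method name; the page only shows that
    // ToolCallResponse carries any tool calls the model requests.
    let response = client
        .chat_with_tools(vec![Message::user("Weather in Tokyo?")], vec![weather_tool])
        .await?;

    // `tool_calls` is likewise an assumed accessor on ToolCallResponse.
    for call in response.tool_calls {
        println!("model requested: {call:?}");
    }
    Ok(())
}
```

The division of labor matches the OpenAI tool-calling model: the client declares tools up front, and the model replies with structured calls rather than free text when it wants one invoked.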
Traits§
- LlmClient - LLM client trait: a unified interface for all providers
- LlmStreamingClient - LLM streaming client trait
Type Aliases§
- Result - Result type for LLM operations
- StreamResponse - Type alias for streaming responses