Provider layer for the Brainwires Agent Framework.
Contains both low-level API client structs (HTTP transport, auth, rate
limiting, serde) and high-level chat provider implementations that wrap
them with the brainwires_core::Provider trait.
Re-exports

pub use http_client::RateLimitedClient;
pub use rate_limiter::RateLimiter;
pub use brainwires_http::DEFAULT_BACKEND_URL;
pub use brainwires_http::DEV_BACKEND_URL;
pub use brainwires_http::get_backend_from_api_key;
pub use anthropic::AnthropicClient;
pub use brainwires_http::BrainwiresHttpProvider;
pub use gemini::GoogleClient;
pub use ollama::OllamaProvider;
pub use openai_chat::OpenAiClient;
pub use anthropic::chat::AnthropicChatProvider;
pub use gemini::chat::GoogleChatProvider;
pub use ollama::chat::OllamaChatProvider;
pub use openai_chat::chat::OpenAiChatProvider;
pub use openai_responses::OpenAiResponsesProvider;
pub use azure_speech::AzureSpeechClient;
pub use cartesia::CartesiaClient;
pub use deepgram::DeepgramClient;
pub use elevenlabs::ElevenLabsClient;
pub use fish::FishClient;
pub use google_tts::GoogleTtsClient;
pub use murf::MurfClient;
pub use model_listing::AvailableModel;
pub use model_listing::ModelCapability;
pub use model_listing::ModelLister;
pub use model_listing::create_model_lister;
pub use chat_factory::ChatProviderFactory;
pub use local_llm::*;
Modules

- anthropic - Anthropic Messages protocol (also used by Bedrock, Vertex AI).
- azure_speech - Azure Cognitive Services Speech API client.
- brainwires_http - Brainwires HTTP relay protocol.
- cartesia - Cartesia TTS API client.
- chat_factory - Chat provider factory: registry-driven protocol dispatch.
- deepgram - Deepgram TTS/STT API client.
- elevenlabs - ElevenLabs TTS/STT API client.
- fish - Fish Audio TTS/ASR API client.
- gemini - Google Gemini generateContent protocol.
- google_tts - Google Cloud Text-to-Speech API client.
- http_client - Rate-limited HTTP client wrapper.
- local_llm - Local LLM inference (always compiled, llama.cpp behind feature flag).
- model_listing - Model listing: query available models from provider APIs.
- murf - Murf AI TTS API client.
- ollama - Ollama native chat protocol.
- openai_chat - OpenAI Chat Completions protocol (also used by Groq, Together, Fireworks, Anyscale).
- openai_responses - OpenAI Responses API protocol (/v1/responses).
- rate_limiter - Token-bucket rate limiter for API request throttling.
- registry - Provider registry: protocol, auth, and endpoint metadata for all known providers.
Structs

- ChatOptions - Chat completion options.
- ProviderConfig - Provider configuration.

Enums

- ProviderType - AI provider types.

Traits

- Provider - Base provider trait for AI providers.