# dioxus-ai
AI hooks for Dioxus applications - chat, completions, and streaming.
## Features

- `use_chat` - Reactive chat with message history and streaming
- `use_completion` - Single completion requests
- Built on `llm-client` for provider support (OpenAI, Anthropic, OpenRouter)
- Real-time streaming via the `streaming_content()` method
- Stop generation support
## Installation

```toml
[dependencies]
dioxus-ai = { git = "https://github.com/aiconnai/cortex" }
```
## Usage

### Chat Interface
```rust
use dioxus::prelude::*;
use dioxus_ai::prelude::*; // crate import path assumed
```
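A minimal chat component sketch. The `dioxus_ai::prelude` import path, the message fields (`role`, `content`), and a `Default` impl for `ChatOptions` are assumptions; the hook and method names follow the API reference below.

```rust
use dioxus::prelude::*;
use dioxus_ai::prelude::*; // import path assumed

#[component]
fn ChatView() -> Element {
    let mut chat = use_chat(ChatOptions {
        provider: "openai".into(),
        api_key: "YOUR_API_KEY".into(),
        model: "gpt-4o-mini".into(),
        ..Default::default() // assumes ChatOptions implements Default
    });

    rsx! {
        // Render the history; `role`/`content` field names are assumptions.
        for msg in chat.messages() {
            p { "{msg.role}: {msg.content}" }
        }
        // Show partial output while a response streams in.
        p { "{chat.streaming_content()}" }
        button { onclick: move |_| chat.send("Hello!"), "Send" }
    }
}
```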
### Single Completion
```rust
use dioxus::prelude::*;
use dioxus_ai::prelude::*; // crate import path assumed
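A one-shot completion sketch, assuming the same prelude import path and a `Default` impl for `CompletionOptions`; the method names (`complete`, `is_loading`, `completion`) come from the API reference below.

```rust
use dioxus::prelude::*;
use dioxus_ai::prelude::*; // import path assumed

#[component]
fn Summarize() -> Element {
    let mut completion = use_completion(CompletionOptions {
        provider: "anthropic".into(),
        api_key: "YOUR_API_KEY".into(),
        model: "claude-3-haiku".into(),
        ..Default::default() // assumes CompletionOptions implements Default
    });

    rsx! {
        button {
            onclick: move |_| completion.complete("Summarize Rust in one sentence."),
            "Complete"
        }
        if completion.is_loading() {
            p { "Loading..." }
        }
        p { "{completion.completion()}" }
    }
}
```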
## API Reference

### `use_chat(options: ChatOptions) -> UseChatState`
Creates a reactive chat interface.
`ChatOptions`:

- `provider` - `"openai"`, `"anthropic"`, or `"openrouter"`
- `api_key` - Your API key
- `model` - Model identifier (e.g., `"gpt-4o-mini"`)
- `system_prompt` - Optional system prompt
- `temperature` - 0.0 to 2.0 (default: 0.7)
- `max_tokens` - Maximum tokens (default: 4096)
- `stream` - Enable streaming (default: true)
- `initial_messages` - Pre-populate chat history
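A sketch spelling out every option. The field names come from the list above, but the field types (`f32` temperature, `u32` token count, `Vec<Message>` history) are assumptions:

```rust
// Field types are assumed; check the crate's ChatOptions definition.
let options = ChatOptions {
    provider: "openai".into(),
    api_key: std::env::var("OPENAI_API_KEY").unwrap_or_default(),
    model: "gpt-4o-mini".into(),
    system_prompt: Some("You are a helpful assistant.".into()),
    temperature: 0.7,
    max_tokens: 4096,
    stream: true,
    initial_messages: vec![],
};
```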
`UseChatState`:

- `messages()` - Get chat history
- `is_loading()` - Check if a request is in progress
- `error()` - Get the current error
- `streaming_content()` - Get real-time streaming text
- `send(&str)` - Send a message
- `clear()` - Clear history
- `stop()` - Stop generation
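As a sketch of how the loading, error, and stop pieces fit together in one view (passing `UseChatState` as a prop is an assumption; only the method names are from the list above):

```rust
use dioxus::prelude::*;
use dioxus_ai::prelude::*; // import path assumed

#[component]
fn ChatControls(chat: UseChatState) -> Element {
    rsx! {
        if chat.is_loading() {
            // Live partial output plus a cancel button.
            p { "{chat.streaming_content()}" }
            button { onclick: move |_| chat.stop(), "Stop" }
        }
        if let Some(err) = chat.error() {
            p { "Error: {err}" }
        }
        button { onclick: move |_| chat.clear(), "Clear history" }
    }
}
```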
### `use_completion(options: CompletionOptions) -> UseCompletionState`
Creates a single completion interface.
`UseCompletionState`:

- `completion()` - Get the result
- `is_loading()` - Check if a request is in progress
- `error()` - Get the current error
- `complete(&str)` - Request a completion
## Providers
| Provider | Models |
|---|---|
| OpenAI | gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo |
| Anthropic | claude-3-opus, claude-3-sonnet, claude-3-haiku |
| OpenRouter | 100+ models from various providers |
## Platform Support
- Web (default) - WASM with `web-sys` fetch
- Desktop - Coming soon (requires different HTTP client)
## License
Apache-2.0