# AI Question-Answering Crate

This Rust crate provides a unified way to call different Large Language Model (LLM) providers, including OpenAI, Anthropic, and Ollama, letting you ask questions and interact with these models seamlessly. The crate abstracts away the differences between provider APIs and exposes a single interface for querying any of them.
## Table of Contents

- Features
- Configuration
- Usage
  - Basic Example
  - Customizing System Prompts
  - Providing Chat History
- Environment Variables
- Error Handling
- Contributing
- License
## Features

- Support for multiple LLM providers: OpenAI, Anthropic, and Ollama.
- Unified interface to interact with the different APIs.
- Easy addition of system-level prompts to guide responses.
- Support for maintaining chat history (multi-turn conversations).
- Error handling for API failures, model errors, and unexpected behavior.
## Configuration

Before you can use the crate, you need to configure it through the `AiConfig` structure. This configuration tells the system:

- Which provider to use (`Framework::OpenAI`, `Framework::Anthropic`, or `Framework::Ollama`).
- The specific model you want to query, e.g., `"chatgpt-4o-latest"` for OpenAI or `"claude-2"` for Anthropic.
- (Optional) The maximum number of tokens allowed in the response.
### Example AiConfig
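A minimal configuration might look like the sketch below. The module path and field names (`framework`, `model`, `max_token`) are assumptions based on this README; check the crate's documentation for the exact definitions.

```rust
use ask_ai::config::{AiConfig, Framework}; // assumed module path

// Sketch of a configuration; field names are assumptions.
let ai_config = AiConfig {
    framework: Framework::OpenAI,           // or Framework::Anthropic / Framework::Ollama
    model: "chatgpt-4o-latest".to_string(), // any model the chosen provider supports
    max_token: Some(1000),                  // optional cap on response length
};
```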
## Usage
### 1. Basic Example (Ask a Single Question)

You can ask a one-off question as in the sketch below.
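This example assumes an async `ask_question` entry point and a `Question` struct with `system_prompt`, `messages`, and `new_message` fields; those names, the module paths, and the use of Tokio as the async runtime are assumptions, not confirmed API.

```rust
use ask_ai::config::{AiConfig, Framework}; // assumed module paths
use ask_ai::ask::{ask_question, Question};

#[tokio::main] // assumes the Tokio runtime
async fn main() {
    let ai_config = AiConfig {
        framework: Framework::OpenAI,
        model: "chatgpt-4o-latest".to_string(),
        max_token: None,
    };

    // A one-off question: no system prompt, no prior history.
    let question = Question {
        system_prompt: None,
        messages: None,
        new_message: "What is the capital of France?".to_string(),
    };

    match ask_question(&ai_config, question).await {
        Ok(answer) => println!("Answer: {answer}"),
        Err(err) => eprintln!("Error: {err:?}"), // assumes the error type derives Debug
    }
}
```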
### 2. Customizing System Prompts

A system-level prompt modifies the assistant's behavior. For example, you might instruct the assistant to answer concisely or to role-play as an expert, as in the sketch below.
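Again assuming the `Question` fields sketched above:

```rust
// Sketch: a Question carrying a system-level prompt (field names are assumptions).
let question = Question {
    system_prompt: Some("You are a terse expert. Answer in at most two sentences.".to_string()),
    messages: None,
    new_message: "Why does Rust enforce ownership rules?".to_string(),
};
```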
### 3. Multi-Turn Conversation (With Chat History)

To maintain a conversation, include the previous messages and their respective responses, as in the sketch below.
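The history type here is hypothetical: `Message` and `Role` stand in for whatever the crate actually uses to represent prior turns, and the import path is an assumption.

```rust
use ask_ai::ask::{Message, Question, Role}; // assumed module path and types

// Hypothetical history entries; the crate's real message type may differ.
let previous_messages = vec![
    Message { role: Role::User, content: "Who wrote The Hobbit?".to_string() },
    Message { role: Role::Assistant, content: "J.R.R. Tolkien.".to_string() },
];

// Attach the history so the model can resolve "he" in the follow-up question.
let question = Question {
    system_prompt: None,
    messages: Some(previous_messages),
    new_message: "What else did he write?".to_string(),
};
```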
## Environment Variables

This crate requires API keys to interface with the LLM providers. Store these keys as environment variables to keep them secure. The required variables are listed below:
| Provider | Environment Variable |
|---|---|
| OpenAI | OPENAI_API_KEY |
| Anthropic | ANTHROPIC_API_KEY |
| Ollama | No key required currently |
For security, avoid hardcoding API keys into your application code. Use a `.env` file or a secret-storage mechanism.
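If a request fails with an authentication error, a quick sanity check like this (standard library only) confirms the variable is visible to your process:

```rust
// Warn early if the key for the chosen provider is missing.
if std::env::var("OPENAI_API_KEY").is_err() {
    eprintln!("OPENAI_API_KEY is not set; OpenAI requests will fail.");
}
```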
## Error Handling

All query functions return a `Result<String>`. Errors are encapsulated in the `AppError` enum, which defines three main error types:

- `ModelError`: occurs when querying a specific model fails.
- `ApiError`: indicates an issue with the API key or the API call.
- `UnexpectedError`: covers any other unforeseen issue.
### Example: Handling Errors Gracefully
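A sketch of matching on each variant; the variants' payload types (here `String`) and the `AppError` import path are assumptions:

```rust
use ask_ai::error::AppError; // assumed module path

match ask_question(&ai_config, question).await {
    Ok(answer) => println!("Answer: {answer}"),
    Err(AppError::ModelError(msg)) => eprintln!("Model failure: {msg}"),
    Err(AppError::ApiError(msg)) => eprintln!("API problem (check your key): {msg}"),
    Err(AppError::UnexpectedError(msg)) => eprintln!("Unexpected error: {msg}"),
}
```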
## Contributing

Contributions, bug reports, and feature requests are welcome! Feel free to open an issue or submit a pull request on GitHub.
### How to Contribute

- Fork the repository.
- Clone it to your local system:
  ```bash
  git clone https://github.com/EduardoNeville/ask_ai
  ```
- Create a feature branch:
  ```bash
  git checkout -b feature-name
  ```
- Push your changes and open a pull request.
## License
This project is licensed under the MIT License.