# Simple LLM Client
A Rust crate for interacting with Large Language Model APIs, streamlining content creation, research, and information synthesis for retrieval-augmented generation (RAG) applications.
## Features

- **Perplexity AI Integration**: Seamlessly connect to the Perplexity AI API for advanced research capabilities
- **Markdown Output**: Automatically format responses as Markdown with proper citation formatting
- **Streaming Support**: Stream responses in real time or receive them complete
- **Citation Handling**: Extract and format citations from AI responses
- **Multiple Provider Support**: Perplexity and OpenAI today, with additional providers (Anthropic, Google, etc.) planned
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
# Published crate name assumed to match the directory name below.
llm_client = "^0.2"
```

For local development, the crate can also be referenced by its directory name as a path dependency:

```toml
[dependencies]
llm_client = { path = "path/to/llm_client" }
```
## Usage

### Basic Example
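A minimal sketch of a single query, assuming the crate exports a `PerplexityClient` and an `Error` type at its root (the constructor and method names are placeholders, not confirmed API):

```rust
use llm_client::{Error, PerplexityClient};

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Placeholder constructor: reads PERPLEXITY_API_KEY from the environment.
    let client = PerplexityClient::from_env()?;

    // Placeholder method: send one query and await the complete response.
    let answer = client.ask("What is retrieval-augmented generation?").await?;
    println!("{answer}");

    Ok(())
}
```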
### Markdown Output Example
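A sketch of requesting Markdown-formatted output with citations and writing it to disk; `ask_markdown` is a placeholder for the crate's actual Markdown API:

```rust
use llm_client::{Error, PerplexityClient};

#[tokio::main]
async fn main() -> Result<(), Error> {
    let client = PerplexityClient::from_env()?;

    // Placeholder method: return the response rendered as Markdown, with
    // citations extracted and appended as a formatted reference list.
    let markdown = client.ask_markdown("Summarize recent RAG techniques").await?;

    // Persist the formatted output, e.g. for ingestion by a RAG pipeline.
    std::fs::write("output/research.md", &markdown).expect("failed to write file");

    Ok(())
}
```

Streaming (listed under Features) might look like the following, assuming a `Stream`-based API consumed via the `futures_util` crate; `ask_stream` and its chunk type are likewise placeholders:

```rust
use futures_util::StreamExt;
use llm_client::{Error, PerplexityClient};

#[tokio::main]
async fn main() -> Result<(), Error> {
    let client = PerplexityClient::from_env()?;

    // Placeholder streaming method: yields text chunks as they arrive.
    let mut stream = client.ask_stream("Explain vector databases").await?;
    while let Some(chunk) = stream.next().await {
        // Print each chunk immediately rather than waiting for the full response.
        print!("{}", chunk?);
    }

    Ok(())
}
```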
## Configuration

The crate requires a Perplexity API key (and an OpenAI API key, if you use the OpenAI provider) to be set in your environment or in a `.env` file:

```
PERPLEXITY_API_KEY=your_api_key_here
OPENAI_API_KEY=your_api_key_here
```

For the examples to work correctly, create a `.env` file in the project root with your API keys.
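If you load the `.env` file yourself rather than relying on the crate (whether the crate loads it internally is an assumption here), a minimal sketch using the `dotenvy` crate:

```rust
fn main() {
    // Load variables from a .env file in the working directory, if one exists.
    dotenvy::dotenv().ok();

    let key = std::env::var("PERPLEXITY_API_KEY")
        .expect("PERPLEXITY_API_KEY must be set");
    println!("API key loaded ({} characters)", key.len());
}
```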
## Directory Structure

When using the file output functionality, make sure the output directories exist. The example code creates its output directory automatically.
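If you write output to a location of your own, a one-line sketch of creating the directory up front (`output/` is a placeholder path):

```rust
fn main() -> std::io::Result<()> {
    // Creates the directory and any missing parents; Ok(()) if it already exists.
    std::fs::create_dir_all("output")?;
    Ok(())
}
```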
## Roadmap

- **Additional Providers**: Support for other AI research and generation APIs will be added in future releases:
  - Anthropic (Claude models)
  - Google (Gemini models)
  - Others based on community demand
- **Advanced Formatting Options**: Customizable output formatting and templates
- **Citation Style Options**: Support for different citation styles (APA, MLA, etc.)
- **Context Management**: Tools for managing conversation context and history
- **Multi-provider Research**: Aggregate and compare responses from multiple providers
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Examples

The crate includes several examples to help you get started.
### Running Examples
To run any example, use the `cargo run --example` command (the example names below are placeholders; check the `examples/` directory for the actual file names):
```bash
# Test the Perplexity implementation (placeholder example name)
cargo run --example perplexity

# Test the OpenAI implementation (placeholder example name)
cargo run --example openai
```
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.