§Simple LLM Client
A Rust crate for interacting with Large Language Model APIs to streamline content creation, research, and information synthesis.
§Features
- Perplexity AI Integration: Seamlessly connect with the Perplexity AI API for advanced research capabilities
- Markdown Output: Automatically format responses as Markdown with proper citation formatting
- Streaming Support: Option to stream responses in real time or receive complete responses (a hypothetical streaming sketch follows this list)
- Citation Handling: Extract and format citations from AI responses
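The streaming entry point is not shown in these docs, so the sketch below is only an illustration of how streaming consumption might look. Both the `chat_completion_stream` name and its signature (a `futures::Stream` of text chunks) are assumptions, not the crate's documented API:

```rust
use futures::StreamExt;
use simple_llm_client::perplexity::models::ChatMessage;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let messages = vec![ChatMessage {
        role: "user".to_string(),
        content: "How many stars are there in our galaxy?".to_string(),
    }];

    // Assumption: `chat_completion_stream` yields incremental text chunks;
    // the real crate may expose streaming differently.
    let mut stream =
        simple_llm_client::perplexity::chat_completion_stream("sonar-pro", messages).await?;

    while let Some(chunk) = stream.next().await {
        // Print each delta as it arrives instead of waiting for the full response.
        print!("{}", chunk?);
    }

    Ok(())
}
```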
§Usage Example
```rust
use simple_llm_client::perplexity::{chat_completion_markdown, models::ChatMessage};
use std::path::Path;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let messages = vec![
        ChatMessage {
            role: "system".to_string(),
            content: "Be precise and concise.".to_string(),
        },
        ChatMessage {
            role: "user".to_string(),
            content: "How many stars are there in our galaxy?".to_string(),
        },
    ];

    chat_completion_markdown(
        "sonar-pro",
        messages,
        Some(Path::new("./output")),
        "research_result.md",
    ).await?;

    Ok(())
}
```
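Here `chat_completion_markdown` sends the two messages to the `sonar-pro` model and writes the response, formatted as Markdown with its citations, to `./output/research_result.md`.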
§Project Structure
- perplexity: Module for interacting with the Perplexity AI API
- providers: Module for abstracting different LLM providers (with more to be added in future releases); a hypothetical sketch of this abstraction follows
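The `providers` module is described only as an abstraction layer. As a minimal sketch, assuming a trait-based design (the `LlmProvider` name and the `chat` signature below are hypothetical, not the crate's actual API):

```rust
use simple_llm_client::perplexity::models::ChatMessage;

// Hypothetical sketch only: the real abstraction in `providers` is not
// documented here and may differ. Native async fn in traits needs Rust 1.75+.
pub trait LlmProvider {
    /// Send a chat request and return the complete response text.
    async fn chat(
        &self,
        model: &str,
        messages: Vec<ChatMessage>,
    ) -> Result<String, Box<dyn std::error::Error>>;
}

// A Perplexity-backed type would implement the trait by delegating to the
// `perplexity` module; future providers (OpenAI, Anthropic, ...) would
// implement the same trait.
struct Perplexity;

impl LlmProvider for Perplexity {
    async fn chat(
        &self,
        model: &str,
        messages: Vec<ChatMessage>,
    ) -> Result<String, Box<dyn std::error::Error>> {
        // Placeholder body: a real implementation would call the Perplexity API.
        let _ = (model, messages);
        Ok(String::new())
    }
}
```

A trait like this lets calling code stay provider-agnostic, so swapping Perplexity for another backend becomes a one-line change.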
§Future Development
Future versions will add support for additional LLM providers such as OpenAI, Anthropic, and Google Gemini, with others to follow based on community needs.
Modules§
- openai
- perplexity
- providers: This module will contain implementations for various AI providers.