Rig is a Rust library for building LLM-powered applications that focuses on ergonomics and modularity.
§High-level features
- Full support for LLM completion and embedding workflows
- Simple but powerful common abstractions over LLM providers (e.g. OpenAI, Cohere) and vector stores (e.g. MongoDB, in-memory)
- Integrate LLMs in your app with minimal boilerplate
§Simple example:
use rig::{completion::Prompt, providers::openai};

#[tokio::main]
async fn main() {
    // Create the OpenAI client and a completion model.
    // This requires the `OPENAI_API_KEY` environment variable to be set.
    let openai_client = openai::Client::from_env();
    let gpt4 = openai_client.model("gpt-4").build();

    // Prompt the model and print its response.
    let response = gpt4
        .prompt("Who are you?")
        .await
        .expect("Failed to prompt GPT-4");

    println!("GPT-4: {response}");
}
Note: using `#[tokio::main]` requires enabling tokio's `macros` and `rt-multi-thread` features, or simply the `full` feature to enable everything (`cargo add tokio --features macros,rt-multi-thread`).
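For reference, a dependency setup consistent with the note above might look like the following. This is a sketch: the crate is published on crates.io as `rig-core` (imported in code as `rig`), and the version numbers shown are placeholders, not pinned recommendations.

```toml
# Cargo.toml (sketch; version numbers are placeholders)
[dependencies]
rig-core = "0.1"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```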
§Integrations
Rig currently has the following integration sub-libraries:
- MongoDB vector store: rig-mongodb
Modules§
- This module contains the implementation of the Agent struct and its builder.
- This module contains the implementation of the completion functionality for the LLM (Large Language Model) chat interface. It provides traits, structs, and enums for generating completion requests, handling completion responses, and defining completion models.
- This module provides functionality for working with embeddings and embedding models. Embeddings are numerical representations of documents or other objects, typically used in natural language processing (NLP) tasks such as text classification, information retrieval, and document similarity.
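As a sketch of how the Agent module fits together with the completion interface: an agent pairs a completion model with a preamble (system prompt) and optional context. The builder calls below (`agent(...)`, `preamble(...)`) follow the pattern used in Rig's published examples, but the exact API may differ between versions, so treat this as illustrative rather than definitive.

```rust
use rig::{completion::Prompt, providers::openai};

#[tokio::main]
async fn main() {
    // Requires the `OPENAI_API_KEY` environment variable to be set.
    let openai_client = openai::Client::from_env();

    // Build an agent: a completion model plus a preamble (system prompt).
    let comedian_agent = openai_client
        .agent("gpt-4")
        .preamble("You are a comedian here to entertain the user.")
        .build();

    // Agents implement the same `Prompt` trait as plain models.
    let response = comedian_agent
        .prompt("Entertain me!")
        .await
        .expect("Failed to prompt the agent");

    println!("{response}");
}
```

Because agents implement `Prompt` just like bare models, code written against the trait works with either.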