rig/lib.rs

#![cfg_attr(docsrs, feature(doc_cfg))]
//! Rig is a Rust library for building LLM-powered applications that focuses on ergonomics and modularity.
//!
//! # Table of contents
//! - [High-level features](#high-level-features)
//! - [Simple Example](#simple-example)
//! - [Core Concepts](#core-concepts)
//! - [Integrations](#integrations)
//!
//! # High-level features
//! - Full support for LLM completion and embedding workflows
//! - Simple but powerful common abstractions over LLM providers (e.g. OpenAI, Cohere) and vector stores (e.g. MongoDB, in-memory)
//! - Integrate LLMs in your app with minimal boilerplate
//!
//! # Simple example
//! ```
//! use rig::{completion::Prompt, providers::openai};
//!
//! #[tokio::main]
//! async fn main() {
//!     // Create OpenAI client and agent.
//!     // This requires the `OPENAI_API_KEY` environment variable to be set.
//!     let openai_client = openai::Client::from_env();
//!
//!     let gpt4 = openai_client.agent("gpt-4").build();
//!
//!     // Prompt the model and print its response
//!     let response = gpt4
//!         .prompt("Who are you?")
//!         .await
//!         .expect("Failed to prompt GPT-4");
//!
//!     println!("GPT-4: {response}");
//! }
//! ```
//! Note: using `#[tokio::main]` requires you to enable tokio's `macros` and `rt-multi-thread` features,
//! or just `full` to enable all features (`cargo add tokio --features macros,rt-multi-thread`).
//!
//! # Core concepts
//! ## Completion and embedding models
//! Rig provides a consistent API for working with LLMs and embeddings. Specifically,
//! each provider (e.g. OpenAI, Cohere) has a `Client` struct that can be used to initialize completion
//! and embedding models. These models implement the [CompletionModel](crate::completion::CompletionModel)
//! and [EmbeddingModel](crate::embeddings::EmbeddingModel) traits respectively, which provide a common,
//! low-level interface for creating completion and embedding requests and executing them.
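//!
//! For example, the OpenAI client can hand out both kinds of models directly. The snippet below is
//! a minimal sketch (the model names are illustrative, and the `prelude` import simply brings
//! commonly used traits into scope):
//! ```no_run
//! use rig::{prelude::*, providers::openai};
//!
//! // Requires the `OPENAI_API_KEY` environment variable to be set.
//! let openai_client = openai::Client::from_env();
//!
//! // A low-level completion model (implements the `CompletionModel` trait).
//! let gpt4 = openai_client.completion_model("gpt-4");
//!
//! // A low-level embedding model (implements the `EmbeddingModel` trait).
//! let embedder = openai_client.embedding_model("text-embedding-ada-002");
//! ```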
//!
//! ## Agents
//! Rig also provides high-level abstractions over LLMs in the form of the [Agent](crate::agent::Agent) type.
//!
//! The [Agent](crate::agent::Agent) type can be used to create anything from simple agents that use vanilla models to full-blown
//! RAG systems that can be used to answer questions using a knowledge base.
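//!
//! For example, the following sketch (modelled on the example above; the preamble text is
//! illustrative) builds an agent with a custom preamble (system prompt) and prompts it:
//! ```no_run
//! use rig::{completion::Prompt, prelude::*, providers::openai};
//!
//! #[tokio::main]
//! async fn main() {
//!     let openai_client = openai::Client::from_env();
//!
//!     // An agent bundles a model with a preamble, context documents and tools.
//!     let comedian_agent = openai_client
//!         .agent("gpt-4")
//!         .preamble("You are a comedian here to entertain the user using humour and jokes.")
//!         .build();
//!
//!     let response = comedian_agent
//!         .prompt("Entertain me!")
//!         .await
//!         .expect("Failed to prompt the agent");
//!
//!     println!("{response}");
//! }
//! ```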
//!
//! ## Vector stores and indexes
//! Rig provides a common interface for working with vector stores and indexes. Specifically, the library
//! provides the [VectorStoreIndex](crate::vector_store::VectorStoreIndex)
//! trait, which can be implemented to define an index over the documents held in a vector store.
//! Such indexes can then be used as the knowledge base for a RAG-enabled [Agent](crate::agent::Agent), or
//! as a source of context documents in a custom architecture that uses multiple LLMs or agents.
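//!
//! For example, any index can be plugged into an agent as dynamic context. The following is a
//! minimal sketch of a hypothetical helper that is generic over the index implementation (the
//! model, preamble and question are illustrative):
//! ```no_run
//! use rig::{completion::Prompt, prelude::*, providers::openai, vector_store::VectorStoreIndex};
//!
//! // Build a RAG agent on top of any vector store index and ask it a question.
//! async fn answer(
//!     openai_client: &openai::Client,
//!     index: impl VectorStoreIndex + 'static,
//!     question: &str,
//! ) -> Result<String, rig::completion::PromptError> {
//!     let rag_agent = openai_client
//!         .agent("gpt-4")
//!         .preamble("Answer the question using the context documents provided below.")
//!         // Retrieve the 2 most relevant documents from the index for every prompt.
//!         .dynamic_context(2, index)
//!         .build();
//!
//!     rag_agent.prompt(question).await
//! }
//! ```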
//!
//! # Integrations
//! ## Model Providers
//! Rig natively supports the following completion and embedding model provider integrations:
//! - Anthropic
//! - Azure
//! - Cohere
//! - DeepSeek
//! - Galadriel
//! - Gemini
//! - Groq
//! - Hugging Face
//! - Hyperbolic
//! - Mira
//! - Mistral
//! - Moonshot
//! - Ollama
//! - OpenAI
//! - OpenRouter
//! - Perplexity
//! - Together
//! - Voyage AI
//! - xAI
//!
//! You can also implement your own model provider integration by defining types that
//! implement the [CompletionModel](crate::completion::CompletionModel) and [EmbeddingModel](crate::embeddings::EmbeddingModel) traits.
//!
//! ## Vector Stores
//! Vector stores are available as separate companion crates:
//!
//! - MongoDB: [`rig-mongodb`](https://github.com/0xPlaygrounds/rig/tree/main/rig-mongodb)
//! - LanceDB: [`rig-lancedb`](https://github.com/0xPlaygrounds/rig/tree/main/rig-lancedb)
//! - Neo4j: [`rig-neo4j`](https://github.com/0xPlaygrounds/rig/tree/main/rig-neo4j)
//! - Qdrant: [`rig-qdrant`](https://github.com/0xPlaygrounds/rig/tree/main/rig-qdrant)
//! - SQLite: [`rig-sqlite`](https://github.com/0xPlaygrounds/rig/tree/main/rig-sqlite)
//! - SurrealDB: [`rig-surrealdb`](https://github.com/0xPlaygrounds/rig/tree/main/rig-surrealdb)
//! - Milvus: [`rig-milvus`](https://github.com/0xPlaygrounds/rig/tree/main/rig-milvus)
//! - ScyllaDB: [`rig-scylladb`](https://github.com/0xPlaygrounds/rig/tree/main/rig-scylladb)
//! - AWS S3 Vectors: [`rig-s3vectors`](https://github.com/0xPlaygrounds/rig/tree/main/rig-s3vectors)
//!
//! You can also implement your own vector store integration by defining types that
//! implement the [VectorStoreIndex](crate::vector_store::VectorStoreIndex) trait.
//!
//! The following model providers are also available as separate companion crates:
//!
//! - Fastembed: [`rig-fastembed`](https://github.com/0xPlaygrounds/rig/tree/main/rig-fastembed)
//! - Eternal AI: [`rig-eternalai`](https://github.com/0xPlaygrounds/rig/tree/main/rig-eternalai)
//!

extern crate self as rig;

pub mod agent;
#[cfg(feature = "audio")]
#[cfg_attr(docsrs, doc(cfg(feature = "audio")))]
pub mod audio_generation;
pub mod cli_chatbot;
pub mod client;
pub mod completion;
pub mod embeddings;
pub mod extractor;
#[cfg(feature = "image")]
#[cfg_attr(docsrs, doc(cfg(feature = "image")))]
pub mod image_generation;
pub(crate) mod json_utils;
pub mod loaders;
pub mod one_or_many;
pub mod pipeline;
pub mod prelude;
pub mod providers;
pub mod streaming;
pub mod tool;
pub mod transcription;
pub mod vector_store;

// Re-export commonly used types and traits
pub use completion::message;
pub use embeddings::Embed;
pub use one_or_many::{EmptyListError, OneOrMany};

#[cfg(feature = "derive")]
#[cfg_attr(docsrs, doc(cfg(feature = "derive")))]
pub use rig_derive::Embed;