#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(
    test,
    allow(
        clippy::expect_used,
        clippy::indexing_slicing,
        clippy::panic,
        clippy::unwrap_used,
        clippy::unreachable
    )
)]
//! Rig is a Rust library for building LLM-powered applications that focuses on ergonomics and modularity.
//!
//! # Table of contents
//! - [High-level features](#high-level-features)
//! - [Simple example](#simple-example)
//! - [Core concepts](#core-concepts)
//! - [Integrations](#integrations)
//!
//! # High-level features
//! - Full support for LLM completion and embedding workflows
//! - Simple but powerful common abstractions over LLM providers (e.g. OpenAI, Cohere) and vector stores (e.g. MongoDB, in-memory)
//! - Integrate LLMs in your app with minimal boilerplate
//!
//! # Simple example
//! ```ignore
//! use rig_core::{
//!     client::{CompletionClient, ProviderClient},
//!     completion::Prompt,
//!     providers::openai,
//! };
//!
//! #[tokio::main]
//! async fn main() -> Result<(), Box<dyn std::error::Error>> {
//!     // Create OpenAI client and agent.
//!     // This requires the `OPENAI_API_KEY` environment variable to be set.
//!     let openai_client = openai::Client::from_env()?;
//!
//!     let agent = openai_client.agent(openai::GPT_5_2).build();
//!
//!     // Prompt the model and print its response
//!     let response = agent
//!         .prompt("Who are you?")
//!         .await?;
//!
//!     println!("{response}");
//!
//!     Ok(())
//! }
//! ```
//! Note: using `#[tokio::main]` requires enabling tokio's `macros` and `rt-multi-thread` features,
//! or simply `full` to enable all features (`cargo add tokio --features macros,rt-multi-thread`).
//!
//! # Core concepts
//! ## Completion and embedding models
//! Rig provides a consistent API for working with LLMs and embeddings. Specifically,
//! each provider (e.g. OpenAI, Cohere) has a `Client` struct that can be used to initialize completion
//! and embedding models. These models implement the [CompletionModel](crate::completion::CompletionModel)
//! and [EmbeddingModel](crate::embeddings::EmbeddingModel) traits respectively, which provide a common,
//! low-level interface for creating completion and embedding requests and executing them.
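//!
//! As a sketch of that low-level interface (assuming the `completion_model` and request-builder
//! methods of the current API; exact names may vary between versions):
//! ```ignore
//! use rig_core::{
//!     client::{CompletionClient, ProviderClient},
//!     completion::CompletionModel,
//!     providers::openai,
//! };
//!
//! let client = openai::Client::from_env()?;
//! let model = client.completion_model(openai::GPT_5_2);
//!
//! // Build a low-level completion request and execute it.
//! let response = model
//!     .completion_request("Hello, world!")
//!     .send()
//!     .await?;
//! ```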
//!
//! ## Agents
//! Rig also provides high-level abstractions over LLMs in the form of the [Agent](crate::agent::Agent) type.
//!
//! The [Agent](crate::agent::Agent) type can be used to create anything from simple agents that use vanilla models to full-blown
//! RAG systems that can be used to answer questions using a knowledge base.
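//!
//! For example, an agent can be configured with a system preamble and a static context document
//! before prompting (a sketch; the `preamble` and `context` builder methods are assumed from the
//! current `AgentBuilder` API and may differ between versions):
//! ```ignore
//! use rig_core::{
//!     client::{CompletionClient, ProviderClient},
//!     completion::Prompt,
//!     providers::openai,
//! };
//!
//! let client = openai::Client::from_env()?;
//!
//! // Configure the agent with a preamble and a static context document.
//! let agent = client
//!     .agent(openai::GPT_5_2)
//!     .preamble("You are a helpful assistant specialized in Rust.")
//!     .context("Rig is a Rust library for building LLM-powered applications.")
//!     .build();
//!
//! let answer = agent.prompt("What is Rig?").await?;
//! ```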
//!
//! ## Vector stores and indexes
//! Rig provides a common interface for working with vector stores and indexes. Specifically, the library
//! provides the [VectorStoreIndex](crate::vector_store::VectorStoreIndex)
//! trait, which can be implemented to define vector store indices.
//! These can then be used as the knowledge base for a RAG-enabled [Agent](crate::agent::Agent), or
//! as a source of context documents in a custom architecture that uses multiple LLMs or agents.
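//!
//! As a sketch, a populated index can be queried directly or attached to an agent as dynamic
//! context (assuming the `top_n` and `dynamic_context` methods of the current API; `index`,
//! `client`, and `Document` are placeholders):
//! ```ignore
//! // Retrieve the 3 documents closest to the query, deserialized into `Document`.
//! let results = index.top_n::<Document>("What is Rig?", 3).await?;
//!
//! // Or attach the index to an agent, which fetches 3 context documents per prompt.
//! let agent = client
//!     .agent(openai::GPT_5_2)
//!     .dynamic_context(3, index)
//!     .build();
//! ```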
//!
//! ## Conversation memory
//! Rig can transparently load and persist per-conversation history through the
//! [ConversationMemory](crate::memory::ConversationMemory) trait. Attach a backend
//! with [`AgentBuilder::memory`](crate::agent::AgentBuilder::memory) and identify the
//! conversation per-request via
//! [`PromptRequest::conversation`](crate::agent::prompt_request::PromptRequest::conversation).
//! The default in-process backend
//! [InMemoryConversationMemory](crate::memory::InMemoryConversationMemory) is suitable
//! for tests and single-process agents; reusable history-shaping policies (sliding
//! window, token budget) live in the [`rig-memory`](https://crates.io/crates/rig-memory)
//! companion crate. See [`examples/agent_with_memory.rs`](https://github.com/0xPlaygrounds/rig/blob/main/examples/agent_with_memory.rs)
//! for a runnable end-to-end example.
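//!
//! Putting those pieces together looks roughly like this (a sketch using only the items linked
//! above; the chained `conversation` call and `default()` constructor are assumptions — see the
//! linked example for a complete program):
//! ```ignore
//! use rig_core::memory::InMemoryConversationMemory;
//!
//! let agent = client
//!     .agent(openai::GPT_5_2)
//!     .memory(InMemoryConversationMemory::default())
//!     .build();
//!
//! // History for conversation "user-42" is loaded before the prompt runs
//! // and persisted after the response arrives.
//! let reply = agent
//!     .prompt("Where did we leave off?")
//!     .conversation("user-42")
//!     .await?;
//! ```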
//!
//! # Integrations
//! ## Model Providers
//! Rig natively supports the following completion and embedding model provider integrations:
//! - Anthropic
//! - Azure OpenAI
//! - ChatGPT and GitHub Copilot auth-backed clients
//! - Cohere
//! - DeepSeek
//! - Galadriel
//! - Gemini
//! - Groq
//! - Hugging Face
//! - Hyperbolic
//! - Llamafile
//! - MiniMax
//! - Mira
//! - Mistral
//! - Moonshot
//! - Ollama
//! - OpenAI
//! - OpenRouter
//! - Perplexity
//! - Together
//! - Voyage AI
//! - xAI
//! - Xiaomi MiMo
//! - Z.ai
//!
//! You can also implement your own model provider integration by defining types that
//! implement the [CompletionModel](crate::completion::CompletionModel) and [EmbeddingModel](crate::embeddings::EmbeddingModel) traits.
//!
//! ## Vector Stores
//! Vector store integrations are available as separate companion crates:
//!
//! - MongoDB: [`rig-mongodb`](https://github.com/0xPlaygrounds/rig/tree/main/crates/rig-mongodb)
//! - LanceDB: [`rig-lancedb`](https://github.com/0xPlaygrounds/rig/tree/main/crates/rig-lancedb)
//! - Neo4j: [`rig-neo4j`](https://github.com/0xPlaygrounds/rig/tree/main/crates/rig-neo4j)
//! - Qdrant: [`rig-qdrant`](https://github.com/0xPlaygrounds/rig/tree/main/crates/rig-qdrant)
//! - SQLite: [`rig-sqlite`](https://github.com/0xPlaygrounds/rig/tree/main/crates/rig-sqlite)
//! - SurrealDB: [`rig-surrealdb`](https://github.com/0xPlaygrounds/rig/tree/main/crates/rig-surrealdb)
//! - Milvus: [`rig-milvus`](https://github.com/0xPlaygrounds/rig/tree/main/crates/rig-milvus)
//! - ScyllaDB: [`rig-scylladb`](https://github.com/0xPlaygrounds/rig/tree/main/crates/rig-scylladb)
//! - AWS S3Vectors: [`rig-s3vectors`](https://github.com/0xPlaygrounds/rig/tree/main/crates/rig-s3vectors)
//! - HelixDB: [`rig-helixdb`](https://github.com/0xPlaygrounds/rig/tree/main/crates/rig-helixdb)
//! - Cloudflare Vectorize: [`rig-vectorize`](https://github.com/0xPlaygrounds/rig/tree/main/crates/rig-vectorize)
//!
//! You can also implement your own vector store integration by defining types that
//! implement the [VectorStoreIndex](crate::vector_store::VectorStoreIndex) trait.
//!
//! The following model providers are also available as separate companion crates:
//!
//! - AWS Bedrock: [`rig-bedrock`](https://github.com/0xPlaygrounds/rig/tree/main/crates/rig-bedrock)
//! - Fastembed: [`rig-fastembed`](https://github.com/0xPlaygrounds/rig/tree/main/crates/rig-fastembed)
//! - Google Gemini gRPC: [`rig-gemini-grpc`](https://github.com/0xPlaygrounds/rig/tree/main/crates/rig-gemini-grpc)
//! - Google Vertex AI: [`rig-vertexai`](https://github.com/0xPlaygrounds/rig/tree/main/crates/rig-vertexai)
//!

extern crate self as rig;

pub mod agent;
#[cfg(feature = "audio")]
#[cfg_attr(docsrs, doc(cfg(feature = "audio")))]
pub mod audio_generation;
pub mod client;
pub mod completion;
pub mod embeddings;

#[cfg(feature = "experimental")]
#[cfg_attr(docsrs, doc(cfg(feature = "experimental")))]
pub mod evals;
pub mod extractor;
pub mod http_client;
#[cfg(feature = "image")]
#[cfg_attr(docsrs, doc(cfg(feature = "image")))]
pub mod image_generation;
pub mod integrations;
pub(crate) mod json_utils;
pub mod loaders;
pub mod markers;
pub mod memory;
pub mod model;
pub mod one_or_many;
pub mod pipeline;
pub mod prelude;
pub mod providers;

pub mod streaming;
#[cfg(any(test, feature = "test-utils"))]
#[cfg_attr(docsrs, doc(cfg(feature = "test-utils")))]
pub mod test_utils;
pub mod tool;
pub mod tools;
pub mod transcription;
pub mod vector_store;
pub mod wasm_compat;

// Re-export commonly used types and traits
pub use completion::message;
pub use embeddings::Embed;
pub use extractor::ExtractionResponse;
pub use one_or_many::{EmptyListError, OneOrMany};

#[cfg(feature = "derive")]
#[cfg_attr(docsrs, doc(cfg(feature = "derive")))]
pub use rig_derive::{Embed, rig_tool as tool_macro};

pub mod telemetry;