# Cortex Memory Rig Integration

`cortex-mem-rig` provides integration with the Rig AI framework, enabling AI agents to interact with the Cortex Memory system through tool calls.
## Overview

Cortex Memory Rig implements tiered memory access tools, allowing AI agents to efficiently retrieve and manipulate memories.

### Three-Tier Access Architecture

| Layer | Size | Purpose | Tool |
|-------|------|---------|------|
| L0 Abstract | ~100 tokens | Quick relevance judgment | `abstract_tool` |
| L1 Overview | ~2000 tokens | Partial context understanding | `overview_tool` |
| L2 Full | Complete content | Deep analysis and processing | `read_tool` |
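As a back-of-the-envelope illustration of why the tiers matter, compare scanning L0 abstracts against reading every memory in full. The ~100-token L0 size comes from the table above; `FULL_TOKENS` and the helper functions are purely hypothetical, not part of the crate:

```rust
// Hypothetical cost model for tiered access.
// L0_TOKENS (~100) is taken from the table above; FULL_TOKENS is an
// assumed average size for L2 full content.
const L0_TOKENS: usize = 100;
const FULL_TOKENS: usize = 10_000;

/// Tokens spent scanning every abstract, then reading one memory in full.
fn tiered_cost(n_memories: usize) -> usize {
    n_memories * L0_TOKENS + FULL_TOKENS
}

/// Tokens spent reading every memory in full.
fn naive_cost(n_memories: usize) -> usize {
    n_memories * FULL_TOKENS
}

fn main() {
    // With 50 memories: 15_000 tokens of triage vs 500_000 tokens of brute force.
    assert_eq!(tiered_cost(50), 15_000);
    assert_eq!(naive_cost(50), 500_000);
    println!("tiered: {}, naive: {}", tiered_cost(50), naive_cost(50));
}
```

Under these assumptions, L0 triage is roughly 30x cheaper for 50 memories, which is the motivation for checking abstracts before escalating.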
### Tool Categories

- **Tiered Access Tools:** `abstract`, `overview`, `read`
- **Search Tools:** `search`, `find`
- **Filesystem Tools:** `ls`, `explore`
- **Storage Tools:** `store`
## Quick Start

### Installation

```toml
[dependencies]
cortex-mem-rig = { path = "../cortex-mem-rig" }
cortex-mem-tools = { path = "../cortex-mem-tools" }
cortex-mem-core = { path = "../cortex-mem-core" }
rig-core = "0.11"
tokio = { version = "1", features = ["full"] }
```
### Basic Usage

```rust
use cortex_mem_rig::{MemoryTools, create_memory_tools_with_tenant_and_vector};
use cortex_mem_core::llm::{LLMClientImpl, LLMConfig};
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Configure the LLM used for query rewriting and layer generation.
    let llm_config = LLMConfig {
        api_base_url: "https://api.openai.com/v1".to_string(),
        api_key: "your-api-key".to_string(),
        model_efficient: "gpt-4o-mini".to_string(),
        temperature: 0.1,
        max_tokens: 4096,
    };
    let llm_client = Arc::new(LLMClientImpl::new(llm_config)?);

    // Create the tool set, backed by a local data directory and Qdrant.
    let memory_tools = create_memory_tools_with_tenant_and_vector(
        "./cortex-data",             // data_dir
        "default",                   // tenant_id
        llm_client,
        "http://localhost:6333",     // qdrant_url
        "cortex_memories",           // qdrant_collection
        "https://api.openai.com/v1", // embedding_api_base_url
        "your-embedding-key",        // embedding_api_key
        "text-embedding-3-small",    // embedding_model_name
        Some(1536),                  // embedding_dim
        None,                        // user_id
    ).await?;

    // Individual tools can be handed to a Rig agent.
    let abstract_tool = memory_tools.abstract_tool();
    let overview_tool = memory_tools.overview_tool();
    let read_tool = memory_tools.read_tool();
    let search_tool = memory_tools.search_tool();
    let store_tool = memory_tools.store_tool();

    Ok(())
}
```
## API Reference

### MemoryTools

Main struct providing access to all memory tools.

```rust
pub struct MemoryTools {
    operations: Arc<MemoryOperations>,
}

impl MemoryTools {
    pub fn new(operations: Arc<MemoryOperations>) -> Self
    pub fn operations(&self) -> &Arc<MemoryOperations>

    pub fn abstract_tool(&self) -> AbstractTool
    pub fn overview_tool(&self) -> OverviewTool
    pub fn read_tool(&self) -> ReadTool
    pub fn search_tool(&self) -> SearchTool
    pub fn find_tool(&self) -> FindTool
    pub fn ls_tool(&self) -> LsTool
    pub fn explore_tool(&self) -> ExploreTool
    pub fn store_tool(&self) -> StoreTool
}
```
### Factory Functions

```rust
pub fn create_memory_tools(operations: Arc<MemoryOperations>) -> MemoryTools

pub async fn create_memory_tools_with_tenant_and_vector(
    data_dir: &str,
    tenant_id: &str,
    llm_client: Arc<dyn LLMClient>,
    qdrant_url: &str,
    qdrant_collection: &str,
    embedding_api_base_url: &str,
    embedding_api_key: &str,
    embedding_model_name: &str,
    embedding_dim: Option<usize>,
    user_id: Option<String>,
) -> Result<MemoryTools, Box<dyn std::error::Error>>
```
## Tool Definitions

### Tiered Access Tools

#### AbstractTool (`"abstract"`)

Get the L0 abstract (~100 tokens) for quick relevance checking.

Parameters:

```rust
pub struct AbstractArgs {
    pub uri: String,
}
```

Response:

```rust
pub struct AbstractResponse {
    pub uri: String,
    pub abstract_text: String,
    pub layer: String,
    pub token_count: usize,
}
```

#### OverviewTool (`"overview"`)

Get the L1 overview (~2000 tokens) for partial context.

Parameters:

```rust
pub struct OverviewArgs {
    pub uri: String,
}
```

Response:

```rust
pub struct OverviewResponse {
    pub uri: String,
    pub overview_text: String,
    pub layer: String,
    pub token_count: usize,
}
```

#### ReadTool (`"read"`)

Get the L2 full content for deep analysis.

Parameters:

```rust
pub struct ReadArgs {
    pub uri: String,
}
```

Response:

```rust
pub struct ReadResponse {
    pub uri: String,
    pub content: String,
    pub layer: String,
    pub token_count: usize,
    pub metadata: Option<FileMetadata>,
}
```
### Search Tools

#### SearchTool (`"search"`)

Intelligent vector search with LLM query rewriting and layered retrieval.

Parameters:

```rust
pub struct SearchArgs {
    pub query: String,
    pub recursive: Option<bool>,
    pub return_layers: Option<Vec<String>>,
    pub scope: Option<String>,
    pub limit: Option<usize>,
}
```

Response:

```rust
pub struct SearchResponse {
    pub query: String,
    pub results: Vec<SearchResult>,
    pub total: usize,
    pub engine_used: String,
}

pub struct SearchResult {
    pub uri: String,
    pub score: f32,
    pub snippet: String,
    pub content: Option<String>,
}
```
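The `limit` parameter caps how many results come back; the same top-k cut can be applied client-side when combining result sets. A minimal, self-contained sketch of that post-processing (the `Hit` struct stands in for `SearchResult` and is not a crate type; the URIs are illustrative):

```rust
// Stand-in for SearchResult; not a cortex-mem-rig type.
#[derive(Debug, Clone, PartialEq)]
struct Hit {
    uri: String,
    score: f32,
}

/// Keep the `limit` highest-scoring hits, best first.
fn top_k(mut hits: Vec<Hit>, limit: usize) -> Vec<Hit> {
    // Sort descending by score; incomparable (NaN) scores compare as equal.
    hits.sort_by(|a, b| {
        b.score
            .partial_cmp(&a.score)
            .unwrap_or(std::cmp::Ordering::Equal)
    });
    hits.truncate(limit);
    hits
}

fn main() {
    let hits = vec![
        Hit { uri: "memory://a".into(), score: 0.42 },
        Hit { uri: "memory://b".into(), score: 0.91 },
        Hit { uri: "memory://c".into(), score: 0.77 },
    ];
    let top = top_k(hits, 2);
    assert_eq!(top.len(), 2);
    assert_eq!(top[0].uri, "memory://b");
    assert_eq!(top[1].uri, "memory://c");
    println!("ok");
}
```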
#### FindTool (`"find"`)

Quick search returning only L0 abstracts.

Parameters:

```rust
pub struct FindArgs {
    pub query: String,
    pub scope: Option<String>,
    pub limit: Option<usize>,
}
```

Response:

```rust
pub struct FindResponse {
    pub query: String,
    pub results: Vec<FindResult>,
    pub total: usize,
}

pub struct FindResult {
    pub uri: String,
    pub score: f32,
    pub abstract_text: String,
}
```
### Filesystem Tools

#### LsTool (`"ls"`)

List directory contents.

Parameters:

```rust
pub struct LsArgs {
    pub uri: String,
    pub recursive: Option<bool>,
    pub include_abstracts: Option<bool>,
}
```

Response:

```rust
pub struct LsResponse {
    pub uri: String,
    pub entries: Vec<LsEntry>,
    pub total: usize,
}

pub struct LsEntry {
    pub name: String,
    pub uri: String,
    pub is_directory: bool,
    pub size: Option<u64>,
    pub abstract_text: Option<String>,
}
```

#### ExploreTool (`"explore"`)

Intelligent memory exploration.

Parameters:

```rust
pub struct ExploreArgs {
    pub query: String,
    pub start_uri: Option<String>,
    pub max_depth: Option<usize>,
    pub return_layers: Option<Vec<String>>,
}
```

Response:

```rust
pub struct ExploreResponse {
    pub query: String,
    pub exploration_path: Vec<String>,
    pub matches: Vec<ExploreMatch>,
    pub total_explored: usize,
    pub total_matches: usize,
}
```
### Storage Tool

#### StoreTool (`"store"`)

Store content with automatic L0/L1 layer generation.

Parameters:

```rust
pub struct StoreArgs {
    pub content: String,
    pub thread_id: String,
    pub metadata: Option<Value>,
    pub auto_generate_layers: Option<bool>,
    pub scope: String,
    pub user_id: Option<String>,
    pub agent_id: Option<String>,
}
```

Response:

```rust
pub struct StoreResponse {
    pub uri: String,
    pub layers_generated: Vec<String>,
    pub success: bool,
}
```
## Rig Framework Integration

### Tool Trait Implementation

Each tool implements the `rig::tool::Tool` trait:

```rust
impl Tool for AbstractTool {
    const NAME: &'static str = "abstract";

    type Error = ToolsError;
    type Args = AbstractArgs;
    type Output = AbstractResponse;

    fn definition(&self, _prompt: String) -> impl Future<Output = ToolDefinition> + Send + Sync {
        async {
            ToolDefinition {
                name: Self::NAME.to_string(),
                description: "...".to_string(),
                parameters: json!({ /* JSON schema elided */ }),
            }
        }
    }

    async fn call(&self, args: Self::Args) -> Result<Self::Output, Self::Error> {
        Ok(self.operations.get_abstract(&args.uri).await?)
    }
}
```
### Agent Integration Example

```rust
use rig::providers::openai::{Client, GPT_4O_MINI};
use cortex_mem_rig::create_memory_tools_with_tenant_and_vector;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::from_env();

    // `llm_client` is constructed exactly as in the Basic Usage example above.
    let memory_tools = create_memory_tools_with_tenant_and_vector(
        "./cortex-data",
        "default",
        llm_client,
        "http://localhost:6333",
        "cortex_memories",
        "https://api.openai.com/v1",
        "your-key",
        "text-embedding-3-small",
        Some(1536),
        None,
    ).await?;

    // Register the memory tools with a Rig agent.
    let agent = client
        .agent(GPT_4O_MINI)
        .preamble("You are an AI assistant with persistent memory capabilities.")
        .tool(memory_tools.abstract_tool())
        .tool(memory_tools.overview_tool())
        .tool(memory_tools.read_tool())
        .tool(memory_tools.search_tool())
        .tool(memory_tools.store_tool())
        .build();

    let response = agent.prompt(
        "Search for user preferences and store that they like dark theme."
    ).await?;
    println!("Agent response: {}", response);

    Ok(())
}
```
## Best Practices

### Tiered Access Pattern

1. Use `abstract` first for a quick relevance check
2. Use `overview` for more context if the memory looks relevant
3. Use `read` only when the complete content is necessary
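This escalation can be sketched as a plain decision function. The sketch below is purely illustrative, not a crate API, and the 0.3/0.7 relevance thresholds are assumed values:

```rust
/// The three layers from the tiered access architecture.
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum Layer {
    L0Abstract,
    L1Overview,
    L2Full,
}

/// Decide which layer to fetch next, given the relevance score of the
/// L0 abstract (0.0..=1.0) and whether the task needs complete content.
/// Thresholds are illustrative, not prescribed by cortex-mem-rig.
fn next_layer(abstract_score: f32, needs_full_content: bool) -> Option<Layer> {
    if abstract_score < 0.3 {
        None // not relevant enough: stop after the ~100-token abstract
    } else if needs_full_content && abstract_score >= 0.7 {
        Some(Layer::L2Full) // clearly relevant and full text is required
    } else {
        Some(Layer::L1Overview) // promising: fetch the ~2000-token overview
    }
}

fn main() {
    assert_eq!(next_layer(0.1, true), None);
    assert_eq!(next_layer(0.5, false), Some(Layer::L1Overview));
    assert_eq!(next_layer(0.9, true), Some(Layer::L2Full));
    println!("ok");
}
```

In an agent setting this logic typically lives in the prompt rather than in code, but the same escalate-only-when-needed rule applies.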
### Search Optimization

```rust
// Scoped search within a specific session
agent.prompt("Search 'error handling' in session 'rust-discussion'").await?;

// Quick triage using abstracts only
agent.prompt("Find memories about 'OAuth' and show abstracts").await?;

// Progressive deepening: search, triage, then read the best match
agent.prompt(
    "Search 'async programming', get abstracts for top 3, then read the most relevant one"
).await?;
```

### Memory Storage

```rust
// Session-scoped memory
agent.prompt("Store 'User is learning Rust async' in current session").await?;

// User-scoped preference
agent.prompt("Store 'User prefers dark mode' as a user preference").await?;
```
## Testing

```bash
# Run this crate's tests
cargo test -p cortex-mem-rig

# Run the full workspace test suite
cargo test --all
```
## Common Issues

### Tool Call Failed

Ensure that:

- Cortex Memory Core is properly initialized
- The data directory has write permissions
- The search index is built

### Empty Abstract Content

Possible causes:

- The file does not exist
- The content is too short to generate a summary
- The LLM service is unavailable

### Inaccurate Search Results

Optimization tips:

- Use more specific queries
- Limit the search scope
- Use `search` instead of `find` for comprehensive results
## Dependencies

- `cortex-mem-tools`: high-level memory operations
- `cortex-mem-core`: core library
- `rig-core`: Rig AI framework
- `tokio`: async runtime
- `serde` / `serde_json`: serialization
- `anyhow` / `thiserror`: error handling
## License

MIT License - see the LICENSE file for details.
---

Built with ❤️ using Rust and the Rig AI Framework