# recall-graph
Knowledge graph with semantic search for AI memory systems.
A structured graph layer (Layer 0) that sits underneath flat-file memory systems like [recall-echo](https://github.com/dnacenta/recall-echo). Provides entity storage, relationship tracking, episode memory, LLM-powered entity extraction, and semantic search — all backed by SurrealDB with local FastEmbed embeddings.
## Features
- **Entity CRUD** — create, read, update, delete entities with typed classifications (person, project, tool, concept, etc.)
- **Relationships** — directional edges between entities with supersede semantics for evolving knowledge
- **Episodes** — temporal memory chunks tied to conversation sessions
- **Semantic search** — vector similarity via locally-generated embeddings (FastEmbed, no API calls)
- **Hybrid queries** — combines semantic search with graph traversal and optional episode retrieval
- **LLM extraction** — extract entities and relationships from conversation archives using any LLM provider
- **Deduplication** — LLM-powered entity dedup with skip/create/merge decisions
- **Parallel ingestion** — concurrent chunk extraction with sequential dedup for correctness
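
The vector-similarity ranking behind semantic search can be sketched with plain cosine similarity. This is a hypothetical standalone helper for illustration, not the crate's internal API; in practice FastEmbed supplies the vectors:

```rust
/// Cosine similarity between two embedding vectors, as used conceptually
/// by semantic search: 1.0 for identical direction, 0.0 for orthogonal.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        0.0
    } else {
        dot / (norm_a * norm_b)
    }
}
```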
## Architecture
```
archive text
|
v
chunk_conversation (500-line chunks)
|
v
[Phase 1] extract_from_chunk (parallel, 10 concurrent LLM calls)
|
v
[Phase 2] local_merge_entities (merge same-name candidates, no LLM)
|
v
[Phase 3] resolve_entity (sequential dedup against DB)
|
v
[Phase 4] create relationships (sequential, no LLM)
```
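The 500-line chunking step at the top of the pipeline can be sketched as follows (a hypothetical standalone version; the actual `chunk_conversation` lives inside the crate):

```rust
/// Mirrors the 500-line chunk size described in the pipeline above.
const CHUNK_LINES: usize = 500;

/// Split an archive into fixed-size line chunks for parallel extraction.
fn chunk_conversation(text: &str) -> Vec<String> {
    text.lines()
        .collect::<Vec<_>>()
        .chunks(CHUNK_LINES)
        .map(|chunk| chunk.join("\n"))
        .collect()
}
```

Each chunk then flows through Phases 1–4 independently until dedup, which runs sequentially so that later chunks see entities resolved from earlier ones.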
## Usage
```rust
use recall_graph::{GraphMemory, types::*};
use std::path::Path;

// Open or create a graph store
let gm = GraphMemory::open(Path::new("./graph")).await?;

// Add an entity
let entity = gm.add_entity(NewEntity {
    name: "Rust".into(),
    entity_type: EntityType::Tool,
    abstract_text: "Systems programming language".into(),
    overview: None,
    content: None,
    attributes: None,
    source: Some("manual".into()),
}).await?;

// Semantic search
let results = gm.search("programming languages", 5).await?;

// Hybrid query with graph expansion
let result = gm.query("Rust", &QueryOptions {
    limit: 10,
    graph_depth: 2,
    include_episodes: true,
    ..Default::default()
}).await?;

// Ingest a conversation archive (episodes only)
let report = gm.ingest_archive(&text, "session-id", Some(1), None).await?;

// Extract entities with an LLM provider
let report = gm.extract_from_archive(&text, "session-id", Some(1), &llm).await?;
```
## LLM Provider
Implement the `LlmProvider` trait to plug in any LLM backend:
```rust
#[async_trait]
pub trait LlmProvider: Send + Sync {
    async fn complete(
        &self,
        system_prompt: &str,
        user_message: &str,
        max_tokens: u32,
    ) -> Result<String, GraphError>;
}
```
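For tests or offline runs, a provider can be as simple as a canned response. The sketch below is self-contained, so it redeclares a minimal stand-in for the trait using native async-fn-in-trait (Rust 1.75+); the real trait above uses `#[async_trait]` and the crate's `GraphError`, which is stood in here by `String`:

```rust
/// Minimal stand-in for the crate's trait, redeclared for a
/// self-contained sketch (the real one uses #[async_trait]).
pub trait LlmProvider {
    async fn complete(
        &self,
        system_prompt: &str,
        user_message: &str,
        max_tokens: u32,
    ) -> Result<String, String>;
}

/// A canned-response provider: echoes the user message, useful in tests.
pub struct StubLlm;

fn canned_reply(user_message: &str) -> String {
    format!("{{\"echo\": \"{user_message}\"}}")
}

impl LlmProvider for StubLlm {
    async fn complete(
        &self,
        _system_prompt: &str,
        user_message: &str,
        _max_tokens: u32,
    ) -> Result<String, String> {
        Ok(canned_reply(user_message))
    }
}
```

A real provider would issue an HTTP call to its backend inside `complete` and map transport or API errors into the error type.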
## Storage
Uses [SurrealDB](https://surrealdb.com/) (embedded, file-backed via SurrealKV) for graph storage and [FastEmbed](https://github.com/Anush008/fastembed-rs) for local embedding generation. No external services required.
## License
AGPL-3.0-only