pub struct GraphConfig {
pub enabled: bool,
pub extract_model: String,
pub max_entities_per_message: usize,
pub max_edges_per_message: usize,
pub community_refresh_interval: usize,
pub entity_similarity_threshold: f32,
pub extraction_timeout_secs: u64,
pub use_embedding_resolution: bool,
pub entity_ambiguous_threshold: f32,
pub max_hops: u32,
pub recall_limit: usize,
pub expired_edge_retention_days: u32,
pub max_entities: usize,
pub community_summary_max_prompt_bytes: usize,
pub community_summary_concurrency: usize,
pub lpa_edge_chunk_size: usize,
pub temporal_decay_rate: f64,
pub edge_history_limit: usize,
pub note_linking: NoteLinkingConfig,
pub spreading_activation: SpreadingActivationConfig,
pub retrieval_strategy: GraphRetrievalStrategy,
pub strategy_classifier_provider: Option<ProviderName>,
pub beam_search: BeamSearchConfig,
pub watercircles: WaterCirclesConfig,
pub experience: ExperienceConfig,
pub link_weight_decay_lambda: f64,
pub link_weight_decay_interval_secs: u64,
pub belief_revision: BeliefRevisionConfig,
pub rpe: RpeConfig,
pub pool_size: u32,
}

Configuration for the knowledge graph memory subsystem ([memory.graph] TOML section).
Security
Entity names, relation labels, and fact strings extracted by the LLM are stored verbatim without PII redaction. This is a known pre-1.0 MVP limitation. Do not enable graph memory when processing conversations that may contain personal, medical, or sensitive data until a redaction pass is implemented on the write path.
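As a sketch, a [memory.graph] TOML fragment enabling the subsystem might look like the following. The field names come from the struct above; the model name and any value not marked as a documented default are illustrative assumptions, not the crate's defaults.

```toml
[memory.graph]
enabled = true
extract_model = "some-small-model"  # illustrative; choose your extraction model
max_hops = 2                        # illustrative
recall_limit = 20                   # illustrative
expired_edge_retention_days = 90    # documented default
temporal_decay_rate = 0.0           # documented default (no recency boost)
pool_size = 3                       # documented default
```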
Fields

enabled: bool
extract_model: String
max_entities_per_message: usize
max_edges_per_message: usize
community_refresh_interval: usize
entity_similarity_threshold: f32
extraction_timeout_secs: u64
use_embedding_resolution: bool
entity_ambiguous_threshold: f32
max_hops: u32
recall_limit: usize

expired_edge_retention_days: u32
    Days to retain expired (superseded) edges before deletion. Default: 90.
max_entities: usize
    Maximum entities to retain in the graph. 0 = unlimited.

community_summary_max_prompt_bytes: usize
    Maximum prompt size in bytes for community summary generation. Default: 8192.

community_summary_concurrency: usize
    Maximum concurrent LLM calls during community summarization. Default: 4.

lpa_edge_chunk_size: usize
    Number of edges fetched per chunk during community detection. Default: 10000. Set to 0 to disable chunking and load all edges at once (legacy behavior).
temporal_decay_rate: f64
    Temporal recency decay rate for graph recall scoring (units: 1/day).
    When > 0, recent edges receive a small additive score boost over older edges. The boost formula is 1 / (1 + age_days * rate), blended additively with the base composite score. Default 0.0 preserves existing scoring behavior exactly.
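The boost formula above can be sketched as follows; the function name is illustrative, and only the 1 / (1 + age_days * rate) expression comes from the field docs:

```rust
/// Recency boost from the field docs: 1 / (1 + age_days * rate).
/// With rate = 0.0 the boost is 1.0 for every edge regardless of age,
/// which is why the default leaves relative scoring unchanged.
fn recency_boost(age_days: f64, rate: f64) -> f64 {
    1.0 / (1.0 + age_days * rate)
}

fn main() {
    // A brand-new edge gets the maximum boost of 1.0.
    assert!((recency_boost(0.0, 0.1) - 1.0).abs() < 1e-12);
    // A 10-day-old edge at rate 0.1: 1 / (1 + 1.0) = 0.5.
    assert!((recency_boost(10.0, 0.1) - 0.5).abs() < 1e-12);
    // rate = 0.0 is age-independent.
    assert!((recency_boost(365.0, 0.0) - 1.0).abs() < 1e-12);
    println!("ok");
}
```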
edge_history_limit: usize
    Maximum number of historical edge versions returned by edge_history(). Default: 100.
    Caps the result set returned for a given source entity + predicate pair. Prevents unbounded memory usage for high-churn predicates when this method is exposed via TUI or API endpoints.
note_linking: NoteLinkingConfig
    A-MEM dynamic note linking configuration.
    When note_linking.enabled = true, entities extracted from each message are linked to semantically similar entities via similar_to edges. Requires an embedding store (qdrant or sqlite vector backend) to be configured.
spreading_activation: SpreadingActivationConfig
    SYNAPSE spreading activation retrieval configuration.
    When spreading_activation.enabled = true, graph recall uses spreading activation with lateral inhibition and temporal decay instead of BFS.
retrieval_strategy: GraphRetrievalStrategy
    Graph retrieval strategy. Default: synapse (preserves existing behavior).
    When spreading_activation.enabled = true and retrieval_strategy is synapse, SYNAPSE spreading activation is used. Set to bfs to revert to hop-limited BFS.
strategy_classifier_provider: Option<ProviderName>
    Named LLM provider for hybrid strategy classification. Falls back to the default provider when None.
beam_search: BeamSearchConfig
    Beam search configuration.

watercircles: WaterCirclesConfig
    WaterCircles BFS configuration.

experience: ExperienceConfig
    Experience memory configuration.
link_weight_decay_lambda: f64
    A-MEM link weight decay: multiplicative factor applied to retrieval_count for un-retrieved edges on each decay pass. Range: (0.0, 1.0]. Default: 0.95.

link_weight_decay_interval_secs: u64
    Seconds between link weight decay passes. Default: 86400 (24 hours).
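Because the decay is multiplicative, an edge that is never retrieved keeps lambda^n of its weight after n passes. A small sketch (the helper is illustrative; only the lambda semantics and the 0.95 default come from the field docs):

```rust
/// Effective weight of an un-retrieved edge after `passes` decay passes,
/// each of which multiplies the weight by lambda (in (0.0, 1.0]).
fn decayed_weight(initial: f64, lambda: f64, passes: u32) -> f64 {
    initial * lambda.powi(passes as i32)
}

fn main() {
    // One daily pass at the documented default lambda = 0.95.
    assert!((decayed_weight(1.0, 0.95, 1) - 0.95).abs() < 1e-12);
    // After 30 daily passes, roughly a fifth of the weight remains.
    let w = decayed_weight(1.0, 0.95, 30);
    assert!(w > 0.21 && w < 0.22);
    // lambda = 1.0 disables decay entirely.
    assert!((decayed_weight(1.0, 1.0, 30) - 1.0).abs() < 1e-12);
    println!("{w:.4}");
}
```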
belief_revision: BeliefRevisionConfig
    Kumiho AGM-inspired belief revision configuration.
    When belief_revision.enabled = true, new edges that semantically contradict existing edges for the same entity pair trigger revision: the old edge is invalidated with a superseded_by pointer and the new edge becomes the current belief.
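The revision step described above can be sketched with a toy data model; the struct fields and helper below are hypothetical illustrations of the invalidate-and-point semantics, not the crate's actual types:

```rust
// Hypothetical edge record; the real crate's types differ.
#[derive(Debug)]
struct Edge {
    id: u64,
    fact: String,
    valid: bool,
    superseded_by: Option<u64>,
}

/// Sketch of the revision write path: when a new edge contradicts an
/// existing edge for the same entity pair, the old edge is invalidated
/// and pointed at its replacement, which becomes the current belief.
fn revise(old: &mut Edge, new_id: u64) {
    old.valid = false;
    old.superseded_by = Some(new_id);
}

fn main() {
    let mut old = Edge { id: 1, fact: "Ada works at Acme".into(), valid: true, superseded_by: None };
    let newer = Edge { id: 2, fact: "Ada works at Initech".into(), valid: true, superseded_by: None };
    revise(&mut old, newer.id);
    assert!(!old.valid);
    assert_eq!(old.superseded_by, Some(2));
    println!("edge {} superseded by edge {}", old.id, newer.id);
}
```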
rpe: RpeConfig
    D-MEM RPE-based tiered graph extraction routing.
    When rpe.enabled = true, low-surprise turns skip the expensive MAGMA LLM extraction pipeline. A consecutive-skip safety valve ensures no turn is silently skipped indefinitely.
pool_size: u32
    SQLite connection pool size dedicated to graph operations.
    Graph tables share the same database file as messages/embeddings but use a separate pool to prevent pool starvation when community detection or spreading activation runs concurrently with regular memory operations. Default: 3.
Trait Implementations

impl Clone for GraphConfig
    fn clone(&self) -> GraphConfig
    fn clone_from(&mut self, source: &Self)
        Performs copy-assignment from source.