
Crate frigg


Frigg is organized as a pipeline: shared domain and settings types describe the contract, indexing and storage build durable repository artifacts, search and graph layers answer retrieval questions from those artifacts, and MCP plus watch turn the whole system into a long-lived agent-facing service.
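The layering above can be sketched in miniature: indexing builds durable artifacts once, and retrieval answers questions from those artifacts instead of rescanning the workspace. Everything in this sketch (`Artifact`, `Store`, `index`, `search`) is invented for illustration; frigg's real types and function names are not shown here.

```rust
// Hypothetical sketch of the pipeline contract; not frigg's actual API.
#[derive(Debug)]
struct Artifact {
    path: String,
    symbols: Vec<String>,
}

struct Store {
    artifacts: Vec<Artifact>,
}

// "Indexing": turn a workspace snapshot into durable artifacts once.
fn index(files: &[(&str, &str)]) -> Vec<Artifact> {
    files
        .iter()
        .map(|(path, body)| Artifact {
            path: (*path).to_string(),
            symbols: body.split_whitespace().map(str::to_string).collect(),
        })
        .collect()
}

// "Retrieval": answer questions from stored artifacts, never by
// rescanning the filesystem on every request.
fn search(store: &Store, query: &str) -> Vec<String> {
    store
        .artifacts
        .iter()
        .filter(|a| a.symbols.iter().any(|s| s.contains(query)))
        .map(|a| a.path.clone())
        .collect()
}

fn main() {
    let store = Store {
        artifacts: index(&[("src/lib.rs", "fn parse_config"), ("src/io.rs", "fn read_bytes")]),
    };
    println!("{:?}", search(&store, "parse")); // ["src/lib.rs"]
}
```

The point of the split is the handoff: `search` only ever reads what `index` produced, which is the role the storage layer plays between frigg's real indexing and serving sides.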

Modules

domain
Shared domain vocabulary used across indexing, search, storage, provenance, and MCP delivery, so each layer can exchange the same evidence model without translation glue. These types stay intentionally neutral so higher layers can change behavior without redefining the core concepts they exchange.
embeddings
Embedding provider abstractions and readiness checks used by both indexing and runtime startup when semantic search is enabled. Keeping semantic transport concerns here lets the rest of the crate treat embeddings as a single capability boundary instead of vendor-specific HTTP code.
graph
Symbol and relation graph facilities used to power navigation-style retrieval. The graph combines heuristic repository analysis with precise SCIP ingest so MCP tools and search flows can ask structure-aware questions through one reusable substrate.
indexer
Indexing and artifact construction for repository snapshots. The indexer turns a workspace into manifests, reindex plans, symbol inventories, semantic chunks, and retrieval projections that the search, graph, and MCP layers can reuse instead of rescanning the filesystem on every request.
mcp
MCP delivery layer that packages Frigg’s retrieval and indexing capabilities as a stable tool surface for agents. This is where runtime state, schemas, and transport-facing orchestration meet the lower-level search and storage subsystems.
playbooks
Playbook parsing and regression tracing for repeatable search evaluation. This layer turns retrieval expectations into executable probes so ranking changes can be measured without coupling that logic to the MCP runtime.
searcher
Search orchestration that turns manifests, projections, lexical scans, graph edges, and optional embeddings into stable retrieval results. This layer sits between raw repository artifacts and delivery surfaces such as MCP tools or playbook probes.
settings
Runtime configuration types that decide what Frigg can serve and which background services should be active. Centralizing these switches keeps CLI, indexing, watch, and MCP startup on the same operating profile.
storage
Durable storage for manifests, semantic state, retrieval projections, and provenance. Storage is the handoff point that lets indexing, search, and MCP runtime share consistent repository state across process boundaries and refresh cycles.
test_support
Shared helpers for exercising the production wiring from tests without rebuilding fixture setup in every suite.
watch
Watch runtime orchestration that keeps attached workspaces incrementally reindexed while request handlers stay focused on serving search and navigation work. This isolates filesystem supervision from the MCP and search surfaces, while still letting them share one background freshness loop.
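The evidence blending described under searcher, where lexical, graph, and semantic signals combine into stable ranked results, can be sketched as a weighted score with a deterministic tie-break. The `Hit` type, the weights, and the field names below are invented for illustration; they are not frigg's actual ranking model.

```rust
// Hypothetical blended ranking; weights and types are illustrative only.
#[derive(Debug)]
struct Hit {
    path: &'static str,
    lexical: f64,
    graph: f64,
    semantic: f64,
}

fn blended_score(h: &Hit) -> f64 {
    // Assumed weights for the sketch; any real system would tune these.
    0.5 * h.lexical + 0.3 * h.graph + 0.2 * h.semantic
}

fn rank(mut hits: Vec<Hit>) -> Vec<&'static str> {
    // Sort by blended score descending, then break ties by path so the
    // result order is deterministic across runs ("stable" results).
    hits.sort_by(|a, b| {
        blended_score(b)
            .partial_cmp(&blended_score(a))
            .unwrap()
            .then(a.path.cmp(b.path))
    });
    hits.into_iter().map(|h| h.path).collect()
}

fn main() {
    let hits = vec![
        Hit { path: "b.rs", lexical: 0.9, graph: 0.1, semantic: 0.0 },
        Hit { path: "a.rs", lexical: 0.9, graph: 0.1, semantic: 0.0 },
        Hit { path: "c.rs", lexical: 0.2, graph: 0.9, semantic: 0.8 },
    ];
    println!("{:?}", rank(hits)); // ["c.rs", "a.rs", "b.rs"]
}
```

The explicit tie-break is the detail worth noting: without it, equal-score hits could reorder between runs, which would defeat the playbook-style regression checks the crate describes.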
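The division of labor described under watch, where a background loop owns reindexing so request handlers never do, can be sketched with a channel. The channel protocol and every name here are invented for illustration; frigg's real watch runtime is not shown.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical shape of the watch split: one background thread reacts
// to change events and "reindexes"; everything else only sends events.
fn run() -> Vec<String> {
    let (tx, rx) = mpsc::channel::<String>();

    // Background freshness loop: the only place reindexing happens.
    let watcher = thread::spawn(move || {
        let mut reindexed = Vec::new();
        for path in rx {
            // In a real system this would be an incremental reindex of `path`.
            reindexed.push(path);
        }
        reindexed
    });

    // A filesystem watcher (or request handler) only reports changes;
    // it never blocks on indexing work itself.
    tx.send("src/lib.rs".to_string()).unwrap();
    tx.send("src/main.rs".to_string()).unwrap();
    drop(tx); // closing the channel ends the background loop

    watcher.join().unwrap()
}

fn main() {
    println!("reindexed: {:?}", run());
}
```

Keeping the loop behind a channel is what lets serving surfaces stay oblivious to filesystem supervision while still sharing one freshness mechanism.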