pub struct ReadonlyRepo { /* private fields */ }
A view of the repository pinned to a single OperationId.
ReadonlyRepo does not mutate state. To make changes, call
start_transaction and then Transaction::commit - which returns
a fresh ReadonlyRepo pinned to the new op.
All fields are behind Arc, so clone() is cheap. Sharing across
threads is safe.
Implementations§
impl ReadonlyRepo
pub fn init(
    blockstore: Arc<dyn Blockstore>,
    op_heads: Arc<dyn OpHeadsStore>,
) -> Result<Self, Error>
Initialize a fresh repository per SPEC §7.5.
Writes one root View (empty heads, empty refs) and one root
Operation into the blockstore, registers the op as the sole
op-head, and returns a ReadonlyRepo pinned to that op.
§Errors
Returns a store or codec error if blockstore writes fail.
pub fn open(
    blockstore: Arc<dyn Blockstore>,
    op_heads: Arc<dyn OpHeadsStore>,
) -> Result<Self, Error>
Open an existing repository pinned to the current op-head.
If the op-heads store has more than one current head (concurrent
writers landed against the same base), the 3-way merge from
crate::repo::merge runs transparently: it finds the op-DAG
common ancestor, 3-way merges each head’s View (emitting
RefTarget::Conflicted for divergent refs), writes a synthetic
merge Operation, and advances op-heads. The returned
ReadonlyRepo is pinned to that merge op.
§Errors
- RepoError::Uninitialized if the op-heads store is empty - call ReadonlyRepo::init first.
- RepoError::NoCommonAncestor if the op-DAG is malformed.
- Store / codec errors if loading objects fails.
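The per-ref merge rule described above can be sketched as a standalone function. This is a minimal illustration, not the crate's implementation: `RefTarget` and its `Conflicted` arm mirror the crate's vocabulary, but the concrete representation here (plain strings for commit ids) is hypothetical.

```rust
#[derive(Clone, Debug, PartialEq)]
enum RefTarget {
    Normal(String),          // points at a single commit id
    Conflicted(Vec<String>), // divergent targets, surfaced to the reader
}

/// Merge one ref given its value at the op-DAG common ancestor (`base`)
/// and in each of the two concurrent heads.
fn merge_ref(
    base: Option<&RefTarget>,
    left: Option<&RefTarget>,
    right: Option<&RefTarget>,
) -> Option<RefTarget> {
    if left == right {
        return left.cloned(); // both sides agree
    }
    if left == base {
        return right.cloned(); // only the right side changed the ref
    }
    if right == base {
        return left.cloned(); // only the left side changed the ref
    }
    // Both sides changed the ref in different ways: emit a conflict
    // instead of silently picking a winner.
    let mut targets = Vec::new();
    for t in [left, right].into_iter().flatten() {
        if let RefTarget::Normal(id) = t {
            targets.push(id.clone());
        }
    }
    Some(RefTarget::Conflicted(targets))
}
```

The key property is that a one-sided change wins cleanly, while a two-sided divergence is preserved as `Conflicted` rather than lost.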
pub fn load_at(
    blockstore: Arc<dyn Blockstore>,
    op_heads: Arc<dyn OpHeadsStore>,
    op_id: Cid,
) -> Result<Self, Error>
Load a repository view pinned to a specific OperationId.
Does not consult the op-heads store. Used internally by
open and Transaction::commit.
§Errors
Store / codec errors if loading objects fails.
pub fn head_commit(&self) -> Option<&Commit>
The head Commit of the current view. None on a freshly-initialized repository that hasn’t yet received any commits.
pub fn blockstore(&self) -> &Arc<dyn Blockstore>
Access the underlying blockstore (borrowed Arc).
pub fn op_heads_store(&self) -> &Arc<dyn OpHeadsStore>
Access the underlying op-heads store (borrowed Arc).
pub fn embedding_for(
    &self,
    node_cid: &Cid,
    model: &str,
) -> Result<Option<Embedding>, Error>
Look up the embedding for a node by its content-addressed
NodeCid and a model identifier, walking the
Commit::embeddings
Prolly sidecar. Returns None when:
- the repo has no commits yet,
- the head commit has no embedding sidecar (embeddings = None),
- the sidecar tree has no entry for this NodeCid, or
- the bucket exists but does not carry a vector under the requested model string.
The Prolly key is derived via the same helper
(embedding_key_for_node_cid) the write side uses, so a
Transaction::set_embedding write and a subsequent
embedding_for read are guaranteed to agree on the bucket
location.
§Why not on Node?
The same trade-off documented on
Commit::embeddings:
dense vector bytes drift in the last bit across ORT thread
counts, so storing them on the Node would couple NodeCid
to thread count. The sidecar separates identity (Node) from
derived bytes (Embedding) so NodeCid stays stable.
§Errors
Store or codec errors while walking the Prolly tree or
decoding the bucket. A missing key is Ok(None), not an error.
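The read path above can be sketched with a plain map standing in for the Prolly sidecar. Everything here is a simplified stand-in: the real `embedding_key_for_node_cid` encodes the CID for the tree, and the real buckets are decoded blocks, but the shape of the lookup - shared key helper, then a per-model entry, with every miss mapping to `None` - is the point being illustrated.

```rust
use std::collections::BTreeMap;

type Key = Vec<u8>;
/// One sidecar bucket: vectors keyed by model identifier.
type Bucket = BTreeMap<String, Vec<f32>>;

/// Stand-in for the shared key helper; the write side and the read side
/// must call the same function so they agree on the bucket location.
fn embedding_key_for_node_cid(node_cid: &[u8]) -> Key {
    node_cid.to_vec()
}

fn embedding_for(
    sidecar: &BTreeMap<Key, Bucket>,
    node_cid: &[u8],
    model: &str,
) -> Option<Vec<f32>> {
    let key = embedding_key_for_node_cid(node_cid);
    // Missing bucket or missing model entry is a normal `None`, not an error.
    sidecar.get(&key)?.get(model).cloned()
}
```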
pub fn outgoing_edges(
    &self,
    src: &NodeId,
    etype_filter: Option<&[&str]>,
) -> Result<Vec<Edge>, Error>
All outgoing edges from src in the current commit, optionally
filtered by edge-type label. Returns an empty vec if the node
has no adjacency bucket (no authored out-edges), or if the repo
has no commits yet.
Used by graph-aware retrieval (Retriever::with_graph_expand)
to expand a seed set via 1-hop neighborhood traversal.
§Errors
Store or codec errors while walking the adjacency index or decoding Edge blocks.
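The 1-hop expansion pattern that graph-aware retrieval is described as building on can be sketched in isolation. The `Edge` struct and in-memory edge list below are hypothetical stand-ins for the crate's types; only the filter-then-collect shape is the point.

```rust
use std::collections::BTreeSet;

struct Edge {
    src: String,
    dst: String,
    etype: String,
}

/// All outgoing edges from `src`, optionally filtered by edge-type label.
fn outgoing_edges<'a>(
    edges: &'a [Edge],
    src: &str,
    etype_filter: Option<&[&str]>,
) -> Vec<&'a Edge> {
    edges
        .iter()
        .filter(|e| e.src == src)
        .filter(|e| etype_filter.map_or(true, |f| f.contains(&e.etype.as_str())))
        .collect()
}

/// Expand a seed set via its 1-hop out-neighborhood.
fn expand_one_hop(
    edges: &[Edge],
    seeds: &[&str],
    etype_filter: Option<&[&str]>,
) -> BTreeSet<String> {
    let mut out = BTreeSet::new();
    for seed in seeds {
        for e in outgoing_edges(edges, seed, etype_filter) {
            out.insert(e.dst.clone());
        }
    }
    out
}
```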
pub fn incoming_edges(
    &self,
    dst: &NodeId,
    etype_filter: Option<&[&str]>,
) -> Result<Vec<Edge>, Error>
All incoming edges pointing at dst in the current commit,
optionally filtered by edge-type label. Returns an empty vec if
the node has no incoming-adjacency bucket, if the commit’s
IndexSet has no incoming tree (pre-0.3 repos), or if the
repo has no commits yet.
Symmetric mirror of Self::outgoing_edges. Use this from
agent-side callers that want “who points at this node” without
constructing a full crate::index::Query.
§Errors
Store or codec errors while walking the incoming-adjacency index or decoding Edge blocks.
pub fn incoming_edges_capped(
    &self,
    dst: &NodeId,
    etype_filter: Option<&[&str]>,
    cap: usize,
) -> Result<Vec<Edge>, Error>
Explicit-cap variant of Self::incoming_edges. Use this
when a caller is prepared to handle truncation (e.g. an MCP
tool that streams the bucket and renders its own
“clipped at N” marker). Default Self::incoming_edges
applies crate::index::Query::DEFAULT_ADJACENCY_CAP so a single
high-fan-in dst can’t DoS the agent-side caller.
§Errors
Store or codec errors while walking the incoming-adjacency index or decoding Edge blocks.
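The truncation contract a capped caller is expected to handle can be sketched generically: return at most `cap` items plus a flag the caller can render as its own "clipped at N" marker. This is an illustrative shape, not the crate's actual return type (which is a plain `Vec<Edge>`).

```rust
/// Take at most `cap` items from a high-fan-in bucket and report whether
/// anything was clipped, so the caller can surface a truncation marker.
fn capped<T: Clone>(bucket: &[T], cap: usize) -> (Vec<T>, bool) {
    let clipped = bucket.len() > cap;
    (bucket.iter().take(cap).cloned().collect(), clipped)
}
```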
pub fn is_tombstoned(&self, id: &NodeId) -> bool
Whether id is listed in the current View’s tombstone map.
true means a prior commit on this view recorded a
Tombstone against the node -
retrieval paths filter it out by default. The underlying Node
block may still exist in the node Prolly tree and remains
addressable by CID; only the “show this to an agent” decision
changes.
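The "filter, don't delete" behavior above can be sketched as a small visibility pass over node ids. The types are simplified stand-ins: in the real repo the tombstone map lives on the View and the node blocks stay in the Prolly tree.

```rust
use std::collections::BTreeSet;

/// Default retrieval behavior: hide tombstoned nodes from the result set
/// while leaving them addressable in the underlying store.
fn visible_nodes<'a>(all: &[&'a str], tombstones: &BTreeSet<&str>) -> Vec<&'a str> {
    all.iter()
        .copied()
        .filter(|id| !tombstones.contains(id))
        .collect()
}
```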
pub fn tombstone_for(&self, id: &NodeId) -> Option<&Tombstone>
Fetch the tombstone record for id, if any.
pub fn start_transaction(&self) -> Transaction
Start a transaction. The returned Transaction holds a cheap
clone of the current repo state; multiple transactions can be
started concurrently but only the first to commit wins (subsequent
commits against stale heads will land on a concurrent op-head in
M8.5’s merge model).
pub const fn query(&self) -> Query<'_>
Convenience: Query::new(self). One-liner entry point for the
agent-facing retrieval API.
let hits = repo.query().label("Person").where_eq("name", "Alice").execute()?;
pub fn build_vector_index(
    &self,
    model: &str,
) -> Result<BruteForceVectorIndex, Error>
Build a full-corpus vector index over every node whose
crate::objects::Embedding::model equals model. Dimensions
are inferred from the first matching embedding; subsequent
embeddings with a different dim are silently skipped.
Each index binds to a single (model, dim) - agents who use
multiple embedding models build one index per model.
§Errors
- RepoError::Uninitialized if the repo has no head commit.
- Store / codec errors from walking the node Prolly tree.
- crate::error::ObjectError::EmbeddingSizeMismatch on a corrupted embedding (vector length disagrees with dim * bytes_per_dtype).
pub fn retrieve(&self) -> Retriever<'_>
Start an agent-facing retrieval builder that composes the
structured query, dense vector similarity, and learned-sparse
retrieval under a token budget. See crate::retrieve for the
full model.
let result = repo
.retrieve()
.label("Document")
.vector("openai:text-embedding-3-small", embedding)
.token_budget(2000)
    .execute()?;
pub fn update_ref(
    &self,
    name: &str,
    expected_prev: Option<&RefTarget>,
    new: Option<RefTarget>,
    author: &str,
) -> Result<Self, Error>
Atomically update a named ref, subject to an expected-previous check (SPEC §6.4).
Semantics:
- Read the current value of name in the current view’s refs.
- If the current value does not ==-compare to expected_prev (structurally equal, not byte-exact - our RefTarget derives PartialEq and constructs canonical form), return RepoError::Stale.
- Otherwise, build a new View with the ref updated (insert if new is Some, remove if new is None), a new Operation wrapping it, advance op-heads, and return a fresh repo.
Per SPEC §6.4, CAS guarantees no lost update - two
concurrent CAS attempts against the same base both succeed at
the op-log layer, and the next read sees a conflicted refs state.
For exactly-one-winner semantics, combine with
Transaction::commit_opts’s linearize: true or with an
out-of-process coordinator.
§Errors
- RepoError::Stale on mismatch with expected_prev.
- Codec / store errors on write.
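The expected-previous check can be sketched against a plain map standing in for the view's refs. `RefError::Stale` below mirrors `RepoError::Stale`; the string-valued refs and the mutable-map shape are illustrative only (the real method builds a new View and Operation rather than mutating in place).

```rust
use std::collections::BTreeMap;

#[derive(Debug, PartialEq)]
enum RefError {
    Stale,
}

/// Insert, update, or delete a named ref, but only if its current value
/// matches what the caller last observed.
fn update_ref(
    refs: &mut BTreeMap<String, String>,
    name: &str,
    expected_prev: Option<&str>,
    new: Option<String>,
) -> Result<(), RefError> {
    if refs.get(name).map(String::as_str) != expected_prev {
        return Err(RefError::Stale); // someone else moved the ref first
    }
    match new {
        Some(target) => {
            refs.insert(name.to_string(), target);
        }
        None => {
            refs.remove(name);
        }
    }
    Ok(())
}
```

A caller that receives `Stale` typically re-reads the ref and retries with the fresh expected value.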
Trait Implementations§
impl Clone for ReadonlyRepo
fn clone(&self) -> ReadonlyRepo
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.