pub struct MutationEngine { /* private fields */ }
Coordinates all columnar mutations for a single collection.
Owns the memtable, PK index, and per-segment delete bitmaps. Produces WAL records for each mutation that the caller must persist.
Implementations
impl MutationEngine
pub fn new(collection: String, schema: ColumnarSchema) -> Self
Create a new mutation engine for a collection.
pub fn memtable(&self) -> &ColumnarMemtable
Access the memtable.
pub fn memtable_mut(&mut self) -> &mut ColumnarMemtable
Mutable access to the memtable (for drain on flush).
pub fn pk_index_mut(&mut self) -> &mut PkIndex
Mutable access to the PK index (for cold-start rebuild).
pub fn delete_bitmap(&self, segment_id: u64) -> Option<&DeleteBitmap>
Access a segment’s delete bitmap.
pub fn delete_bitmap_mut(&mut self, segment_id: u64) -> &mut DeleteBitmap
Mutable access to a segment’s delete bitmap. Creates an empty one
on first access so callers can mark_deleted_batch unconditionally.
Used by temporal-purge paths that tombstone superseded row positions
without going through the single-row insert / delete paths.
pub fn memtable_segment_id(&self) -> u64
The virtual segment id used for rows still in the memtable.
pub fn pk_col_indices(&self) -> &[usize]
The schema’s primary-key column indices, in schema order.
pub fn delete_bitmaps(&self) -> &HashMap<u64, DeleteBitmap>
Access all delete bitmaps.
pub fn collection(&self) -> &str
The collection name.
pub fn schema(&self) -> &ColumnarSchema
The schema.
pub fn should_flush(&self) -> bool
Whether the memtable should be flushed.
pub fn memtable_surrogates(&self) -> &[Option<Surrogate>]
Access the per-row surrogate table for the memtable.
Index matches memtable row order; None entries indicate rows
inserted without a surrogate (test fixtures, legacy paths).
pub fn scan_memtable_rows(&self) -> impl Iterator<Item = Vec<Value>> + '_
Iterate non-deleted rows in the memtable as Vec<Value>.
Skips rows marked as deleted in the memtable’s virtual segment
delete bitmap. For rows in flushed segments, use SegmentReader.
pub fn scan_memtable_rows_with_surrogates(
    &self,
) -> impl Iterator<Item = (Option<Surrogate>, Vec<Value>)> + '_
Iterate non-deleted rows paired with their surrogate identity.
Yields (Option<Surrogate>, Vec<Value>). The surrogate is None
for rows inserted without one (test fixtures, legacy paths). Deleted
rows are filtered out exactly as in Self::scan_memtable_rows.
pub fn get_memtable_row(&self, row_idx: usize) -> Option<Vec<Value>>
Get a single row from the memtable by index (None if deleted).
pub fn rollback_memtable_inserts(
    &mut self,
    row_count_before: usize,
    inserted_pks: &[Vec<u8>],
    displaced: &[(Vec<u8>, RowLocation)],
)
Roll back in-memory inserts to row_count_before.
Undoes the effect of one or more inserts that appended rows starting
at row_count_before. For each inserted row:
- The corresponding PK entry is removed from the PK index.
- If the insert displaced a prior row (upsert tombstone), that prior row’s PK index entry is restored and its tombstone bit cleared.
The memtable is then truncated to row_count_before. Used exclusively
by the transaction undo log; never called on the normal write path.
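The undo sequence above can be sketched with std types only. This is an illustrative model, not the crate's API: `Vec<Vec<u8>>` stands in for the memtable, a `HashMap` for the PK index, a `Vec<bool>` for the virtual segment's delete bitmap, and a plain `usize` for `RowLocation`.

```rust
use std::collections::HashMap;

// Sketch: drop index entries for the aborted inserts, restore entries
// for rows they displaced (clearing their upsert tombstones), then
// truncate the memtable back to its pre-transaction length.
fn rollback_memtable_inserts(
    rows: &mut Vec<Vec<u8>>,
    pk_index: &mut HashMap<Vec<u8>, usize>,
    tombstones: &mut Vec<bool>,
    row_count_before: usize,
    inserted_pks: &[Vec<u8>],
    displaced: &[(Vec<u8>, usize)],
) {
    for pk in inserted_pks {
        pk_index.remove(pk); // undo the aborted insert's index binding
    }
    for (pk, prior_pos) in displaced {
        // The insert was an upsert: rebind the PK to the prior row and
        // clear the positional tombstone it set on that row.
        pk_index.insert(pk.clone(), *prior_pos);
        tombstones[*prior_pos] = false;
    }
    rows.truncate(row_count_before);
    tombstones.truncate(row_count_before);
}

fn main() {
    // One committed row "a" at position 0, then an aborted upsert of "a"
    // that appended position 1 and tombstoned position 0.
    let mut rows = vec![b"a".to_vec(), b"a".to_vec()];
    let mut pk_index = HashMap::from([(b"a".to_vec(), 1usize)]);
    let mut tombstones = vec![true, false];
    rollback_memtable_inserts(
        &mut rows, &mut pk_index, &mut tombstones,
        1, &[b"a".to_vec()], &[(b"a".to_vec(), 0)],
    );
    assert_eq!(rows.len(), 1);
    assert_eq!(pk_index[&b"a".to_vec()], 0);
    assert!(!tombstones[0]);
}
```

Removal of the aborted PK entries happens before restoring displaced ones, so an upsert of the same PK rolls back to the prior binding rather than disappearing.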
pub fn next_segment_id(&self) -> u64
The segment ID that will be assigned to the next flushed segment.
Use this to obtain the ID to pass to on_memtable_flushed.
pub fn should_compact(&self, segment_id: u64, total_rows: u64) -> bool
Whether a segment should be compacted based on its delete ratio.
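A delete-ratio check of this shape can be sketched as a free function; the threshold value and the idea that `deleted_rows` comes from the segment's delete-bitmap cardinality are assumptions for illustration, not the engine's actual policy:

```rust
// Sketch: compact once the fraction of tombstoned rows crosses a
// threshold. The 0.25 used below is an illustrative value only.
fn should_compact(deleted_rows: u64, total_rows: u64, threshold: f64) -> bool {
    // Guard against empty segments, then compare the delete ratio.
    total_rows > 0 && (deleted_rows as f64 / total_rows as f64) >= threshold
}

fn main() {
    assert!(should_compact(30, 100, 0.25));  // 30% deleted >= 25% threshold
    assert!(!should_compact(10, 100, 0.25)); // below threshold
    assert!(!should_compact(0, 0, 0.25));    // empty segment never compacts
}
```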
pub fn encode_pk_from_row(
    &self,
    values: &[Value],
) -> Result<Vec<u8>, ColumnarError>
Encode a PK value as index bytes. Exposed for callers that need
to probe the PK index (e.g. ON CONFLICT DO UPDATE routing).
impl MutationEngine
pub fn on_memtable_flushed(
    &mut self,
    new_segment_id: u64,
) -> Result<MutationResult, ColumnarError>
Notify the engine that the memtable was flushed to a new segment.
Updates the PK index to remap memtable entries to the new segment.
Returns the WAL record for the flush event, or SegmentIdExhausted
if the u64 segment ID counter has wrapped past its maximum.
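The remap itself can be sketched with std types. The `MEMTABLE_SEGMENT` sentinel value and the `(segment, row)` tuple standing in for `RowLocation` are illustrative assumptions, as is the premise that flush preserves memtable row order:

```rust
use std::collections::HashMap;

// Illustrative stand-in for the virtual memtable segment id.
const MEMTABLE_SEGMENT: u64 = u64::MAX;

// Sketch: every PK-index entry that points at the virtual memtable
// segment is rebound to the newly flushed segment. Row positions are
// kept, assuming flush writes rows in memtable order.
fn on_memtable_flushed(pk_index: &mut HashMap<Vec<u8>, (u64, u32)>, new_segment_id: u64) {
    for loc in pk_index.values_mut() {
        if loc.0 == MEMTABLE_SEGMENT {
            loc.0 = new_segment_id;
        }
    }
}

fn main() {
    let mut idx = HashMap::from([
        (b"a".to_vec(), (MEMTABLE_SEGMENT, 0u32)), // still in memtable
        (b"b".to_vec(), (3u64, 7u32)),             // already flushed
    ]);
    on_memtable_flushed(&mut idx, 4);
    assert_eq!(idx[&b"a".to_vec()], (4, 0));
    assert_eq!(idx[&b"b".to_vec()], (3, 7)); // flushed entries untouched
}
```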
pub fn on_compaction_complete(
    &mut self,
    old_segment_ids: &[u64],
    new_segment_id: u64,
    row_mapping: &HashMap<(u64, u32), u32>,
) -> MutationResult
Notify the engine that compaction completed.
Remaps PK index entries and removes old delete bitmaps.
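The PK-index half of that remap can be sketched with std types (a `(segment, row)` tuple stands in for `RowLocation`; this is an illustrative model, not the crate's code). Entries whose rows are absent from `row_mapping` are treated as deleted-away and dropped:

```rust
use std::collections::HashMap;

// Sketch: entries in the old segments are looked up in row_mapping;
// survivors are rebound to (new_segment_id, new_row), the rest removed.
fn on_compaction_complete(
    pk_index: &mut HashMap<Vec<u8>, (u64, u32)>,
    old_segment_ids: &[u64],
    new_segment_id: u64,
    row_mapping: &HashMap<(u64, u32), u32>,
) {
    pk_index.retain(|_, loc| {
        if !old_segment_ids.contains(&loc.0) {
            return true; // segment untouched by this compaction
        }
        match row_mapping.get(&(loc.0, loc.1)) {
            Some(&new_row) => {
                *loc = (new_segment_id, new_row);
                true
            }
            None => false, // row did not survive compaction
        }
    });
}

fn main() {
    let mut idx = HashMap::from([
        (b"a".to_vec(), (1u64, 0u32)),
        (b"b".to_vec(), (2u64, 5u32)),
    ]);
    // Only "a"'s row survives into the compacted segment 9.
    let mapping = HashMap::from([((1u64, 0u32), 0u32)]);
    on_compaction_complete(&mut idx, &[1, 2], 9, &mapping);
    assert_eq!(idx[&b"a".to_vec()], (9, 0));
    assert!(!idx.contains_key(&b"b".to_vec()));
}
```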
impl MutationEngine
pub fn insert(
    &mut self,
    values: &[Value],
) -> Result<MutationResult, ColumnarError>
Insert a row with upsert-on-duplicate semantics. Returns WAL records to persist.
Validates schema. If the PK already exists, the prior row is
tombstoned via the segment’s delete bitmap (a single positional
delete) before the new row is appended to the memtable. The PK
index is rebound to the new row location. This matches the
ClickHouse / Iceberg “sparse PK + positional delete” model and
keeps SELECT WHERE pk = X linearizable on one row without a
read-time merge pass.
Callers that want strict INSERT (error on duplicate) should check
pk_index().contains() themselves before calling; callers that
want ON CONFLICT DO NOTHING semantics should use
Self::insert_if_absent.
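The "sparse PK + positional delete" upsert can be sketched with std types only. `MemDb` and its fields are illustrative stand-ins for the memtable, PK index, and delete bitmap, not the crate's API:

```rust
use std::collections::{HashMap, HashSet};

// Minimal model: an append-only row store, a PK index pointing at the
// single live position per key, and a positional delete set.
struct MemDb {
    rows: Vec<(u64, String)>,      // memtable: (pk, payload), append-only
    pk_index: HashMap<u64, usize>, // pk -> current row position
    deleted: HashSet<usize>,       // positional delete "bitmap"
}

impl MemDb {
    fn new() -> Self {
        Self { rows: Vec::new(), pk_index: HashMap::new(), deleted: HashSet::new() }
    }

    // Upsert-on-duplicate: tombstone the prior position (if any),
    // append the new row, rebind the PK to the new position.
    fn insert(&mut self, pk: u64, payload: &str) {
        if let Some(&old_pos) = self.pk_index.get(&pk) {
            self.deleted.insert(old_pos); // single positional delete
        }
        let pos = self.rows.len();
        self.rows.push((pk, payload.to_string()));
        self.pk_index.insert(pk, pos);
    }

    // Point lookup needs no read-time merge: the index already points
    // at the one live row for this PK.
    fn get(&self, pk: u64) -> Option<&str> {
        let &pos = self.pk_index.get(&pk)?;
        (!self.deleted.contains(&pos)).then(|| self.rows[pos].1.as_str())
    }
}

fn main() {
    let mut db = MemDb::new();
    db.insert(1, "v1");
    db.insert(1, "v2"); // duplicate PK: tombstones row 0, appends row 1
    assert_eq!(db.get(1), Some("v2"));
    assert_eq!(db.deleted.len(), 1);
}
```

The design trade is visible even in the sketch: writes pay one extra bitmap insertion on duplicates, but point reads never consult more than one row.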
pub fn insert_with_surrogate(
    &mut self,
    values: &[Value],
    surrogate: Surrogate,
) -> Result<MutationResult, ColumnarError>
Insert with a stable cross-engine surrogate identity.
Identical to Self::insert but also records surrogate in the
per-row side-table so scan prefilters can perform bitmap membership
checks without a separate lookup pass.
pub fn insert_if_absent(
    &mut self,
    values: &[Value],
) -> Result<MutationResult, ColumnarError>
INSERT ... ON CONFLICT DO NOTHING semantics: append only if the
PK is absent; silently skip on duplicate.
Returns Ok(MutationResult { wal_records }) with an empty vector
when the row was skipped, so callers that batch WAL appends can
detect no-ops by checking wal_records.is_empty().
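That empty-vector contract can be sketched with illustrative stand-in types (`Engine` and `WalRecord` below are not the crate's real API):

```rust
use std::collections::HashSet;

struct WalRecord(#[allow(dead_code)] u64);

struct Engine {
    pks: HashSet<u64>,
}

impl Engine {
    // ON CONFLICT DO NOTHING: a skipped duplicate produces no WAL
    // records, so callers can detect the no-op via is_empty().
    fn insert_if_absent(&mut self, pk: u64) -> Vec<WalRecord> {
        if self.pks.contains(&pk) {
            return Vec::new(); // duplicate: silently skip
        }
        self.pks.insert(pk);
        vec![WalRecord(pk)]
    }
}

fn main() {
    let mut e = Engine { pks: HashSet::new() };
    assert!(!e.insert_if_absent(7).is_empty()); // first insert emits a record
    assert!(e.insert_if_absent(7).is_empty());  // duplicate is a detectable no-op
}
```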
pub fn lookup_memtable_row_by_pk(&self, pk_bytes: &[u8]) -> Option<Vec<Value>>
Look up the current row for a PK in the memtable, if present.
Returns None if the PK is not in the index, or if the PK points
to a flushed segment (callers needing cross-segment lookup must
go through a segment reader separately). This is the fast path
used by ON CONFLICT DO UPDATE to read the would-be-merged row
when the duplicate hits the memtable — the common case under
back-to-back inserts.
pub fn delete(
    &mut self,
    pk_value: &Value,
) -> Result<MutationResult, ColumnarError>
Delete a row by PK value. Returns WAL record to persist.
Looks up PK in the index to find the segment + row, then marks the row in the segment’s delete bitmap.
pub fn update(
    &mut self,
    old_pk: &Value,
    new_values: &[Value],
) -> Result<MutationResult, ColumnarError>
Update a row by PK: DELETE old + INSERT new.
Takes the complete new row: the caller must first merge the changed
columns into the old row's values, since columns the caller did not
change must still carry their existing values in the re-insert.
Returns WAL records for both the delete and the insert.
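That caller-side merge can be sketched with std types; `i64` stands in for `Value` and the column/update shapes are illustrative, not the crate's API:

```rust
use std::collections::HashMap;

// Sketch: produce the complete new row by overlaying an update map
// onto the old row, column by column in schema order.
fn merge_row(schema_cols: &[&str], old_row: &[i64], updates: &HashMap<&str, i64>) -> Vec<i64> {
    schema_cols
        .iter()
        .zip(old_row)
        .map(|(col, &old)| updates.get(col).copied().unwrap_or(old))
        .collect()
}

fn main() {
    let updates = HashMap::from([("b", 9)]);
    // Column "b" is overwritten; "a" and "c" keep their old values.
    assert_eq!(merge_row(&["a", "b", "c"], &[1, 2, 3], &updates), vec![1, 9, 3]);
}
```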
Auto Trait Implementations
impl Freeze for MutationEngine
impl RefUnwindSafe for MutationEngine
impl Send for MutationEngine
impl Sync for MutationEngine
impl Unpin for MutationEngine
impl UnsafeUnpin for MutationEngine
impl UnwindSafe for MutationEngine
Blanket Implementations
impl<T> ArchivePointee for T
type ArchivedMetadata = ()
fn pointer_metadata(
    _: &<T as ArchivePointee>::ArchivedMetadata,
) -> <T as Pointee>::Metadata
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> LayoutRaw for T
fn layout_raw(_: <T as Pointee>::Metadata) -> Result<Layout, LayoutError>
impl<T, N1, N2> Niching<NichedOption<T, N1>> for N2
unsafe fn is_niched(niched: *const NichedOption<T, N1>) -> bool
fn resolve_niched(out: Place<NichedOption<T, N1>>)
Writes data to out indicating that a T is niched.
impl<SS, SP> SupersetOf<SS> for SP
where
    SS: SubsetOf<SP>,
fn to_subset(&self) -> Option<SS>
Attempts to construct self from the equivalent element of its superset.
fn is_in_subset(&self) -> bool
Checks if self is actually part of its subset T (and can be converted to it).
fn to_subset_unchecked(&self) -> SS
Same as self.to_subset but without any property checks. Always succeeds.
fn from_subset(element: &SS) -> SP
Converts self to the equivalent element of its superset.