pub struct MutationEngine { /* private fields */ }
Coordinates all columnar mutations for a single collection.
Owns the memtable, PK index, and per-segment delete bitmaps. Produces WAL records for each mutation that the caller must persist.
Implementations
impl MutationEngine
pub fn new(collection: String, schema: ColumnarSchema) -> Self
Create a new mutation engine for a collection.
pub fn insert(
    &mut self,
    values: &[Value],
) -> Result<MutationResult, ColumnarError>
Insert a row with upsert-on-duplicate semantics. Returns WAL records to persist.
Validates schema. If the PK already exists, the prior row is
tombstoned via the segment’s delete bitmap (a single positional
delete) before the new row is appended to the memtable. The PK
index is rebound to the new row location. This matches the
ClickHouse / Iceberg “sparse PK + positional delete” model and
keeps SELECT WHERE pk = X linearizable on one row without a
read-time merge pass.
Callers that want strict INSERT (error on duplicate) should check
pk_index().contains() themselves before calling; callers that
want ON CONFLICT DO NOTHING semantics should use
Self::insert_if_absent.
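The strict-INSERT pattern described above can be sketched as follows. This is a non-runnable sketch against this API: the shared pk_index() accessor's contains signature and the ColumnarError::DuplicateKey variant are assumptions, not documented parts of the crate.

```rust
// Hedged sketch: strict INSERT (error on duplicate) layered over the
// upsert primitive, per the guidance above.
fn strict_insert(
    engine: &mut MutationEngine,
    values: &[Value],
) -> Result<MutationResult, ColumnarError> {
    // Encode the PK the same way the index stores it, then probe.
    let pk_bytes = engine.encode_pk_from_row(values)?;
    if engine.pk_index().contains(&pk_bytes) {
        // Hypothetical variant; map to whatever duplicate-key error
        // your ColumnarError actually defines.
        return Err(ColumnarError::DuplicateKey);
    }
    engine.insert(values)
}
```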
pub fn insert_if_absent(
    &mut self,
    values: &[Value],
) -> Result<MutationResult, ColumnarError>
INSERT ... ON CONFLICT DO NOTHING semantics: append only if the
PK is absent; silently skip on duplicate.
Returns Ok(MutationResult { wal_records }) with an empty vector
when the row was skipped, so callers that batch WAL appends can
detect no-ops by checking wal_records.is_empty().
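The no-op detection contract looks like this in caller code. A non-runnable sketch: the wal handle and its append_all method are illustrative caller-side pieces, not part of this API.

```rust
// Hedged sketch: ON CONFLICT DO NOTHING with no-op detection, using the
// documented contract that a skipped row yields empty wal_records.
let result = engine.insert_if_absent(&values)?;
if result.wal_records.is_empty() {
    // Duplicate PK: the row was skipped; nothing to persist.
} else {
    // `wal` is an illustrative caller-side WAL handle.
    wal.append_all(&result.wal_records)?;
}
```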
pub fn lookup_memtable_row_by_pk(&self, pk_bytes: &[u8]) -> Option<Vec<Value>>
Look up the current row for a PK in the memtable, if present.
Returns None if the PK is not in the index, or if the PK points
to a flushed segment (callers needing cross-segment lookup must
go through a segment reader separately). This is the fast path
used by ON CONFLICT DO UPDATE to read the would-be-merged row
when the duplicate hits the memtable — the common case under
back-to-back inserts.
pub fn encode_pk_from_row(
    &self,
    values: &[Value],
) -> Result<Vec<u8>, ColumnarError>
Encode a PK value as index bytes. Exposed for callers that need
to probe the PK index (e.g. ON CONFLICT DO UPDATE routing).
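Together with lookup_memtable_row_by_pk, this supports ON CONFLICT DO UPDATE routing. A non-runnable sketch: merge_assignments and the assignments value are hypothetical caller-side pieces, not part of this API.

```rust
// Hedged sketch: probe the index with the encoded key, read the current
// memtable row on a hit, merge the SET assignments, then upsert.
let pk_bytes = engine.encode_pk_from_row(&new_values)?;
match engine.lookup_memtable_row_by_pk(&pk_bytes) {
    Some(old_row) => {
        // Common case under back-to-back inserts: duplicate is in the
        // memtable. `merge_assignments` is a hypothetical helper.
        let merged = merge_assignments(&old_row, &assignments);
        engine.insert(&merged)?; // upsert replaces the old row
    }
    None => {
        // PK absent, or it points into a flushed segment; the
        // cross-segment case must go through a segment reader.
        engine.insert(&new_values)?;
    }
}
```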
pub fn delete(
    &mut self,
    pk_value: &Value,
) -> Result<MutationResult, ColumnarError>
Delete a row by PK value. Returns WAL record to persist.
Looks up PK in the index to find the segment + row, then marks the row in the segment’s delete bitmap.
pub fn update(
    &mut self,
    old_pk: &Value,
    new_values: &[Value],
) -> Result<MutationResult, ColumnarError>
Update a row by PK: DELETE old + INSERT new.
The caller must pass the complete new row in new_values, already merged with the old row: columns that are not being changed must carry their existing values forward. Returns WAL records for both the delete and the insert.
pub fn on_memtable_flushed(&mut self, new_segment_id: u32) -> MutationResult
Notify the engine that the memtable was flushed to a new segment.
Updates the PK index to remap memtable entries to the new segment. Returns the WAL record for the flush event.
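A typical flush cycle using the documented hooks might look like this. Non-runnable sketch: segment_writer and wal are illustrative caller-side handles, and the exact name of the memtable drain method is assumed.

```rust
// Hedged sketch: flush when the engine says so, write the segment under
// the pre-announced ID, then remap the PK index via the hook.
if engine.should_flush() {
    let segment_id = engine.next_segment_id();
    let rows: Vec<Vec<Value>> = engine.scan_memtable_rows().collect();
    segment_writer.write_segment(segment_id, &rows)?; // illustrative
    engine.memtable_mut().drain();                    // method name assumed
    let result = engine.on_memtable_flushed(segment_id);
    wal.append_all(&result.wal_records)?;             // illustrative
}
```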
pub fn on_compaction_complete(
    &mut self,
    old_segment_ids: &[u32],
    new_segment_id: u32,
    row_mapping: &HashMap<(u32, u32), u32>,
) -> MutationResult
Notify the engine that compaction completed.
Remaps PK index entries and removes old delete bitmaps.
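The shape of row_mapping is (old_segment_id, old_row) → new_row. A self-contained, runnable illustration (the helper name and the surviving-row renumbering scheme are assumptions about how a compactor would build this map, not part of this API):

```rust
use std::collections::HashMap;

// Build the (old_segment_id, old_row) -> new_row map that
// on_compaction_complete expects, assuming the compactor renumbers
// surviving rows contiguously in scan order.
fn build_row_mapping(survivors: &[(u32, u32)]) -> HashMap<(u32, u32), u32> {
    survivors
        .iter()
        .enumerate()
        .map(|(new_row, &old)| (old, new_row as u32))
        .collect()
}

fn main() {
    // Segments 1 and 2 compact into one segment; deleted rows are gone,
    // so old row numbers are sparse while new ones are dense.
    let survivors = [(1, 0), (1, 2), (1, 5), (2, 1), (2, 4)];
    let mapping = build_row_mapping(&survivors);
    assert_eq!(mapping[&(1, 2)], 1);
    assert_eq!(mapping[&(2, 4)], 4);
    assert_eq!(mapping.len(), 5);
}
```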
pub fn memtable(&self) -> &ColumnarMemtable
Access the memtable.
pub fn memtable_mut(&mut self) -> &mut ColumnarMemtable
Mutable access to the memtable (for drain on flush).
pub fn pk_index_mut(&mut self) -> &mut PkIndex
Mutable access to the PK index (for cold-start rebuild).
pub fn delete_bitmap(&self, segment_id: u32) -> Option<&DeleteBitmap>
Access a segment’s delete bitmap.
pub fn delete_bitmaps(&self) -> &HashMap<u32, DeleteBitmap>
Access all delete bitmaps.
pub fn collection(&self) -> &str
The collection name.
pub fn schema(&self) -> &ColumnarSchema
The schema.
pub fn should_flush(&self) -> bool
Whether the memtable should be flushed.
pub fn scan_memtable_rows(&self) -> impl Iterator<Item = Vec<Value>> + '_
Iterate non-deleted rows in the memtable as Vec<Value>.
Skips rows marked as deleted in the memtable’s virtual segment
delete bitmap. For rows in flushed segments, use SegmentReader.
pub fn get_memtable_row(&self, row_idx: usize) -> Option<Vec<Value>>
Get a single row from the memtable by index (None if deleted).
pub fn next_segment_id(&self) -> u32
The segment ID that will be assigned to the next flushed segment.
Use this to obtain the ID to pass to on_memtable_flushed.
pub fn should_compact(&self, segment_id: u32, total_rows: u64) -> bool
Whether a segment should be compacted based on its delete ratio.
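A delete-ratio policy of the kind this method describes can be sketched standalone. The 0.3 threshold is an assumption for illustration, not the engine's actual constant.

```rust
// Self-contained sketch of a delete-ratio compaction policy: compact a
// segment once the fraction of positionally deleted rows crosses a
// threshold. The threshold value here is an assumption.
fn should_compact(deleted_rows: u64, total_rows: u64, threshold: f64) -> bool {
    total_rows > 0 && (deleted_rows as f64 / total_rows as f64) >= threshold
}

fn main() {
    assert!(should_compact(40, 100, 0.3));  // 40% deleted: compact
    assert!(!should_compact(10, 100, 0.3)); // 10% deleted: leave it
    assert!(!should_compact(0, 0, 0.3));    // empty segment: never
}
```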
Auto Trait Implementations
impl Freeze for MutationEngine
impl RefUnwindSafe for MutationEngine
impl Send for MutationEngine
impl Sync for MutationEngine
impl Unpin for MutationEngine
impl UnsafeUnpin for MutationEngine
impl UnwindSafe for MutationEngine
Blanket Implementations
impl<T> ArchivePointee for T
type ArchivedMetadata = ()
fn pointer_metadata(
    _: &<T as ArchivePointee>::ArchivedMetadata,
) -> <T as Pointee>::Metadata
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> LayoutRaw for T
fn layout_raw(_: <T as Pointee>::Metadata) -> Result<Layout, LayoutError>
impl<T, N1, N2> Niching<NichedOption<T, N1>> for N2
unsafe fn is_niched(niched: *const NichedOption<T, N1>) -> bool
fn resolve_niched(out: Place<NichedOption<T, N1>>)
Writes data to out indicating that a T is niched.