pub struct Database {
pub catalog: Catalog,
pub tables: HashMap<String, Table>,
/* private fields */
}
In-memory database - manages catalog and tables through focused modules
The Database struct coordinates between multiple internal modules to provide a complete database implementation. Each aspect of database functionality is organized into a focused module:
- Transaction management: begin_transaction(), commit_transaction(), rollback_transaction(), create_savepoint(), rollback_to_savepoint()
- Table operations: create_table(), drop_table(), get_table(), insert_row(), insert_rows_batch(), update_row_by_pk()
- Point lookups: get_row_by_pk(), get_column_by_pk(), get_row_by_composite_pk()
- Change events: enable_change_events(), subscribe_changes(), notify_update(), notify_deletes()
- Persistence: enable_persistence(), sync_persistence(), last_insert_rowid()
- Caching: get_columnar(), invalidate_columnar_cache(), columnar_cache_stats()
- Session: set_sql_mode(), sql_mode(), get_session_variable(), set_session_variable()
Fields§
catalog: Catalog - Public catalog access for backward compatibility
tables: HashMap<String, Table>
Implementations§
impl Database
pub fn get_columnar(
    &self,
    table_name: &str,
) -> Result<Option<Arc<ColumnarTable>>, StorageError>
Get columnar representation of a table, using cache if available
This method provides an Arc-wrapped columnar representation of the table, enabling zero-copy sharing between queries. The cache automatically manages memory via LRU eviction.
§Arguments
- table_name - Name of the table to get the columnar representation for
§Returns
- Ok(Some(Arc<ColumnarTable>)) - Cached or newly converted columnar data
- Ok(None) - Table not found
- Err(StorageError) - Conversion failed
§Example
if let Some(columnar) = db.get_columnar("lineitem")? {
    // Use columnar data for SIMD operations
}
pub fn invalidate_columnar_cache(&self, table_name: &str)
Invalidate columnar cache entry for a table
Called automatically when a table is modified (INSERT/UPDATE/DELETE) to ensure the cache doesn’t serve stale data.
pub fn clear_columnar_cache(&self)
Clear all columnar cache entries
pub fn columnar_cache_stats(&self) -> CacheStats
Get columnar cache statistics
Returns statistics about cache hits, misses, evictions, and conversions. Useful for monitoring cache effectiveness and tuning the cache budget.
pub fn columnar_cache_memory_usage(&self) -> usize
Get current columnar cache memory usage in bytes
pub fn columnar_cache_budget(&self) -> usize
Get columnar cache memory budget in bytes
pub fn set_columnar_cache_budget(&mut self, max_bytes: usize)
Set the columnar cache memory budget
Note: This creates a new cache, discarding all cached data. Call this before loading data for best results.
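A minimal configuration sketch tying the budget methods together; the 256 MiB figure is illustrative, and the budget is set before loading so no cached data is discarded:

```rust
let mut db = Database::new();
// Set the budget up front, before any tables are loaded.
db.set_columnar_cache_budget(256 * 1024 * 1024); // 256 MiB
// ... load tables and run queries ...
// The cache's LRU eviction keeps usage within the budget.
assert!(db.columnar_cache_memory_usage() <= db.columnar_cache_budget());
```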
pub fn pre_warm_columnar_cache(
    &self,
    table_names: &[&str],
) -> Result<usize, StorageError>
Pre-warm the columnar cache for specific tables
This method eagerly converts row data to columnar format and populates the cache. Call this after data loading to avoid conversion overhead during query execution.
§Arguments
- table_names - Names of tables to pre-warm
§Returns
- Ok(count) - Number of tables successfully pre-warmed
- Err(StorageError) - Conversion failed for a table
§Example
// After loading TPC-H data
let warmed = db.pre_warm_columnar_cache(&["lineitem", "orders"])?;
eprintln!("Pre-warmed {} tables", warmed);
§Performance
This method performs the row-to-columnar conversion once, eliminating the ~31% overhead that would otherwise occur on the first query. For a 600K row LINEITEM table, this saves ~40ms per query session.
pub fn pre_warm_all_columnar(&self) -> Result<usize, StorageError>
Pre-warm the columnar cache for all tables in the database
This method eagerly converts all tables to columnar format. Useful for benchmark scenarios where all tables will be queried.
§Returns
- Ok(count) - Number of tables successfully pre-warmed
- Err(StorageError) - Conversion failed for a table
§Example
// After loading all benchmark data
let warmed = db.pre_warm_all_columnar()?;
eprintln!("Pre-warmed {} tables", warmed);
impl Database
pub fn enable_change_events(&mut self, capacity: usize) -> ChangeEventReceiver
Enable change event broadcasting
Creates a broadcast channel for notifying subscribers when data changes. Returns a receiver for the channel.
§Arguments
- capacity - Maximum number of events to buffer before old events are overwritten
§Example
let mut db = Database::new();
let mut rx = db.enable_change_events(1024);
// Insert some data
db.insert_row("users", row)?;
// Receive change events
for event in rx.recv_all() {
    println!("Change: {:?}", event);
}
pub fn subscribe_changes(&self) -> Option<ChangeEventReceiver>
Subscribe to change events
Returns a new receiver for change events if broadcasting is enabled,
or None if enable_change_events() has not been called.
§Example
// Enable broadcasting
db.enable_change_events(1024);
// Create additional subscribers
let rx1 = db.subscribe_changes().unwrap();
let rx2 = db.subscribe_changes().unwrap();
pub fn change_events_enabled(&self) -> bool
Check if change event broadcasting is enabled
pub fn notify_update(&self, table_name: &str, row_index: usize)
Notify subscribers of an update event
This should be called by the executor after successfully updating a row. The storage layer broadcasts the event to any subscribers.
§Arguments
- table_name - Name of the table that was modified
- row_index - Index of the row that was updated
pub fn notify_deletes(&self, table_name: &str, row_indices: &[usize])
Notify subscribers of a delete event
This should be called by the executor after successfully deleting rows. The storage layer broadcasts the event to any subscribers.
§Arguments
- table_name - Name of the table that was modified
- row_indices - Indices of rows that were deleted (before deletion)
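A sketch of the executor-side notification pattern these two methods describe; the table name and row indices are hypothetical:

```rust
// After the executor updated row 0 of "users" in place:
if db.change_events_enabled() {
    db.notify_update("users", 0);
}
// After the executor removed rows 3 and 7 (indices captured before deletion):
db.notify_deletes("users", &[3, 7]);
```

If broadcasting was never enabled, both calls are harmless no-ops from the subscriber's point of view, so guarding on change_events_enabled() is optional.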
impl Database
pub fn new() -> Self
Create a new empty database
Note: Security is disabled by default for backward compatibility with existing code.
Call enable_security() to turn on access control enforcement.
pub fn with_path(path: PathBuf) -> Self
Create a new database with a specific storage path
The provided path will be used as the root directory for database files.
Index files will be stored in <path>/data/indexes/.
§Example
use std::path::PathBuf;
use vibesql_storage::Database;
let db = Database::with_path(PathBuf::from("/var/lib/myapp/db"));
// Index files will be stored in /var/lib/myapp/db/data/indexes/
pub fn with_config(config: DatabaseConfig) -> Self
Create a new database with a specific configuration
Allows setting memory budgets, disk budgets, and spill policy for adaptive index management.
§Example
use vibesql_storage::{Database, DatabaseConfig};
// Browser environment with limited memory
let db = Database::with_config(DatabaseConfig::browser_default());
// Server environment with abundant memory
let db = Database::with_config(DatabaseConfig::server_default());
pub fn with_path_and_config(path: PathBuf, config: DatabaseConfig) -> Self
Create a new database with both path and configuration
§Example
use std::path::PathBuf;
use vibesql_storage::{Database, DatabaseConfig};
let db = Database::with_path_and_config(
    PathBuf::from("/var/lib/myapp/db"),
    DatabaseConfig::server_default(),
);
pub fn reset(&mut self)
Reset the database to empty state (more efficient than creating a new instance).
Clears all tables, resets the catalog to its default state, and clears all indexes and transactions. Useful for test scenarios where you need to reuse a Database instance. Database configuration (path, storage backend, memory budgets) is preserved across resets, and so is the persistence engine (the WAL remains active if enabled).
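A sketch of the test-reuse pattern this enables; the configuration choice is illustrative:

```rust
let mut db = Database::with_config(DatabaseConfig::server_default());
// ... test case 1: create tables, insert rows, run assertions ...
db.reset();
// Same instance, empty state, same memory budgets - ready for test case 2
// without paying the cost of constructing a new Database.
```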
impl Database
pub fn query_buffer_pool(&self) -> &QueryBufferPool
Get a reference to the query buffer pool for reusing allocations
pub fn get_cached_procedure_body(
    &mut self,
    name: &str,
) -> Result<&ProcedureBody, StorageError>
Get cached procedure body or cache it on first access
pub fn invalidate_procedure_cache(&mut self, name: &str)
Invalidate cached procedure body (call when procedure is dropped or replaced)
pub fn clear_routine_cache(&mut self)
Clear all cached procedure/function bodies
impl Database
pub fn debug_info(&self) -> String
Get debug information about database state
pub fn dump_tables(&self) -> String
Dump all table contents in readable format
pub fn dump_table(&self, name: &str) -> Result<String, StorageError>
Dump a specific table’s contents
impl Database
pub fn create_index(
    &mut self,
    index_name: String,
    table_name: String,
    unique: bool,
    columns: Vec<IndexColumn>,
) -> Result<(), StorageError>
Create an index
pub fn create_index_with_keys(
    &mut self,
    index_name: String,
    table_name: String,
    unique: bool,
    columns: Vec<IndexColumn>,
    keys: Vec<(Vec<SqlValue>, usize)>,
) -> Result<(), StorageError>
Create an index with pre-computed keys (for expression indexes)
This method is used when the caller has already evaluated the expressions and computed the key values for each row. This is necessary for expression indexes where the key values are derived from evaluating expressions on rows.
§Arguments
- index_name - Name of the index to create
- table_name - Name of the table this index is on
- unique - Whether this is a unique index
- columns - The index column definitions (for metadata storage)
- keys - Pre-computed (key_values, row_id) pairs
pub fn index_exists(&self, index_name: &str) -> bool
Check if an index exists
pub fn get_index(&self, index_name: &str) -> Option<&IndexMetadata>
Get index metadata
pub fn get_index_data(&self, index_name: &str) -> Option<&IndexData>
Get index data
pub fn update_indexes_for_update(
    &mut self,
    table_name: &str,
    old_row: &Row,
    new_row: &Row,
    row_index: usize,
    changed_columns: Option<&HashSet<usize>>,
)
Update user-defined indexes for update operation
§Arguments
- table_name - Name of the table being updated
- old_row - Row data before the update
- new_row - Row data after the update
- row_index - Index of the row in the table
- changed_columns - Optional set of column indices that were modified. If provided, indexes that don't involve any changed columns will be skipped.
pub fn update_indexes_for_delete(
    &mut self,
    table_name: &str,
    row: &Row,
    row_index: usize,
)
Update user-defined indexes for delete operation
pub fn batch_update_indexes_for_delete(
    &mut self,
    table_name: &str,
    rows_to_delete: &[(usize, &Row)],
)
Batch update user-defined indexes for delete operation
This is significantly more efficient than calling update_indexes_for_delete in a loop
because it pre-computes column indices once per index rather than once per row.
pub fn rebuild_indexes(&mut self, table_name: &str)
Rebuild user-defined indexes after bulk operations that change row indices
pub fn adjust_indexes_after_delete(
    &mut self,
    table_name: &str,
    deleted_indices: &[usize],
)
Adjust user-defined indexes after row deletions
This is more efficient than rebuild_indexes when only a few rows are deleted, as it adjusts row indices in place rather than rebuilding from scratch.
§Arguments
- table_name - Name of the table whose indexes need adjustment
- deleted_indices - Sorted list of deleted row indices (ascending order)
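For illustration, assume rows 2, 5, and 9 of a hypothetical orders table were just deleted (indices captured before compaction and already sorted ascending, as the contract requires):

```rust
// Small number of deletions: cheaper than rebuild_indexes, which
// reconstructs every index on the table from scratch.
db.adjust_indexes_after_delete("orders", &[2, 5, 9]);
```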
pub fn drop_index(&mut self, index_name: &str) -> Result<(), StorageError>
Drop an index
pub fn list_indexes(&self) -> Vec<String>
List all indexes
pub fn list_indexes_for_table(&self, table_name: &str) -> Vec<String>
List all indexes for a specific table
pub fn has_index_on_column(&self, table_name: &str, column_name: &str) -> bool
Check if a column has any user-defined index
This is used to determine if updates to a column require index maintenance. Returns true if any user-defined index (B-tree or spatial) includes this column.
pub fn add_to_expression_indexes_for_insert(
    &mut self,
    table_name: &str,
    row_index: usize,
    expression_keys: &HashMap<String, Vec<SqlValue>>,
)
Add row to expression indexes after insert with pre-computed keys
This method handles expression indexes which require pre-computed key values since the storage layer cannot evaluate expressions.
pub fn update_expression_indexes_for_update(
    &mut self,
    table_name: &str,
    row_index: usize,
    old_expression_keys: &HashMap<String, Vec<SqlValue>>,
    new_expression_keys: &HashMap<String, Vec<SqlValue>>,
)
Update expression indexes for update operation with pre-computed keys
pub fn update_expression_indexes_for_delete(
    &mut self,
    table_name: &str,
    row_index: usize,
    expression_keys: &HashMap<String, Vec<SqlValue>>,
)
Update expression indexes for delete operation with pre-computed keys
pub fn get_expression_indexes_for_table(
    &self,
    table_name: &str,
) -> Vec<(String, &IndexMetadata)>
Get expression indexes for a specific table
Returns metadata for all expression indexes on the table. Used by executor to determine which indexes need expression evaluation during DML operations.
pub fn has_expression_indexes(&self, table_name: &str) -> bool
Check if a table has any expression indexes
pub fn clear_expression_index_data(&mut self, table_name: &str)
Clear expression index data for a table (for rebuilding after compaction)
pub fn create_spatial_index(
    &mut self,
    metadata: SpatialIndexMetadata,
    spatial_index: SpatialIndex,
) -> Result<(), StorageError>
Create a spatial index
pub fn create_ivfflat_index(
    &mut self,
    index_name: String,
    table_name: String,
    column_name: String,
    col_idx: usize,
    dimensions: usize,
    lists: usize,
    metric: VectorDistanceMetric,
) -> Result<(), StorageError>
Create an IVFFlat index for approximate nearest neighbor search on vector columns
This method creates an IVFFlat (Inverted File with Flat quantization) index for efficient approximate nearest neighbor search on vector data.
§Arguments
- index_name - Name for the new index
- table_name - Name of the table containing the vector column
- column_name - Name of the vector column to index
- col_idx - Column index in the table schema
- dimensions - Number of dimensions in the vectors
- lists - Number of clusters for the IVFFlat algorithm
- metric - Distance metric to use (L2, Cosine, InnerProduct)
pub fn search_ivfflat_index(
    &self,
    index_name: &str,
    query_vector: &[f64],
    k: usize,
) -> Result<Vec<(usize, f64)>, StorageError>
Search an IVFFlat index for approximate nearest neighbors
§Arguments
- index_name - Name of the IVFFlat index
- query_vector - The query vector (f64)
- k - Maximum number of nearest neighbors to return
§Returns
- Ok(Vec<(usize, f64)>) - Vector of (row_id, distance) pairs, ordered by distance
- Err(StorageError) - If index not found or not an IVFFlat index
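An end-to-end sketch assuming a hypothetical docs table with a 3-dimensional vector column embedding at column index 1, and that VectorDistanceMetric exposes an L2 variant as the metric list above suggests:

```rust
// Build the index: 100 clusters is an illustrative choice; tune `lists`
// to roughly sqrt(row_count) for larger tables.
db.create_ivfflat_index(
    "idx_docs_embedding".to_string(),
    "docs".to_string(),
    "embedding".to_string(),
    1,   // col_idx in the docs schema
    3,   // dimensions
    100, // lists (clusters)
    VectorDistanceMetric::L2,
)?;

// Query for the 5 approximate nearest neighbors of [0.1, 0.2, 0.3].
let neighbors = db.search_ivfflat_index("idx_docs_embedding", &[0.1, 0.2, 0.3], 5)?;
for (row_id, distance) in neighbors {
    println!("row {} at distance {}", row_id, distance);
}
```

Recall can be traded against speed afterward via set_ivfflat_probes(), which controls how many clusters are scanned per query.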
pub fn get_ivfflat_indexes_for_table(
    &self,
    table_name: &str,
) -> Vec<(&IndexMetadata, &IVFFlatIndex)>
Get all IVFFlat indexes for a specific table
pub fn set_ivfflat_probes(
    &mut self,
    index_name: &str,
    probes: usize,
) -> Result<(), StorageError>
Set the number of probes for an IVFFlat index
pub fn create_hnsw_index(
    &mut self,
    index_name: String,
    table_name: String,
    column_name: String,
    col_idx: usize,
    dimensions: usize,
    m: u32,
    ef_construction: u32,
    metric: VectorDistanceMetric,
) -> Result<(), StorageError>
Create an HNSW index for approximate nearest neighbor search on vector columns
This method creates an HNSW (Hierarchical Navigable Small World) index for efficient approximate nearest neighbor search on vector data.
§Arguments
- index_name - Name for the new index
- table_name - Name of the table containing the vector column
- column_name - Name of the vector column to index
- col_idx - Column index in the table schema
- dimensions - Number of dimensions in the vectors
- m - Maximum number of connections per node (default 16)
- ef_construction - Size of dynamic candidate list during construction (default 64)
- metric - Distance metric to use (L2, Cosine, InnerProduct)
pub fn search_hnsw_index(
    &self,
    index_name: &str,
    query_vector: &[f64],
    k: usize,
) -> Result<Vec<(usize, f64)>, StorageError>
Search an HNSW index for approximate nearest neighbors
§Arguments
- index_name - Name of the HNSW index
- query_vector - The query vector (f64)
- k - Maximum number of nearest neighbors to return
§Returns
- Ok(Vec<(usize, f64)>) - Vector of (row_id, distance) pairs, ordered by distance
- Err(StorageError) - If index not found or not an HNSW index
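A sketch using the same hypothetical docs table with a 3-dimensional embedding column at index 1, built with the documented defaults (m = 16, ef_construction = 64):

```rust
db.create_hnsw_index(
    "idx_docs_hnsw".to_string(),
    "docs".to_string(),
    "embedding".to_string(),
    1,  // col_idx
    3,  // dimensions
    16, // m (documented default)
    64, // ef_construction (documented default)
    VectorDistanceMetric::L2,
)?;
let neighbors = db.search_hnsw_index("idx_docs_hnsw", &[0.1, 0.2, 0.3], 5)?;
```

Query-time recall can then be tuned per index with set_hnsw_ef_search().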
pub fn get_hnsw_indexes_for_table(
    &self,
    table_name: &str,
) -> Vec<(&IndexMetadata, &HnswIndex)>
Get all HNSW indexes for a specific table
pub fn set_hnsw_ef_search(
    &mut self,
    index_name: &str,
    ef_search: usize,
) -> Result<(), StorageError>
Set the ef_search parameter for an HNSW index
pub fn spatial_index_exists(&self, index_name: &str) -> bool
Check if a spatial index exists
pub fn get_spatial_index_metadata(
    &self,
    index_name: &str,
) -> Option<&SpatialIndexMetadata>
Get spatial index metadata
pub fn get_spatial_index(&self, index_name: &str) -> Option<&SpatialIndex>
Get spatial index (immutable)
pub fn get_spatial_index_mut(
    &mut self,
    index_name: &str,
) -> Option<&mut SpatialIndex>
Get spatial index (mutable)
pub fn get_spatial_indexes_for_table(
    &self,
    table_name: &str,
) -> Vec<(&SpatialIndexMetadata, &SpatialIndex)>
Get all spatial indexes for a specific table
pub fn get_spatial_indexes_for_table_mut(
    &mut self,
    table_name: &str,
) -> Vec<(&SpatialIndexMetadata, &mut SpatialIndex)>
Get all spatial indexes for a specific table (mutable)
pub fn drop_spatial_index(
    &mut self,
    index_name: &str,
) -> Result<(), StorageError>
Drop a spatial index
pub fn drop_spatial_indexes_for_table(
    &mut self,
    table_name: &str,
) -> Vec<String>
Drop all spatial indexes associated with a table (CASCADE behavior)
pub fn list_spatial_indexes(&self) -> Vec<String>
List all spatial indexes
pub fn lookup_by_index(
    &self,
    index_name: &str,
    key_values: &[SqlValue],
) -> Result<Option<Vec<&Row>>, StorageError>
Look up rows by index name and key values - bypasses SQL parsing for maximum performance
This method provides direct B+ tree index lookups, completely bypassing SQL parsing and the query execution pipeline. Use this for performance-critical OLTP workloads where you know the exact index and key values.
§Arguments
- index_name - Name of the index (as created with CREATE INDEX)
- key_values - Key values to look up (must match index column order)
§Returns
- Ok(Some(Vec<&Row>)) - The rows matching the key
- Ok(None) - No rows match the key
- Err(StorageError) - Index not found or other error
§Performance
This is ~100-300x faster than executing a SQL SELECT query because it:
- Skips SQL parsing (~300µs saved)
- Skips query planning and optimization
- Uses direct B+ tree lookup on the index
§Example
// Single-column index lookup
let rows = db.lookup_by_index("idx_users_pk", &[SqlValue::Integer(42)])?;
// Composite key lookup
let rows = db.lookup_by_index("idx_orders_pk", &[
SqlValue::Integer(warehouse_id),
SqlValue::Integer(district_id),
SqlValue::Integer(order_id),
])?;
pub fn lookup_one_by_index(
    &self,
    index_name: &str,
    key_values: &[SqlValue],
) -> Result<Option<&Row>, StorageError>
Look up the first row by index - optimized for unique indexes
This is a convenience method for unique indexes where you expect exactly one row. Returns only the first matching row.
§Arguments
- index_name - Name of the index
- key_values - Key values to look up
§Returns
- Ok(Some(&Row)) - The first matching row
- Ok(None) - No row matches the key
- Err(StorageError) - Index not found or other error
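A sketch for a unique-index lookup, reusing the hypothetical idx_users_pk index from the lookup_by_index example:

```rust
// At most one row can match a unique index key, so Option<&Row> suffices.
if let Some(row) = db.lookup_one_by_index("idx_users_pk", &[SqlValue::Integer(42)])? {
    // use `row` directly - no Vec allocation as with lookup_by_index
}
```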
pub fn lookup_by_index_batch<'a>(
    &'a self,
    index_name: &str,
    keys: &[Vec<SqlValue>],
) -> Result<Vec<Option<Vec<&'a Row>>>, StorageError>
Batch lookup by index - look up multiple keys in a single call
This method is optimized for batch point lookups where you need to retrieve
multiple rows by their index keys. It’s more efficient than calling
lookup_by_index in a loop.
§Arguments
- index_name - Name of the index
- keys - List of key value tuples to look up
§Returns
- Ok(Vec<Option<Vec<&Row>>>) - For each key, the matching rows (or None if not found)
- Err(StorageError) - Index not found or other error
§Example
// Batch lookup multiple items
let results = db.lookup_by_index_batch("idx_items_pk", &[
vec![SqlValue::Integer(1)],
vec![SqlValue::Integer(2)],
vec![SqlValue::Integer(3)],
])?;
for (key_idx, rows) in results.iter().enumerate() {
    if let Some(rows) = rows {
        println!("Key {} matched {} rows", key_idx, rows.len());
    }
}
pub fn lookup_one_by_index_batch<'a>(
    &'a self,
    index_name: &str,
    keys: &[Vec<SqlValue>],
) -> Result<Vec<Option<&'a Row>>, StorageError>
Batch lookup returning first row only - optimized for unique indexes
Like lookup_by_index_batch but returns only the first matching row for each key.
More efficient when you know the index is unique.
§Arguments
- index_name - Name of the index
- keys - List of key value tuples to look up
§Returns
- Ok(Vec<Option<&Row>>) - For each key, the first matching row (or None)
pub fn lookup_by_index_prefix(
    &self,
    index_name: &str,
    prefix: &[SqlValue],
) -> Result<Vec<&Row>, StorageError>
Look up rows by index using prefix matching - for multi-column indexes
This method performs prefix matching on multi-column indexes. For example, with an index on (a, b, c), you can look up all rows where (a, b) match a specific value, regardless of c.
§Arguments
- index_name - Name of the index (as created with CREATE INDEX)
- prefix - Prefix key values to match (must be a prefix of the index columns)
§Returns
- Ok(Vec<&Row>) - The rows matching the prefix (empty if none found)
- Err(StorageError) - Index not found or other error
§Performance
Uses efficient B+ tree range scan: O(log n + k) where n is total keys, k is matches.
§Example
// Index on (warehouse_id, district_id, order_id) - 3 columns
// Find all orders for warehouse 1, district 5 (2-column prefix)
let rows = db.lookup_by_index_prefix("idx_orders_pk", &[
SqlValue::Integer(1), // warehouse_id
SqlValue::Integer(5), // district_id
])?;
pub fn lookup_by_index_prefix_batch<'a>(
    &'a self,
    index_name: &str,
    prefixes: &[Vec<SqlValue>],
) -> Result<Vec<Vec<&'a Row>>, StorageError>
Batch prefix lookup - look up multiple prefixes in a single call
This method is optimized for batch prefix lookups on multi-column indexes. For each prefix, returns all rows where the key prefix matches.
§Arguments
- index_name - Name of the index
- prefixes - List of prefix key tuples to look up
§Returns
- Ok(Vec<Vec<&Row>>) - For each prefix, the matching rows (empty vec if none)
- Err(StorageError) - Index not found or other error
§Example
// Index on (w_id, d_id, o_id) - find new orders for all 10 districts
let prefixes: Vec<Vec<SqlValue>> = (1..=10)
.map(|d| vec![SqlValue::Integer(w_id), SqlValue::Integer(d)])
.collect();
let results = db.lookup_by_index_prefix_batch("idx_new_order_pk", &prefixes)?;
// results[0] = rows for district 1, results[1] = rows for district 2, etc.
pub fn delete_by_pk_fast(
    &mut self,
    table_name: &str,
    pk_values: &[SqlValue],
) -> Result<bool, StorageError>
Delete a single row by PK value - fast path that skips unnecessary overhead
This method provides a highly optimized DELETE path for single-row PK deletes. It bypasses the full DELETE executor overhead when:
- There are no triggers on the table
- There are no foreign key constraints referencing this table
- The WHERE clause is a simple PK equality (id = ?)
§Arguments
- table_name - Name of the table
- pk_values - Primary key values to match
§Returns
- Ok(true) - Row was deleted
- Ok(false) - No row found with this PK
- Err(StorageError) - Table not found or other error
§Performance
This is ~2-3x faster than the full DELETE executor because it:
- Uses direct PK index lookup (O(1))
- Avoids cloning row data
- Skips ExpressionEvaluator creation
- Performs minimal index maintenance
§Profiling
Set environment variables to enable profiling:
- DELETE_PROFILE=1 - Enable timing collection and auto-print summary on thread exit
- DELETE_PROFILE_VERBOSE=1 - Also print per-delete breakdown to stderr
Use print_delete_profile_summary() to manually print aggregate stats.
Use reset_delete_profile_stats() to reset the stats before a benchmark.
§Safety
Caller must ensure:
- No triggers exist on this table for DELETE
- No foreign key constraints reference this table
Note: WAL logging is handled internally by this method.
§Example
// Fast delete by PK
let deleted = db.delete_by_pk_fast("users", &[SqlValue::Integer(42)])?;
if deleted {
    println!("User 42 deleted");
}
pub fn get_table_index_info(&self, table_name: &str) -> Option<TableIndexInfo>
Get table index information for DML cost estimation
This method collects all the metadata needed by CostEstimator::estimate_insert(),
estimate_update(), and estimate_delete() to compute accurate DML operation costs.
§Arguments
- table_name - Name of the table to get index info for
§Returns
- Some(TableIndexInfo) - Index information if the table exists
- None - If the table doesn't exist
§Example
let info = db.get_table_index_info("users")?;
let insert_cost = cost_estimator.estimate_insert(&info);
impl Database
pub fn enable_persistence(&mut self, engine: PersistenceEngine)
Enable WAL-based async persistence
Creates a persistence engine that writes changes to a WAL file in the background. All subsequent DML and DDL operations will be logged to the WAL for durability.
§Arguments
- engine - A pre-configured PersistenceEngine instance
§Example
use vibesql_storage::{Database, PersistenceEngine, PersistenceConfig};
let mut db = Database::new();
let engine = PersistenceEngine::new("/path/to/wal.log", PersistenceConfig::default())?;
db.enable_persistence(engine);
pub fn persistence_enabled(&self) -> bool
Check if WAL persistence is enabled
Sourcepub fn persistence_stats(&self) -> Option<PersistenceStats>
pub fn persistence_stats(&self) -> Option<PersistenceStats>
Get persistence statistics (if enabled)
Sourcepub fn sync_persistence(&self) -> Result<(), StorageError>
pub fn sync_persistence(&self) -> Result<(), StorageError>
Sync all pending WAL entries to disk
Blocks until all pending entries have been written and flushed. This is useful for ensuring durability before returning to the user.
Sourcepub fn emit_wal_delete(
&self,
table_name: &str,
row_id: u64,
old_values: Vec<SqlValue>,
)
pub fn emit_wal_delete( &self, table_name: &str, row_id: u64, old_values: Vec<SqlValue>, )
Emit a WAL delete entry for persistence
Called by the DELETE executor before rows are removed. Captures old_values for recovery replay.
Sourcepub fn emit_wal_create_index(
&self,
index_id: u32,
index_name: &str,
table_name: &str,
column_indices: Vec<u32>,
is_unique: bool,
)
pub fn emit_wal_create_index( &self, index_id: u32, index_name: &str, table_name: &str, column_indices: Vec<u32>, is_unique: bool, )
Emit a WAL create index entry for persistence
Called by the CREATE INDEX executor after index is created.
Sourcepub fn emit_wal_drop_index(&self, index_id: u32, index_name: &str)
pub fn emit_wal_drop_index(&self, index_id: u32, index_name: &str)
Emit a WAL drop index entry for persistence
Called by the DROP INDEX executor before index is dropped.
Sourcepub fn last_insert_rowid(&self) -> i64
pub fn last_insert_rowid(&self) -> i64
Get the last auto-generated ID from an INSERT operation
Returns the most recent value generated by AUTO_INCREMENT during an INSERT. This is used to implement LAST_INSERT_ROWID() and LAST_INSERT_ID() functions.
Returns 0 if no auto-generated values have been produced yet.
§Example
// Create table with AUTO_INCREMENT
db.execute("CREATE TABLE users (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100))")?;
// Insert a row (ID is auto-generated)
db.execute("INSERT INTO users (name) VALUES ('Alice')")?;
// Get the generated ID
let id = db.last_insert_rowid();
assert_eq!(id, 1);
pub fn set_last_insert_rowid(&mut self, id: i64)
Set the last auto-generated ID
This is called internally by the INSERT executor when a sequence value is generated for an AUTO_INCREMENT column.
For multi-row inserts, this will be the ID of the first row inserted (following MySQL semantics for batch inserts).
Sourcepub fn last_changes_count(&self) -> usize
pub fn last_changes_count(&self) -> usize
Get the number of rows changed by the last INSERT/UPDATE/DELETE statement
Returns the count of rows affected by the most recent DML operation. This is used to implement the SQLite changes() function.
Returns 0 if no DML operations have been performed yet.
§Example
// Insert multiple rows
db.execute("INSERT INTO users (name) VALUES ('Alice'), ('Bob'), ('Carol')")?;
// Get the number of rows inserted
let changes = db.last_changes_count();
assert_eq!(changes, 3);
// Delete some rows
db.execute("DELETE FROM users WHERE name = 'Alice'")?;
assert_eq!(db.last_changes_count(), 1);
pub fn set_last_changes_count(&mut self, count: usize)
Set the number of rows changed by the last DML statement
This is called internally by INSERT, UPDATE, and DELETE executors after completing their operations.
Sourcepub fn total_changes_count(&self) -> usize
pub fn total_changes_count(&self) -> usize
Get the total number of rows changed since the database connection was opened
Returns the cumulative count of rows affected by all INSERT, UPDATE, and DELETE operations since the database was created. This is used to implement the SQLite total_changes() function.
Returns 0 for a new database connection.
§Example
// Insert rows
db.execute("INSERT INTO users (name) VALUES ('Alice'), ('Bob')")?;
assert_eq!(db.last_changes_count(), 2); // Last operation: 2 rows
// Delete a row
db.execute("DELETE FROM users WHERE name = 'Alice'")?;
assert_eq!(db.last_changes_count(), 1); // Last operation: 1 row
// Total changes accumulates
assert_eq!(db.total_changes_count(), 3); // 2 + 1 = 3 rows total
pub fn increment_total_changes_count(&mut self, count: usize)
Increment the total changes count by the specified amount
This is called internally by INSERT, UPDATE, and DELETE executors after completing their operations, in addition to set_last_changes_count().
Sourcepub fn search_count(&self) -> u64
pub fn search_count(&self) -> u64
Get the current search count
Returns the number of rows examined during query execution. This is used to implement sqlite_search_count() for TCL test compatibility.
In SQLite, this tracks “MoveTo” and “Next” VDBE operations. In VibeSQL, this tracks rows read during table/index scans.
§Example
// Reset before query
db.reset_search_count();
// Execute query...
db.execute("SELECT * FROM users WHERE id = 1")?;
// Get count of rows examined
let count = db.search_count();
pub fn reset_search_count(&self)
Reset the search count to zero
Call this before executing a query to measure how many rows were examined by that specific query.
Sourcepub fn increment_search_count(&self, count: u64)
pub fn increment_search_count(&self, count: u64)
Increment the search count by a specified amount
Called internally by the executor when rows are examined during table scans, index scans, or other row-reading operations.
§Arguments
- count - Number of rows examined (typically 1 for row-by-row, or batch size for columnar)
Source§impl Database
impl Database
Sourcepub fn get_row_by_pk(
&self,
table_name: &str,
pk_value: &SqlValue,
) -> Result<Option<&Row>, StorageError>
pub fn get_row_by_pk( &self, table_name: &str, pk_value: &SqlValue, ) -> Result<Option<&Row>, StorageError>
Get a row by primary key value - bypasses SQL parsing for maximum performance
This method provides O(1) point lookups directly using the primary key index, completely bypassing SQL parsing and the query execution pipeline.
§Arguments
- table_name - Name of the table
- pk_value - Primary key value to look up
§Returns
- Ok(Some(&Row)) - The row if found
- Ok(None) - If no row matches the primary key
- Err(StorageError) - If table doesn’t exist or has no primary key
§Performance
This is ~100-300x faster than executing a SQL point SELECT query because it:
- Skips SQL parsing (~300µs)
- Skips query planning and optimization
- Uses direct HashMap lookup on the PK index
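The direct-lookup shape can be sketched as follows (toy types, not the real Row/Table definitions): one hash probe on the PK index, one Vec index, and a reference handed back with no cloning.

```rust
use std::collections::HashMap;

// Point-lookup sketch: the PK index maps key -> row slot (illustrative only).
struct PkTable {
    rows: Vec<Vec<i64>>,           // row values
    pk_index: HashMap<i64, usize>, // pk -> slot in `rows`
}

// Returns a borrowed row: no SQL parsing, no planning, no copy.
fn get_row_by_pk<'a>(t: &'a PkTable, pk: i64) -> Option<&'a Vec<i64>> {
    t.pk_index.get(&pk).map(|&slot| &t.rows[slot])
}
```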
§Example
let row = db.get_row_by_pk("users", &SqlValue::Integer(42))?;
if let Some(row) = row {
let name = &row.values[1];
}
pub fn get_column_by_pk(
&self,
table_name: &str,
pk_value: &SqlValue,
column_index: usize,
) -> Result<Option<&SqlValue>, StorageError>
pub fn get_column_by_pk( &self, table_name: &str, pk_value: &SqlValue, column_index: usize, ) -> Result<Option<&SqlValue>, StorageError>
Get a specific column value by primary key - bypasses SQL parsing for maximum performance
This is even faster than get_row_by_pk when you only need one column value,
as it avoids returning the entire row.
§Arguments
- table_name - Name of the table
- pk_value - Primary key value to look up
- column_index - Index of the column to retrieve (0-based)
§Returns
- Ok(Some(&SqlValue)) - The column value if found
- Ok(None) - If no row matches the primary key
- Err(StorageError) - If table doesn’t exist or column index is out of bounds
Sourcepub fn get_row_by_composite_pk(
&self,
table_name: &str,
pk_values: &[SqlValue],
) -> Result<Option<&Row>, StorageError>
pub fn get_row_by_composite_pk( &self, table_name: &str, pk_values: &[SqlValue], ) -> Result<Option<&Row>, StorageError>
Get a row by composite primary key - for tables with multi-column primary keys
§Arguments
- table_name - Name of the table
- pk_values - Primary key values in column order
§Returns
- Ok(Some(&Row)) - The row if found
- Ok(None) - If no row matches the primary key
- Err(StorageError) - If table doesn’t exist or has no primary key
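For composite keys, the index key is the tuple of PK column values in declared column order. A minimal sketch (the value enum here is a stand-in; the real SqlValue is richer):

```rust
use std::collections::HashMap;

// Stand-in for SqlValue, just enough to be a hash-map key.
#[derive(Hash, PartialEq, Eq)]
enum ToyValue {
    Int(i64),
    Text(String),
}

// The composite index maps the ordered PK value tuple to a row slot.
// Vec<ToyValue> borrows as [ToyValue], so lookup takes a slice.
fn lookup(index: &HashMap<Vec<ToyValue>, usize>, pk_values: &[ToyValue]) -> Option<usize> {
    index.get(pk_values).copied()
}
```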
Source§impl Database
impl Database
Sourcepub fn set_role(&mut self, role: Option<String>)
pub fn set_role(&mut self, role: Option<String>)
Set the current session role for privilege checks
Sourcepub fn get_current_role(&self) -> String
pub fn get_current_role(&self) -> String
Get the current session role (defaults to “PUBLIC” if not set)
Sourcepub fn is_security_enabled(&self) -> bool
pub fn is_security_enabled(&self) -> bool
Check if security enforcement is enabled
Sourcepub fn disable_security(&mut self)
pub fn disable_security(&mut self)
Disable security checks (for testing)
Sourcepub fn enable_security(&mut self)
pub fn enable_security(&mut self)
Enable security checks
Sourcepub fn set_session_variable(&mut self, name: &str, value: SqlValue)
pub fn set_session_variable(&mut self, name: &str, value: SqlValue)
Set a session variable (MySQL-style @variable)
Sourcepub fn get_session_variable(&self, name: &str) -> Option<&SqlValue>
pub fn get_session_variable(&self, name: &str) -> Option<&SqlValue>
Get a session variable value
Sourcepub fn clear_session_variables(&mut self)
pub fn clear_session_variables(&mut self)
Clear all session variables
Sourcepub fn full_column_names(&self) -> bool
pub fn full_column_names(&self) -> bool
Get the full_column_names PRAGMA setting
When ON, column names in result sets use “table.column” format
Sourcepub fn set_full_column_names(&mut self, value: bool)
pub fn set_full_column_names(&mut self, value: bool)
Set the full_column_names PRAGMA setting
Sourcepub fn short_column_names(&self) -> bool
pub fn short_column_names(&self) -> bool
Get the short_column_names PRAGMA setting
When ON (default), column names use just the column name (e.g., “f1”) When OFF, column names may include expression text
Sourcepub fn set_short_column_names(&mut self, value: bool)
pub fn set_short_column_names(&mut self, value: bool)
Set the short_column_names PRAGMA setting
Sourcepub fn case_sensitive_like(&self) -> bool
pub fn case_sensitive_like(&self) -> bool
Get the case_sensitive_like PRAGMA setting
When OFF (default), LIKE comparisons are case-insensitive for ASCII letters (A-Z = a-z). When ON, LIKE comparisons are case-sensitive (strict byte-for-byte matching).
This matches SQLite’s default behavior where LIKE is case-insensitive for ASCII.
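The rule for a single character comparison can be sketched like this (a simplified illustration of the pragma's effect, not the actual LIKE implementation): ASCII letters fold case when the pragma is OFF, while non-ASCII characters always compare exactly.

```rust
// case_sensitive_like sketch: with the pragma OFF, ASCII letters compare
// case-insensitively; everything else is an exact comparison.
fn like_chars_equal(a: char, b: char, case_sensitive: bool) -> bool {
    if case_sensitive {
        a == b
    } else if a.is_ascii_alphabetic() && b.is_ascii_alphabetic() {
        a.to_ascii_lowercase() == b.to_ascii_lowercase()
    } else {
        // Non-ASCII letters (e.g. 'É' vs 'é') never fold, matching SQLite.
        a == b
    }
}
```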
Sourcepub fn set_case_sensitive_like(&mut self, value: bool)
pub fn set_case_sensitive_like(&mut self, value: bool)
Set the case_sensitive_like PRAGMA setting
Sourcepub fn reverse_unordered_selects(&self) -> bool
pub fn reverse_unordered_selects(&self) -> bool
Get the reverse_unordered_selects PRAGMA setting
When ON, the order of output rows from SELECT statements that do not have an ORDER BY clause is reversed. This is useful for testing to ensure that applications do not depend on an implicit row ordering.
Sourcepub fn set_reverse_unordered_selects(&mut self, value: bool)
pub fn set_reverse_unordered_selects(&mut self, value: bool)
Set the reverse_unordered_selects PRAGMA setting
Sourcepub fn insert_sqlite_stat1(
&mut self,
table_name: String,
index_name: Option<String>,
stat: String,
)
pub fn insert_sqlite_stat1( &mut self, table_name: String, index_name: Option<String>, stat: String, )
Insert a sqlite_stat1 entry
This allows manual insertion of statistics for query optimizer tuning, matching SQLite’s behavior where users can INSERT INTO sqlite_stat1.
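In SQLite, the stat column is a space-separated list of integers - the first is the approximate table/index row count, the rest are average rows per distinct prefix of the index columns - optionally followed by keywords such as "unordered". A minimal parser sketch under that assumption (not VibeSQL's actual parsing code):

```rust
// Parse a sqlite_stat1 `stat` string into (row_count, per-prefix averages).
// Stops at the first non-integer token, since SQLite allows trailing keywords.
fn parse_stat1(stat: &str) -> Option<(u64, Vec<u64>)> {
    let mut nums = stat
        .split_whitespace()
        .map_while(|tok| tok.parse::<u64>().ok());
    let row_count = nums.next()?;
    Some((row_count, nums.collect()))
}
```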
Sourcepub fn get_sqlite_stat1(
&self,
table_name: &str,
index_name: Option<&str>,
) -> Option<&String>
pub fn get_sqlite_stat1( &self, table_name: &str, index_name: Option<&str>, ) -> Option<&String>
Get a sqlite_stat1 entry
Sourcepub fn get_all_sqlite_stat1(&self) -> &HashMap<(String, Option<String>), String>
pub fn get_all_sqlite_stat1(&self) -> &HashMap<(String, Option<String>), String>
Get all sqlite_stat1 entries
Sourcepub fn delete_sqlite_stat1(
&mut self,
table_name: &str,
index_name: Option<&str>,
)
pub fn delete_sqlite_stat1( &mut self, table_name: &str, index_name: Option<&str>, )
Delete a sqlite_stat1 entry
Sourcepub fn clear_sqlite_stat1(&mut self)
pub fn clear_sqlite_stat1(&mut self)
Clear all sqlite_stat1 entries
Sourcepub fn reserve_rowid(&mut self, table_name: &str, rowid: u64, is_explicit: bool)
pub fn reserve_rowid(&mut self, table_name: &str, rowid: u64, is_explicit: bool)
Reserve a rowid for a table during REPLACE operations
During REPLACE INTO, SQLite allocates the rowid for the new row BEFORE firing BEFORE DELETE triggers. Any INSERT within those triggers that tries to allocate the same rowid will fail with a UNIQUE constraint violation on rowid.
§Arguments
- table_name - The table name (case-insensitive)
- rowid - The rowid to reserve
- is_explicit - True if the rowid comes from an explicit INTEGER PRIMARY KEY value, false if it’s auto-allocated. This affects how conflicts are handled in AFTER DELETE triggers.
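The reserve/check/release cycle can be sketched as a per-table reservation map (illustrative fields, not the actual Database internals); note the case-insensitive table-name handling via a normalized key.

```rust
use std::collections::HashMap;

// Rowid reservations held during REPLACE (sketch only).
#[derive(Default)]
struct Reservations {
    by_table: HashMap<String, (u64, bool)>, // table -> (rowid, is_explicit)
}

impl Reservations {
    fn reserve(&mut self, table: &str, rowid: u64, is_explicit: bool) {
        // Table names are case-insensitive, so normalize the key.
        self.by_table.insert(table.to_lowercase(), (rowid, is_explicit));
    }
    fn is_reserved(&self, table: &str, rowid: u64) -> bool {
        self.by_table.get(&table.to_lowercase()).map(|&(r, _)| r) == Some(rowid)
    }
    fn release(&mut self, table: &str) {
        self.by_table.remove(&table.to_lowercase());
    }
}
```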
Sourcepub fn release_reserved_rowid(&mut self, table_name: &str)
pub fn release_reserved_rowid(&mut self, table_name: &str)
Release a reserved rowid after REPLACE completes
Sourcepub fn get_reserved_rowid_info(&self, table_name: &str) -> Option<(u64, bool)>
pub fn get_reserved_rowid_info(&self, table_name: &str) -> Option<(u64, bool)>
Check if a rowid is reserved for a table and get the reservation details
Returns Some((rowid, is_explicit)) if a rowid is reserved, None otherwise.
Sourcepub fn is_rowid_reserved(&self, table_name: &str, rowid: u64) -> bool
pub fn is_rowid_reserved(&self, table_name: &str, rowid: u64) -> bool
Check if a rowid is reserved for a table
Sourcepub fn get_reserved_rowid(&self, table_name: &str) -> Option<u64>
pub fn get_reserved_rowid(&self, table_name: &str) -> Option<u64>
Get the reserved rowid for a table, if any
Sourcepub fn set_sql_mode(&mut self, mode: SqlMode)
pub fn set_sql_mode(&mut self, mode: SqlMode)
Set the SQL compatibility mode at runtime
This allows changing the SQL dialect (MySQL, SQLite, etc.) during a session.
The @@sql_mode session variable is automatically updated to reflect the change.
§Example
use vibesql_storage::Database;
use vibesql_types::{MySqlModeFlags, SqlMode};
let mut db = Database::new();
// Default is MySQL (for SQLLogicTest compatibility)
assert!(matches!(db.sql_mode(), SqlMode::MySQL { .. }));
db.set_sql_mode(SqlMode::SQLite);
assert!(matches!(db.sql_mode(), SqlMode::SQLite));
impl Database
Sourcepub fn create_table_with_identifier(
&mut self,
schema: TableSchema,
identifier: TableIdentifier,
) -> Result<(), StorageError>
pub fn create_table_with_identifier( &mut self, schema: TableSchema, identifier: TableIdentifier, ) -> Result<(), StorageError>
Create a table with SQL:1999 identifier semantics.
The identifier parameter determines how the table name is stored:
- Quoted identifiers: stored with exact case
- Unquoted identifiers: stored with lowercase canonical form
- Qualified identifiers: schema and table have independent case handling
Temporary tables (in the “temp” schema) are not persisted to WAL.
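The case rule above boils down to a small canonicalization step, sketched here (note the doc folds unquoted identifiers to lowercase, whereas the SQL standard itself folds to uppercase):

```rust
// Identifier canonicalization sketch: quoted identifiers keep exact case,
// unquoted ones fold to the canonical lowercase form used by this crate.
fn canonical_name(name: &str, quoted: bool) -> String {
    if quoted {
        name.to_string()
    } else {
        name.to_lowercase()
    }
}
```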
Sourcepub fn create_table(&mut self, schema: TableSchema) -> Result<(), StorageError>
pub fn create_table(&mut self, schema: TableSchema) -> Result<(), StorageError>
Create a table. Legacy method - uses the global case_sensitive_identifiers setting.
Sourcepub fn get_table_by_identifier(
&self,
identifier: &TableIdentifier,
) -> Option<&Table>
pub fn get_table_by_identifier( &self, identifier: &TableIdentifier, ) -> Option<&Table>
Get a table by identifier using SQL:1999 case semantics.
Uses the canonical form of the identifier for direct lookup without fallbacks. Supports both simple and schema-qualified identifiers.
Sourcepub fn get_table(&self, name: &str) -> Option<&Table>
pub fn get_table(&self, name: &str) -> Option<&Table>
Get a table for reading. Legacy method with fallback lookups for backward compatibility.
For unqualified names, checks temp schema first (SQLite semantics).
SQLite Compatibility: The “temp” schema name is mapped to the session’s
temp schema, allowing temp.tablename syntax.
Sourcepub fn get_table_mut(&mut self, name: &str) -> Option<&mut Table>
pub fn get_table_mut(&mut self, name: &str) -> Option<&mut Table>
Get a table for writing
For unqualified names, checks temp schema first (SQLite semantics).
SQLite Compatibility: The “temp” schema name is mapped to the session’s
temp schema, allowing temp.tablename syntax.
Sourcepub fn drop_table(&mut self, name: &str) -> Result<(), StorageError>
pub fn drop_table(&mut self, name: &str) -> Result<(), StorageError>
Drop a table
Temporary tables (in the “temp” schema) are not persisted to WAL.
Sourcepub fn insert_row(
&mut self,
table_name: &str,
row: Row,
) -> Result<(), StorageError>
pub fn insert_row( &mut self, table_name: &str, row: Row, ) -> Result<(), StorageError>
Insert a row into a table
Temporary tables (in the “temp” schema) are not persisted to WAL.
Sourcepub fn insert_rows_batch(
&mut self,
table_name: &str,
rows: Vec<Row>,
) -> Result<usize, StorageError>
pub fn insert_rows_batch( &mut self, table_name: &str, rows: Vec<Row>, ) -> Result<usize, StorageError>
Insert multiple rows into a table in a single batch
This method is optimized for bulk data loading and provides significant
performance improvements over repeated insert_row calls:
- Pre-allocation: Vector capacity reserved upfront
- Batch validation: All rows validated before any insertion
- Deferred index rebuild: Indexes rebuilt once after all inserts
- Single cache invalidation: Columnar cache invalidated once at end
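The validate-then-insert structure behind the all-or-nothing guarantee can be sketched as two phases (toy row type and validation, not the real implementation):

```rust
// All-or-nothing batch sketch: validate every row before mutating anything,
// then bulk-append, so a failing row leaves the table untouched.
fn insert_rows_batch(
    table: &mut Vec<Vec<i64>>,
    expected_cols: usize,
    rows: Vec<Vec<i64>>,
) -> Result<usize, String> {
    // Phase 1: validate all rows up front.
    for (i, row) in rows.iter().enumerate() {
        if row.len() != expected_cols {
            return Err(format!("row {i}: expected {expected_cols} columns"));
        }
    }
    // Phase 2: bulk append with capacity reserved upfront.
    table.reserve(rows.len());
    let n = rows.len();
    table.extend(rows);
    Ok(n)
}
```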
§Arguments
- table_name - Name of the table to insert into
- rows - Vector of rows to insert
§Returns
- Ok(usize) - Number of rows successfully inserted
- Err(StorageError) - If validation fails (no rows inserted on error)
§Performance
For large batches (1000+ rows), expect 10-50x speedup vs single-row inserts.
§Example
let rows = vec![
Row::new(vec![SqlValue::Integer(1), SqlValue::Varchar(arcstr::ArcStr::from("Alice"))]),
Row::new(vec![SqlValue::Integer(2), SqlValue::Varchar(arcstr::ArcStr::from("Bob"))]),
];
let count = db.insert_rows_batch("users", rows)?;
pub fn insert_rows_iter<I>(
&mut self,
table_name: &str,
rows: I,
batch_size: usize,
) -> Result<usize, StorageError>
pub fn insert_rows_iter<I>( &mut self, table_name: &str, rows: I, batch_size: usize, ) -> Result<usize, StorageError>
Insert rows from an iterator in a streaming fashion
This method is optimized for very large datasets that may not fit in memory all at once. Rows are processed in configurable batch sizes, balancing memory usage with performance.
§Arguments
- table_name - Name of the table to insert into
- rows - Iterator yielding rows to insert
- batch_size - Number of rows per batch (0 defaults to 1000)
§Returns
- Ok(usize) - Total number of rows successfully inserted
- Err(StorageError) - If any batch fails validation
§Note
Unlike insert_rows_batch, this method commits rows batch-by-batch.
A failure partway through will leave previously committed batches
in the table. Use insert_rows_batch for all-or-nothing semantics.
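The chunked draining described above can be sketched as follows (toy row type; each `append` stands in for committing one batch):

```rust
// Streaming sketch: drain an iterator in fixed-size chunks, committing each
// chunk independently, so a mid-stream failure would keep earlier chunks.
fn insert_rows_iter<I: Iterator<Item = i64>>(
    table: &mut Vec<i64>,
    rows: I,
    batch_size: usize,
) -> usize {
    let batch_size = if batch_size == 0 { 1000 } else { batch_size };
    let mut total = 0;
    let mut buf = Vec::with_capacity(batch_size);
    for row in rows {
        buf.push(row);
        if buf.len() == batch_size {
            total += buf.len();
            table.append(&mut buf); // commit one full batch
        }
    }
    total += buf.len();
    table.append(&mut buf); // commit the final partial batch
    total
}
```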
§Example
// Stream 100K rows in batches of 5000
let rows = (0..100_000).map(|i| Row::new(vec![SqlValue::Integer(i)]));
let count = db.insert_rows_iter("numbers", rows, 5000)?;
pub fn update_row_by_pk(
&mut self,
table_name: &str,
pk_value: SqlValue,
column_updates: Vec<(&str, SqlValue)>,
) -> Result<bool, StorageError>
pub fn update_row_by_pk( &mut self, table_name: &str, pk_value: SqlValue, column_updates: Vec<(&str, SqlValue)>, ) -> Result<bool, StorageError>
Update a single row by primary key value (direct API, no SQL parsing)
This method provides a high-performance update path that bypasses SQL parsing, making it suitable for benchmarking and performance-critical code paths.
§Arguments
- table_name - Name of the table
- pk_value - Primary key value to match (single column PK only)
- column_updates - List of (column_name, new_value) pairs to update
§Returns
- Ok(true) - Row was found and updated
- Ok(false) - Row was not found (no error)
- Err(StorageError) - Table not found, column not found, or constraint violation
§Example
// Update column 'name' for row with id=5
let updated = db.update_row_by_pk(
"users",
SqlValue::Integer(5),
vec![("name", SqlValue::Varchar(arcstr::ArcStr::from("Alice")))],
)?;
pub fn list_tables(&self) -> Vec<String>
List all table names
Source§impl Database
impl Database
Sourcepub fn record_change(&mut self, change: TransactionChange)
pub fn record_change(&mut self, change: TransactionChange)
Record a change in the current transaction (if any)
Sourcepub fn begin_transaction(&mut self) -> Result<(), StorageError>
pub fn begin_transaction(&mut self) -> Result<(), StorageError>
Begin a new transaction
Sourcepub fn begin_transaction_with_durability(
&mut self,
durability: TransactionDurability,
) -> Result<(), StorageError>
pub fn begin_transaction_with_durability( &mut self, durability: TransactionDurability, ) -> Result<(), StorageError>
Begin a new transaction with a specific durability hint
The durability hint controls how the transaction’s changes are persisted.
See TransactionDurability for available options.
Sourcepub fn commit_transaction(&mut self) -> Result<(), StorageError>
pub fn commit_transaction(&mut self) -> Result<(), StorageError>
Commit the current transaction
Sourcepub fn rollback_transaction(&mut self) -> Result<(), StorageError>
pub fn rollback_transaction(&mut self) -> Result<(), StorageError>
Rollback the current transaction
Sourcepub fn in_transaction(&self) -> bool
pub fn in_transaction(&self) -> bool
Check if we’re currently in a transaction
Sourcepub fn transaction_id(&self) -> Option<u64>
pub fn transaction_id(&self) -> Option<u64>
Get current transaction ID (for debugging)
Sourcepub fn create_savepoint(&mut self, name: String) -> Result<(), StorageError>
pub fn create_savepoint(&mut self, name: String) -> Result<(), StorageError>
Create a savepoint within the current transaction
Sourcepub fn rollback_to_savepoint(
&mut self,
name: String,
) -> Result<(), StorageError>
pub fn rollback_to_savepoint( &mut self, name: String, ) -> Result<(), StorageError>
Rollback to a named savepoint
Sourcepub fn release_savepoint(&mut self, name: String) -> Result<(), StorageError>
pub fn release_savepoint(&mut self, name: String) -> Result<(), StorageError>
Release (destroy) a named savepoint
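The savepoint semantics above can be sketched as a stack of undo marks within a transaction (illustrative types only): ROLLBACK TO truncates the change log back to the named mark while keeping the savepoint; RELEASE discards the mark without undoing anything.

```rust
// Savepoint stack sketch (not VibeSQL's actual transaction types).
struct ToyTxn {
    changes: Vec<String>,
    savepoints: Vec<(String, usize)>, // (name, changes.len() at creation)
}

impl ToyTxn {
    fn create_savepoint(&mut self, name: &str) {
        self.savepoints.push((name.to_string(), self.changes.len()));
    }
    fn rollback_to_savepoint(&mut self, name: &str) -> bool {
        // Most recent savepoint with this name wins.
        let Some(pos) = self.savepoints.iter().rposition(|(n, _)| n == name) else {
            return false;
        };
        let mark = self.savepoints[pos].1;
        self.changes.truncate(mark);       // undo changes after the mark
        self.savepoints.truncate(pos + 1); // the savepoint itself survives
        true
    }
    fn release_savepoint(&mut self, name: &str) -> bool {
        match self.savepoints.iter().rposition(|(n, _)| n == name) {
            Some(pos) => {
                self.savepoints.truncate(pos); // drop the mark, keep the changes
                true
            }
            None => false,
        }
    }
}
```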
Source§impl Database
impl Database
Sourcepub fn save_binary<P: AsRef<Path>>(&self, path: P) -> Result<(), StorageError>
pub fn save_binary<P: AsRef<Path>>(&self, path: P) -> Result<(), StorageError>
Save database in efficient binary format
Binary format is faster and more compact than SQL dumps.
Use .vbsql extension to indicate binary format.
§Example
let db = Database::new();
db.save_binary("database.vbsql").unwrap();
pub fn load_binary<P: AsRef<Path>>(path: P) -> Result<Self, StorageError>
Load database from binary format
Reads a binary .vbsql file and reconstructs the database.
§Example
let db = Database::load_binary("database.vbsql").unwrap();
pub fn save<P: AsRef<Path>>(&self, path: P) -> Result<(), StorageError>
Save database in default format
Uses compressed format when compression feature is enabled (default),
otherwise falls back to uncompressed binary format.
§Example
let db = Database::new();
db.save("database.vbsql").unwrap();
pub fn save_uncompressed<P: AsRef<Path>>(
&self,
path: P,
) -> Result<(), StorageError>
pub fn save_uncompressed<P: AsRef<Path>>( &self, path: P, ) -> Result<(), StorageError>
Save database in uncompressed binary format
Use this if you need uncompressed .vbsql files (e.g., for debugging
or when compression overhead is not desired).
§Example
let db = Database::new();
db.save_uncompressed("database.vbsql").unwrap();
pub fn save_compressed<P: AsRef<Path>>(
&self,
path: P,
) -> Result<(), StorageError>
pub fn save_compressed<P: AsRef<Path>>( &self, path: P, ) -> Result<(), StorageError>
Save database in compressed binary format (zstd compression)
Creates a .vbsqlz file containing zstd-compressed binary data.
Typically 50-70% smaller than uncompressed .vbsql files.
Note: This method requires the compression feature to be enabled.
§Example
let db = Database::new();
db.save_compressed("database.vbsqlz").unwrap();
pub fn load_compressed<P: AsRef<Path>>(path: P) -> Result<Self, StorageError>
Load database from compressed binary format
Reads a zstd-compressed .vbsqlz file and reconstructs the database.
Note: This method requires the compression feature to be enabled.
§Example
let db = Database::load_compressed("database.vbsqlz").unwrap();
impl Database
Sourcepub fn save_json<P: AsRef<Path>>(&self, path: P) -> Result<(), StorageError>
pub fn save_json<P: AsRef<Path>>(&self, path: P) -> Result<(), StorageError>
Save database in JSON format with default options
§Example
let db = Database::new();
db.save_json("database.json").unwrap();
pub fn save_json_with_options<P: AsRef<Path>>(
&self,
path: P,
options: JsonOptions,
) -> Result<(), StorageError>
pub fn save_json_with_options<P: AsRef<Path>>( &self, path: P, options: JsonOptions, ) -> Result<(), StorageError>
Save database in JSON format with custom options
§Example
let db = Database::new();
let options = JsonOptions { pretty: true, include_metadata: true };
db.save_json_with_options("database.json", options).unwrap();
impl Database
Sourcepub fn save_sql_dump<P: AsRef<Path>>(&self, path: P) -> Result<(), StorageError>
pub fn save_sql_dump<P: AsRef<Path>>(&self, path: P) -> Result<(), StorageError>
Save database state as SQL dump (human-readable, portable)
Generates SQL statements that recreate the database state including:
- Schemas
- Tables with column definitions
- Indexes
- Data (INSERT statements)
- Roles and privileges
§Atomicity
This function uses atomic writes to prevent corruption:
- Writes to a temporary file in the same directory
- Flushes and syncs the buffer to ensure all data is on disk
- Atomically renames the temp file to the target path
This ensures the database file is never in a partial/corrupt state, even if the process crashes or is interrupted mid-write.
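The three steps above can be sketched with the standard library alone (a simplification: a real implementation would pick a unique temp name and also fsync the containing directory):

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

// Atomic-write sketch: write a temp file in the same directory,
// flush and fsync it, then rename over the target path.
fn atomic_write(path: &Path, contents: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    let mut f = File::create(&tmp)?;
    f.write_all(contents)?;
    f.sync_all()?; // ensure bytes are on disk before the rename
    fs::rename(&tmp, path)?; // atomic on POSIX filesystems (same directory)
    Ok(())
}
```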
§Example
let db = Database::new();
db.save_sql_dump("database.sql").unwrap();
impl Database
Persistence format detection and auto-loading
Sourcepub fn load<P: AsRef<Path>>(path: P) -> Result<Self, StorageError>
pub fn load<P: AsRef<Path>>(path: P) -> Result<Self, StorageError>
Load database from file with automatic format detection
Detects format based on:
- File extension (.vbsql for binary, .vbsqlz for compressed, .json for JSON, .sql for SQL dump)
- Magic number in file header (if extension is ambiguous)
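A sketch of the extension-then-sniff decision (the JSON check and `Unknown` fallback are illustrative; the uncompressed binary magic number is not documented in this excerpt, but compressed files start with the standard zstd frame magic):

```rust
use std::path::Path;

#[derive(Debug, PartialEq)]
enum Format {
    Binary,
    Compressed,
    Json,
    SqlDump,
    Unknown,
}

// Extension first; fall back to sniffing the first file bytes if ambiguous.
fn detect_format(path: &Path, header: &[u8]) -> Format {
    match path.extension().and_then(|e| e.to_str()) {
        Some("vbsql") => Format::Binary,
        Some("vbsqlz") => Format::Compressed,
        Some("json") => Format::Json,
        Some("sql") => Format::SqlDump,
        _ => {
            if header.starts_with(&[0x28, 0xB5, 0x2F, 0xFD]) {
                Format::Compressed // zstd frame magic (0xFD2FB528, little-endian)
            } else if header.starts_with(b"{") {
                Format::Json // illustrative sniff for a JSON dump
            } else {
                Format::Unknown
            }
        }
    }
}
```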
§Example
// Auto-detects format from extension and content
let db = Database::load("database.vbsql").unwrap();
let db2 = Database::load("database.vbsqlz").unwrap();
let db3 = Database::load("database.json").unwrap();
let db4 = Database::load("database.sql").unwrap();