Struct Database

Source
pub struct Database {
    pub catalog: Catalog,
    pub tables: HashMap<String, Table>,
    /* private fields */
}
In-memory database that manages the catalog and tables through focused modules

The Database struct coordinates between multiple internal modules to provide a complete database implementation. Each aspect of database functionality is organized into a focused module:

  • Transaction management: begin_transaction(), commit_transaction(), rollback_transaction(), create_savepoint(), rollback_to_savepoint()
  • Table operations: create_table(), drop_table(), get_table(), insert_row(), insert_rows_batch(), update_row_by_pk()
  • Point lookups: get_row_by_pk(), get_column_by_pk(), get_row_by_composite_pk()
  • Change events: enable_change_events(), subscribe_changes(), notify_update(), notify_deletes()
  • Persistence: enable_persistence(), sync_persistence(), last_insert_rowid()
  • Caching: get_columnar(), invalidate_columnar_cache(), columnar_cache_stats()
  • Session: set_sql_mode(), sql_mode(), get_session_variable(), set_session_variable()
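The transaction-management methods above compose in the usual begin/savepoint/commit pattern. A minimal sketch, assuming only the method names listed above (`row` is a placeholder for a `Row` built elsewhere, and the savepoint name is illustrative):

```rust
// Sketch only: method names come from the module list above.
let mut db = Database::new();
db.begin_transaction()?;
db.insert_row("users", row)?;
db.create_savepoint("sp1")?;
// ... further work; on a recoverable error, unwind to the savepoint:
db.rollback_to_savepoint("sp1")?;
db.commit_transaction()?;
```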

Fields§

§catalog: Catalog

Public catalog access for backward compatibility

§tables: HashMap<String, Table>

Implementations§

Source§

impl Database

Source

pub fn get_columnar( &self, table_name: &str, ) -> Result<Option<Arc<ColumnarTable>>, StorageError>

Get columnar representation of a table, using cache if available

This method provides an Arc-wrapped columnar representation of the table, enabling zero-copy sharing between queries. The cache automatically manages memory via LRU eviction.

§Arguments
  • table_name - Name of the table to get columnar representation for
§Returns
  • Ok(Some(Arc<ColumnarTable>)) - Cached or newly converted columnar data
  • Ok(None) - Table not found
  • Err(StorageError) - Conversion failed
§Example
if let Some(columnar) = db.get_columnar("lineitem")? {
    // Use columnar data for SIMD operations
}
Source

pub fn invalidate_columnar_cache(&self, table_name: &str)

Invalidate columnar cache entry for a table

Called automatically when a table is modified (INSERT/UPDATE/DELETE) to ensure the cache doesn’t serve stale data.

Source

pub fn clear_columnar_cache(&self)

Clear all columnar cache entries

Source

pub fn columnar_cache_stats(&self) -> CacheStats

Get columnar cache statistics

Returns statistics about cache hits, misses, evictions, and conversions. Useful for monitoring cache effectiveness and tuning the cache budget.
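A monitoring sketch combining the cache-inspection methods; `CacheStats` is printed via `Debug` here since its field layout is not shown above:

```rust
// Inspect cache effectiveness after a query workload.
let stats = db.columnar_cache_stats();
eprintln!("columnar cache stats: {stats:?}");
eprintln!(
    "memory: {} of {} bytes used",
    db.columnar_cache_memory_usage(),
    db.columnar_cache_budget(),
);
```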

Source

pub fn columnar_cache_memory_usage(&self) -> usize

Get current columnar cache memory usage in bytes

Source

pub fn columnar_cache_budget(&self) -> usize

Get columnar cache memory budget in bytes

Source

pub fn set_columnar_cache_budget(&mut self, max_bytes: usize)

Set the columnar cache memory budget

Note: This creates a new cache, discarding all cached data. Call this before loading data for best results.
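Because setting the budget discards all cached data, configure it before loading and then pre-warm the fresh cache:

```rust
let mut db = Database::new();
// 256 MiB budget; set before loading so nothing cached is discarded.
db.set_columnar_cache_budget(256 * 1024 * 1024);
// ... load data, then populate the cache eagerly:
let warmed = db.pre_warm_all_columnar()?;
```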

Source

pub fn pre_warm_columnar_cache( &self, table_names: &[&str], ) -> Result<usize, StorageError>

Pre-warm the columnar cache for specific tables

This method eagerly converts row data to columnar format and populates the cache. Call this after data loading to avoid conversion overhead during query execution.

§Arguments
  • table_names - Names of tables to pre-warm
§Returns
  • Ok(count) - Number of tables successfully pre-warmed
  • Err(StorageError) - Conversion failed for a table
§Example
// After loading TPC-H data
let warmed = db.pre_warm_columnar_cache(&["lineitem", "orders"])?;
eprintln!("Pre-warmed {} tables", warmed);
§Performance

This method performs the row-to-columnar conversion once, eliminating the ~31% overhead that would otherwise occur on the first query. For a 600K row LINEITEM table, this saves ~40ms per query session.

Source

pub fn pre_warm_all_columnar(&self) -> Result<usize, StorageError>

Pre-warm the columnar cache for all tables in the database

This method eagerly converts all tables to columnar format. Useful for benchmark scenarios where all tables will be queried.

§Returns
  • Ok(count) - Number of tables successfully pre-warmed
  • Err(StorageError) - Conversion failed for a table
§Example
// After loading all benchmark data
let warmed = db.pre_warm_all_columnar()?;
eprintln!("Pre-warmed {} tables", warmed);
Source§

impl Database

Source

pub fn enable_change_events(&mut self, capacity: usize) -> ChangeEventReceiver

Enable change event broadcasting

Creates a broadcast channel for notifying subscribers when data changes. Returns a receiver for the channel.

§Arguments
  • capacity - Maximum number of events to buffer before old events are overwritten
§Example
let mut db = Database::new();
let mut rx = db.enable_change_events(1024);

// Insert some data
db.insert_row("users", row)?;

// Receive change events
for event in rx.recv_all() {
    println!("Change: {:?}", event);
}
Source

pub fn subscribe_changes(&self) -> Option<ChangeEventReceiver>

Subscribe to change events

Returns a new receiver for change events if broadcasting is enabled, or None if enable_change_events() has not been called.

§Example
// Enable broadcasting
db.enable_change_events(1024);

// Create additional subscribers
let rx1 = db.subscribe_changes().unwrap();
let rx2 = db.subscribe_changes().unwrap();
Source

pub fn change_events_enabled(&self) -> bool

Check if change event broadcasting is enabled

Source

pub fn notify_update(&self, table_name: &str, row_index: usize)

Notify subscribers of an update event

This should be called by the executor after successfully updating a row. The storage layer broadcasts the event to any subscribers.

§Arguments
  • table_name - Name of the table that was modified
  • row_index - Index of the row that was updated
Source

pub fn notify_deletes(&self, table_name: &str, row_indices: &[usize])

Notify subscribers of a delete event

This should be called by the executor after successfully deleting rows. The storage layer broadcasts the event to any subscribers.

§Arguments
  • table_name - Name of the table that was modified
  • row_indices - Indices of rows that were deleted (before deletion)
Source§

impl Database

Source

pub fn new() -> Self

Create a new empty database

Note: Security is disabled by default for backward compatibility with existing code. Call enable_security() to turn on access control enforcement.

Source

pub fn with_path(path: PathBuf) -> Self

Create a new database with a specific storage path

The provided path will be used as the root directory for database files. Index files will be stored in <path>/data/indexes/.

§Example
use std::path::PathBuf;

use vibesql_storage::Database;

let db = Database::with_path(PathBuf::from("/var/lib/myapp/db"));
// Index files will be stored in /var/lib/myapp/db/data/indexes/
Source

pub fn with_config(config: DatabaseConfig) -> Self

Create a new database with a specific configuration

Allows setting memory budgets, disk budgets, and spill policy for adaptive index management.

§Example
use vibesql_storage::{Database, DatabaseConfig};

// Browser environment with limited memory
let db = Database::with_config(DatabaseConfig::browser_default());

// Server environment with abundant memory
let db = Database::with_config(DatabaseConfig::server_default());
Source

pub fn with_path_and_config(path: PathBuf, config: DatabaseConfig) -> Self

Create a new database with both path and configuration

§Example
use std::path::PathBuf;

use vibesql_storage::{Database, DatabaseConfig};

let db = Database::with_path_and_config(
    PathBuf::from("/var/lib/myapp/db"),
    DatabaseConfig::server_default(),
);
Source

pub fn reset(&mut self)

Reset the database to an empty state (more efficient than creating a new instance).

Clears all tables, resets catalog to default state, and clears all indexes and transactions. Useful for test scenarios where you need to reuse a Database instance. Preserves database configuration (path, storage backend, memory budgets) across resets. Note: Persistence engine is preserved (WAL remains active if enabled).
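A typical test-harness pattern; `cases` and `run_case` are hypothetical names for the caller's own test fixtures:

```rust
// Reuse one Database across test cases instead of rebuilding it.
let mut db = Database::with_config(DatabaseConfig::server_default());
for case in cases {
    db.reset(); // tables, indexes, transactions cleared; config and WAL kept
    run_case(&mut db, case)?;
}
```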

Source§

impl Database

Source

pub fn query_buffer_pool(&self) -> &QueryBufferPool

Get a reference to the query buffer pool for reusing allocations

Source

pub fn get_cached_procedure_body( &mut self, name: &str, ) -> Result<&ProcedureBody, StorageError>

Get cached procedure body or cache it on first access

Source

pub fn invalidate_procedure_cache(&mut self, name: &str)

Invalidate cached procedure body (call when procedure is dropped or replaced)

Source

pub fn clear_routine_cache(&mut self)

Clear all cached procedure/function bodies

Source§

impl Database

Source

pub fn debug_info(&self) -> String

Get debug information about database state

Source

pub fn dump_tables(&self) -> String

Dump all table contents in readable format

Source

pub fn dump_table(&self, name: &str) -> Result<String, StorageError>

Dump a specific table’s contents

Source§

impl Database

Source

pub fn create_index( &mut self, index_name: String, table_name: String, unique: bool, columns: Vec<IndexColumn>, ) -> Result<(), StorageError>

Create an index
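A sketch of creating a unique single-column index; how an `IndexColumn` is constructed is not shown above, so that step is elided:

```rust
db.create_index(
    "idx_users_email".to_string(),
    "users".to_string(),
    true, // unique
    vec![email_column], // an IndexColumn for "email"; construction elided
)?;
assert!(db.index_exists("idx_users_email"));
```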

Source

pub fn create_index_with_keys( &mut self, index_name: String, table_name: String, unique: bool, columns: Vec<IndexColumn>, keys: Vec<(Vec<SqlValue>, usize)>, ) -> Result<(), StorageError>

Create an index with pre-computed keys (for expression indexes)

This method is used when the caller has already evaluated the expressions and computed the key values for each row. This is necessary for expression indexes where the key values are derived from evaluating expressions on rows.

§Arguments
  • index_name - Name of the index to create
  • table_name - Name of the table this index is on
  • unique - Whether this is a unique index
  • columns - The index column definitions (for metadata storage)
  • keys - Pre-computed (key_values, row_id) pairs
Source

pub fn index_exists(&self, index_name: &str) -> bool

Check if an index exists

Source

pub fn get_index(&self, index_name: &str) -> Option<&IndexMetadata>

Get index metadata

Source

pub fn get_index_data(&self, index_name: &str) -> Option<&IndexData>

Get index data

Source

pub fn update_indexes_for_update( &mut self, table_name: &str, old_row: &Row, new_row: &Row, row_index: usize, changed_columns: Option<&HashSet<usize>>, )

Update user-defined indexes for update operation

§Arguments
  • table_name - Name of the table being updated
  • old_row - Row data before the update
  • new_row - Row data after the update
  • row_index - Index of the row in the table
  • changed_columns - Optional set of column indices that were modified. If provided, indexes that don’t involve any changed columns will be skipped.
Source

pub fn update_indexes_for_delete( &mut self, table_name: &str, row: &Row, row_index: usize, )

Update user-defined indexes for delete operation

Source

pub fn batch_update_indexes_for_delete( &mut self, table_name: &str, rows_to_delete: &[(usize, &Row)], )

Batch update user-defined indexes for delete operation

This is significantly more efficient than calling update_indexes_for_delete in a loop because it pre-computes column indices once per index rather than once per row.

Source

pub fn rebuild_indexes(&mut self, table_name: &str)

Rebuild user-defined indexes after bulk operations that change row indices

Source

pub fn adjust_indexes_after_delete( &mut self, table_name: &str, deleted_indices: &[usize], )

Adjust user-defined indexes after row deletions

This is more efficient than rebuild_indexes when only a few rows are deleted, as it adjusts row indices in place rather than rebuilding from scratch.

§Arguments
  • table_name - Name of the table whose indexes need adjustment
  • deleted_indices - Sorted list of deleted row indices (ascending order)
Source

pub fn drop_index(&mut self, index_name: &str) -> Result<(), StorageError>

Drop an index

Source

pub fn list_indexes(&self) -> Vec<String>

List all indexes

Source

pub fn list_indexes_for_table(&self, table_name: &str) -> Vec<String>

List all indexes for a specific table

Source

pub fn has_index_on_column(&self, table_name: &str, column_name: &str) -> bool

Check if a column has any user-defined index

This is used to determine if updates to a column require index maintenance. Returns true if any user-defined index (B-tree or spatial) includes this column.

Source

pub fn add_to_expression_indexes_for_insert( &mut self, table_name: &str, row_index: usize, expression_keys: &HashMap<String, Vec<SqlValue>>, )

Add row to expression indexes after insert with pre-computed keys

This method handles expression indexes which require pre-computed key values since the storage layer cannot evaluate expressions.

Source

pub fn update_expression_indexes_for_update( &mut self, table_name: &str, row_index: usize, old_expression_keys: &HashMap<String, Vec<SqlValue>>, new_expression_keys: &HashMap<String, Vec<SqlValue>>, )

Update expression indexes for update operation with pre-computed keys

Source

pub fn update_expression_indexes_for_delete( &mut self, table_name: &str, row_index: usize, expression_keys: &HashMap<String, Vec<SqlValue>>, )

Update expression indexes for delete operation with pre-computed keys

Source

pub fn get_expression_indexes_for_table( &self, table_name: &str, ) -> Vec<(String, &IndexMetadata)>

Get expression indexes for a specific table

Returns metadata for all expression indexes on the table. Used by executor to determine which indexes need expression evaluation during DML operations.

Source

pub fn has_expression_indexes(&self, table_name: &str) -> bool

Check if a table has any expression indexes

Source

pub fn clear_expression_index_data(&mut self, table_name: &str)

Clear expression index data for a table (for rebuilding after compaction)

Source

pub fn create_spatial_index( &mut self, metadata: SpatialIndexMetadata, spatial_index: SpatialIndex, ) -> Result<(), StorageError>

Create a spatial index

Source

pub fn create_ivfflat_index( &mut self, index_name: String, table_name: String, column_name: String, col_idx: usize, dimensions: usize, lists: usize, metric: VectorDistanceMetric, ) -> Result<(), StorageError>

Create an IVFFlat index for approximate nearest neighbor search on vector columns

This method creates an IVFFlat (Inverted File with Flat quantization) index for efficient approximate nearest neighbor search on vector data.

§Arguments
  • index_name - Name for the new index
  • table_name - Name of the table containing the vector column
  • column_name - Name of the vector column to index
  • col_idx - Column index in the table schema
  • dimensions - Number of dimensions in the vectors
  • lists - Number of clusters for the IVFFlat algorithm
  • metric - Distance metric to use (L2, Cosine, InnerProduct)
Source

pub fn search_ivfflat_index( &self, index_name: &str, query_vector: &[f64], k: usize, ) -> Result<Vec<(usize, f64)>, StorageError>

Search an IVFFlat index for approximate nearest neighbors

§Arguments
  • index_name - Name of the IVFFlat index
  • query_vector - The query vector (f64)
  • k - Maximum number of nearest neighbors to return
§Returns
  • Ok(Vec<(usize, f64)>) - Vector of (row_id, distance) pairs, ordered by distance
  • Err(StorageError) - If index not found or not an IVFFlat index
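Putting the two IVFFlat methods together; the table name, column, and parameter values are illustrative:

```rust
// Assume a "docs" table with a 3-dimensional vector column at schema index 1.
db.create_ivfflat_index(
    "idx_docs_embedding".to_string(),
    "docs".to_string(),
    "embedding".to_string(),
    1,   // col_idx
    3,   // dimensions
    100, // lists
    VectorDistanceMetric::L2,
)?;

// Find the 5 nearest neighbors of a query vector.
let neighbors = db.search_ivfflat_index("idx_docs_embedding", &[0.1, 0.2, 0.3], 5)?;
for (row_id, distance) in neighbors {
    println!("row {row_id}: distance {distance}");
}
```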
Source

pub fn get_ivfflat_indexes_for_table( &self, table_name: &str, ) -> Vec<(&IndexMetadata, &IVFFlatIndex)>

Get all IVFFlat indexes for a specific table

Source

pub fn set_ivfflat_probes( &mut self, index_name: &str, probes: usize, ) -> Result<(), StorageError>

Set the number of probes for an IVFFlat index

Source

pub fn create_hnsw_index( &mut self, index_name: String, table_name: String, column_name: String, col_idx: usize, dimensions: usize, m: u32, ef_construction: u32, metric: VectorDistanceMetric, ) -> Result<(), StorageError>

Create an HNSW index for approximate nearest neighbor search on vector columns

This method creates an HNSW (Hierarchical Navigable Small World) index for efficient approximate nearest neighbor search on vector data.

§Arguments
  • index_name - Name for the new index
  • table_name - Name of the table containing the vector column
  • column_name - Name of the vector column to index
  • col_idx - Column index in the table schema
  • dimensions - Number of dimensions in the vectors
  • m - Maximum number of connections per node (default 16)
  • ef_construction - Size of dynamic candidate list during construction (default 64)
  • metric - Distance metric to use (L2, Cosine, InnerProduct)
Source

pub fn search_hnsw_index( &self, index_name: &str, query_vector: &[f64], k: usize, ) -> Result<Vec<(usize, f64)>, StorageError>

Search an HNSW index for approximate nearest neighbors

§Arguments
  • index_name - Name of the HNSW index
  • query_vector - The query vector (f64)
  • k - Maximum number of nearest neighbors to return
§Returns
  • Ok(Vec<(usize, f64)>) - Vector of (row_id, distance) pairs, ordered by distance
  • Err(StorageError) - If index not found or not an HNSW index
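An analogous HNSW sketch, using the defaults noted above (m = 16, ef_construction = 64); table and column names are illustrative:

```rust
db.create_hnsw_index(
    "idx_docs_hnsw".to_string(),
    "docs".to_string(),
    "embedding".to_string(),
    1,  // col_idx
    3,  // dimensions
    16, // m
    64, // ef_construction
    VectorDistanceMetric::Cosine,
)?;
let neighbors = db.search_hnsw_index("idx_docs_hnsw", &[0.1, 0.2, 0.3], 10)?;
```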
Source

pub fn get_hnsw_indexes_for_table( &self, table_name: &str, ) -> Vec<(&IndexMetadata, &HnswIndex)>

Get all HNSW indexes for a specific table

Source

pub fn set_hnsw_ef_search( &mut self, index_name: &str, ef_search: usize, ) -> Result<(), StorageError>

Set the ef_search parameter for an HNSW index

Source

pub fn spatial_index_exists(&self, index_name: &str) -> bool

Check if a spatial index exists

Source

pub fn get_spatial_index_metadata( &self, index_name: &str, ) -> Option<&SpatialIndexMetadata>

Get spatial index metadata

Source

pub fn get_spatial_index(&self, index_name: &str) -> Option<&SpatialIndex>

Get spatial index (immutable)

Source

pub fn get_spatial_index_mut( &mut self, index_name: &str, ) -> Option<&mut SpatialIndex>

Get spatial index (mutable)

Source

pub fn get_spatial_indexes_for_table( &self, table_name: &str, ) -> Vec<(&SpatialIndexMetadata, &SpatialIndex)>

Get all spatial indexes for a specific table

Source

pub fn get_spatial_indexes_for_table_mut( &mut self, table_name: &str, ) -> Vec<(&SpatialIndexMetadata, &mut SpatialIndex)>

Get all spatial indexes for a specific table (mutable)

Source

pub fn drop_spatial_index( &mut self, index_name: &str, ) -> Result<(), StorageError>

Drop a spatial index

Source

pub fn drop_spatial_indexes_for_table( &mut self, table_name: &str, ) -> Vec<String>

Drop all spatial indexes associated with a table (CASCADE behavior)

Source

pub fn list_spatial_indexes(&self) -> Vec<String>

List all spatial indexes

Source

pub fn lookup_by_index( &self, index_name: &str, key_values: &[SqlValue], ) -> Result<Option<Vec<&Row>>, StorageError>

Look up rows by index name and key values - bypasses SQL parsing for maximum performance

This method provides direct B+ tree index lookups, completely bypassing SQL parsing and the query execution pipeline. Use this for performance-critical OLTP workloads where you know the exact index and key values.

§Arguments
  • index_name - Name of the index (as created with CREATE INDEX)
  • key_values - Key values to look up (must match index column order)
§Returns
  • Ok(Some(Vec<&Row>)) - The rows matching the key
  • Ok(None) - No rows match the key
  • Err(StorageError) - Index not found or other error
§Performance

This is ~100-300x faster than executing a SQL SELECT query because it:

  • Skips SQL parsing (~300µs saved)
  • Skips query planning and optimization
  • Uses direct B+ tree lookup on the index
§Example
// Single-column index lookup
let rows = db.lookup_by_index("idx_users_pk", &[SqlValue::Integer(42)])?;

// Composite key lookup
let rows = db.lookup_by_index("idx_orders_pk", &[
    SqlValue::Integer(warehouse_id),
    SqlValue::Integer(district_id),
    SqlValue::Integer(order_id),
])?;
Source

pub fn lookup_one_by_index( &self, index_name: &str, key_values: &[SqlValue], ) -> Result<Option<&Row>, StorageError>

Look up the first row by index - optimized for unique indexes

This is a convenience method for unique indexes where you expect exactly one row. Returns only the first matching row.

§Arguments
  • index_name - Name of the index
  • key_values - Key values to look up
§Returns
  • Ok(Some(&Row)) - The first matching row
  • Ok(None) - No row matches the key
  • Err(StorageError) - Index not found or other error
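For a unique index this avoids allocating a `Vec` for a single match:

```rust
// Unique index: expect at most one matching row.
if let Some(row) = db.lookup_one_by_index("idx_users_pk", &[SqlValue::Integer(42)])? {
    println!("user 42: {:?}", row.values);
}
```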
Source

pub fn lookup_by_index_batch<'a>( &'a self, index_name: &str, keys: &[Vec<SqlValue>], ) -> Result<Vec<Option<Vec<&'a Row>>>, StorageError>

Batch lookup by index - look up multiple keys in a single call

This method is optimized for batch point lookups where you need to retrieve multiple rows by their index keys. It’s more efficient than calling lookup_by_index in a loop.

§Arguments
  • index_name - Name of the index
  • keys - List of key value tuples to look up
§Returns
  • Ok(Vec<Option<Vec<&Row>>>) - For each key, the matching rows (or None if not found)
  • Err(StorageError) - Index not found or other error
§Example
// Batch lookup multiple items
let results = db.lookup_by_index_batch("idx_items_pk", &[
    vec![SqlValue::Integer(1)],
    vec![SqlValue::Integer(2)],
    vec![SqlValue::Integer(3)],
])?;

for (key_idx, rows) in results.iter().enumerate() {
    if let Some(rows) = rows {
        println!("Key {} matched {} rows", key_idx, rows.len());
    }
}
Source

pub fn lookup_one_by_index_batch<'a>( &'a self, index_name: &str, keys: &[Vec<SqlValue>], ) -> Result<Vec<Option<&'a Row>>, StorageError>

Batch lookup returning first row only - optimized for unique indexes

Like lookup_by_index_batch but returns only the first matching row for each key. More efficient when you know the index is unique.

§Arguments
  • index_name - Name of the index
  • keys - List of key value tuples to look up
§Returns
  • Ok(Vec<Option<&Row>>) - For each key, the first matching row (or None)
Source

pub fn lookup_by_index_prefix( &self, index_name: &str, prefix: &[SqlValue], ) -> Result<Vec<&Row>, StorageError>

Look up rows by index using prefix matching - for multi-column indexes

This method performs prefix matching on multi-column indexes. For example, with an index on (a, b, c), you can look up all rows where (a, b) match a specific value, regardless of c.

§Arguments
  • index_name - Name of the index (as created with CREATE INDEX)
  • prefix - Prefix key values to match (must be a prefix of index columns)
§Returns
  • Ok(Vec<&Row>) - The rows matching the prefix (empty if none found)
  • Err(StorageError) - Index not found or other error
§Performance

Uses efficient B+ tree range scan: O(log n + k) where n is total keys, k is matches.

§Example
// Index on (warehouse_id, district_id, order_id) - 3 columns
// Find all orders for warehouse 1, district 5 (2-column prefix)
let rows = db.lookup_by_index_prefix("idx_orders_pk", &[
    SqlValue::Integer(1),  // warehouse_id
    SqlValue::Integer(5),  // district_id
])?;
Source

pub fn lookup_by_index_prefix_batch<'a>( &'a self, index_name: &str, prefixes: &[Vec<SqlValue>], ) -> Result<Vec<Vec<&'a Row>>, StorageError>

Batch prefix lookup - look up multiple prefixes in a single call

This method is optimized for batch prefix lookups on multi-column indexes. For each prefix, returns all rows where the key prefix matches.

§Arguments
  • index_name - Name of the index
  • prefixes - List of prefix key tuples to look up
§Returns
  • Ok(Vec<Vec<&Row>>) - For each prefix, the matching rows (empty vec if none)
  • Err(StorageError) - Index not found or other error
§Example
// Index on (w_id, d_id, o_id) - find new orders for all 10 districts
let prefixes: Vec<Vec<SqlValue>> = (1..=10)
    .map(|d| vec![SqlValue::Integer(w_id), SqlValue::Integer(d)])
    .collect();
let results = db.lookup_by_index_prefix_batch("idx_new_order_pk", &prefixes)?;
// results[0] = rows for district 1, results[1] = rows for district 2, etc.
Source

pub fn delete_by_pk_fast( &mut self, table_name: &str, pk_values: &[SqlValue], ) -> Result<bool, StorageError>

Delete a single row by PK value - fast path that skips unnecessary overhead

This method provides a highly optimized DELETE path for single-row PK deletes. It bypasses the full DELETE executor overhead when:

  • There are no triggers on the table
  • There are no foreign key constraints referencing this table
  • The WHERE clause is a simple PK equality (id = ?)
§Arguments
  • table_name - Name of the table
  • pk_values - Primary key values to match
§Returns
  • Ok(true) - Row was deleted
  • Ok(false) - No row found with this PK
  • Err(StorageError) - Table not found or other error
§Performance

This is ~2-3x faster than the full DELETE executor because it:

  • Uses direct PK index lookup (O(1))
  • Avoids cloning row data
  • Skips ExpressionEvaluator creation
  • Performs minimal index maintenance
§Profiling

Set environment variables to enable profiling:

  • DELETE_PROFILE=1 - Enable timing collection and auto-print summary on thread exit
  • DELETE_PROFILE_VERBOSE=1 - Also print per-delete breakdown to stderr

Use print_delete_profile_summary() to manually print aggregate stats. Use reset_delete_profile_stats() to reset the stats before a benchmark.

§Safety

Caller must ensure:

  • No triggers exist on this table for DELETE
  • No foreign key constraints reference this table

Note: WAL logging is handled internally by this method.

§Example
// Fast delete by PK
let deleted = db.delete_by_pk_fast("users", &[SqlValue::Integer(42)])?;
if deleted {
    println!("User 42 deleted");
}
Source

pub fn get_table_index_info(&self, table_name: &str) -> Option<TableIndexInfo>

Get table index information for DML cost estimation

This method collects all the metadata needed by CostEstimator::estimate_insert(), estimate_update(), and estimate_delete() to compute accurate DML operation costs.

§Arguments
  • table_name - Name of the table to get index info for
§Returns
  • Some(TableIndexInfo) - Index information if table exists
  • None - If table doesn’t exist
§Example
let info = db.get_table_index_info("users")?;
let insert_cost = cost_estimator.estimate_insert(&info);
Source§

impl Database

Source

pub fn enable_persistence(&mut self, engine: PersistenceEngine)

Enable WAL-based async persistence

Creates a persistence engine that writes changes to a WAL file in the background. All subsequent DML and DDL operations will be logged to the WAL for durability.

§Arguments
  • engine - A pre-configured PersistenceEngine instance
§Example
use vibesql_storage::{Database, PersistenceEngine, PersistenceConfig};

let mut db = Database::new();
let engine = PersistenceEngine::new("/path/to/wal.log", PersistenceConfig::default())?;
db.enable_persistence(engine);
Source

pub fn persistence_enabled(&self) -> bool

Check if WAL persistence is enabled

Source

pub fn persistence_stats(&self) -> Option<PersistenceStats>

Get persistence statistics (if enabled)

Source

pub fn sync_persistence(&self) -> Result<(), StorageError>

Sync all pending WAL entries to disk

Blocks until all pending entries have been written and flushed. This is useful for ensuring durability before returning to the user.
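A durability sketch: flush the WAL before acknowledging a write to the caller (`row` is a placeholder):

```rust
db.insert_row("accounts", row)?;
// Block until the WAL entry for this insert is flushed to disk.
db.sync_persistence()?;
// Only now is it safe to acknowledge the write to the client.
```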

Source

pub fn emit_wal_delete( &self, table_name: &str, row_id: u64, old_values: Vec<SqlValue>, )

Emit a WAL delete entry for persistence

Called by the DELETE executor before rows are removed. Captures old_values for recovery replay.

Source

pub fn emit_wal_create_index( &self, index_id: u32, index_name: &str, table_name: &str, column_indices: Vec<u32>, is_unique: bool, )

Emit a WAL create index entry for persistence

Called by the CREATE INDEX executor after index is created.

Source

pub fn emit_wal_drop_index(&self, index_id: u32, index_name: &str)

Emit a WAL drop index entry for persistence

Called by the DROP INDEX executor before index is dropped.

Source

pub fn last_insert_rowid(&self) -> i64

Get the last auto-generated ID from an INSERT operation

Returns the most recent value generated by AUTO_INCREMENT during an INSERT. This is used to implement LAST_INSERT_ROWID() and LAST_INSERT_ID() functions.

Returns 0 if no auto-generated values have been produced yet.

§Example
// Create table with AUTO_INCREMENT
db.execute("CREATE TABLE users (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100))")?;

// Insert a row (ID is auto-generated)
db.execute("INSERT INTO users (name) VALUES ('Alice')")?;

// Get the generated ID
let id = db.last_insert_rowid();
assert_eq!(id, 1);
Source

pub fn set_last_insert_rowid(&mut self, id: i64)

Set the last auto-generated ID

This is called internally by the INSERT executor when a sequence value is generated for an AUTO_INCREMENT column.

For multi-row inserts, this will be the ID of the first row inserted (following MySQL semantics for batch inserts).

Source

pub fn last_changes_count(&self) -> usize

Get the number of rows changed by the last INSERT/UPDATE/DELETE statement

Returns the count of rows affected by the most recent DML operation. This is used to implement the SQLite changes() function.

Returns 0 if no DML operations have been performed yet.

§Example
// Insert multiple rows
db.execute("INSERT INTO users (name) VALUES ('Alice'), ('Bob'), ('Carol')")?;

// Get the number of rows inserted
let changes = db.last_changes_count();
assert_eq!(changes, 3);

// Delete some rows
db.execute("DELETE FROM users WHERE name = 'Alice'")?;
assert_eq!(db.last_changes_count(), 1);
Source

pub fn set_last_changes_count(&mut self, count: usize)

Set the number of rows changed by the last DML statement

This is called internally by INSERT, UPDATE, and DELETE executors after completing their operations.

Source

pub fn total_changes_count(&self) -> usize

Get the total number of rows changed since the database connection was opened

Returns the cumulative count of rows affected by all INSERT, UPDATE, and DELETE operations since the database was created. This is used to implement the SQLite total_changes() function.

Returns 0 for a new database connection.

§Example
// Insert rows
db.execute("INSERT INTO users (name) VALUES ('Alice'), ('Bob')")?;
assert_eq!(db.last_changes_count(), 2);  // Last operation: 2 rows

// Delete a row
db.execute("DELETE FROM users WHERE name = 'Alice'")?;
assert_eq!(db.last_changes_count(), 1);  // Last operation: 1 row

// Total changes accumulates
assert_eq!(db.total_changes_count(), 3); // 2 + 1 = 3 rows total
Source

pub fn increment_total_changes_count(&mut self, count: usize)

Increment the total changes count by the specified amount

This is called internally by INSERT, UPDATE, and DELETE executors after completing their operations, in addition to set_last_changes_count().

Source

pub fn search_count(&self) -> u64

Get the current search count

Returns the number of rows examined during query execution. This is used to implement sqlite_search_count() for TCL test compatibility.

In SQLite, this tracks “MoveTo” and “Next” VDBE operations. In VibeSQL, this tracks rows read during table/index scans.

§Example
// Reset before query
db.reset_search_count();

// Execute query...
db.execute("SELECT * FROM users WHERE id = 1")?;

// Get count of rows examined
let count = db.search_count();
Source

pub fn reset_search_count(&self)

Reset the search count to zero

Call this before executing a query to measure how many rows were examined by that specific query.

Source

pub fn increment_search_count(&self, count: u64)

Increment the search count by a specified amount

Called internally by the executor when rows are examined during table scans, index scans, or other row-reading operations.

§Arguments
  • count - Number of rows examined (typically 1 for row-by-row, or batch size for columnar)
Source§

impl Database

Source

pub fn get_row_by_pk( &self, table_name: &str, pk_value: &SqlValue, ) -> Result<Option<&Row>, StorageError>

Get a row by primary key value - bypasses SQL parsing for maximum performance

This method provides O(1) point lookups directly using the primary key index, completely bypassing SQL parsing and the query execution pipeline.

§Arguments
  • table_name - Name of the table
  • pk_value - Primary key value to look up
§Returns
  • Ok(Some(&Row)) - The row if found
  • Ok(None) - If no row matches the primary key
  • Err(StorageError) - If table doesn’t exist or has no primary key
§Performance

This is ~100-300x faster than executing a SQL point SELECT query because it:

  • Skips SQL parsing (~300µs)
  • Skips query planning and optimization
  • Uses direct HashMap lookup on the PK index
§Example
let row = db.get_row_by_pk("users", &SqlValue::Integer(42))?;
if let Some(row) = row {
    let name = &row.values[1];
}
Source

pub fn get_column_by_pk( &self, table_name: &str, pk_value: &SqlValue, column_index: usize, ) -> Result<Option<&SqlValue>, StorageError>

Get a specific column value by primary key - bypasses SQL parsing for maximum performance

This is even faster than get_row_by_pk when you only need one column value, as it avoids returning the entire row.

§Arguments
  • table_name - Name of the table
  • pk_value - Primary key value to look up
  • column_index - Index of the column to retrieve (0-based)
§Returns
  • Ok(Some(&SqlValue)) - The column value if found
  • Ok(None) - If no row matches the primary key
  • Err(StorageError) - If table doesn’t exist or column index is out of bounds
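A minimal sketch of a single-column lookup, assuming a hypothetical users table whose name column sits at index 1:

```rust
// Fetch only the `name` column (index 1) for the row with id = 42,
// avoiding the cost of materializing the whole row.
let name = db.get_column_by_pk("users", &SqlValue::Integer(42), 1)?;
match name {
    Some(SqlValue::Varchar(s)) => println!("name = {s}"),
    Some(other) => println!("unexpected type: {other:?}"),
    None => println!("no row with id 42"),
}
```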
Source

pub fn get_row_by_composite_pk( &self, table_name: &str, pk_values: &[SqlValue], ) -> Result<Option<&Row>, StorageError>

Get a row by composite primary key - for tables with multi-column primary keys

§Arguments
  • table_name - Name of the table
  • pk_values - Primary key values in column order
§Returns
  • Ok(Some(&Row)) - The row if found
  • Ok(None) - If no row matches the primary key
  • Err(StorageError) - If table doesn’t exist or has no primary key
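A sketch of a composite-key lookup, assuming a hypothetical lineitem table whose primary key is (order_id, line_number):

```rust
// Values must be passed in primary-key column order.
let row = db.get_row_by_composite_pk(
    "lineitem",
    &[SqlValue::Integer(1), SqlValue::Integer(3)],
)?;
if let Some(row) = row {
    // Access other columns positionally, as with get_row_by_pk.
    let quantity = &row.values[4];
}
```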
Source§

impl Database

Source

pub fn set_role(&mut self, role: Option<String>)

Set the current session role for privilege checks

Source

pub fn get_current_role(&self) -> String

Get the current session role (defaults to “PUBLIC” if not set)

Source

pub fn is_security_enabled(&self) -> bool

Check if security enforcement is enabled

Source

pub fn disable_security(&mut self)

Disable security checks (for testing)

Source

pub fn enable_security(&mut self)

Enable security checks
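A sketch of the session-role lifecycle, using a hypothetical "analyst" role:

```rust
// Assign a role for subsequent privilege checks.
db.set_role(Some("analyst".to_string()));
assert_eq!(db.get_current_role(), "analyst");

// Clearing the role falls back to the PUBLIC default.
db.set_role(None);
assert_eq!(db.get_current_role(), "PUBLIC");

// Tests can bypass privilege checks entirely.
db.disable_security();
assert!(!db.is_security_enabled());
db.enable_security();
```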

Source

pub fn set_session_variable(&mut self, name: &str, value: SqlValue)

Set a session variable (MySQL-style @variable)

Source

pub fn get_session_variable(&self, name: &str) -> Option<&SqlValue>

Get a session variable value

Source

pub fn clear_session_variables(&mut self)

Clear all session variables
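A sketch of a session-variable round trip, using a hypothetical variable name:

```rust
// MySQL-style @counter variable: set, read back, then clear.
db.set_session_variable("counter", SqlValue::Integer(7));
assert_eq!(
    db.get_session_variable("counter"),
    Some(&SqlValue::Integer(7)),
);

// Unset variables return None; clear_session_variables removes all of them.
db.clear_session_variables();
assert!(db.get_session_variable("counter").is_none());
```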

Source

pub fn sql_mode(&self) -> SqlMode

Get the current SQL compatibility mode

Source

pub fn full_column_names(&self) -> bool

Get the full_column_names PRAGMA setting

When ON, column names in result sets use “table.column” format

Source

pub fn set_full_column_names(&mut self, value: bool)

Set the full_column_names PRAGMA setting

Source

pub fn short_column_names(&self) -> bool

Get the short_column_names PRAGMA setting

When ON (default), column names use just the column name (e.g., “f1”). When OFF, column names may include expression text.

Source

pub fn set_short_column_names(&mut self, value: bool)

Set the short_column_names PRAGMA setting

Source

pub fn case_sensitive_like(&self) -> bool

Get the case_sensitive_like PRAGMA setting

When OFF (default), LIKE comparisons are case-insensitive for ASCII letters (A-Z = a-z). When ON, LIKE comparisons are case-sensitive (strict byte-for-byte matching).

This matches SQLite’s default behavior where LIKE is case-insensitive for ASCII.

Source

pub fn set_case_sensitive_like(&mut self, value: bool)

Set the case_sensitive_like PRAGMA setting

Source

pub fn reverse_unordered_selects(&self) -> bool

Get the reverse_unordered_selects PRAGMA setting

When ON, the order of output rows from SELECT statements that do not have an ORDER BY clause is reversed. This is useful for testing to ensure that applications do not depend on an implicit row ordering.

Source

pub fn set_reverse_unordered_selects(&mut self, value: bool)

Set the reverse_unordered_selects PRAGMA setting

Source

pub fn insert_sqlite_stat1( &mut self, table_name: String, index_name: Option<String>, stat: String, )

Insert a sqlite_stat1 entry

This allows manual insertion of statistics for query optimizer tuning, matching SQLite’s behavior where users can INSERT INTO sqlite_stat1.

Source

pub fn get_sqlite_stat1( &self, table_name: &str, index_name: Option<&str>, ) -> Option<&String>

Get a sqlite_stat1 entry

Source

pub fn get_all_sqlite_stat1(&self) -> &HashMap<(String, Option<String>), String>

Get all sqlite_stat1 entries

Source

pub fn delete_sqlite_stat1( &mut self, table_name: &str, index_name: Option<&str>, )

Delete a sqlite_stat1 entry

Source

pub fn clear_sqlite_stat1(&mut self)

Clear all sqlite_stat1 entries
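A sketch of seeding and clearing optimizer statistics, assuming a hypothetical index named idx_users_email. The stat string follows sqlite_stat1’s convention of row count followed by average rows per index key:

```rust
// Seed statistics the way SQLite's ANALYZE would.
db.insert_sqlite_stat1(
    "users".to_string(),
    Some("idx_users_email".to_string()),
    "10000 10".to_string(),
);
assert!(db.get_sqlite_stat1("users", Some("idx_users_email")).is_some());

// Entries can be removed individually or wholesale.
db.delete_sqlite_stat1("users", Some("idx_users_email"));
db.clear_sqlite_stat1();
```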

Source

pub fn reserve_rowid(&mut self, table_name: &str, rowid: u64, is_explicit: bool)

Reserve a rowid for a table during REPLACE operations

During REPLACE INTO, SQLite allocates the rowid for the new row BEFORE firing BEFORE DELETE triggers. Any INSERT within those triggers that tries to allocate the same rowid will fail with a UNIQUE constraint violation on rowid.

§Arguments
  • table_name - The table name (case-insensitive)
  • rowid - The rowid to reserve
  • is_explicit - True if the rowid comes from an explicit INTEGER PRIMARY KEY value, false if it’s auto-allocated. This affects how conflicts are handled in AFTER DELETE triggers.
Source

pub fn release_reserved_rowid(&mut self, table_name: &str)

Release a reserved rowid after REPLACE completes

Source

pub fn get_reserved_rowid_info(&self, table_name: &str) -> Option<(u64, bool)>

Check if a rowid is reserved for a table and get the reservation details

Returns Some((rowid, is_explicit)) if a rowid is reserved, None otherwise.

Source

pub fn is_rowid_reserved(&self, table_name: &str, rowid: u64) -> bool

Check if a rowid is reserved for a table

Source

pub fn get_reserved_rowid(&self, table_name: &str) -> Option<u64>

Get the reserved rowid for a table, if any
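The reservation lifecycle above can be sketched as follows (rowid 7 is an arbitrary illustrative value):

```rust
// Reserve the rowid before BEFORE DELETE triggers fire during REPLACE...
db.reserve_rowid("users", 7, /* is_explicit */ true);
assert!(db.is_rowid_reserved("users", 7));
assert_eq!(db.get_reserved_rowid_info("users"), Some((7, true)));

// ...and release it once the REPLACE completes.
db.release_reserved_rowid("users");
assert_eq!(db.get_reserved_rowid("users"), None);
```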

Source

pub fn set_sql_mode(&mut self, mode: SqlMode)

Set the SQL compatibility mode at runtime

This allows changing the SQL dialect (MySQL, SQLite, etc.) during a session. The @@sql_mode session variable is automatically updated to reflect the change.

§Example
use vibesql_storage::Database;
use vibesql_types::{MySqlModeFlags, SqlMode};

let mut db = Database::new();
// Default is MySQL (for SQLLogicTest compatibility)
assert!(matches!(db.sql_mode(), SqlMode::MySQL { .. }));

db.set_sql_mode(SqlMode::SQLite);
assert!(matches!(db.sql_mode(), SqlMode::SQLite));
Source§

impl Database

Source

pub fn create_table_with_identifier( &mut self, schema: TableSchema, identifier: TableIdentifier, ) -> Result<(), StorageError>

Create a table with SQL:1999 identifier semantics.

The identifier parameter determines how the table name is stored:

  • Quoted identifiers: stored with exact case
  • Unquoted identifiers: stored with lowercase canonical form
  • Qualified identifiers: schema and table have independent case handling

Temporary tables (in the “temp” schema) are not persisted to WAL.

Source

pub fn create_table(&mut self, schema: TableSchema) -> Result<(), StorageError>

Create a table. Legacy method that uses the global case_sensitive_identifiers setting.

Source

pub fn get_table_by_identifier( &self, identifier: &TableIdentifier, ) -> Option<&Table>

Get a table by identifier using SQL:1999 case semantics.

Uses the canonical form of the identifier for direct lookup without fallbacks. Supports both simple and schema-qualified identifiers.

Source

pub fn get_table(&self, name: &str) -> Option<&Table>

Get a table for reading. Legacy method with fallback lookups for backward compatibility.

For unqualified names, checks temp schema first (SQLite semantics). SQLite Compatibility: The “temp” schema name is mapped to the session’s temp schema, allowing temp.tablename syntax.

Source

pub fn get_table_mut(&mut self, name: &str) -> Option<&mut Table>

Get a table for writing

For unqualified names, checks temp schema first (SQLite semantics). SQLite Compatibility: The “temp” schema name is mapped to the session’s temp schema, allowing temp.tablename syntax.

Source

pub fn drop_table(&mut self, name: &str) -> Result<(), StorageError>

Drop a table

Temporary tables (in the “temp” schema) are not persisted to WAL.

Source

pub fn insert_row( &mut self, table_name: &str, row: Row, ) -> Result<(), StorageError>

Insert a row into a table

Temporary tables (in the “temp” schema) are not persisted to WAL.

Source

pub fn insert_rows_batch( &mut self, table_name: &str, rows: Vec<Row>, ) -> Result<usize, StorageError>

Insert multiple rows into a table in a single batch

This method is optimized for bulk data loading and provides significant performance improvements over repeated insert_row calls:

  • Pre-allocation: Vector capacity reserved upfront
  • Batch validation: All rows validated before any insertion
  • Deferred index rebuild: Indexes rebuilt once after all inserts
  • Single cache invalidation: Columnar cache invalidated once at end
§Arguments
  • table_name - Name of the table to insert into
  • rows - Vector of rows to insert
§Returns
  • Ok(usize) - Number of rows successfully inserted
  • Err(StorageError) - If validation fails (no rows inserted on error)
§Performance

For large batches (1000+ rows), expect 10-50x speedup vs single-row inserts.

§Example
let rows = vec![
    Row::new(vec![SqlValue::Integer(1), SqlValue::Varchar(arcstr::ArcStr::from("Alice"))]),
    Row::new(vec![SqlValue::Integer(2), SqlValue::Varchar(arcstr::ArcStr::from("Bob"))]),
];
let count = db.insert_rows_batch("users", rows)?;
Source

pub fn insert_rows_iter<I>( &mut self, table_name: &str, rows: I, batch_size: usize, ) -> Result<usize, StorageError>
where I: Iterator<Item = Row>,

Insert rows from an iterator in a streaming fashion

This method is optimized for very large datasets that may not fit in memory all at once. Rows are processed in configurable batch sizes, balancing memory usage with performance.

§Arguments
  • table_name - Name of the table to insert into
  • rows - Iterator yielding rows to insert
  • batch_size - Number of rows per batch (0 defaults to 1000)
§Returns
  • Ok(usize) - Total number of rows successfully inserted
  • Err(StorageError) - If any batch fails validation
§Note

Unlike insert_rows_batch, this method commits rows batch-by-batch. A failure partway through will leave previously committed batches in the table. Use insert_rows_batch for all-or-nothing semantics.

§Example
// Stream 100K rows in batches of 5000
let rows = (0..100_000).map(|i| Row::new(vec![SqlValue::Integer(i)]));
let count = db.insert_rows_iter("numbers", rows, 5000)?;
Source

pub fn update_row_by_pk( &mut self, table_name: &str, pk_value: SqlValue, column_updates: Vec<(&str, SqlValue)>, ) -> Result<bool, StorageError>

Update a single row by primary key value (direct API, no SQL parsing)

This method provides a high-performance update path that bypasses SQL parsing, making it suitable for benchmarking and performance-critical code paths.

§Arguments
  • table_name - Name of the table
  • pk_value - Primary key value to match (single column PK only)
  • column_updates - List of (column_name, new_value) pairs to update
§Returns
  • Ok(true) - Row was found and updated
  • Ok(false) - Row was not found (no error)
  • Err(StorageError) - Table not found, column not found, or constraint violation
§Example
// Update column 'name' for row with id=5
let updated = db.update_row_by_pk(
    "users",
    SqlValue::Integer(5),
    vec![("name", SqlValue::Varchar(arcstr::ArcStr::from("Alice")))],
)?;
Source

pub fn list_tables(&self) -> Vec<String>

List all table names

Source§

impl Database

Source

pub fn record_change(&mut self, change: TransactionChange)

Record a change in the current transaction (if any)

Source

pub fn begin_transaction(&mut self) -> Result<(), StorageError>

Begin a new transaction

Source

pub fn begin_transaction_with_durability( &mut self, durability: TransactionDurability, ) -> Result<(), StorageError>

Begin a new transaction with a specific durability hint

The durability hint controls how the transaction’s changes are persisted. See TransactionDurability for available options.

Source

pub fn commit_transaction(&mut self) -> Result<(), StorageError>

Commit the current transaction

Source

pub fn rollback_transaction(&mut self) -> Result<(), StorageError>

Rollback the current transaction

Source

pub fn in_transaction(&self) -> bool

Check if we’re currently in a transaction
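A sketch of the typical transaction lifecycle, assuming a pre-existing users table:

```rust
db.begin_transaction()?;
assert!(db.in_transaction());

db.execute("INSERT INTO users VALUES (1, 'Alice')")?;

// Commit on success, or call rollback_transaction() to discard the changes.
db.commit_transaction()?;
assert!(!db.in_transaction());
```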

Source

pub fn transaction_id(&self) -> Option<u64>

Get current transaction ID (for debugging)

Source

pub fn create_savepoint(&mut self, name: String) -> Result<(), StorageError>

Create a savepoint within the current transaction

Source

pub fn rollback_to_savepoint( &mut self, name: String, ) -> Result<(), StorageError>

Rollback to a named savepoint

Source

pub fn release_savepoint(&mut self, name: String) -> Result<(), StorageError>

Release (destroy) a named savepoint
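Savepoints allow partial rollback within one transaction. A sketch, assuming a pre-existing users table:

```rust
db.begin_transaction()?;
db.execute("INSERT INTO users VALUES (1, 'Alice')")?;

db.create_savepoint("sp1".to_string())?;
db.execute("INSERT INTO users VALUES (2, 'Bob')")?;

// Undo everything after sp1; Alice's insert survives.
db.rollback_to_savepoint("sp1".to_string())?;
db.commit_transaction()?;
```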

Source§

impl Database

Source

pub fn save_binary<P: AsRef<Path>>(&self, path: P) -> Result<(), StorageError>

Save database in efficient binary format

Binary format is faster and more compact than SQL dumps. Use .vbsql extension to indicate binary format.

§Example
let db = Database::new();
db.save_binary("database.vbsql").unwrap();
Source

pub fn load_binary<P: AsRef<Path>>(path: P) -> Result<Self, StorageError>

Load database from binary format

Reads a binary .vbsql file and reconstructs the database.

§Example
let db = Database::load_binary("database.vbsql").unwrap();
Source

pub fn save<P: AsRef<Path>>(&self, path: P) -> Result<(), StorageError>

Save database in default format

Uses compressed format when compression feature is enabled (default), otherwise falls back to uncompressed binary format.

§Example
let db = Database::new();
db.save("database.vbsql").unwrap();
Source

pub fn save_uncompressed<P: AsRef<Path>>( &self, path: P, ) -> Result<(), StorageError>

Save database in uncompressed binary format

Use this if you need uncompressed .vbsql files (e.g., for debugging or when compression overhead is not desired).

§Example
let db = Database::new();
db.save_uncompressed("database.vbsql").unwrap();
Source

pub fn save_compressed<P: AsRef<Path>>( &self, path: P, ) -> Result<(), StorageError>

Save database in compressed binary format (zstd compression)

Creates a .vbsqlz file containing zstd-compressed binary data. Typically 50-70% smaller than uncompressed .vbsql files.

Note: This method requires the compression feature to be enabled.

§Example
let db = Database::new();
db.save_compressed("database.vbsqlz").unwrap();
Source

pub fn load_compressed<P: AsRef<Path>>(path: P) -> Result<Self, StorageError>

Load database from compressed binary format

Reads a zstd-compressed .vbsqlz file and reconstructs the database.

Note: This method requires the compression feature to be enabled.

§Example
let db = Database::load_compressed("database.vbsqlz").unwrap();
Source§

impl Database

Source

pub fn save_json<P: AsRef<Path>>(&self, path: P) -> Result<(), StorageError>

Save database in JSON format with default options

§Example
let db = Database::new();
db.save_json("database.json").unwrap();
Source

pub fn save_json_with_options<P: AsRef<Path>>( &self, path: P, options: JsonOptions, ) -> Result<(), StorageError>

Save database in JSON format with custom options

§Example
let db = Database::new();
let options = JsonOptions { pretty: true, include_metadata: true };
db.save_json_with_options("database.json", options).unwrap();
Source

pub fn load_json<P: AsRef<Path>>(path: P) -> Result<Self, StorageError>

Load database from JSON format

§Example
let db = Database::load_json("database.json").unwrap();
Source§

impl Database

Source

pub fn save_sql_dump<P: AsRef<Path>>(&self, path: P) -> Result<(), StorageError>

Save database state as SQL dump (human-readable, portable)

Generates SQL statements that recreate the database state including:

  • Schemas
  • Tables with column definitions
  • Indexes
  • Data (INSERT statements)
  • Roles and privileges
§Atomicity

This function uses atomic writes to prevent corruption:

  1. Writes to a temporary file in the same directory
  2. Flushes and syncs the buffer to ensure all data is on disk
  3. Atomically renames the temp file to the target path

This ensures the database file is never in a partial/corrupt state, even if the process crashes or is interrupted mid-write.

§Example
let db = Database::new();
db.save_sql_dump("database.sql").unwrap();
Source§

impl Database

Persistence format detection and auto-loading

Source

pub fn load<P: AsRef<Path>>(path: P) -> Result<Self, StorageError>

Load database from file with automatic format detection

Detects format based on:

  1. File extension (.vbsql for binary, .vbsqlz for compressed, .json for JSON, .sql for SQL dump)
  2. Magic number in file header (if extension is ambiguous)
§Example
// Auto-detects format from extension and content
let db = Database::load("database.vbsql").unwrap();
let db2 = Database::load("database.vbsqlz").unwrap();
let db3 = Database::load("database.json").unwrap();
let db4 = Database::load("database.sql").unwrap();

Trait Implementations§

Source§

impl Clone for Database

Source§

fn clone(&self) -> Self

Returns a duplicate of the value. Read more
1.0.0 · Source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
Source§

impl Debug for Database

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl Default for Database

Source§

fn default() -> Self

Returns the “default value” for a type. Read more

Auto Trait Implementations§

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> CloneToUninit for T
where T: Clone,

Source§

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> Same for T

Source§

type Output = T

Should always be Self
Source§

impl<T> ToOwned for T
where T: Clone,

Source§

type Owned = T

The resulting type after obtaining ownership.
Source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
Source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
Source§

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

Source§

fn vzip(self) -> V