Async cold storage engine for historical Ethereum data.
This module provides an abstraction over various backend storage systems for historical blockchain data. Unlike hot storage, which uses transaction semantics for mutable state, cold storage is optimized for:
- Append-only writes with block-ordered data
- Efficient bulk reads by block number or index
- Truncation (reorg handling) that removes data beyond a certain block
- Index maintenance for hash-based lookups
§Architecture
The cold storage engine uses a task-based architecture:
- ColdStorage: trait defining the backend interface
- ColdStorageTask: processes requests from channels
- ColdStorageHandle: provides full read/write access
- ColdStorageReadHandle: provides read-only access
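A minimal sketch of how these pieces fit together, using the in-memory backend; it assumes the cancellation token passed to spawn is what stops the task, and defers read calls and error handling to the Example section below.

use tokio_util::sync::CancellationToken;
use signet_cold::{ColdStorageTask, mem::MemColdBackend};

let cancel = CancellationToken::new();

// Spawning the task yields a ColdStorageHandle with full read/write access.
let handle = ColdStorageTask::spawn(MemColdBackend::new(), cancel.clone());

// Query-only components receive a ColdStorageReadHandle instead.
let reader = handle.reader();

// Cancelling the token stops the task; later write dispatches then fail
// with ColdStorageError::TaskTerminated.
cancel.cancel();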
§Channel Separation
Reads and writes use separate channels:
- Read channel: Shared between ColdStorageHandle and ColdStorageReadHandle. Reads are processed concurrently (up to 64 in flight).
- Write channel: Exclusive to ColdStorageHandle. Writes are processed sequentially to maintain ordering.
This design allows read-heavy workloads to proceed without being blocked by write operations, while ensuring write ordering is preserved.
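For illustration, a sketch of a read-heavy caller issuing several lookups concurrently over the shared read channel; it assumes the read handle exposes the same read methods as the full handle shown in the Example below, `handle` is an existing ColdStorageHandle, and `hash` stands in for a known transaction hash.

let reader = handle.reader();

// Both reads travel over the shared read channel and may be processed
// concurrently by the task (up to 64 in flight).
let (header, tx) = tokio::join!(
    reader.get_header_by_number(100),
    reader.get_tx_by_hash(hash),
);
let header = header?;
let tx = tx?;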
§Consistency Model
Cold storage is eventually consistent with hot storage. Hot storage is always authoritative.
§When Cold May Lag
- Normal operation: Writes are dispatched asynchronously. Cold may be a few blocks behind hot during normal block processing.
- Backpressure: If cold storage cannot keep up, the write channel fills. Dispatch methods return ColdStorageError::Backpressure.
- Task termination: If the cold storage task stops, writes cannot be dispatched. Dispatch methods return ColdStorageError::TaskTerminated.
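A hedged sketch of how a caller might react to these dispatch errors. `dispatch_append` is a stand-in name for whichever write-dispatch method ColdStorageHandle actually exposes, `block_data` is a placeholder, and the error variants are assumed here to be unit variants and the dispatch to be async.

match handle.dispatch_append(block_data).await {
    Ok(_) => {}
    Err(ColdStorageError::Backpressure) => {
        // Cold storage cannot keep up. Hot storage remains authoritative,
        // so the block can be re-sent later via replay_to_cold().
    }
    Err(ColdStorageError::TaskTerminated) => {
        // The task has stopped; restart it and replay the missing range.
    }
    Err(other) => return Err(other),
}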
§When Cold May Have Stale Data
- Failed truncate after reorg: If a truncate dispatch fails, cold may temporarily contain blocks that hot has unwound. This is safe because hot is authoritative, but cold queries may return stale data.
§Recovery Procedures
Use these methods on UnifiedStorage (from signet-storage) to detect and
recover from inconsistencies:
- cold_lag(): Returns Some(first_missing_block) if cold is behind hot. Returns None if synced.
- replay_to_cold(): Re-sends blocks to cold storage. Use after detecting a gap or recovering from task failure.
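A sketch of a recovery check built on these two methods; the method names and the cold_lag() return shape come from the description above, but whether the calls are async and fallible, and whether replay_to_cold() takes a starting block, are assumptions.

// `storage` is a UnifiedStorage from signet-storage.
if let Some(first_missing) = storage.cold_lag().await? {
    // Hot storage is authoritative; re-send everything from the first
    // missing block forward so cold catches back up.
    storage.replay_to_cold(first_missing).await?;
}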
§Example
use tokio_util::sync::CancellationToken;
use signet_cold::{ColdStorageTask, mem::MemColdBackend};
let cancel = CancellationToken::new();
let handle = ColdStorageTask::spawn(MemColdBackend::new(), cancel);
// Use the handle to interact with cold storage
let header = handle.get_header_by_number(100).await?;
// Get a read-only handle for query-only components
let reader = handle.reader();
let tx = reader.get_tx_by_hash(hash).await?;

§Future Work: Streaming Writes
For bulk data loading (e.g., initial sync or historical backfill), a streaming write interface is planned:
/// Streaming write session for bulk data loading.
///
/// This type enables efficient bulk writes by buffering data and
/// batching backend operations. Use for initial sync or historical
/// backfill scenarios.
pub struct ColdStreamingWrite { /* ... */ }
impl ColdStreamingWrite {
/// Create a new streaming write session.
///
/// # Arguments
///
/// * `handle` - The cold storage handle to write through
/// * `buffer_capacity` - Number of blocks to buffer before flushing
pub fn new(handle: &ColdStorageHandle, buffer_capacity: usize) -> Self;
/// Push a block to the write buffer.
///
/// May trigger an automatic flush if the buffer is full.
pub async fn push(&mut self, block: BlockData) -> ColdResult<()>;
/// Flush buffered blocks to storage.
pub async fn flush(&mut self) -> ColdResult<()>;
/// Create a checkpoint at the given block number.
///
/// Flushes the buffer and records that blocks up to this number
/// have been durably written. Useful for resumable sync.
pub async fn checkpoint(&mut self, block: BlockNumber) -> ColdResult<()>;
/// Finish the streaming session.
///
/// Flushes any remaining buffered data.
pub async fn finish(self) -> ColdResult<()>;
}

This is a design sketch; no implementation is provided yet.
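To make the intended use concrete, here is a sketch of how the proposed interface could drive a historical backfill; none of this compiles today, and `blocks` stands in for any ordered source of (block number, BlockData) pairs.

let mut session = ColdStreamingWrite::new(&handle, 256);

for (number, block) in blocks {
    // push() may flush automatically once the 256-block buffer fills.
    session.push(block).await?;

    // Record durable progress periodically so an interrupted backfill
    // can resume from the last checkpoint instead of starting over.
    if number % 1_000 == 0 {
        session.checkpoint(number).await?;
    }
}

// Flush whatever is still buffered and end the session.
session.finish().await?;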
§Feature Flags
- in-memory: Enables the mem module, providing an in-memory ColdStorage backend for testing.
- test-utils: Enables the conformance module with backend conformance tests. Implies in-memory.
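For example, with the in-memory feature enabled on the dependency in Cargo.toml, tests can run against MemColdBackend without any external storage; this is a sketch, not code shipped with the crate.

// Requires `signet-cold` with the `in-memory` feature enabled.
#[tokio::test]
async fn spawns_against_in_memory_backend() {
    use tokio_util::sync::CancellationToken;
    use signet_cold::{ColdStorageTask, mem::MemColdBackend};

    let cancel = CancellationToken::new();
    let handle = ColdStorageTask::spawn(MemColdBackend::new(), cancel);

    // Query-only test components can be handed a read-only handle.
    let _reader = handle.reader();
}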
Re-exports§
pub use task::ColdStorageHandle;
pub use task::ColdStorageReadHandle;
pub use task::ColdStorageTask;
Modules§
- conformance (test-utils) - Conformance tests for ColdStorage backends.
- mem (in-memory) - In-memory cold storage backend for testing.
- task - Cold storage task and handles.
Structs§
- AppendBlockRequest - Block append request data (wrapper struct).
- BlockData - Data for appending a complete block to cold storage.
- ColdReceipt - A receipt with enriched RPC log metadata and block context.
- Confirmed - A value paired with its block confirmation metadata.
- Filter - Filter for logs.
- Recovered - Signed object with recovered signer.
- RpcLog - Ethereum Log emitted by a transaction.
- StreamParams - Parameters for a log-streaming request.
Enums§
- ColdReadRequest - Read requests for cold storage.
- ColdStorageError - Error type for cold storage operations.
- ColdWriteRequest - Write requests for cold storage.
- HeaderSpecifier - Specifier for header lookups.
- ReceiptSpecifier - Specifier for receipt lookups.
- SignetEventsSpecifier - Specifier for SignetEvents lookups.
- TransactionSpecifier - Specifier for transaction lookups.
- ZenithHeaderSpecifier - Specifier for ZenithHeader lookups.
Traits§
- ColdStorage - Unified cold storage backend trait.
Functions§
- produce_log_stream_default - Log-streaming implementation for backends without snapshot semantics.
Type Aliases§
- ColdResult - Result type alias for cold storage operations.
- LogStream - A stream of log results backed by a bounded channel.
- Responder - Response sender type alias that propagates Result types.