Module cache

In-memory caching for decompressed blocks and deserialized index pages.

Caching is critical for performance because decompression is expensive. The LRU cache stores recently accessed blocks so that repeated reads skip decompression entirely.

This module implements the caching layer that sits between the storage backend and the read API, dramatically reducing decompression overhead and I/O for repeated access patterns.
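
As an illustration, the read path amounts to a get-or-decompress lookup. The sketch below is hypothetical: `Backend`, `decompress`, and the map-based `BlockCache` are illustrative stand-ins for this module's real types (in particular, a plain `HashMap` stands in for the real LRU structure, which also evicts).

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Stand-in for the storage backend (FileBackend, S3, HTTP, ...).
trait Backend {
    fn read_compressed(&self, block_id: u64) -> std::io::Result<Vec<u8>>;
}

// Stand-in for the real decompressor; here it is the identity.
fn decompress(compressed: &[u8]) -> std::io::Result<Vec<u8>> {
    Ok(compressed.to_vec())
}

struct BlockCache {
    blocks: Mutex<HashMap<u64, Arc<[u8]>>>,
}

impl BlockCache {
    fn read_block(&self, backend: &dyn Backend, block_id: u64) -> std::io::Result<Arc<[u8]>> {
        // Fast path: the block was decompressed recently and is still cached.
        if let Some(data) = self.blocks.lock().unwrap().get(&block_id) {
            return Ok(data.clone());
        }
        // Slow path: hit the backend and pay the decompression cost once.
        let compressed = backend.read_compressed(block_id)?;
        let data: Arc<[u8]> = decompress(&compressed)?.into();
        self.blocks.lock().unwrap().insert(block_id, data.clone());
        Ok(data)
    }
}
```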

§Architecture

The cache system consists of three components:

┌──────────────┐
│  File        │ Read API
└──────┬───────┘
       │
┌──────┴────────────────────┐
│  Cache Layer              │
│  ┌────────────────────┐   │
│  │ LRU Block Cache    │   │ Sharded, thread-safe
│  ├────────────────────┤   │
│  │ Prefetcher         │   │ Sequential detection
│  ├────────────────────┤   │
│  │ Eviction Policy    │   │ LRU or none
│  └────────────────────┘   │
└───────────┬───────────────┘
            │
┌───────────┴──────────┐
│  Storage Backend     │ FileBackend, S3, HTTP, etc.
└──────────────────────┘
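
The "sharded, thread-safe" block cache in the diagram could look roughly like the sketch below: the block id hashes to one of a fixed number of independently locked shards, so concurrent readers rarely contend on the same lock. All names are illustrative; the real implementation lives in lru.

```rust
use std::collections::hash_map::RandomState;
use std::collections::HashMap;
use std::hash::{BuildHasher, Hash, Hasher};
use std::sync::{Arc, Mutex};

const NUM_SHARDS: usize = 16; // illustrative shard count

struct ShardedCache {
    hasher: RandomState,
    // A HashMap stands in for each real LRU shard.
    shards: Vec<Mutex<HashMap<u64, Arc<[u8]>>>>,
}

impl ShardedCache {
    fn new() -> Self {
        Self {
            hasher: RandomState::new(),
            shards: (0..NUM_SHARDS).map(|_| Mutex::new(HashMap::new())).collect(),
        }
    }

    // Hash the block id to pick a shard; only that shard's lock is taken.
    fn shard(&self, block_id: u64) -> &Mutex<HashMap<u64, Arc<[u8]>>> {
        let mut h = self.hasher.build_hasher();
        block_id.hash(&mut h);
        &self.shards[(h.finish() as usize) % NUM_SHARDS]
    }

    fn get(&self, block_id: u64) -> Option<Arc<[u8]>> {
        self.shard(block_id).lock().unwrap().get(&block_id).cloned()
    }

    fn insert(&self, block_id: u64, data: Arc<[u8]>) {
        self.shard(block_id).lock().unwrap().insert(block_id, data);
    }
}
```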

§Performance Impact

| Metric                     | No Cache | With Cache (512 MB) |
|----------------------------|----------|---------------------|
| Sequential read (1st pass) | 600 MB/s | 600 MB/s            |
| Sequential read (2nd pass) | 600 MB/s | 2500 MB/s           |
| Random IOPS (warm)         | 2000     | 15000               |
| Decompression CPU          | 100%     | 5% (cached)         |

§Cache Configuration

Default cache size is 512 MB (configurable via Config):

  • Sufficient for ~8K blocks at 64 KB each
  • Covers a typical VM working set (database, OS cache)
  • Can be tuned based on available RAM, as in the sketch below
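
To make the sizing arithmetic concrete, here is a hedged sketch of cache configuration; `Config`, `with_cache_size`, and `max_cached_blocks` are assumed names for illustration, not necessarily this crate's exact API.

```rust
#[derive(Clone, Copy)]
struct Config {
    cache_size_bytes: usize,
    block_size: usize,
}

impl Config {
    fn default() -> Self {
        // 512 MB cache and 64 KB blocks, matching the figures above.
        Self { cache_size_bytes: 512 << 20, block_size: 64 << 10 }
    }

    fn with_cache_size(mut self, bytes: usize) -> Self {
        self.cache_size_bytes = bytes;
        self
    }

    // Roughly how many decompressed blocks fit in the cache.
    fn max_cached_blocks(&self) -> usize {
        self.cache_size_bytes / self.block_size
    }
}

fn main() {
    // Default: 512 MB / 64 KB = 8192 blocks, the ~8K quoted above.
    assert_eq!(Config::default().max_cached_blocks(), 8_192);

    // Tuned up to 1 GiB on a machine with spare RAM.
    let big = Config::default().with_cache_size(1 << 30);
    assert_eq!(big.max_cached_blocks(), 16_384);
}
```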

§Submodules

  • lru: Sharded LRU cache implementation
  • prefetch: Sequential pattern detection and read-ahead (sketched below)
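
As a rough illustration of the prefetch idea, sequential detection can be as simple as tracking how many consecutive block ids have been requested and scheduling read-ahead once the run passes a threshold. The threshold and window sizes below are assumptions for the sketch, not crate constants.

```rust
struct SequentialDetector {
    last_block: Option<u64>,
    run_length: u32,
}

impl SequentialDetector {
    fn new() -> Self {
        Self { last_block: None, run_length: 0 }
    }

    /// Records an access and returns a range of blocks worth prefetching, if any.
    fn on_access(&mut self, block_id: u64) -> Option<std::ops::Range<u64>> {
        // Extend the run if this request is exactly one past the previous one.
        self.run_length = match self.last_block {
            Some(prev) if block_id == prev + 1 => self.run_length + 1,
            _ => 0,
        };
        self.last_block = Some(block_id);
        // After a few consecutive hits, read ahead a fixed window of blocks.
        const THRESHOLD: u32 = 3; // assumed value
        const WINDOW: u64 = 8;    // assumed value
        (self.run_length >= THRESHOLD).then(|| block_id + 1..block_id + 1 + WINDOW)
    }
}
```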

Modules§

lru
LRU cache implementation for decompressed data and index pages.
prefetch
Background and anticipatory prefetch logic.