# pgkv
A high-performance, production-grade key-value store backed by PostgreSQL unlogged tables.
## Features
- High Performance: Uses PostgreSQL UNLOGGED tables for maximum write throughput (2-3x faster than regular tables)
- Runtime Agnostic: Synchronous API works with any async runtime or none at all
- Minimal Dependencies: Only depends on `postgres` and `thiserror`; no async runtime is required for your code
- Rich API: Comprehensive operations including batch, atomic, TTL, and prefix scanning
- Type Safe: Strong typing with optional serde support for automatic serialization
- Production Ready: Comprehensive error handling, connection pooling support, and transaction safety
- Configurable TTL Cleanup: Choose automatic, manual, or disabled expiration handling
- Zero Unsafe Code: 100% safe Rust
## Quick Start
Add to your `Cargo.toml`:

```toml
[dependencies]
pgkv = "0.1"

# Optional: Enable serde support for automatic serialization
# pgkv = { version = "0.1", features = ["serde"] }
```
Basic usage:

```rust
use pgkv::Store;

let store = Store::connect("postgresql://localhost/mydb")?;
store.set("greeting", b"hello")?;
let value = store.get("greeting")?; // Some(b"hello".to_vec())
```
## Why Unlogged Tables?
PostgreSQL UNLOGGED tables provide significantly higher write performance by skipping write-ahead logging (WAL). This makes them ideal for:
- Caching: Data that can be regenerated if lost
- Session storage: Ephemeral user session data
- Rate limiting: Counters and temporary state
- Job queues: Transient task data
- Feature flags: Temporary configuration
Trade-off: Data in UNLOGGED tables is not crash-safe and will be truncated after an unclean shutdown. Use regular tables (`TableType::Regular`) if you need durability.
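The trade-off is also reversible at the SQL level: a store created as UNLOGGED can be promoted later with plain PostgreSQL DDL, run outside pgkv (`kv_store` is the default table name from the schema below):

```sql
-- Promote to a regular, WAL-logged table; this rewrites the table and
-- takes an ACCESS EXCLUSIVE lock, so do it during a quiet period
ALTER TABLE kv_store SET LOGGED;

-- Or go the other way if durability turns out to be unnecessary
ALTER TABLE kv_store SET UNLOGGED;
```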
## API Overview
### Basic Operations
```rust
use pgkv::Store;

let store = Store::connect("postgresql://localhost/mydb")?;

// Set/Get/Delete
store.set("key", b"value")?;
store.set("key", "value")?; // Also accepts &str
let value = store.get("key")?; // Returns Option<Vec<u8>>
let string = store.get_string("key")?; // Returns Option<String>
store.delete("key")?;

// Check existence
if store.exists("key")? {
    println!("key is present");
}

// Set only if key doesn't exist
if store.set_nx("key", b"value")? {
    println!("key was created");
}
```
### TTL (Time-To-Live) Support
```rust
use std::time::Duration;

// Set with expiration
store.set_ex("session:abc", b"data", Duration::from_secs(3600))?;

// Update TTL on existing key
store.expire("session:abc", Duration::from_secs(7200))?;

// Check remaining TTL
if let Some(remaining) = store.ttl("session:abc")? {
    println!("expires in {remaining:?}");
}

// Remove expiration (make persistent)
store.persist("session:abc")?;

// Cleanup all expired keys
let cleaned = store.cleanup_expired()?;
```
### TTL Cleanup Strategies
You can configure how expired keys are handled:
```rust
use pgkv::{Config, TtlCleanupStrategy};

// Automatic cleanup on read (default)
// Expired keys are deleted when accessed
let config = Config::new()
    .ttl_cleanup_strategy(TtlCleanupStrategy::Automatic);

// Manual cleanup - you control when expired keys are deleted
// Call store.cleanup_expired() on your own schedule (e.g., via cron)
let config = Config::new()
    .ttl_cleanup_strategy(TtlCleanupStrategy::Manual);

// Disabled - TTL is ignored entirely (maximum read performance)
// Expired keys are returned as if still valid
let config = Config::new()
    .ttl_cleanup_strategy(TtlCleanupStrategy::Disabled);
```
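With the manual strategy, a small background sweeper is often all you need. A minimal sketch, assuming a dedicated connection and a fixed interval (both illustrative):

```rust
use std::{thread, time::Duration};

use pgkv::Store;

thread::spawn(|| {
    // The sweeper owns its own Store: Store is Send but not Sync,
    // so each thread gets a dedicated connection
    let store = Store::connect("postgresql://localhost/mydb").expect("connect");
    loop {
        match store.cleanup_expired() {
            Ok(n) => println!("purged {n} expired keys"),
            Err(e) => eprintln!("cleanup failed: {e}"),
        }
        thread::sleep(Duration::from_secs(60));
    }
});
```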
### Batch Operations
```rust
// Set multiple keys atomically
store.set_many(&[("key1", b"a".as_slice()), ("key2", b"b".as_slice())])?;

// Get multiple keys
let results = store.get_many(&["key1", "key2"])?;
for kv in results {
    println!("{kv:?}");
}

// Delete multiple keys
let deleted = store.delete_many(&["key1", "key2"])?;
```
### Atomic Operations
```rust
use pgkv::CasResult;

// Atomic increment/decrement
let count = store.increment("counter", 1)?;
let count = store.decrement("counter", 1)?;

// Compare-and-swap (variant names shown are illustrative)
match store.compare_and_swap("key", b"expected", b"new")? {
    CasResult::Swapped => println!("value updated"),
    CasResult::Mismatch => println!("value changed concurrently"),
}

// Get and set atomically
let old_value = store.get_and_set("key", b"new")?;

// Get and delete atomically
let value = store.get_and_delete("key")?;
```
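Compare-and-swap pairs naturally with a retry loop for optimistic read-modify-write. A sketch reusing the illustrative `CasResult` variants from above; the `pgkv::Error` type name and CAS signature are assumptions here:

```rust
use pgkv::{CasResult, Store};

// Append a byte to a value without losing concurrent updates:
// re-read and retry whenever another writer got there first
fn append_bang(store: &Store, key: &str) -> Result<(), pgkv::Error> {
    loop {
        let current = store.get(key)?.unwrap_or_default();
        let mut updated = current.clone();
        updated.push(b'!');
        match store.compare_and_swap(key, &current, &updated)? {
            CasResult::Swapped => return Ok(()),
            CasResult::Mismatch => continue, // lost the race, retry
        }
    }
}
```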
### Prefix Scanning
```rust
use pgkv::ScanOptions;

// List keys with prefix
let keys = store.keys("user:")?;

// Scan key-value pairs with pagination
// (limit/offset builder methods are illustrative)
let items = store.scan("user:", ScanOptions::new().limit(100).offset(0))?;

// Count keys matching pattern
let count = store.count("user:")?;

// Delete all keys with prefix
let deleted = store.delete_prefix("user:")?;
```
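For large keyspaces, `scan` can be driven in fixed-size pages until it returns an empty batch. A sketch, assuming the illustrative `limit`/`offset` builders shown above:

```rust
let page_size = 100;
let mut offset = 0;
loop {
    let items = store.scan(
        "user:",
        ScanOptions::new().limit(page_size).offset(offset),
    )?;
    if items.is_empty() {
        break; // no more pages
    }
    offset += items.len();
    for kv in items {
        println!("{kv:?}");
    }
}
```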
### Transactions

```rust
// All operations inside the closure commit or roll back together
store.transaction(|tx| {
    tx.set("balance:alice", b"90")?;
    tx.set("balance:bob", b"110")?;
    Ok(())
})?;
```
## Configuration

```rust
use pgkv::{Config, Store, TableType, TtlCleanupStrategy};

let config = Config::new()
    .table_name("my_cache")              // Custom table name
    .table_type(TableType::Unlogged)     // Or TableType::Regular for durability
    .auto_create_table(true)             // Auto-create table on connect
    .ttl_cleanup_strategy(TtlCleanupStrategy::Automatic) // TTL handling strategy
    .max_key_length(1024)                // Max key size in bytes
    .max_value_size(100 * 1024 * 1024)   // Max value size (100MB)
    .schema("public")                    // Custom schema (default: public)
    .application_name("my-app");         // Shows in pg_stat_activity

let store = Store::with_config("postgresql://localhost/mydb", config)?;
```
## Typed Store (with serde feature)

```rust
use pgkv::{Store, TypedStore};
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct User {
    name: String,
    email: String,
}

let store = Store::connect("postgresql://localhost/mydb")?;
let users: TypedStore<User> = TypedStore::new(&store);

// Automatically serializes to JSON
users.set("user:1", &User {
    name: "Ada".into(),
    email: "ada@example.com".into(),
})?;

// Automatically deserializes
let user: Option<User> = users.get("user:1")?;
```
## Database Schema

The library creates the following table structure:

```sql
CREATE UNLOGGED TABLE IF NOT EXISTS kv_store (
    key        TEXT PRIMARY KEY,
    value      BYTEA NOT NULL,
    expires_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Index for efficient expiration cleanup
CREATE INDEX IF NOT EXISTS kv_store_expires_at_idx
    ON kv_store (expires_at)
    WHERE expires_at IS NOT NULL;
```
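Because everything lives in an ordinary table, the store can be inspected with plain SQL, which is handy for debugging. For example, to peek at live (non-expired) entries holding UTF-8 payloads:

```sql
SELECT key,
       convert_from(value, 'UTF8') AS value,  -- value is BYTEA
       expires_at
FROM kv_store
WHERE expires_at IS NULL OR expires_at > NOW()
ORDER BY updated_at DESC
LIMIT 20;
```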
## Thread Safety
`Store` is `Send` but not `Sync` due to the use of `RefCell` for interior mutability. For multi-threaded access:
- Connection pooling (recommended): Use a pool like `r2d2` or `deadpool` with separate `Store` instances per thread
- Mutex wrapping: Wrap `Store` in `Mutex<Store>` for shared access
```rust
use std::sync::Mutex;

let store = Mutex::new(Store::connect("postgresql://localhost/mydb")?);

// In each thread:
let guard = store.lock().unwrap();
guard.set("key", b"value")?;
```
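The pooling route avoids the lock entirely: give each worker thread its own `Store`. A minimal sketch without a pool library (connection string and the `pgkv::Error` type name are assumed):

```rust
use std::thread;

let handles: Vec<_> = (0..4)
    .map(|i| {
        thread::spawn(move || -> Result<(), pgkv::Error> {
            // One connection per thread; no shared state to lock
            let store = Store::connect("postgresql://localhost/mydb")?;
            store.set(&format!("worker:{i}:status"), b"ready")?;
            Ok(())
        })
    })
    .collect();

for handle in handles {
    handle.join().unwrap().unwrap();
}
```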
## Benchmarks
Run benchmarks comparing PostgreSQL UNLOGGED, PostgreSQL Regular, and Redis:
```bash
# PostgreSQL only
DATABASE_URL=postgresql://user@localhost/postgres cargo bench

# With Redis comparison
DATABASE_URL=postgresql://user@localhost/postgres REDIS_URL=redis://localhost:6379 cargo bench
```
Benchmark groups:

- `set` - Single key writes with various value sizes (64B - 4KB)
- `get` - Single key reads (existing and missing keys)
- `set_many` / `get_many` - Batch operations (10 - 500 keys)
- `delete` - Single key deletes
- `exists` - Key existence checks
- `increment` - Atomic counter increments
- `set_with_ttl` - Writes with TTL
- `scan` - Prefix scanning with pagination
- `mixed_workload` - 80% reads / 20% writes
### Benchmark Results
Results from running on localhost (Apple M1, PostgreSQL 16, Redis 7):
#### Single Key Operations
| Operation | PG UNLOGGED | PG Regular | Redis |
|---|---|---|---|
| SET (256B) | 112 µs | 245 µs | 35 µs |
| GET | 86 µs | 89 µs | 28 µs |
| DELETE | 91 µs | 198 µs | 26 µs |
| EXISTS | 94 µs | 96 µs | 25 µs |
| INCREMENT | 109 µs | 238 µs | 29 µs |
#### Batch Operations
| Operation | PG UNLOGGED | PG Regular | Redis |
|---|---|---|---|
| SET_MANY (10) | 1.0 ms | 2.2 ms | 0.12 ms |
| SET_MANY (100) | 9.0 ms | 19.8 ms | 0.95 ms |
| GET_MANY (10) | 110 µs | 115 µs | 85 µs |
| GET_MANY (100) | 202 µs | 215 µs | 145 µs |
### Key Insights
- UNLOGGED vs Regular: UNLOGGED tables are ~2x faster for writes due to skipping WAL
- Read Performance: Similar between UNLOGGED and Regular (both use the same query path)
- vs Redis: Redis is 3-4x faster (in-memory vs disk), but pgkv avoids running an extra service
- Batch Efficiency: `set_many` is more efficient than individual sets due to transaction batching
Results vary by hardware, network latency, and PostgreSQL configuration. Run benchmarks on your system for accurate numbers.
## Comparison with Alternatives
| Feature | pgkv | Redis | memcached |
|---|---|---|---|
| ACID Transactions | Yes | Limited | No |
| SQL Queries | Yes (via raw SQL) | No | No |
| TTL Support | Yes | Yes | Yes |
| Persistence | Optional | Optional | No |
| Clustering | Via PG | Yes | Yes |
| External Service | Uses existing PG | Yes | Yes |
| Memory Limit | Disk-based | Memory | Memory |
**When to use pgkv:**
- You already have PostgreSQL and want to avoid adding Redis/memcached
- You need ACID guarantees for some operations
- Your cache can fit on disk (not purely in-memory)
- You want SQL-level access to cached data for debugging
**When to use Redis/memcached:**
- You need sub-millisecond latency
- Your workload is purely in-memory
- You need built-in clustering
- You need advanced data structures (sorted sets, streams, etc.)
## Error Handling

```rust
use pgkv::{Error, Store};

let store = Store::connect("postgresql://localhost/mydb")?;

match store.get("key") {
    Ok(Some(value)) => println!("found {} bytes", value.len()),
    Ok(None) => println!("key not found"),
    Err(e) => eprintln!("storage error: {e}"),
}

// Error predicates
if let Err(e) = store.get_or_err("missing") {
    // predicate method name shown is illustrative
    assert!(e.is_not_found());
}
```
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
Licensed under the MIT License.