URI Register
Beta Software: This library is in active development and the API may change. While it's being used in production environments, you should pin to a specific version and test thoroughly before upgrading.
A caching PostgreSQL-backed URI register service for assigning unique integer IDs to URIs. Perfect for string interning, deduplication, and systems that need consistent global identifier mappings.
Note: The Rust library requires an async runtime (tokio). Python bindings support both synchronous and asynchronous usage.
Overview
The URI Register provides a simple, fast way to assign unique integer IDs to URI strings. Once registered, a URI always returns the same ID, making it ideal for string interning and deduplication in distributed systems.
Features
- Simple API: Just two methods - `register_uri()` and `register_uri_batch()`
- Async + Sync: Built on tokio for high concurrency, with sync wrappers for Python
- Batch optimised: Process thousands of URIs in a single database round-trip
- Configurable caching: W-TinyLFU (Moka) or LRU caching for frequently accessed URIs
- Order preservation: Batch operations maintain strict order correspondence
- PostgreSQL backend: Durable, scalable, with connection pooling
- Automatic retry logic: Configurable exponential backoff for transient database errors
- Thread-safe: Designed for concurrent access from multiple threads/processes
Use Cases
- String interning systems: Reduce memory footprint by storing strings once and referencing by ID
- URL deduplication: Assign unique IDs to URLs across distributed crawlers
- Global identifier systems: Centralised ID assignment for URIs/strings in microservices
- Data warehousing: Efficient storage of repeated string values
- Distributed caching: Consistent ID assignment across cache nodes
Installation
Rust
Add to your Cargo.toml:
```toml
[dependencies]
uri-register = "0.2.0"
```
Or use as a git dependency:
```toml
[dependencies]
uri-register = { git = "https://github.com/telicent-oss/uri-register" }
```
Python
Install from TestPyPI (during beta):
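A typical TestPyPI install looks like the following (the package name is assumed from the repository name; check the project's published instructions for the exact command):

```shell
# Pull the package from TestPyPI, falling back to PyPI for dependencies
pip install --index-url https://test.pypi.org/simple/ \
            --extra-index-url https://pypi.org/simple/ \
            uri-register
```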
Requirements: Python 3.8+
Note: The package is currently published to TestPyPI for testing. Once stable, it will be available on the main PyPI repository.
Setup
1. Database Initialisation
Before using the URI Register service, you must initialise the PostgreSQL schema.
Run the schema creation script:
Or execute the SQL directly:
```sql
CREATE TABLE uri_register (
    id BIGSERIAL PRIMARY KEY,
    uri TEXT NOT NULL,
    uri_hash UUID GENERATED ALWAYS AS (md5(uri)::uuid) STORED UNIQUE
);
```
2. Database Configuration
The service requires a PostgreSQL connection string. Set it as an environment variable or pass it directly:
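For example (the `DATABASE_URL` variable name and connection string are illustrative):

```shell
# Standard PostgreSQL connection string format
export DATABASE_URL="postgres://user:password@localhost:5432/mydb"
```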
Usage
Rust Example
A minimal sketch (the constructor arguments and connection string are illustrative; see the crate docs for exact signatures):

```rust
use uri_register::{PostgresUriRegister, UriService};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to PostgreSQL
    let register = PostgresUriRegister::new("postgres://user:password@localhost/mydb").await?;

    // Registering the same URI twice returns the same ID
    let id = register.register_uri("http://example.com/my-uri").await?;
    assert_eq!(id, register.register_uri("http://example.com/my-uri").await?);

    // Batch registration: ids[i] corresponds to uris[i]
    let uris = vec![
        "http://example.com/a".to_string(),
        "http://example.com/b".to_string(),
    ];
    let ids = register.register_uri_batch(&uris).await?;
    println!("registered {} URIs", ids.len());
    Ok(())
}
```
Synchronous Rust API
For synchronous Rust applications that cannot use async/await, use SyncPostgresUriRegister:
```rust
use uri_register::SyncPostgresUriRegister;

// Sketch - the constructor shape is illustrative
let register = SyncPostgresUriRegister::new("postgres://user:password@localhost/mydb")?;
let id = register.register_uri("http://example.com/my-uri")?;
```
The synchronous API wraps the async implementation with a Tokio runtime internally. All methods have identical semantics to their async counterparts but block the calling thread until completion.
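The sync-over-async pattern described above can be sketched in a few lines, here with Python's asyncio standing in for Tokio (all class names are invented for illustration):

```python
import asyncio

class AsyncRegister:
    """Toy async implementation: assigns each new URI the next integer ID."""
    def __init__(self):
        self._ids = {}

    async def register_uri(self, uri: str) -> int:
        return self._ids.setdefault(uri, len(self._ids) + 1)

class SyncRegister:
    """Blocking facade that owns an event loop and drives the async impl."""
    def __init__(self):
        self._inner = AsyncRegister()
        self._loop = asyncio.new_event_loop()

    def register_uri(self, uri: str) -> int:
        # Block the calling thread until the async call completes
        return self._loop.run_until_complete(self._inner.register_uri(uri))

reg = SyncRegister()
assert reg.register_uri("http://example.com/a") == 1
assert reg.register_uri("http://example.com/a") == 1  # stable ID
```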
Python Example (Synchronous)
```python
from uri_register import PostgresUriRegister  # binding names are illustrative

# Connect to PostgreSQL
register = PostgresUriRegister("postgres://user:password@localhost/mydb")

# Register a single URI
id1 = register.register_uri("http://example.com/my-uri")

# Register the same URI again - returns the same ID
id2 = register.register_uri("http://example.com/my-uri")
assert id1 == id2

# Register multiple URIs in batch (much faster!)
uris = ["http://example.com/a", "http://example.com/b"]
ids = register.register_uri_batch(uris)
# IDs maintain order: ids[i] corresponds to uris[i]

# Get statistics
stats = register.stats()
```
Python Example (Asynchronous)
```python
import asyncio
from uri_register import AsyncPostgresUriRegister  # binding names are illustrative

async def main():
    # Connect to PostgreSQL
    register = await AsyncPostgresUriRegister.connect("postgres://user:password@localhost/mydb")

    # Register a single URI
    uri_id = await register.register_uri("http://example.com/my-uri")

    # Register multiple URIs in batch (much faster!)
    uris = ["http://example.com/a", "http://example.com/b"]
    ids = await register.register_uri_batch(uris)

    # Get statistics
    stats = await register.stats()

asyncio.run(main())
```
API Reference
The UriService trait provides two methods:
register_uri(uri: &str) -> u64
Register a single URI and return its ID.
- If the URI exists, returns the existing ID
- If the URI is new, creates a new ID and returns it
- Uses configurable cache (Moka/LRU) for fast repeated lookups
```rust
let id = register.register_uri("http://example.com/my-uri").await?;
```
register_uri_batch(uris: &[String]) -> Vec<u64>
Register multiple URIs in batch and return their IDs.
- Order preserved: `ids[i]` corresponds to `uris[i]`
- Much faster than calling `register_uri()` multiple times
- Handles duplicate URIs in input correctly
- Cache-optimised: only queries database for cache misses
```rust
let uris = vec![
    "http://example.com/a".to_string(),
    "http://example.com/b".to_string(),
];
let ids = register.register_uri_batch(&uris).await?;

// Access by index: ids[i] corresponds to uris[i]
assert_eq!(ids.len(), uris.len());
```
Statistics and Observability
The register exposes comprehensive metrics suitable for OpenTelemetry and Prometheus:
```rust
let stats = register.stats().await?;

// Field names below are illustrative - consult the generated docs for the
// exact stats struct

// Database metrics
println!("total URIs: {}", stats.total_uris);
println!("db queries: {}", stats.db_queries);

// Cache performance metrics
println!("cache hits: {}", stats.cache_hits);
println!("cache misses: {}", stats.cache_misses);
println!("cache hit rate: {:.2}", stats.cache_hit_rate);
println!("cache entries: {}", stats.cache_entries);

// Connection pool metrics
println!("pool size: {}", stats.pool_size);
println!("idle connections: {}", stats.pool_idle);
println!("pending waiters: {}", stats.pool_waiters);
```
Integration with OpenTelemetry
The statistics are designed for easy integration with observability systems:
```rust
use opentelemetry::global;

// Gauge names and stats fields are illustrative; the gauge-builder API also
// varies between opentelemetry crate versions
let meter = global::meter("uri_register");
let stats = register.stats().await?;

// Report as gauges
meter.u64_gauge("uri_register.total_uris").build().record(stats.total_uris, &[]);
meter.u64_gauge("uri_register.cache_hits").build().record(stats.cache_hits, &[]);
meter.u64_gauge("uri_register.cache_misses").build().record(stats.cache_misses, &[]);
meter.f64_gauge("uri_register.cache_hit_rate").build().record(stats.cache_hit_rate, &[]);
meter.u64_gauge("uri_register.pool_size").build().record(stats.pool_size, &[]);
```
All metrics are cumulative since process start and safe for concurrent access.
Cache Strategies
The URI register supports two caching strategies:
Moka (W-TinyLFU) - Default
Recommended for most workloads. W-TinyLFU (Window Tiny Least Frequently Used) combines recency and frequency tracking to provide better cache hit rates than plain LRU, especially for workloads with mixed hot/cold data.
Moka is the default cache strategy, so you don't need to specify it:
```rust
let register = PostgresUriRegister::new(connection_string).await?;
```
To explicitly specify Moka:
```rust
use uri_register::{CacheStrategy, PostgresUriRegister};

// Additional constructor parameters are illustrative
let register = PostgresUriRegister::new_with_cache_strategy(connection_string, CacheStrategy::Moka).await?;
```
Python:
```python
register = PostgresUriRegister(connection_string, cache_strategy="moka")  # parameter name is illustrative
```
LRU (Least Recently Used)
Simple eviction based on recency of access. Use this if you have specific requirements or want more predictable eviction behaviour.
```rust
use uri_register::{CacheStrategy, PostgresUriRegister};

let register = PostgresUriRegister::new_with_cache_strategy(connection_string, CacheStrategy::Lru).await?;
```
Python:
```python
register = PostgresUriRegister(connection_string, cache_strategy="lru")  # parameter name is illustrative
```
Performance Comparison:
For most real-world workloads, Moka (W-TinyLFU) provides 10-30% better cache hit rates compared to LRU, especially when:
- Access patterns have varying frequency (some URIs accessed much more than others)
- There are periodic "scans" or one-time accesses that would pollute an LRU cache
- Working set size is close to cache capacity
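The "scan pollution" point can be seen with a toy LRU: a one-time pass over cold keys evicts the hot working set, which is exactly what W-TinyLFU's frequency filter is designed to resist. A minimal sketch:

```python
from collections import OrderedDict

class LRU:
    """Minimal LRU cache: most recently used entries survive, oldest are evicted."""
    def __init__(self, cap):
        self.cap, self.d = cap, OrderedDict()

    def get(self, k):
        if k in self.d:
            self.d.move_to_end(k)  # mark as recently used
            return self.d[k]
        return None

    def put(self, k, v):
        self.d[k] = v
        self.d.move_to_end(k)
        if len(self.d) > self.cap:
            self.d.popitem(last=False)  # evict least recently used

cache = LRU(cap=3)
for k in ("hot1", "hot2", "hot3"):
    cache.put(k, k)
for k in ("cold1", "cold2", "cold3"):  # one-time scan of cold keys
    cache.put(k, k)

assert cache.get("hot1") is None  # the scan evicted the hot working set
```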
Logging
The library uses the tracing crate for structured logging. Logs include connection info, cache hit/miss statistics, and batch sizes.
Rust
Use tracing-subscriber to see logs:
```rust
use tracing_subscriber::EnvFilter;

// Initialize logging (typically in main())
tracing_subscriber::fmt()
    .with_env_filter(EnvFilter::from_default_env())
    .init();

// Set the RUST_LOG environment variable to control log levels:
// RUST_LOG=uri_register=debug - see debug logs from uri-register
// RUST_LOG=uri_register=trace - see trace logs (cache hits/misses)
```
Python
Logs are automatically bridged to Python's logging module:
```python
import logging

# Configure Python logging as usual
logging.basicConfig(level=logging.INFO)

# Logs from uri-register will appear with logger name 'uri_register'
# You can also configure just the uri_register logger:
logging.getLogger("uri_register").setLevel(logging.DEBUG)
```
Log Levels:
- INFO: Connection events, configuration
- DEBUG: Cache statistics, batch sizes, database queries
- TRACE: Individual cache hits/misses (verbose)
Performance
Logged Tables (Default)
With default logged tables on typical hardware:
- Single registration: ~500-1K URIs/sec (with cache: 100K+/sec)
- Batch registration: ~10K-50K URIs/sec
- Batch lookup (cached): ~1M+ URIs/sec (no DB round-trip)
- Batch lookup (uncached): ~100K-200K URIs/sec
Unlogged Tables (Optional)
For 2-3x faster writes at the cost of durability:
```sql
ALTER TABLE uri_register SET UNLOGGED;
```
Performance with unlogged tables:
- Batch registration: ~30K-150K URIs/sec
WARNING: Unlogged tables lose all data if PostgreSQL crashes. Only use this if you can rebuild the register from source data.
To revert back to logged mode:
```sql
ALTER TABLE uri_register SET LOGGED;
```
Performance Tips
- Always use batch operations when processing multiple URIs
- Configure connection pooling appropriately for your workload (typical: 10-50 connections)
- Tune cache size based on your working set size and available memory (typical: 10,000-100,000 entries)
- Batch size: Optimal batch size is typically 1,000-10,000 URIs per operation
- Hash-based indexing: The compact UUID index on `uri_hash` scales much better than indexing full URIs
- Consider unlogged tables for initial bulk loading, then switch to logged
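Following the batch-size tip above, a large URI list can be split into register-sized chunks before calling `register_uri_batch()`; the chunk size of 10,000 below is one point in the suggested range:

```python
def chunks(items, size):
    """Yield consecutive slices of at most `size` items, preserving order."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Illustrative workload: 25,000 URIs split into batches of 10,000
uris = [f"http://example.com/item/{n}" for n in range(25_000)]
batches = list(chunks(uris, 10_000))

assert [len(b) for b in batches] == [10_000, 10_000, 5_000]
# Each batch would then be passed to register_uri_batch() in turn
```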
Architecture
```text
        Application
             ↓
  UriService trait (2 methods)
             ↓
   PostgresUriRegister impl
      ↓                ↓
Cache (Moka/LRU)   Connection Pool (20 connections)
      ↓                ↓
      └───────────────→ PostgreSQL Database
```
Schema Details
The register uses a three-column table with hash-based indexing:
- id: BIGSERIAL primary key (auto-incrementing u64)
- uri: TEXT storing the full URI (not indexed)
- uri_hash: UUID generated from `md5(uri)::uuid` with a UNIQUE constraint (indexed)
Why Hash-Based Indexing?
In environments with enormous numbers of URIs, maintaining a B-tree index on the full URI text becomes prohibitively expensive - both in storage and maintenance overhead. By hashing the URI to a compact 16-byte UUID, we get:
- Compact index: 16 bytes per entry vs potentially hundreds of bytes for full URIs
- Fast lookups: B-tree operations on fixed-size UUIDs are very efficient
- Automatic computation: PostgreSQL computes the hash via
GENERATED ALWAYS AS
The hash collision probability with MD5 (128-bit) is vanishingly small - you'd need ~2^64 URIs before expecting a collision. However, for absolute safety, queries should verify the full URI matches when retrieving data:
```sql
SELECT id FROM uri_register
WHERE uri_hash = md5('http://example.com/my-uri')::uuid  -- Fast index lookup
  AND uri = 'http://example.com/my-uri';                 -- Collision safety check
```
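The ~2^64 figure above follows from the birthday bound: collisions in an n-bit hash become likely only around the square root of the 2^n output space. A quick check:

```python
import math

# Birthday bound for a 128-bit hash: collision scale is sqrt(2^128) = 2^64
collision_scale = math.isqrt(2 ** 128)
assert collision_scale == 2 ** 64
```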
Inserts use ON CONFLICT (uri_hash) to handle duplicates efficiently:
```sql
INSERT INTO uri_register (uri)
VALUES ('http://example.com/my-uri')
ON CONFLICT (uri_hash)
DO UPDATE SET uri = EXCLUDED.uri  -- No-op trick to return existing ID
RETURNING id;
```
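The no-op-update trick matters because `DO NOTHING` would return no row for an existing URI, while `DO UPDATE` makes `RETURNING` report the existing ID. The same pattern can be exercised locally; this sketch mimics it with SQLite 3.35+ (which also supports `ON CONFLICT ... RETURNING`), computing the MD5 hash client-side since SQLite has no generated UUID columns:

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE uri_register (
           id INTEGER PRIMARY KEY AUTOINCREMENT,
           uri TEXT NOT NULL,
           uri_hash BLOB UNIQUE NOT NULL
       )"""
)

def register_uri(uri: str) -> int:
    """Insert the URI if new; either way, return its stable integer ID."""
    h = hashlib.md5(uri.encode()).digest()
    row = conn.execute(
        """INSERT INTO uri_register (uri, uri_hash) VALUES (?, ?)
           ON CONFLICT (uri_hash) DO UPDATE SET uri = excluded.uri
           RETURNING id""",
        (uri, h),
    ).fetchone()
    return row[0]

a = register_uri("http://example.com/a")
b = register_uri("http://example.com/b")
assert register_uri("http://example.com/a") == a  # same URI -> same ID
assert a != b
```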
Testing
For testing purposes, an in-memory implementation is available:
```rust
use uri_register::{InMemoryUriRegister, UriService};

#[tokio::test]
async fn assigns_stable_ids() {
    // No database required; same trait as the Postgres-backed register.
    // The constructor shape is illustrative.
    let register = InMemoryUriRegister::new();
    let id = register.register_uri("http://example.com/my-uri").await.unwrap();
    assert_eq!(id, register.register_uri("http://example.com/my-uri").await.unwrap());
}
```
Error Handling
The library uses structured error types for better error handling and programmatic error inspection:
The error type and variant names below are illustrative; the actual variants correspond to the Error Types list that follows:

```rust
use uri_register::{Error, PostgresUriRegister};

// Configuration errors with specific variants
match PostgresUriRegister::new(connection_string).await {
    Ok(register) => { /* ... */ }
    Err(Error::Configuration(e)) => eprintln!("invalid configuration: {e}"),
    Err(e) => eprintln!("startup failed: {e}"),
}

// Database errors (connection strings are sanitised to prevent password leaks)
match register.register_uri("http://example.com/my-uri").await {
    Ok(id) => println!("registered as {id}"),
    Err(Error::Database(e)) => eprintln!("database error: {e}"),
    Err(e) => eprintln!("unexpected error: {e}"),
}
```
Error Types
- Configuration - Invalid configuration parameters (structured with specific variants)
- Database - Database operation failures (error messages sanitised)
- ConnectionPool - Connection pool errors
- Cache - Cache operation failures
- InvalidUri - URI validation failures (non-RFC 3986 compliant URIs)
License
Licensed under the Apache License, Version 2.0 (LICENSE or http://www.apache.org/licenses/LICENSE-2.0).
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.