🔥 Ignix

High-Performance Redis-Compatible Key-Value Store

Ignix (from "Ignite" + "Index") is a blazing-fast, Redis-protocol compatible key-value store designed for modern multi-core systems. Built with Rust for maximum performance and safety.


✨ Features

  • 🚀 High Performance: Built with Rust for maximum speed and safety
  • 🔌 Redis Protocol Compatible: Drop-in replacement for Redis clients
  • 🧵 Async I/O: Non-blocking networking with mio for high concurrency
  • 💾 AOF Persistence: Append-only file for data durability
  • 🎯 Minimal Dependencies: Small set of external dependencies for a lean, auditable footprint
  • 📊 Built-in Benchmarks: Performance testing included

๐Ÿ—๏ธ Architecture

Ignix uses a simple but efficient architecture:

  • RESP Protocol: Full Redis Serialization Protocol support (see the encoding sketch after this list)
  • Event-Driven Networking: mio-based async I/O for handling thousands of connections
  • In-Memory Storage: SwissTable-based hash map storage for optimal performance
  • AOF Persistence: Optional append-only file logging for durability
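
To make the RESP bullet above concrete, here is a minimal sketch of how a command is framed on the wire as an array of bulk strings. It is illustrative only, not Ignix's actual protocol.rs encoder, but any frame built this way is valid RESP:

/// Encode a command as a RESP array of bulk strings, e.g.
/// ["SET", "hello", "world"] -> "*3\r\n$3\r\nSET\r\n$5\r\nhello\r\n$5\r\nworld\r\n"
fn encode_command(args: &[&str]) -> Vec<u8> {
    let mut frame = Vec::new();
    frame.extend_from_slice(format!("*{}\r\n", args.len()).as_bytes());
    for arg in args {
        frame.extend_from_slice(format!("${}\r\n", arg.len()).as_bytes());
        frame.extend_from_slice(arg.as_bytes());
        frame.extend_from_slice(b"\r\n");
    }
    frame
}

fn main() {
    let frame = encode_command(&["SET", "hello", "world"]);
    // Print the frame with \r\n shown as escape sequences.
    println!("{}", String::from_utf8_lossy(&frame).escape_default());
}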

🚀 Quick Start

Prerequisites

  • Rust 1.80+ (recommended: latest stable)
  • Cargo package manager

Installation

git clone https://github.com/CycleChain/ignix.git
cd ignix
cargo build --release

Running the Server

cargo run --release

The server will start on 0.0.0.0:7379 by default.

Testing with Client Example

# In another terminal
cargo run --example client

Expected output:

+OK
$5
world

📡 Supported Commands

Ignix supports the following Redis commands:

Command   Description                     Example
PING      Test connectivity               PING → +PONG
SET       Set key-value pair              SET key value → +OK
GET       Get value by key                GET key → $5\r\nvalue
DEL       Delete key                      DEL key → :1
EXISTS    Check if key exists             EXISTS key → :1
INCR      Increment integer value         INCR counter → :1
RENAME    Rename a key                    RENAME old new → +OK
MGET      Get multiple values             MGET key1 key2 → *2\r\n...
MSET      Set multiple key-value pairs    MSET k1 v1 k2 v2 → +OK
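
Because replies are plain RESP, the commands above can be exercised with nothing more than a TCP socket. The following sketch hand-writes two frames against the default 127.0.0.1:7379 address and prints whatever reply bytes come back; it assumes a running Ignix server and is not part of the project's own examples:

use std::io::{Read, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Connect to a locally running Ignix server on the default port.
    let mut conn = TcpStream::connect("127.0.0.1:7379")?;

    // Raw RESP frames for two of the commands listed above.
    conn.write_all(b"*3\r\n$3\r\nSET\r\n$5\r\nhello\r\n$5\r\nworld\r\n")?; // SET hello world
    conn.write_all(b"*2\r\n$3\r\nGET\r\n$5\r\nhello\r\n")?;               // GET hello

    // Read whatever reply bytes have arrived; expected: "+OK\r\n$5\r\nworld\r\n".
    // A single read may return only part of the replies.
    let mut buf = [0u8; 512];
    let n = conn.read(&mut buf)?;
    print!("{}", String::from_utf8_lossy(&buf[..n]));
    Ok(())
}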

🔧 Configuration

Environment Variables

  • RUST_LOG: Set logging level (e.g., debug, info, warn, error)

AOF Persistence

Ignix automatically creates an ignix.aof file for persistence. Writes are appended to the AOF and flushed to disk every second for durability.
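
The append-then-flush cadence described above can be pictured with a small sketch: one handle appends commands (here written as RESP frames, which is an assumption about the on-disk format, mirroring Redis-style AOFs) while a background thread flushes the buffer once per second. This is a simplified illustration, not Ignix's aof.rs:

use std::fs::OpenOptions;
use std::io::{BufWriter, Write};
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    // Open (or create) the append-only file in append mode.
    let file = OpenOptions::new().create(true).append(true).open("ignix.aof")?;
    let aof = Arc::new(Mutex::new(BufWriter::new(file)));

    // Background flusher: push buffered bytes to the OS once per second.
    let flusher = Arc::clone(&aof);
    thread::spawn(move || loop {
        thread::sleep(Duration::from_secs(1));
        let _ = flusher.lock().unwrap().flush();
    });

    // Write path: a mutating command is appended as a RESP frame (assumed format).
    aof.lock()
        .unwrap()
        .write_all(b"*3\r\n$3\r\nSET\r\n$5\r\nhello\r\n$5\r\nworld\r\n")?;

    thread::sleep(Duration::from_secs(2)); // let the flusher run at least once
    Ok(())
}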

🧪 Testing

Run Unit Tests

cargo test

Run Benchmarks

# Execute benchmark
cargo bench --bench exec

# RESP parsing benchmark  
cargo bench --bench resp

Example Benchmark Results

exec/set_get            time:   [396.62 µs 403.23 µs 413.05 µs]
resp/parse_many_1k      time:   [296.51 µs 298.00 µs 299.44 µs]

🔌 Client Usage

Using Redis CLI

redis-cli -h 127.0.0.1 -p 7379
127.0.0.1:7379> PING
PONG
127.0.0.1:7379> SET hello world
OK
127.0.0.1:7379> GET hello
"world"

Using Any Redis Client Library

Ignix is compatible with any Redis client library. Here's a Python example:

import redis

# Connect to Ignix
r = redis.Redis(host='localhost', port=7379, decode_responses=True)

# Use like Redis
r.set('hello', 'world')
print(r.get('hello'))  # Output: world

📊 Performance

Benchmarks reflect Ignix v0.2.0 (reactor/worker split + DashMap). See benchmark_results/benchmark_results.json for full data.

Charts: Redis vs Ignix comparison; performance ratio.

Highlights (from latest results)

  • SET 64B (10 conns): Ignix 21.5k ops/s vs Redis 17.6k → ~1.22x
  • SET 256B (10 conns): Ignix 28.0k vs 17.9k → ~1.57x
  • SET 4KB (10 conns): Ignix 28.1k vs 17.2k → ~1.63x
  • GET 64B (10 conns): Ignix 28.9k vs 16.8k → ~1.72x
  • GET 4KB (50 conns): Ignix 16.4k vs 13.0k → ~1.25x

Latency remains sub-millisecond across small and medium payloads in both systems, with Ignix sustaining higher throughput under concurrency.

What changed in v0.2.0?

  • Reactor decoupled from storage/disk via a worker pool + mio::Waker → no blocking on the hot path (see the sketch after this list).
  • DashMap storage with sharded locking → significantly less contention under writes.
  • Bounded channels (tasks/AOF) → backpressure and better stability under load.
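
The reactor/worker hand-off can be sketched roughly as follows: a worker finishes its task off the event loop, pushes the result onto a bounded channel, and rings a mio::Waker so the reactor wakes up without blocking. This is a stripped-down illustration written against mio 0.8, not Ignix's net.rs:

use std::sync::{mpsc, Arc};
use std::thread;
use std::time::Duration;

use mio::{Events, Poll, Token, Waker};

const WAKE: Token = Token(0);

fn main() -> std::io::Result<()> {
    let mut poll = Poll::new()?;
    let waker = Arc::new(Waker::new(poll.registry(), WAKE)?);

    // Bounded channel from workers back to the reactor -> natural backpressure.
    let (tx, rx) = mpsc::sync_channel::<String>(1024);

    // Worker thread: runs the "slow" part (storage / AOF) off the reactor.
    let worker_waker = Arc::clone(&waker);
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(50)); // stand-in for real work
        tx.send("+OK\r\n".to_string()).unwrap();  // hand the reply back
        worker_waker.wake().unwrap();             // nudge the reactor
    });

    // Reactor loop: blocks only in poll(), never on storage or disk.
    let mut events = Events::with_capacity(128);
    loop {
        poll.poll(&mut events, None)?;
        for event in events.iter() {
            if event.token() == WAKE {
                // Drain finished tasks; a real server would now write each
                // reply back to the client connection that owns it.
                while let Ok(reply) = rx.try_recv() {
                    println!("reply ready: {reply:?}");
                }
                return Ok(());
            }
        }
    }
}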

📊 Benchmark Your Own Workload

Run comprehensive benchmarks with our included tools:

# Quick comparison
python3 quick_benchmark.py

# Detailed analysis with charts
python3 benchmark_redis_vs_ignix.py

# Custom test scenarios
python3 benchmark_redis_vs_ignix.py --data-sizes 64 256 1024 --connections 1 10 25

Architecture Benefits:

  • Sub-millisecond latency for most operations
  • High throughput with async I/O
  • Memory efficient with zero-copy operations where possible (see the parsing sketch after this list)
  • Minimal allocations in hot paths
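
The "zero-copy where possible" point above can be illustrated by parsing a RESP bulk string into a slice that borrows from the read buffer instead of allocating an owned String. This is a simplified sketch of the borrowing pattern, not Ignix's actual parser:

/// Parse a RESP bulk string (e.g. b"$5\r\nhello\r\n") at the start of `input`.
/// Returns the payload as a borrowed slice plus the bytes consumed -- no copy,
/// no allocation; the caller keeps reading out of the same buffer.
fn parse_bulk(input: &[u8]) -> Option<(&[u8], usize)> {
    if input.first()? != &b'$' {
        return None;
    }
    let header_end = input.windows(2).position(|w| w == b"\r\n")?;
    let len: usize = std::str::from_utf8(&input[1..header_end]).ok()?.parse().ok()?;
    let (start, end) = (header_end + 2, header_end + 2 + len);
    if input.len() < end + 2 {
        return None; // incomplete frame: wait for more bytes from the socket
    }
    Some((&input[start..end], end + 2))
}

fn main() {
    let wire = b"$5\r\nhello\r\n$5\r\nworld\r\n";
    let (payload, consumed) = parse_bulk(wire).unwrap();
    assert_eq!(payload, b"hello");
    assert_eq!(consumed, 11);
    println!("borrowed {:?} without copying", std::str::from_utf8(payload).unwrap());
}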

๐Ÿ—๏ธ Development

🚧 Early Development Stage: Ignix is actively under development. APIs may change, and new features are being added regularly. We welcome contributions and feedback!

Project Structure

src/
├── bin/ignix.rs        # Server binary
├── lib.rs              # Library exports
├── protocol.rs         # RESP protocol parser/encoder
├── storage.rs          # In-memory storage (Dict)
├── shard.rs            # Command execution logic
├── net.rs              # Networking and event loop
└── aof.rs              # AOF persistence

examples/
└── client.rs           # Example client

tests/
├── basic.rs            # Basic functionality tests
└── resp.rs             # Protocol parsing tests

benches/
├── exec.rs             # Command execution benchmarks
└── resp.rs             # Protocol parsing benchmarks

Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes
  4. Add tests for new functionality
  5. Run tests (cargo test)
  6. Run benchmarks (cargo bench)
  7. Commit your changes (git commit -m 'Add amazing feature')
  8. Push to the branch (git push origin feature/amazing-feature)
  9. Open a Pull Request

Code Style

  • Follow Rust standard formatting (cargo fmt)
  • Run Clippy lints (cargo clippy)
  • Maintain test coverage for new features

🔍 Debugging

Enable Debug Logging

RUST_LOG=debug cargo run --release

Monitor AOF File

tail -f ignix.aof

🚧 Roadmap

Current Development Phase: Core optimization and stability

🎯 Short Term (Next Release)

  • Performance optimization for large data operations
  • Memory management improvements
  • Connection pooling enhancements
  • Comprehensive benchmarking suite expansion

🚀 Medium Term

  • More Redis commands (HASH, LIST, SET operations)
  • Multi-threading support for better concurrency
  • RDB snapshots for faster restarts
  • Metrics and monitoring endpoints

🌟 Long Term Vision

  • Clustering support for horizontal scaling
  • Replication for high availability
  • Lua scripting support for complex operations
  • Advanced data structures and algorithms
  • Plugin architecture for extensibility

📈 Performance Goals: Our primary focus is achieving consistently high performance across all data sizes while maintaining the simplicity and reliability that make Ignix special.

๐Ÿ› Known Limitations

Development Status: These limitations are actively being addressed as part of our development roadmap.

Current Limitations

  • Single-threaded execution (one shard) - Multi-threading planned
  • Limited command set compared to full Redis - Expanding gradually
  • Large data performance can be slower than Redis - Optimization in progress
  • No clustering or replication yet - Future releases
  • AOF-only persistence (no RDB snapshots) - RDB support planned

Performance Notes

  • Excellent for small data (64 bytes): Up to 4x faster SET operations than Redis
  • Competitive for medium data (256 bytes - 1KB): Similar to Redis performance
  • Room for improvement on large payloads (4KB+): Redis shows maturity in large data handling

These characteristics make Ignix ideal for:

  • ✅ Caching layers with small objects
  • ✅ Session storage with quick access patterns
  • ✅ Real-time applications requiring low latency
  • ✅ Microservices with high-frequency, small data operations

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

๐Ÿ™ Acknowledgments

  • Redis for the protocol specification
  • mio for async I/O
  • The Rust community for excellent tooling and libraries

📞 Support


Built with ❤️ and 🦀 by the CycleChain.io team