An event camera processing library with Rust backend and Python bindings, designed for scalable data processing with real-world event camera datasets.
Core Features
- Universal Format Support: Load data from H5, AEDAT, EVT2/3, AER, and text formats
- Automatic Format Detection: No need to specify format types manually
- Polars DataFrame Integration: High-performance DataFrame operations with up to 360M events/s filtering
- Event Filtering: Comprehensive filtering with temporal, spatial, and polarity options
- Event Representations: Stacked histograms, voxel grids, and mixed density stacks
- Neural Network Models: E2VID model loading and inference
- Real-time Data Processing: Handle large datasets (550MB+ files) efficiently
- Polarity Encoding: Automatic conversion between 0/1 and -1/1 polarities
- Rust Performance: Memory-safe, high-performance backend with Python bindings
In Development: advanced neural network processing (ideally with a Rust backend, possibly via Candle) and real-time visualization (only a simulated version works at the moment; see wasm-evlib).
Note: The Rust backend currently focuses on data loading and processing, while Python modules provide advanced features such as filtering and representations.
- Quick Start
- Installation
- Polars DataFrame Integration
- Available Python Modules
- High-Performance PyTorch DataLoader
- Video-to-Events Conversion and Visualization
- Examples
- Development
- Community & Support
- License
Quick Start
What are Event Cameras?
Event cameras (also called neuromorphic or dynamic vision sensors) differ fundamentally from traditional frame-based cameras. Instead of capturing images at fixed frame rates, they operate asynchronously, with each pixel independently reporting changes in brightness as they occur.
Each event is represented as a 4-tuple:
$$e = (x, y, t, p)$$
Where:
- $x, y \in \mathbb{N}$: Pixel coordinates in the sensor array (e.g., $0 \leq x < 640$, $0 \leq y < 480$)
- $t \in \mathbb{R}^+$: Timestamp when the brightness change occurred (microsecond precision)
- $p \in \{-1, +1\}$ or $\{0, 1\}$: Polarity indicating the direction of the brightness change
An event is triggered when the change in logarithmic brightness since the last event at that pixel exceeds the contrast threshold:
$$\left|\log L(x,y,t) - \log L(x,y,t_{\text{last}})\right| \geq C$$
where $L(x,y,t)$ is the brightness at pixel $(x,y)$ at time $t$, $t_{\text{last}}$ is the time of the last event at that pixel, and $C$ is the contrast threshold. The polarity $p$ is the sign of the change.
Key advantages:
- High temporal resolution: Microsecond precision vs. millisecond frame intervals
- High dynamic range: 120dB+ vs. ~60dB for conventional cameras
- Low power consumption: Only active pixels generate data
- No motion blur: Events capture instantaneous changes
- Sparse data: Only reports meaningful changes, reducing bandwidth
Event cameras excel at tracking fast motion, operating in challenging lighting conditions, and applications requiring precise temporal information like robotics, autonomous vehicles, and augmented reality.
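The trigger rule above can be sketched in a few lines of plain Python. This is a hypothetical helper for illustration only, not part of the evlib API:

```python
import math

def generate_events(samples, threshold):
    """Generate events from (t, intensity) samples at one pixel.

    Emits (t, p) whenever the log-brightness change since the last
    emitted event crosses the contrast threshold.
    """
    events = []
    _, l0 = samples[0]
    ref = math.log(l0)  # log-brightness at the last event
    for t, lum in samples[1:]:
        delta = math.log(lum) - ref
        if abs(delta) >= threshold:
            p = 1 if delta > 0 else -1
            events.append((t, p))
            ref = math.log(lum)  # reset the reference level
    return events

# A pixel that brightens then dims produces one ON and one OFF event:
samples = [(0.0, 100.0), (0.001, 150.0), (0.002, 90.0)]
print(generate_events(samples, 0.2))  # [(0.001, 1), (0.002, -1)]
```

Real sensors add noise, per-pixel threshold mismatch, and a refractory period on top of this idealized model.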
Below is an overview of what the data "looks" like...
Basic Usage
# Paths and column names here are illustrative
import evlib

# Load events from any supported format (automatic detection)
events = evlib.load_events("path/to/events.h5")

# load_events returns a Polars LazyFrame for memory-efficient processing;
# collect it into a DataFrame when needed
df = events.collect()

# Basic event information
print(df.height, df.columns)

# Convert to NumPy arrays for compatibility
x = df["x"].to_numpy()
y = df["y"].to_numpy()
t = df["t"].to_numpy()
p = df["polarity"].to_numpy()
Advanced Filtering
# Column names ("t", "x", "y", "polarity") are assumptions; check df.columns
import polars as pl
import evlib

# Load events as LazyFrame for efficient processing
events = evlib.load_events("path/to/events.h5")

# Time filtering using Polars operations (timestamps are Duration values)
filtered = events.filter(
    pl.col("t").is_between(pl.duration(milliseconds=100), pl.duration(milliseconds=500))
)

# Spatial filtering (Region of Interest)
filtered = filtered.filter((pl.col("x") < 320) & (pl.col("y") < 240))

# Polarity filtering
filtered = filtered.filter(pl.col("polarity") == 1)

# Collect final results
df = filtered.collect()
Event Representations
evlib provides comprehensive event representation functions for computer vision and neural network applications:
# Function names and signatures below are assumptions; check the API docs
import evlib
import evlib.representations as evr

# Load events and create representations
events = evlib.load_events("path/to/events.h5")
df = events.collect()

# Create stacked histogram (replaces RVT preprocessing)
hist = evr.create_stacked_histogram(df, height=480, width=640, bins=10)

# Create mixed density stack representation
mixed = evr.create_mixed_density_stack(df, height=480, width=640)

# Create voxel grid representation
voxels = evr.create_voxel_grid(df, height=480, width=640, n_time_bins=5)

# Advanced representations require data type conversion:
# convert the timestamp column and ensure proper dtypes first

# Create time surface representation
surface = evr.create_time_surface(df, height=480, width=640)

# Create averaged time surface
avg_surface = evr.create_averaged_time_surface(df, height=480, width=640)
RVT processing example:
# Load some unprocessed RVT data, i.e. gen4_1mpx_original
# 500M raw events reduced to a 1,519,652-row stacked histogram in seconds :-)
┌──────────┬──────────┬─────┬─────┬───────┐
│ time_bin ┆ polarity ┆ y   ┆ x   ┆ count │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ ┆ ┆ ┆ ┆ │
╞══════════╪══════════╪═════╪═════╪═══════╡
│ 0 ┆ 1 ┆ 0 ┆ 0 ┆ 4 │
│ 0 ┆ 1 ┆ 0 ┆ 1 ┆ 3 │
│ 0 ┆ 1 ┆ 0 ┆ 2 ┆ 5 │
│ 0 ┆ 1 ┆ 0 ┆ 3 ┆ 6 │
│ 0 ┆ 1 ┆ 0 ┆ 4 ┆ 5 │
│ … ┆ … ┆ … ┆ … ┆ … │
│ 9 ┆ 1 ┆ 479 ┆ 563 ┆ 1 │
│ 9 ┆ 1 ┆ 479 ┆ 624 ┆ 1 │
│ 9 ┆ 1 ┆ 479 ┆ 626 ┆ 1 │
│ 9 ┆ 1 ┆ 479 ┆ 638 ┆ 1 │
│ 9 ┆ 1 ┆ 479 ┆ 639 ┆ 1 │
└──────────┴──────────┴─────┴─────┴───────┘
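The table above counts events per (time bin, polarity, pixel) cell. The binning idea can be sketched in plain Python; this is a hypothetical reimplementation for illustration, not the evlib code path:

```python
from collections import Counter

def stacked_histogram(events, t_start, t_end, n_bins):
    """Count events per (time_bin, polarity, y, x) cell.

    `events` is a list of (x, y, t_us, p) tuples with integer
    microsecond timestamps.
    """
    bin_width = (t_end - t_start) / n_bins
    counts = Counter()
    for x, y, t, p in events:
        # Clamp so t == t_end lands in the last bin
        b = min(int((t - t_start) / bin_width), n_bins - 1)
        counts[(b, p, y, x)] += 1
    return counts

events = [(5, 3, 15_000, 1), (5, 3, 16_000, 1), (5, 3, 87_000, -1)]
hist = stacked_histogram(events, t_start=0, t_end=100_000, n_bins=10)
print(hist[(1, 1, 3, 5)])   # 2 ON events in bin 1
print(hist[(8, -1, 3, 5)])  # 1 OFF event in bin 8
```

The sparse (cell, count) output mirrors the DataFrame layout shown above.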
Installation
Basic Installation
# For Polars DataFrame support (recommended; extras names are assumptions)
pip install "evlib[polars]"

# For PyTorch integration
pip install "evlib[pytorch]"
Development Installation
We recommend using uv for fast, reliable Python package management:
# Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone the repository
git clone https://github.com/tallamjr/evlib.git && cd evlib

# Create virtual environment and install dependencies (extras name assumed)
uv venv && uv pip install -e ".[dev]"

# Build the Rust extensions
maturin develop --release
System Dependencies
# Ubuntu/Debian
# macOS
Performance-Optimized Installation
For optimal performance, ensure you have the recommended system configuration:
System Requirements:
- RAM: 8GB+ recommended for files >100M events
- Python: 3.10+ (3.12 recommended for best performance)
- Polars: Latest version for advanced DataFrame operations
Installation for Performance:
# Install with Polars support (recommended)
# For development with all performance features (using uv)
# Verify installation with benchmark
Optional Performance Dependencies:
# For advanced memory monitoring
# For parallel processing (already included in dev)
Polars DataFrame Integration
evlib provides comprehensive Polars DataFrame support for high-performance event data processing:
Key Benefits
- Performance: 1.9M+ events/s loading speed, 360M+ events/s filtering speed
- Memory Efficiency: ~23 bytes/event (5x better than typical 110 bytes/event)
- Expressive Queries: SQL-like operations for complex data analysis
- Lazy Evaluation: Query optimization for better performance
- Ecosystem Integration: Seamless integration with data science tools
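The ~23 bytes/event figure is consistent with compact per-column dtypes. The schema below is an assumption for illustration; check the real layout with `df.schema`:

```python
# Assumed column dtypes (illustrative):
#   x: Int16 (2 bytes), y: Int16 (2 bytes),
#   t: Duration[us] (8 bytes), polarity: Int8 (1 byte)
payload_per_event = 2 + 2 + 8 + 1  # 13 bytes of raw payload

# Allocator and column overhead push the measured figure to ~23 bytes/event,
# still far below a naive representation using four float64 columns:
naive_per_event = 4 * 8  # 32 bytes

print(payload_per_event, naive_per_event)  # 13 32
```

A quick way to check on real data is `df.estimated_size() / df.height`.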
API Overview
Loading Data
import evlib

# Load as LazyFrame (recommended)
events = evlib.load_events("path/to/events.h5")
df = events.collect()  # Collect to DataFrame when needed

# Automatic format detection and optimization
events = evlib.load_events("recording.raw")  # EVT2 format automatically detected
Advanced Features
# Chain operations with LazyFrames for optimal performance
import polars as pl
import evlib

events = evlib.load_events("path/to/events.h5")
on_events = events.filter(pl.col("polarity") == 1)

# Memory-efficient temporal analysis: events per second
rates = events.with_columns(pl.col("t").dt.total_seconds().alias("t_sec")).group_by("t_sec").len()

# Complex filtering operations with Polars
roi = events.filter(pl.col("x").is_between(100, 500) & (pl.col("y") < 240))
df = roi.collect()
Utility Functions
# Function names in evlib.formats are assumptions
import polars as pl
import evlib

# Built-in format detection
fmt = evlib.formats.detect_format("path/to/events.raw")

# Spatial filtering using Polars operations
events = evlib.load_events("path/to/events.h5")
roi = events.filter(pl.col("x").is_between(100, 500))

# Chain multiple filters efficiently
filtered = roi.filter(pl.col("polarity") == 1)

# Temporal analysis with Polars operations
duration = events.select(pl.col("t").max() - pl.col("t").min()).collect()

# Save processed data (working example)
df = filtered.collect()
x, y, t, p = (df[c].to_numpy() for c in ("x", "y", "t", "polarity"))

# Convert Duration microseconds to seconds for the save function
t = t.astype("int64") / 1_000_000
Performance Benchmarks
Benchmark Results:
- Loading Speed: 1.9M+ events/second average across formats
- Filter Speed: 360M+ events/second for complex filtering operations
- Memory Efficiency: ~23 bytes/event
- Format Performance: RAW binary (2.6M events/s) > HDF5 (2.5M events/s) > Text (0.6M events/s)
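Throughput numbers like these can be reproduced with a simple timing harness. This is a generic sketch; `load_fn` stands in for any loader, e.g. a lambda wrapping `evlib.load_events(...).collect()`:

```python
import time

def measure_throughput(load_fn, n_events):
    """Return events/second for a loader callable.

    `load_fn` is any zero-argument callable that performs the load;
    `n_events` is the known event count of the file.
    """
    start = time.perf_counter()
    load_fn()
    elapsed = time.perf_counter() - start
    return n_events / elapsed

# Example with a stub workload standing in for a real file load:
rate = measure_throughput(lambda: sum(range(1_000_000)), n_events=1_000_000)
print(f"{rate:,.0f} events/s")
```

For stable numbers, run the load several times and report the median.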
Benchmarking and Monitoring
Run performance benchmarks to verify optimizations:
# Verify README performance claims and generate plots
# Memory efficiency benchmark
# Test with your own data
Performance Examples
Optimal Loading for Different File Sizes
# Paths are placeholders
import polars as pl
import evlib

# Small files (<5M events): direct loading
df = evlib.load_events("small_recording.h5").collect()

# Large files (>5M events): automatic streaming
events = evlib.load_events("large_recording.h5")
# Same API; streaming is used automatically for memory efficiency

# Memory-efficient filtering on large datasets using Polars
filtered = events.filter(pl.col("polarity") == 1)
filtered = filtered.filter(pl.col("x") < 320)

# Collect only when needed for memory efficiency
df = filtered.collect()
Memory Monitoring
# psutil usage is an assumption (see "Optional Performance Dependencies")
import os
import psutil
import evlib

def get_memory_mb():
    return psutil.Process(os.getpid()).memory_info().rss / 1024 / 1024  # MB

# Monitor memory usage during loading
before = get_memory_mb()
df = evlib.load_events("path/to/events.h5").collect()
after = get_memory_mb()
print(f"Loaded {df.height} events using {after - before:.1f} MB")
Troubleshooting Large Files
Memory Constraints
- Automatic Streaming: Files >5M events use streaming by default
- LazyFrame Operations: Memory-efficient processing without full materialization
- Memory Monitoring: Use benchmark_memory.py to track usage
- System Requirements: Recommend 8GB+ RAM for files >100M events
Performance Tuning
- Optimal Chunk Size: System automatically calculates based on available memory
- LazyFrame Operations: Use .lazy() for complex filtering chains
- Memory-Efficient Formats: RAW binary formats provide the best performance, followed by HDF5
- Progress Reporting: Large files show progress during loading
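The chunk-size calculation can be sketched as a simple heuristic. This is a hypothetical illustration of the idea; evlib's actual calculation may differ:

```python
def choose_chunk_size(available_bytes, bytes_per_event=23, fraction=0.25,
                      min_events=1_000_000, max_events=50_000_000):
    """Pick an event-chunk size from available memory.

    Budgets `fraction` of available memory at ~`bytes_per_event` per
    event, clamped to a sane range. All parameter values here are
    illustrative assumptions.
    """
    target = int(available_bytes * fraction / bytes_per_event)
    return max(min_events, min(target, max_events))

# With 8 GB available, the target exceeds the cap and is clamped:
print(choose_chunk_size(8 * 1024**3))  # 50000000
```

In practice the available-memory figure would come from something like `psutil.virtual_memory().available`.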
Common Issues and Solutions
Issue: Out of memory errors
# Solution: apply filtering before collecting (streaming activates automatically)
import polars as pl
import evlib

events = evlib.load_events("large_file.h5")
# Streaming activates automatically for files >5M events

# Apply filtering before collecting to reduce memory usage
filtered = events.filter(pl.col("t") < pl.duration(seconds=1))
df = filtered.collect()  # Only collect when needed

# Or stream to disk using Polars
filtered.sink_parquet("filtered.parquet")
Issue: Slow loading performance
# Solution: keep operations on the LazyFrame
import polars as pl
import evlib

events = evlib.load_events("path/to/events.h5")

# Use Polars operations for optimized filtering
on_events = events.filter(pl.col("polarity") == 1)
df = on_events.collect()

# Or chain Polars operations
df = events.filter(pl.col("polarity") == 1).filter(pl.col("x") < 320).collect()
Issue: Memory usage higher than expected
# Solution: monitor and verify the optimization
import evlib

df = evlib.load_events("path/to/events.h5").collect()
print(df.estimated_size() / df.height, "bytes/event")

# Check format detection (function name is an assumption)
print(evlib.formats.detect_format("path/to/events.h5"))
Available Python Modules
evlib provides several Python modules for different aspects of event processing:
Core Modules
- evlib.formats: Direct Rust access for format loading and detection
- evlib.filtering: High-performance event filtering with Polars
- evlib.representations: Event representations (stacked histograms, voxel grids)
- evlib.models: Neural network model loading and inference (under construction)
Module Overview
# Module names are real; specific function names below are assumptions
import polars as pl
import evlib

# Core event loading (returns a Polars LazyFrame)
events = evlib.load_events("path/to/events.h5")

# Format detection and description
fmt = evlib.formats.detect_format("path/to/events.h5")

# Advanced filtering using Polars operations
filtered = events.filter(pl.col("polarity") == 1)
df = filtered.collect()

# Event representations
hist = evlib.representations.create_stacked_histogram(df, height=480, width=640, bins=10)
voxels = evlib.representations.create_voxel_grid(df, height=480, width=640, n_time_bins=5)

# Neural network models (limited functionality)
# import evlib.models  # if available

# Data saving: convert Duration microseconds to seconds first
x, y, p = (df[c].to_numpy() for c in ("x", "y", "polarity"))
t = df["t"].to_numpy().astype("int64") / 1_000_000
High-Performance PyTorch DataLoader
evlib includes an optimized PyTorch dataloader implementation that showcases best practices for event camera data processing:
Key Features
- Polars → PyTorch Integration: Native .to_torch() conversion for zero-copy data transfer
- RVT Preprocessing: Loads real RVT (Recurrent Vision Transformer) preprocessed data
- Statistical Feature Extraction: Efficiently extracts 91 features from stacked histograms
- High Throughput: Achieves 13,000+ samples/sec training throughput
- Memory Efficient: Lazy evaluation and batched processing
Quick Start
# New: use the built-in PyTorch integration
# (import paths, helper names, and feature column names below are assumptions;
# see examples/polars_pytorch_simplified.py for the working version)
import polars as pl
from torch.utils.data import DataLoader
import evlib

# Option 1: Raw event data (since RVT data is not available in CI)
events = evlib.load_events("data/slider_depth/events.txt")  # Text format
df = events.collect()

# Option 2: Manual setup for custom transforms (for RVT data when available)
# lazy_df = load_rvt_data("data/gen4_1mpx_processed_RVT/val/moorea_2019-02-21_000_td_2257500000_2317500000")

# Option 3: Raw event data from various formats
# events = evlib.load_events("data/eTram/h5/val_2/val_night_007_td.h5")  # eTram dataset
# events = evlib.load_events("data/prophersee/samples/hdf5/pedestrians.hdf5")  # Prophesee format

# Option 4: Advanced - use the built-in RVT transform (for RVT data)
# if lazy_df is not None:
#     transform = create_rvt_transform()
#     dataset = PolarsDataset(lazy_df, batch_size=256, shuffle=True,
#                             transform=transform, drop_last=True)
#     dataloader = DataLoader(dataset, batch_size=None, num_workers=0)

# Option 5: Custom transform (if you need to modify the feature extraction)
def rvt_transform(batch: pl.DataFrame):
    """Custom transform to separate RVT features and labels from a Polars batch."""
    feature_cols = []
    # Add all temporal bin features (mean, std, max, nonzero for each bin)
    for i in range(20):
        feature_cols += [f"bin_{i}_{stat}" for stat in ("mean", "std", "max", "nonzero")]
    # Add bounding box features, activity features, and normalized features
    # (note: the actual feature name is "t_norm", not "timestamp_norm")
    # ... remaining feature names omitted here ...

    # Stack into a feature matrix and extract labels
    features = batch.select(feature_cols).to_torch()  # Shape: (batch_size, 91)
    labels = batch["class_id"].to_torch()             # Shape: (batch_size,)
    return features, labels

# Train with real event camera data (assumes `dataloader` from Option 4)
# for features, labels in dataloader:
#     # features: (256, 91) statistical features; labels: (256,) object classes
#     # outputs = model(features)
#     # loss = criterion(outputs, labels)
#     # ... backward pass, optimizer step, etc.
#     break  # Just show the data format
Architecture Overview
RVT HDF5 Data → Feature Extraction → Polars LazyFrame → .to_torch() → PyTorch Training
The dataloader demonstrates:
- Loading compressed HDF5 event representations (1198 samples, 20 temporal bins, 360×640 resolution)
- Statistical feature extraction (mean, std, max, nonzero) per temporal bin
- Object detection labels with bounding boxes and confidence scores
- Polars LazyFrame operations for memory-efficient processing
- Native PyTorch tensor conversion for optimal performance
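The per-bin statistics behind the 91 features can be sketched without any dependencies. A hypothetical helper mirroring the four-stats-per-bin idea:

```python
import math

def bin_stats(values):
    """Return (mean, std, max, nonzero) for one temporal bin of a
    stacked histogram. Illustrative only; not the evlib implementation."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n  # population variance
    nonzero = sum(1 for v in values if v != 0)
    return (mean, math.sqrt(var), max(values), nonzero)

# 20 temporal bins x 4 statistics = 80 per-bin features; the remaining
# 11 features (bounding boxes, activity, t_norm, ...) bring the total to 91.
print(bin_stats([0, 0, 3, 5]))
```

Collapsing each 360×640 bin to four scalars is what makes the 13,000+ samples/sec throughput possible.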
Performance Benefits
- 95%+ accuracy on real 3-class classification tasks
- 13,262 samples/sec training throughput
- Memory efficient processing of large event datasets
- Zero-copy conversion between Polars and PyTorch
See examples/polars_pytorch_simplified.py for the complete implementation and adapt it for your own event camera datasets.
Video-to-Events Conversion and Visualization
evlib includes a complete pipeline for converting standard video files to event camera data using the ESIM algorithm, with support for Mac GPU acceleration via MPS (Metal Performance Shaders).
Converting Video to Events
Use the ESIM (Event-based Simulator) algorithm to convert any video file to event data:
# Basic conversion with automatic device selection (MPS on Mac, CUDA on Linux/Windows, CPU fallback)
# Explicitly use Mac GPU acceleration
# Show video information and processing configuration
# Process specific time range
# Estimate event count before full processing
Parameters:
- --cp, --positive_threshold: Positive contrast threshold (default: 0.4)
- --cn, --negative_threshold: Negative contrast threshold (default: 0.4)
- --device: Computing device (auto, cuda, mps, cpu)
- --width, --height: Output resolution (default: 640x480)
- --fps: Override video FPS
- --refractory_period: Minimum time between events at the same pixel (ms)
Performance: Achieves 490,000+ events/second processing speed with MPS acceleration on Mac.
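The refractory period suppresses events that fire too soon after the previous event at the same pixel. A plain-Python sketch of the idea (hypothetical helper, not the evlib implementation):

```python
def apply_refractory(events, refractory_us):
    """Drop events closer than `refractory_us` to the previous kept
    event at the same pixel.

    `events` is a time-sorted list of (x, y, t_us, p) tuples with
    integer microsecond timestamps.
    """
    last_t = {}
    kept = []
    for x, y, t, p in events:
        prev = last_t.get((x, y))
        if prev is None or t - prev >= refractory_us:
            kept.append((x, y, t, p))
            last_t[(x, y)] = t
    return kept

events = [(1, 1, 0, 1), (1, 1, 500, 1), (1, 1, 1500, -1), (2, 2, 600, 1)]
print(apply_refractory(events, refractory_us=1000))
# [(1, 1, 0, 1), (1, 1, 1500, -1), (2, 2, 600, 1)]
```

The event at t=500 µs is dropped because it falls within 1000 µs of the previous event at pixel (1, 1).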
Python API Usage
# Illustrative sketch: class and function names here are assumptions
# Configure ESIM algorithm
esim_config = ESIMConfig(positive_threshold=0.4, negative_threshold=0.4)

# Configure video processing
video_config = VideoConfig(width=640, height=480)

# Convert video to events
events = convert_video_to_events("input.mp4", esim_config, video_config)
x, y, t, p = events

# Save as HDF5 file
save_events_hdf5("h5/events_esim.h5", x, y, t, p)
Visualizing Event Data
Convert the generated event data to a visualization video:
# Basic visualization of converted events
# High-quality visualization with custom parameters
# Thermal colormap visualization
# Process time range
Complete Workflow Example:
# 1. Convert video to events (generates ~18M events from 5.7s video)
# Output: h5/events_esim.h5 (240MB HDF5 file)
# 2. Visualize the events as a video
# Output: sample_events.mp4 (event visualization video)
This pipeline allows you to:
- Convert any standard video format to neuromorphic event data
- Leverage GPU acceleration for fast processing
- Visualize the results with customizable rendering
- Generate datasets for event camera research and development
Examples
Run examples:
# Test all notebooks
# Run specific examples
# Run the high-performance PyTorch dataloader example
Development
Testing
Core Testing
# Run all tests (Python and Rust)
# Test specific modules
# Test notebooks (including examples)
# Test with coverage
Documentation Testing
All code examples in the documentation are automatically tested to ensure they work correctly:
# Test all documentation examples
# Test specific documentation file
# Use the convenient test script
# Test specific documentation section
Code Quality
# Format code
# Run linting
# Check types
Building
Requirements
- Rust: Stable toolchain (see rust-toolchain.toml)
- Python: ≥3.10 (3.12 recommended)
- Maturin: For building Python extensions
# Development build
# Build with features
# Release build
Community & Support
- GitHub: tallamjr/evlib
- Issues: Report bugs and request features
- Discussions: Community Q&A and ideas
License
MIT License - see LICENSE.md for details.