# rigatoni-destinations
Destination implementations for the Rigatoni CDC/data replication framework - write data to S3 and other targets.
## Overview
Production-ready destination implementations for streaming data from MongoDB to various targets.
## Supported Destinations

### AWS S3
- Multiple Formats: JSON, CSV, Parquet, Avro
- Compression: Gzip, Zstandard
- Partitioning: Hive-style, date-based, collection-based
- Features: Retry logic, S3-compatible storage (LocalStack, MinIO)
## Installation
```toml
[dependencies]
rigatoni-destinations = { version = "0.1.1", features = ["s3", "json"] }
```
### Available Features
**Destinations:**
- `s3` - AWS S3 (enabled by default)

**Formats:**
- `json` - JSON/JSONL (enabled by default)
- `csv` - CSV format
- `parquet` - Apache Parquet
- `avro` - Apache Avro

**Compression:**
- `gzip` - Gzip compression
- `zstandard` - Zstandard compression

**Convenience:**
- `all-formats` - All serialization formats
- `all` - All features (S3 + all formats + compression)
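For example, opting into Parquet output and Zstandard compression on top of the defaults might look like this (a sketch; pick the feature set you actually need):

```toml
[dependencies]
rigatoni-destinations = { version = "0.1.1", features = ["s3", "json", "parquet", "zstandard"] }
```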
## Quick Start - S3 Destination
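A minimal sketch of configuring the S3 destination and handing it to a pipeline. The module path, `S3Config` builder methods, and `S3Destination::new` constructor here are illustrative assumptions rather than the crate's verbatim API (a Tokio runtime is also assumed); see the crate docs for exact signatures.

```rust
use rigatoni_destinations::s3::{S3Config, S3Destination};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Bucket and region are placeholders for your own values;
    // the builder methods are assumed, not taken from the crate docs.
    let config = S3Config::builder()
        .bucket("my-cdc-bucket")
        .region("us-east-1")
        .build()?;

    // Create the destination, then hand it to a Rigatoni pipeline as its sink.
    let _destination = S3Destination::new(config).await?;

    Ok(())
}
```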
## S3 Features

### Serialization Formats
```rust
use SerializationFormat;

// JSON (default)
.format(SerializationFormat::Json)

// Parquet for analytics
.format(SerializationFormat::Parquet)

// CSV for exports
.format(SerializationFormat::Csv)

// Avro for streaming
.format(SerializationFormat::Avro)
```
### Compression
```rust
use Compression;

// Gzip (widely compatible)
.compression(Compression::Gzip)

// Zstandard (better ratio and speed)
.compression(Compression::Zstandard)
```
### Partitioning Strategies
```rust
use KeyGenerationStrategy;

// Hive partitioning for analytics
.key_strategy(KeyGenerationStrategy::Hive)
// Creates: collection=users/year=2025/month=01/day=16/hour=10/timestamp.ext

// Date-hour partitioning (default)
.key_strategy(KeyGenerationStrategy::DateHour)
// Creates: users/2025/01/16/10/timestamp.ext

// Date partitioning
.key_strategy(KeyGenerationStrategy::Date)
// Creates: users/2025/01/16/timestamp.ext
```
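The format, compression, and partitioning options combine on the same configuration builder. A sketch, using the same assumed type and method names as in the Quick Start:

```rust
use rigatoni_destinations::s3::{Compression, KeyGenerationStrategy, S3Config, SerializationFormat};

fn build_config() -> Result<S3Config, Box<dyn std::error::Error>> {
    // Hypothetical combined configuration: Parquet files, Zstandard-compressed,
    // laid out with Hive-style partitions.
    let config = S3Config::builder()
        .bucket("my-cdc-bucket")
        .format(SerializationFormat::Parquet)
        .compression(Compression::Zstandard)
        .key_strategy(KeyGenerationStrategy::Hive)
        .build()?;
    Ok(config)
}
```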
## Examples

See the `rigatoni-examples` directory:
- `s3_basic` - Basic S3 usage
- `s3_advanced` - Advanced features (formats, compression, partitioning)
- `s3_with_compression` - Compression examples
## Testing with LocalStack
Integration tests run against LocalStack's S3 emulation: start LocalStack, then run the integration tests, as sketched below.
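A sketch of that workflow; the LocalStack invocation and test flags below are assumptions rather than the project's documented commands:

```bash
# Start LocalStack (S3 emulation) in the background
localstack start -d

# Run the tests with the S3 feature enabled
cargo test --features s3
```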
## Documentation

## License
Licensed under the Apache License, Version 2.0 (LICENSE or http://www.apache.org/licenses/LICENSE-2.0).