avz
A blisteringly fast Avro CLI tool — a modern replacement for Java's avro-tools and Python's fastavro.
Supports local files, glob patterns, and S3 URIs.
Install
Homebrew (macOS)
Cargo
Debian / Ubuntu
Download the `.deb` package from the Releases page.
From source
Build with Cargo; the binary is written to `target/release/avz`.
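A minimal sketch, assuming a local checkout of the repository and a recent Rust toolchain:

```bash
cargo build --release
# binary at target/release/avz
```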
Quick Start
```bash
avz head -n 5 data.avro                # peek at the first 5 records
avz cat --pretty data.avro             # pretty-print with syntax highlighting
avz grep 'Smith' data.avro             # search for a record by regex
avz grep -F 'C++' data.avro            # search for a literal string (no regex)
avz count "data/*.avro"                # count records across files using a glob
avz count "s3://bucket/data/*.avro"    # works with S3 too
```
Note: Quote glob patterns and S3 URIs to prevent your shell from expanding them.
Commands
```
Usage: avz <COMMAND>

Commands:
  cat          Print all records as JSON
  head         Print the first N records (default 10)
  schema       Print the Avro schema as JSON
  count        Count records in Avro files
  meta         Print file metadata (codec, sync marker, user metadata)
  fromjson     Convert JSON records to an Avro file
  concat       Concatenate Avro files into one
  recodec      Re-encode an Avro file with a different codec
  fingerprint  Print schema fingerprint (CRC-64-AVRO, MD5, SHA-256)
  validate     Validate an Avro file or check schema compatibility
  grep         Search records for a pattern, printing matching records as JSON
  random       Generate random records from a schema
```
Command Reference
All examples below use the same sample dataset: 8 employee records with fields for `id`, `name`, `department` (enum), `salary`, `email` (nullable), `tags` (array), and `active` (boolean).
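The command sketches that follow assume this dataset lives in a local file named `employees.avro` (and, where a schema file is needed, `employees.avsc`); substitute your own paths.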
cat — Print records as JSON
Print all records, one JSON object per line:
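For example:

```bash
avz cat employees.avro
```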
With --pretty for colorized, indented output:
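Something like:

```bash
avz cat --pretty employees.avro
```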
Limit output with -n:
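For instance, to print only the first three records:

```bash
avz cat -n 3 employees.avro
```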
head — Print first N records
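For example:

```bash
avz head employees.avro
```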
With colorized output:
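Presumably via the same `--pretty` flag as `cat` (an assumption; the exact flag for `head` isn't shown here):

```bash
avz head --pretty employees.avro
```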
Default is 10 records when -n is omitted.
schema — Print the Avro schema
Outputs colorized JSON with automatic pager for large schemas:
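For example:

```bash
avz schema employees.avro
```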
Large schemas automatically pipe through less -R in interactive terminals.
count — Count records
Single file:
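For example:

```bash
avz count employees.avro
```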
Multiple files show per-file counts and a total:
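Something like:

```bash
avz count employees.avro contractors.avro
```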
Works with globs:
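For instance:

```bash
avz count "data/*.avro"
```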
meta — File metadata
Shows the raw schema, codec, sync marker, and any user-defined metadata:
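For example:

```bash
avz meta employees.avro
```

The output looks like: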
```
avro.schema   { ... }
avro.codec    null
sync          0be4e3b6562329dbba6c5f06aa43ee96
```
fingerprint — Schema fingerprint
Print all fingerprints:
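For example:

```bash
avz fingerprint employees.avro
```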
Or a specific algorithm:
Supported: rabin (CRC-64-AVRO), md5, sha256, all (default).
validate — Validate files and schema compatibility
Validate file integrity (reads every record):
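For example:

```bash
avz validate employees.avro
```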
Check schema compatibility:
grep — Search records
Searches the JSON representation of each record and prints the entire matching record:
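A sketch, assuming the pattern comes first and the file(s) after it (the search term is illustrative):

```bash
avz grep 'Smith' employees.avro
```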
Pretty-print matches:
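Something like:

```bash
avz grep --pretty 'Smith' employees.avro
```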
Case-insensitive:
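For instance:

```bash
avz grep -i 'smith' employees.avro
```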
Fixed string (no regex), useful when the pattern has special characters like `*`, `.`, or `(`:
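For example, matching a literal email address rather than a regex:

```bash
avz grep -F 'smith@example.com' employees.avro
```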
Count matches:
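Something like:

```bash
avz grep -c 'Smith' employees.avro
```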
Invert match (show non-matching records):
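For instance:

```bash
avz grep -v 'Smith' employees.avro
```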
| Flag | Description |
|---|---|
| -i | Case-insensitive matching |
| -v | Invert match (show records that do NOT match) |
| -c | Show only the count of matching records |
| -F | Treat pattern as a fixed string, not a regex |
| --pretty | Colorized pretty-print output |
fromjson — Convert JSON to Avro
Convert newline-delimited JSON to an Avro file:
```
Usage: avz fromjson [OPTIONS] --schema <SCHEMA> --output <OUTPUT> [INPUT]

Options:
  -s, --schema <SCHEMA>  Path to the Avro schema JSON file
  -o, --output <OUTPUT>  Output Avro file path
  -c, --codec <CODEC>    Compression codec [default: null]
  [INPUT]                Input JSON file (reads from stdin if omitted)
```
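A basic invocation might look like (file names assumed):

```bash
avz fromjson -s employees.avsc -o employees.avro employees.json
```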
With compression:
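For example, using one of the codec values listed under Supported Codecs:

```bash
avz fromjson -s employees.avsc -o employees.avro -c zstandard employees.json
```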
From stdin:
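Something like:

```bash
cat employees.json | avz fromjson -s employees.avsc -o employees.avro
```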
concat — Concatenate Avro files
Merge multiple files into one:
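One plausible invocation, assuming `concat` accepts an `-o`/`--output` flag like `fromjson` (the exact interface isn't shown here):

```bash
avz concat part1.avro part2.avro -o merged.avro
```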
recodec — Re-encode with a different codec
Change the compression codec of an existing Avro file:
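A sketch, assuming `recodec` reuses `fromjson`-style `-c` and `-o` flags (the exact flags aren't shown here):

```bash
avz recodec -c zstandard -o employees_zstd.avro employees.avro
```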
Verify the codec changed:
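Since `meta` reports the codec, something like:

```bash
avz meta employees_zstd.avro
```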
random — Generate random test data
Generate random records from a schema:
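For example (schema file name assumed):

```bash
avz random -s employees.avsc -n 5
```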
Pretty-print:
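Something like:

```bash
avz random -s employees.avsc -n 5 --pretty
```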
Write directly to Avro format:
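For instance:

```bash
avz random -s employees.avsc -n 100 -f avro -o random.avro
```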
| Flag | Description |
|---|---|
| -s, --schema | Path to Avro schema JSON file (required) |
| -n, --count | Number of records to generate (default: 10) |
| --seed | Random seed for reproducible output |
| -f, --format | Output format: json (default) or avro |
| -o, --output | Output file path (required for avro format) |
| --pretty | Colorized pretty-print (json format only) |
S3 Support
All read commands work with S3 URIs. AWS credentials are loaded from the standard chain (env vars, ~/.aws/credentials, IAM role, etc.).
```bash
avz cat "s3://bucket/path/file.avro"          # single file
avz count "s3://bucket/path/*.avro"           # glob pattern on S3
avz grep 'Smith' "s3://bucket/path/*.avro"    # grep across S3 files
```
S3 files are downloaded into memory. For very large individual files, consider downloading first with `aws s3 cp`.
Supported Codecs
| Codec | Flag value |
|---|---|
| None | null |
| Deflate | deflate |
| Snappy | snappy |
| Zstandard | zstandard or zstd |
| Bzip2 | bzip2 or bzip |
| XZ | xz |
License
MIT OR Apache-2.0