# qvd

High-performance Rust library for reading, writing, and converting Qlik QVD files, with Parquet/Arrow interop, DataFusion SQL, a streaming reader, a CLI tool, and Python bindings (PyArrow, pandas, Polars).
First and only QVD crate on crates.io.
## Features
- Read/Write QVD — byte-identical roundtrip, zero-copy where possible
- Parquet ↔ QVD — convert in both directions with compression support (snappy, zstd, gzip, lz4)
- Arrow RecordBatch — convert QVD to/from Arrow for integration with DataFusion, DuckDB, Polars
- DataFusion SQL — register QVD files as tables and query them with SQL
- DuckDB integration — use QVD data in DuckDB via Arrow bridge (Rust and Python)
- Streaming reader — read QVD files in chunks without loading everything into memory
- EXISTS() index — O(1) hash lookup, like Qlik's `EXISTS()` function
- Streaming filtered reads — 2.5x faster than Qlik Sense
- CLI tool — `qvd-cli` with `convert`, `inspect`, `head`, `schema`, `filter` subcommands
- Python bindings — PyArrow, pandas, Polars support via zero-copy Arrow bridge
- Zero dependencies for core QVD read/write (Parquet/Arrow/DataFusion/Python are optional features)
## Performance
Tested on 399 real QVD files (11 KB to 2.8 GB) — all byte-identical roundtrip (MD5 match).
Selected benchmarks:
| File | Size | Rows | Columns | Read | Write |
|---|---|---|---|---|---|
| sample_tiny.qvd | 11 KB | 12 | 5 | 0.0s | 0.0s |
| sample_small.qvd | 418 KB | 2,746 | 8 | 0.0s | 0.0s |
| sample_medium.qvd | 41 MB | 465,810 | 12 | 0.5s | 0.0s |
| sample_large.qvd | 587 MB | 5,458,618 | 15 | 6.1s | 0.4s |
| sample_xlarge.qvd | 1.7 GB | 87,617,047 | 8 | 23.6s | 1.6s |
| sample_huge.qvd | 2.8 GB | 11,907,648 | 42 | 24.3s | 2.4s |
### Streaming EXISTS() filter vs Qlik Sense
Filtered read with EXISTS() + column selection — 2.5x faster than Qlik Sense.
The streaming reader loads only the symbol tables (small sets of unique values) into memory, then scans the index table in chunks. For each row, the filter column is decoded first; the selected columns are decoded only if the row matches. Non-matching rows are skipped entirely, with no memory allocated for them.
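A minimal sketch of that per-row logic, with simplified in-memory types (the real reader decodes bit-stuffed index rows; all names here are illustrative):

```rust
// Simplified model: a row is a vector of symbol-table indices, one per column.
struct Row(Vec<u32>);

/// Decode the filter column first; materialize selected columns only on a match.
fn filter_chunk(
    rows: &[Row],
    filter_col: usize,
    select_cols: &[usize],
    exists: impl Fn(u32) -> bool, // O(1) lookup into the EXISTS index
) -> Vec<Vec<u32>> {
    let mut out = Vec::new();
    for row in rows {
        // Non-matching rows cost one lookup and nothing else.
        if exists(row.0[filter_col]) {
            out.push(select_cols.iter().map(|&c| row.0[c]).collect());
        }
    }
    out
}
```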
Benchmark: 1.7 GB QVD, 87.6M rows × 8 columns → filter by 2 values, select 3 columns → 20.4M rows × 3 columns output
Qlik Sense script:
```
types:
LOAD * INLINE [%Type_ID
7
9];

filtered:
LOAD %Key_ID, DateField_BK, %Type_ID
FROM [lib://data/large_table.qvd] (qvd)
WHERE EXISTS(%Type_ID);

STORE filtered INTO [lib://data/result.qvd] (qvd);
DROP TABLE filtered;
```
qvdrs CLI equivalent:
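```bash
# Flags are illustrative; see the CLI section below for exact usage.
qvd-cli filter large_table.qvd result.qvd \
    --column %Type_ID --values 7,9 \
    --select %Key_ID,DateField_BK,%Type_ID
```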
| | Qlik Sense | qvdrs (streaming) |
|---|---|---|
| Read + filter | ~28s | 7.1s |
| Total (→ QVD) | ~28s | 11.4s |
| Total (→ Parquet) | — | 15.5s |
| Speedup | 1× | 2.5× (QVD) / 1.8× (Parquet) |
Recommendation: for large QVD files, always use `read_filtered()` (or `qvd-cli filter`) instead of loading the full file and filtering afterwards. The streaming approach uses dramatically less memory (only matched rows are held) and is significantly faster because non-matching rows are never fully decoded.
## Installation

### Rust
```toml
# Core QVD read/write (zero dependencies)
[dependencies]
qvd = "0.4.4"

# With Parquet/Arrow support
[dependencies]
qvd = { version = "0.4.4", features = ["parquet_support"] }

# With DataFusion SQL support
[dependencies]
qvd = { version = "0.4.4", features = ["datafusion_support"] }
```
### CLI
Install with cargo:
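```bash
# Assumes the CLI ships behind this crate's cli feature
cargo install qvd --features cli
```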
Or run without installing using uvx (requires Python and the qvdrs package):
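```bash
# Entry-point name assumed to match the PyPI package
uvx qvdrs inspect data.qvd
```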
### Python
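Install with pip:

```bash
pip install qvdrs
```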
Or with uv:
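```bash
uv pip install qvdrs
```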
## Quick Start — Rust

### Read/Write QVD
```rust
use qvd::{read_qvd_file, write_qvd_file};

// Paths and accessor names are illustrative.
let table = read_qvd_file("data.qvd")?;
println!("{} rows × {} columns", table.num_rows(), table.num_columns());

// Byte-identical roundtrip
write_qvd_file("copy.qvd", &table)?;
```
### Convert Parquet ↔ QVD
```rust
use qvd::{convert_parquet_to_qvd, convert_qvd_to_parquet};

// Argument order and the compression parameter are illustrative.

// Parquet → QVD
convert_parquet_to_qvd("input.parquet", "output.qvd")?;

// QVD → Parquet (with zstd compression)
convert_qvd_to_parquet("input.qvd", "output.parquet", "zstd")?;
```
### Arrow RecordBatch
```rust
use qvd::{read_qvd_file, qvd_to_record_batch, record_batch_to_qvd};

let table = read_qvd_file("data.qvd")?; // path illustrative
let batch = qvd_to_record_batch(&table)?;
// Use with DataFusion, DuckDB, Polars, etc.

// Arrow → QVD
let qvd_table = record_batch_to_qvd(&batch)?;
```
### DataFusion SQL (feature `datafusion_support`)
```rust
use datafusion::prelude::*;
use qvd::register_qvd;

// Table names, paths, and the register_qvd signature are illustrative.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let ctx = SessionContext::new();
    register_qvd(&ctx, "sales", "sales.qvd")?;
    let df = ctx.sql("SELECT Region, SUM(Amount) FROM sales GROUP BY Region").await?;
    df.show().await?;
    Ok(())
}
```

You can also register multiple QVD files and JOIN them:

```rust
register_qvd(&ctx, "orders", "orders.qvd")?;
register_qvd(&ctx, "clients", "clients.qvd")?;
let df = ctx.sql("SELECT * FROM orders o JOIN clients c ON o.ClientID = c.ClientID").await?;
```
### DuckDB via Arrow (Rust)
DuckDB can ingest Arrow RecordBatches directly — no file conversion needed:
```rust
use qvd::{read_qvd_file, qvd_to_record_batch};

let table = read_qvd_file("data.qvd")?;
let batch = qvd_to_record_batch(&table)?;
// Pass the Arrow RecordBatch to DuckDB via its Arrow interface
// See: https://docs.rs/duckdb/latest/duckdb/
```
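For an in-process query, a rough sketch assuming the `duckdb` crate's `vtab-arrow` feature (the exact duckdb-rs API may differ; check its docs):

```rust
use duckdb::{arrow_recordbatch_to_query_params, Connection};
use qvd::{read_qvd_file, qvd_to_record_batch};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let table = read_qvd_file("data.qvd")?;
    let batch = qvd_to_record_batch(&table)?;

    let db = Connection::open_in_memory()?;
    // DuckDB's arrow(?, ?) table function reads the RecordBatch in place.
    let params = arrow_recordbatch_to_query_params(batch);
    let mut stmt = db.prepare("SELECT count(*) FROM arrow(?, ?)")?;
    for rb in stmt.query_arrow(params)? {
        println!("{:?}", rb); // a single RecordBatch holding the count
    }
    Ok(())
}
```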
### Streaming Reader
```rust
use qvd::open_qvd_stream;

// Path and accessor names are illustrative.
let mut reader = open_qvd_stream("large.qvd")?;
println!("{} rows total", reader.total_rows());

while let Some(chunk) = reader.next_chunk()? {
    // process the chunk (a batch of decoded rows)
}
```
### EXISTS() — O(1) Lookup
Like Qlik's EXISTS() function — build an index of unique values from one table
and use it to check or filter another table in O(1) per row.
```rust
use qvd::{read_qvd_file, ExistsIndex, filter_rows_by_exists_fast};

// Values and method names are illustrative.

// Build index from the "clients" table
let clients = read_qvd_file("clients.qvd")?;
let index = ExistsIndex::from_column(&clients, "ClientID").unwrap();

// O(1) lookup — does this value exist?
assert!(index.contains("C-1001"));
println!("{} unique values", index.len());

// Filter another table — get row indices where ClientID exists in the clients table
let facts = read_qvd_file("facts.qvd")?;
let col_idx = 0; // index of "ClientID" column in facts table
let matching_rows = filter_rows_by_exists_fast(&facts, col_idx, &index);
println!("{} matching rows", matching_rows.len());
```
### Streaming EXISTS() — Filtered Read (recommended for large files)
For large QVD files, use the streaming `read_filtered()` instead of loading everything into memory. Only matching rows are loaded — 2.5x faster than Qlik Sense, with dramatically less memory use.
```rust
use qvd::{open_qvd_stream, write_qvd_file, ExistsIndex};

// Signatures are illustrative.

// 1. Build EXISTS index — from another table or from explicit values
let index = ExistsIndex::from_values(&["7", "9"]);

// 2. Open streaming reader (loads only symbol tables, not the full index table)
let mut stream = open_qvd_stream("large_table.qvd")?;

// 3. Stream + filter + select columns — only matching rows loaded into memory
let filtered = stream.read_filtered("%Type_ID", &index, &["%Key_ID", "DateField_BK", "%Type_ID"])?;
println!("{} rows matched", filtered.num_rows());

// 4. Save result
write_qvd_file("result.qvd", &filtered)?;
```
You can also build an EXISTS index from another QVD table's column:
```rust
let clients = read_qvd_file("clients.qvd")?;
let index = ExistsIndex::from_column(&clients, "ClientID").unwrap();
drop(clients); // free memory before opening the large file

let mut stream = open_qvd_stream("facts.qvd")?;
let filtered = stream.read_filtered("ClientID", &index, &["ClientID", "Amount"])?; // selected columns illustrative
```
## Quick Start — Python

### Basic usage
```python
import qvdrs

# NOTE: function and method names are illustrative; see the package docs.

# Read QVD
table = qvdrs.read_qvd("data.qvd")

# Save QVD
table.write("copy.qvd")

# Parquet ↔ QVD
qvdrs.convert_qvd_to_parquet("data.qvd", "data.parquet")
qvdrs.convert_parquet_to_qvd("data.parquet", "data.qvd")

# Load Parquet as QvdTable
table = qvdrs.read_parquet("data.parquet")

# EXISTS — O(1) lookup (like Qlik's EXISTS() function)
clients = qvdrs.read_qvd("clients.qvd")
index = qvdrs.ExistsIndex.from_column(clients, "ClientID")

# Check if a value exists
index.contains("C-1001")  # True/False
"C-1001" in index         # same thing
len(index)                # number of unique values

# Check multiple values at once
index.contains_many(["C-1001", "C-1002", "C-9999"])
# [True, True, False]

# Filter rows from another table — returns list of matching row indices
facts = qvdrs.read_qvd("facts.qvd")
rows = facts.filter_exists("ClientID", index)
```
### PyArrow
```python
import qvdrs

# Names are illustrative.

# QVD → PyArrow RecordBatch (zero-copy via Arrow C Data Interface)
table = qvdrs.read_qvd("data.qvd")
batch = table.to_arrow()

# Or directly:
batch = qvdrs.read_qvd_arrow("data.qvd")

# PyArrow → QVD
table = qvdrs.QvdTable.from_arrow(batch)
```
### pandas
```python
# Names are illustrative.

# QVD → pandas DataFrame (via Arrow, zero-copy where possible)
df = qvdrs.read_qvd("data.qvd").to_pandas()

# Or directly:
df = qvdrs.read_qvd_pandas("data.qvd")

# pandas → QVD (via PyArrow round-trip)
import pyarrow as pa
batch = pa.RecordBatch.from_pandas(df)
table = qvdrs.QvdTable.from_arrow(batch)
```
### Polars
```python
# Names are illustrative.

# QVD → Polars DataFrame
df = qvdrs.read_qvd("data.qvd").to_polars()

# Or directly:
df = qvdrs.read_qvd_polars("data.qvd")

# Polars → QVD (via PyArrow round-trip)
batch = df.to_arrow().to_batches()[0]
table = qvdrs.QvdTable.from_arrow(batch)
```
### DuckDB (Python) — native QVD support
Register QVD files as DuckDB tables with a single function call, then query with SQL:
```python
import duckdb
import qvdrs

# Function names are illustrative.
con = duckdb.connect()

# Register a single QVD file as a DuckDB table
qvdrs.register_qvd(con, "sales", "sales.qvd")

# Register all QVD files from a folder at once
names = qvdrs.register_qvd_folder(con, "data/")
# ["customers", "orders", "products", "sales", ...]

# JOIN across multiple QVD tables
con.sql("SELECT * FROM sales s JOIN customers c ON s.CustomerID = c.CustomerID").show()
```
Register with streaming EXISTS() filter for large files:
```python
# Only load matching rows — memory-efficient for huge QVDs
# (argument names are illustrative)
index = qvdrs.ExistsIndex.from_values(["7", "9"])
qvdrs.register_qvd(con, "filtered", "large.qvd", filter_column="%Type_ID", exists=index)
```
## CLI
Install with cargo:
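```bash
# Assumes the CLI ships behind this crate's cli feature
cargo install qvd --features cli
```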
Or run directly via uvx (no install needed):
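```bash
# Entry-point name assumed to match the PyPI package
uvx qvdrs inspect data.qvd
```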
### Convert between formats
```bash
# Flags are illustrative; run qvd-cli convert --help for exact usage.

# Parquet → QVD
qvd-cli convert input.parquet output.qvd

# QVD → Parquet (default compression: snappy)
qvd-cli convert input.qvd output.parquet

# QVD → Parquet with specific compression
qvd-cli convert input.qvd output.parquet --compression zstd

# Rewrite QVD (re-generate from internal representation)
qvd-cli convert input.qvd rewritten.qvd

# Recompress Parquet
qvd-cli convert input.parquet recompressed.parquet --compression gzip
```
### Inspect QVD metadata
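```bash
qvd-cli inspect data.qvd
```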
Output example:
```
File: data.qvd
Size: 41.3 MB
Table: SalesData
Rows: 465,810
Columns: 12
Created: 2024-01-15 10:30:00
Build: 14.0
RecordSize: 89 bytes
Read time: 0.50s

Column       Symbols  BitWidth  Bias  FmtType  Tags
--------------------------------------------------------------------------------
OrderID      465810   20        0     0        $numeric, $integer
CustomerID   12500    14        0     0        $numeric, $integer
Region       5        3         0     0        $text
Amount       389201   19        0     2        $numeric
```
### Preview rows
```bash
# Show first 10 rows (default)
qvd-cli head data.qvd

# Show first 50 rows (flag illustrative)
qvd-cli head data.qvd -n 50
```
### Filter rows with EXISTS() (streaming)
```bash
# Flags are illustrative; run qvd-cli filter --help for exact usage.

# Filter by column value(s) — streaming, memory-efficient
qvd-cli filter large.qvd result.qvd --column %Type_ID --values 7,9

# Filter + select only specific columns
qvd-cli filter large.qvd result.qvd --column %Type_ID --values 7,9 --select %Key_ID,DateField_BK,%Type_ID

# Filter and save as Parquet
qvd-cli filter large.qvd result.parquet --column %Type_ID --values 7,9
```
### Show Arrow schema
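```bash
qvd-cli schema data.qvd
```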
Output example:
```
Arrow Schema for 'data.qvd':
  OrderID      Int64
  CustomerID   Int64
  Region       Utf8
  Amount       Float64 (nullable)
  OrderDate    Date32
```
## Architecture

```
src/
├── lib.rs — public API, re-exports
├── error.rs — error types (QvdError, QvdResult)
├── header.rs — XML header parser/writer (custom, zero-dep)
├── value.rs — QVD data types (QvdSymbol, QvdValue)
├── symbol.rs — symbol table binary reader/writer
├── index.rs — index table bit-stuffing reader/writer
├── reader.rs — high-level QVD reader
├── writer.rs — high-level QVD writer + QvdTableBuilder
├── exists.rs — ExistsIndex with HashSet + filter functions
├── streaming.rs — streaming chunk-based QVD reader with filtered reads
├── parquet.rs — Parquet/Arrow ↔ QVD conversion (optional)
├── datafusion.rs — DataFusion TableProvider for SQL on QVD (optional)
├── python.rs — PyO3 bindings with PyArrow/pandas/Polars (optional)
└── bin/qvd.rs — CLI binary (optional)
```
## Feature Flags

| Feature | Dependencies | Description |
|---|---|---|
| (default) | none | Core QVD read/write |
| `parquet_support` | arrow, parquet, chrono | Parquet/Arrow conversion |
| `datafusion_support` | + datafusion, tokio | SQL queries on QVD via DataFusion |
| `cli` | + clap | CLI binary |
| `python` | + pyo3, arrow/pyarrow | Python bindings with PyArrow/pandas/Polars |
## Author
Stanislav Chernov (@bintocher)
## License
MIT — see LICENSE