//! # qvd — High-performance Qlik QVD file library
//!
//! Read, write, and convert [Qlik QVD](https://help.qlik.com/en-US/sense/February2024/Subsystems/Hub/Content/Sense_Hub/Scripting/QVD-files-scripting.htm)
//! files with zero-copy roundtrip fidelity. First and only QVD crate on crates.io.
//!
//! ## Features
//!
//! - **Read/Write QVD** — byte-identical roundtrip (MD5 match on 20 real files up to 2.8 GB)
//! - **Parquet ↔ QVD** — bidirectional conversion with compression (snappy, zstd, gzip, lz4).
//! Requires feature `parquet_support`.
//! - **Arrow RecordBatch** — convert QVD to/from Arrow for DataFusion, DuckDB, Polars integration.
//! Requires feature `parquet_support`.
//! - **DataFusion SQL** — register QVD as a table, query with SQL.
//! Requires feature `datafusion_support`.
//! - **Streaming reader** — read QVD in chunks without loading entire file into memory
//! - **EXISTS() index** — O(1) hash lookup, like Qlik's `EXISTS()` function
//! - **Concatenate** — merge QVD tables with Qlik CONCATENATE semantics (schema union, NULL fill)
//! - **Concatenate with PK** — upsert/dedup merge with primary key: Replace, Skip, or Error on conflict.
//! First QVD library in any language with PK-based merge
//! - **write_arrow** — write PyArrow RecordBatch/Table directly to QVD (no Parquet roundtrip)
//! - **Python bindings** — PyArrow, pandas, Polars via zero-copy Arrow bridge
//! - **Zero dependencies** for core read/write (Parquet/Arrow/DataFusion are optional)
//!
//! ## Quick Start
//!
//! ### Read and write QVD files
//!
//! ```no_run
//! use qvd::{read_qvd_file, write_qvd_file};
//!
//! let table = read_qvd_file("data.qvd").unwrap();
//! println!("Rows: {}, Cols: {}", table.num_rows(), table.num_cols());
//! println!("Columns: {:?}", table.column_names());
//!
//! // Byte-identical roundtrip
//! write_qvd_file(&table, "output.qvd").unwrap();
//! ```
//!
//! ### EXISTS() — O(1) lookup
//!
//! ```no_run
//! use qvd::{read_qvd_file, ExistsIndex, filter_rows_by_exists_fast};
//!
//! let clients = read_qvd_file("clients.qvd").unwrap();
//! let index = ExistsIndex::from_column(&clients, "ClientID").unwrap();
//!
//! assert!(index.exists("12345"));
//!
//! let facts = read_qvd_file("facts.qvd").unwrap();
//! // col_idx = column index for "ClientID" in facts table
//! let col_idx = 0;
//! let filtered = filter_rows_by_exists_fast(&facts, col_idx, &index);
//! ```
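//!
//! If the position of the key column isn't known up front, it can be resolved by name
//! (a minimal sketch, assuming `column_names()` lists columns in table order, as in the
//! first example above):
//!
//! ```ignore
//! // Look up "ClientID" among the fact table's columns instead of hardcoding 0.
//! let col_idx = facts
//!     .column_names()
//!     .iter()
//!     .position(|name| name == "ClientID")
//!     .expect("facts.qvd has a ClientID column");
//! let filtered = filter_rows_by_exists_fast(&facts, col_idx, &index);
//! ```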
//!
//! ### Streaming reader
//!
//! ```no_run
//! use qvd::open_qvd_stream;
//!
//! let mut reader = open_qvd_stream("huge_file.qvd").unwrap();
//! while let Some(chunk) = reader.next_chunk(65536).unwrap() {
//! println!("Chunk: {} rows", chunk.num_rows);
//! }
//! ```
//!
//! ### Parquet ↔ QVD (feature `parquet_support`)
//!
//! ```ignore
//! use qvd::{convert_parquet_to_qvd, convert_qvd_to_parquet, ParquetCompression};
//!
//! convert_parquet_to_qvd("input.parquet", "output.qvd").unwrap();
//! convert_qvd_to_parquet("input.qvd", "output.parquet", ParquetCompression::Zstd).unwrap();
//! ```
//!
//! ### Arrow RecordBatch (feature `parquet_support`)
//!
//! ```ignore
//! use qvd::{read_qvd_file, qvd_to_record_batch, record_batch_to_qvd};
//!
//! let table = read_qvd_file("data.qvd").unwrap();
//! let batch = qvd_to_record_batch(&table).unwrap();
//! // Use with DataFusion, DuckDB, Polars...
//! ```
//!
//! ### Concatenate — merge QVD tables
//!
//! ```no_run
//! use qvd::{read_qvd_file, concatenate, write_qvd_file};
//!
//! let a = read_qvd_file("data_jan.qvd").unwrap();
//! let b = read_qvd_file("data_feb.qvd").unwrap();
//! let merged = concatenate(&a, &b).unwrap();
//! write_qvd_file(&merged, "data_all.qvd").unwrap();
//! ```
//!
//! ### Concatenate with PK — upsert/dedup merge
//!
//! ```no_run
//! use qvd::{read_qvd_file, concatenate_with_pk, OnConflict, write_qvd_file};
//!
//! let existing = read_qvd_file("master.qvd").unwrap();
//! let updates = read_qvd_file("delta.qvd").unwrap();
//! // New rows win on PK collision (upsert)
//! let merged = concatenate_with_pk(&existing, &updates, &["ID"], OnConflict::Replace).unwrap();
//! write_qvd_file(&merged, "master_updated.qvd").unwrap();
//! ```
//!
//! ### DataFusion SQL (feature `datafusion_support`)
//!
//! ```ignore
//! use datafusion::prelude::*;
//! use qvd::register_qvd;
//!
//! #[tokio::main]
//! async fn main() -> Result<(), Box<dyn std::error::Error>> {
//!     let ctx = SessionContext::new();
//!     register_qvd(&ctx, "sales", "sales.qvd")?;
//!     let df = ctx.sql("SELECT Region, SUM(Amount) FROM sales GROUP BY Region").await?;
//!     df.show().await?;
//!     Ok(())
//! }
//! ```
//!
//! ## Feature Flags
//!
//! | Feature | Dependencies | Description |
//! |---------|-------------|-------------|
//! | *(default)* | none | Core QVD read/write, streaming, EXISTS |
//! | `parquet_support` | arrow, parquet, chrono | Parquet/Arrow ↔ QVD conversion |
//! | `datafusion_support` | + datafusion, tokio | SQL queries on QVD via DataFusion |
//! | `cli` | + clap | CLI binary `qvd-cli` |
//! | `python` | + pyo3, arrow/pyarrow | Python bindings with PyArrow/pandas/Polars |
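//!
//! Optional features are enabled in `Cargo.toml`. A minimal sketch; the version
//! requirement below is a placeholder, pin the release you actually use:
//!
//! ```toml
//! [dependencies]
//! qvd = { version = "*", features = ["parquet_support"] }
//! ```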
/// Error types for QVD operations.
/// QVD XML header parser and writer.
/// Binary symbol table reader and writer.
/// QVD value types: [`QvdSymbol`] and [`QvdValue`].
/// Bit-stuffed index table reader and writer.
/// High-level QVD file reader. See [`read_qvd_file`] and [`QvdTable`].
/// High-level QVD file writer and [`QvdTableBuilder`] for creating QVD files from scratch.
/// O(1) EXISTS() index and fast row filtering. See [`ExistsIndex`].
/// Streaming chunk-based QVD reader for memory-efficient processing of large files.
/// See [`QvdStreamReader`], [`open_qvd_stream`], and [`QvdStreamReader::read_filtered`]
/// for EXISTS()-style filtered reads that are 2.5x faster than Qlik Sense.
/// QVD table concatenation and merge operations.
/// See [`concatenate`] for pure append and [`concatenate_with_pk`] for PK-based upsert.
/// Parquet/Arrow ↔ QVD conversion (requires feature `parquet_support`).
/// DataFusion integration — SQL queries on QVD files (requires feature `datafusion_support`).
pub use ;
pub use QvdTableHeader;
pub use ;
pub use ;
pub use ;
pub use ;
pub use ;
pub use ;
pub use ;
pub use crate;