# ClickHouse for SeaQL
A ClickHouse client that integrates with the SeaQL ecosystem.
Query results are decoded into `sea_query::Value`, giving you first-class support for
`DateTime`, `Decimal`, `BigDecimal`, `Json` arrays, and more - without defining any schema structs.

Apache Arrow is also supported: stream query results directly into `RecordBatch`es, or insert
Arrow batches back into ClickHouse.
This is a soft fork of clickhouse.rs: it is 100% compatible with all upstream features and is continually rebased on upstream.
## Features

- **Dynamic rows** - fetch results as `Vec<DataRow>` with no compile-time schema
- **SeaQuery values** - every column maps to a typed `sea_query::Value` variant
- **Rich types** - `Date`, `Time`, `DateTime`, `Decimal`, `BigDecimal`, `Json`
- **Column-oriented batches** - `next_batch(n)` streams rows in column-major `RowBatch`es
- **Apache Arrow** - stream query results as `RecordBatch`es; insert Arrow batches directly
## Setup

```toml
[dependencies]

# Dynamic DataRow + SeaQuery value support
clickhouse = { version = "0.14", features = ["sea-ql"] }

# Apache Arrow support (includes sea-ql)
clickhouse = { version = "0.14", features = ["arrow"] }
```
## Dynamic DataRow

`fetch_rows()` decodes every column into the matching `sea_query::Value` variant:
integers, floats, strings, booleans, dates, decimals, arrays - all without a schema struct.
```rust
use clickhouse::{Client, DataRow};
use sea_query::Value;

let mut cursor = client
    .query("SELECT * FROM example") // table name is illustrative
    .fetch_rows()?;

let row: DataRow = cursor.next().await?.unwrap();

// every column decodes into the matching sea_query::Value variant, e.g.:
//   Int32                               -> Value::Int
//   Float64                             -> Value::Double
//   String                              -> Value::String
//   Bool                                -> Value::Bool
//   Decimal32 / Decimal64 / Decimal128  -> Value::Decimal
//   Decimal128 / Decimal256             -> Value::BigDecimal
//   Nullable(T) null                    -> typed None
//   Array(String)                      -> Json array, e.g. json!(["a", "b"])
```
## No-fuzz Dynamic Type

No need to guess the resulting type of a SQL expression; it can be converted to the desired type at runtime:

```rust
let mut cursor = client
    .query("SELECT a + b FROM t") // what's the output type?
    .fetch_rows()?;

let row = cursor.next().await?.expect("one row");

// UInt32 + Float32 -> Float64: the result arrives as Value::Double and can be
// read back as the designated type, by column index, or by column name.
```
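The idea of a runtime-converted dynamic value can be sketched with plain std types; the enum and method names below are illustrative, not this crate's API:

```rust
// A std-only sketch of runtime conversion: a dynamic value that is read back
// as whichever numeric type the caller designates.
#[derive(Debug, Clone, PartialEq)]
enum Dyn {
    U32(u32),
    F64(f64),
}

impl Dyn {
    // convert to the caller's designated type at runtime
    fn as_f64(&self) -> f64 {
        match self {
            Dyn::U32(v) => *v as f64,
            Dyn::F64(v) => *v,
        }
    }
}

fn main() {
    // the server already widened UInt32 + Float32 to Float64;
    // the client just converts the dynamic value on demand
    let v = Dyn::F64(3.5);
    assert_eq!(v.as_f64(), 3.5);
    assert_eq!(Dyn::U32(7).as_f64(), 7.0);
    println!("{}", v.as_f64());
}
```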
## Insert DataRows

Build `DataRow`s with a shared column list and insert them in a single streaming request.

```rust
use std::sync::Arc;
use clickhouse::DataRow;
use sea_query::Value;

// all rows share one column list (names are illustrative)
let columns = Arc::new(vec!["id".to_string(), "value".to_string()]);
let rows: Vec<DataRow> = (0..10)
    .map(|i| /* build a DataRow from `columns` and this row's values */)
    .collect();

// schema derived from first row
let mut insert = client.insert_data_row(/* table name, first row */).await?;
for row in &rows {
    insert.write(row).await?;
}
insert.end().await?;
```
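The shared column list can be sketched with std types alone: every row clones an `Arc` handle instead of its own copy of the column names, so there is a single allocation no matter how many rows are built (column names below are made up):

```rust
use std::sync::Arc;

fn main() {
    // one shared column list; each record holds an Arc handle plus its values
    let columns: Arc<Vec<String>> = Arc::new(vec!["id".into(), "name".into()]);
    let rows: Vec<(Arc<Vec<String>>, Vec<String>)> = (0..3)
        .map(|i| (columns.clone(), vec![i.to_string(), format!("name-{i}")]))
        .collect();
    // 1 original handle + 3 rows = 4 strong references, one allocation
    assert_eq!(Arc::strong_count(&columns), 4);
    println!("{} rows share {} columns", rows.len(), columns.len());
}
```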
## Column-oriented batches

`next_batch(max_rows)` accumulates rows column-by-column into a `RowBatch`:
one `Vec<Value>` per column, making it a natural bridge toward Apache Arrow.

```rust
let mut cursor = client
    .query("SELECT * FROM example")
    .fetch_rows()?;

while let Some(batch) = cursor.next_batch(1000).await? {
    // one Vec<Value> per column, up to 1000 rows per batch
}
```
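The column-major layout a `RowBatch` holds can be illustrated with a small std-only transpose, turning row-major records into one `Vec` per column:

```rust
// Accumulate row-major records into column-major storage:
// one Vec per column instead of one Vec per row.
fn to_columns(rows: &[Vec<i64>]) -> Vec<Vec<i64>> {
    let width = rows.first().map_or(0, |r| r.len());
    let mut cols = vec![Vec::with_capacity(rows.len()); width];
    for row in rows {
        for (c, v) in row.iter().enumerate() {
            cols[c].push(*v);
        }
    }
    cols
}

fn main() {
    let rows = vec![vec![1, 10], vec![2, 20], vec![3, 30]];
    let cols = to_columns(&rows);
    // column 0 and column 1, each contiguous - the shape Arrow expects
    assert_eq!(cols, vec![vec![1, 2, 3], vec![10, 20, 30]]);
    println!("{cols:?}");
}
```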
## Apache Arrow

`next_arrow_batch(chunk_size)` streams ClickHouse results as `arrow::RecordBatch`es -
ready for DataFusion, Polars, Parquet export, or any Arrow consumer.

```rust
use arrow::util::pretty;

let mut cursor = client.query("SELECT * FROM example").fetch_rows()?;

while let Some(batch) = cursor.next_arrow_batch(1000).await? {
    pretty::print_batches(&[batch])?;
}
```
```console
$ cargo run --example arrow_sensor_data --features=arrow,chrono,rust_decimal
+----+-------------------------+-----------+----------------------+---------+
| id | recorded_at             | sensor_id | temperature          | voltage |
+----+-------------------------+-----------+----------------------+---------+
| 1  | 2026-01-01T13:35:36.736 | 106       | 36.345616831016436   | 3.2736  |
| 2  | 2026-01-01T10:07:38.458 | 108       | 10.122001773336567   | 3.3458  |
| 3  | 2026-01-01T01:15:18.518 | 108       | 35.21406789966149    | 3.1518  |
| 4  | 2026-01-01T05:36:57.017 | 107       | 22.92828141235666    | 3.2016  |
| 5  | 2026-01-01T13:17:36.056 | 106       | -2.082591477369223   | 3.0056  |
| 6  | 2026-01-01T02:08:08.688 | 108       | 18.693990809409808   | 3.1688  |
| 7  | 2026-01-01T23:09:28.768 | 108       | 30.205472457922546   | 3.0768  |
| 8  | 2026-01-01T15:14:07.247 | 107       | 1.8525432800697583   | 3.0247  |
| 9  | 2026-01-01T05:15:53.753 | 103       | 21.397067736011795   | 3.0753  |
| 10 | 2026-01-01T00:02:49.769 | 109       | 17.550203554882934   | 3.0769  |
+----+-------------------------+-----------+----------------------+---------+
```
## SeaORM -> ClickHouse

Build an Arrow `RecordBatch` using SeaORM and insert it directly into ClickHouse.
Full working example: sea-orm-arrow-example.

```rust
use chrono::NaiveDate;
use sea_orm::entity::prelude::*;

// timestamps in the sample output start at 2026-01-01
let base_ts = NaiveDate::from_ymd_opt(2026, 1, 1)
    .unwrap()
    .and_hms_milli_opt(0, 0, 0, 0)
    .unwrap();

let models: Vec<Model> = (0..10)
    .map(|i| /* build each Model from base_ts and i */)
    .collect();

let schema = arrow_schema(/* Arrow schema for the entity */);
let batch = to_arrow(/* models, schema */)?;

let mut insert = client.insert_arrow(/* table name, schema */).await?;
insert.write_batch(batch).await?;
insert.end().await?;
```
## Arrow Schema to ClickHouse Table

`ClickHouseSchema::from_arrow` derives a full CREATE TABLE DDL from an Arrow schema,
so you can go from query result to table definition without writing any DDL by hand.

```rust
use clickhouse::ClickHouseSchema;
use arrow::record_batch::RecordBatch;

// 1. Stream any query as Arrow batches
let mut cursor = client.query("SELECT * FROM example").fetch_rows()?;
let mut batches: Vec<RecordBatch> = Vec::new();
while let Some(batch) = cursor.next_arrow_batch(1000).await? {
    batches.push(batch);
}

// 2. Derive the CREATE TABLE DDL from the Arrow schema
//    (table / column names below are illustrative)
let mut schema = ClickHouseSchema::from_arrow(batches[0].schema());
schema
    .table_name("example_copy")
    .engine("MergeTree")
    .primary_key("id");
schema.find_column_mut("name").set_low_cardinality();
let ddl = schema.to_string(); // the full CREATE TABLE statement

client.query(&ddl).execute().await?;

// 3. Insert the batches
let mut insert = client.insert_arrow("example_copy", batches[0].schema()).await?;
for batch in &batches {
    insert.write_batch(batch.clone()).await?;
}
insert.end().await?;
```
The same workflow works with `DataRow` via `ClickHouseSchema::from_data_row` too.
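As a std-only illustration of what schema-to-DDL derivation produces, a list of (column, type) pairs can be rendered into a CREATE TABLE statement; the table, column names, and helper function here are made up, while the real `ClickHouseSchema` handles the full Arrow type system:

```rust
// Toy DDL derivation: render (column, ClickHouse type) pairs into DDL,
// using the first column as the primary key.
fn derive_ddl(table: &str, cols: &[(&str, &str)]) -> String {
    let body: Vec<String> = cols.iter().map(|(n, t)| format!("    `{n}` {t}")).collect();
    format!(
        "CREATE TABLE {table} (\n{}\n) ENGINE = MergeTree PRIMARY KEY (`{}`)",
        body.join(",\n"),
        cols[0].0
    )
}

fn main() {
    let ddl = derive_ddl(
        "example",
        &[("id", "UInt64"), ("name", "LowCardinality(String)"), ("ts", "DateTime64(3)")],
    );
    assert!(ddl.contains("`ts` DateTime64(3)"));
    println!("{ddl}");
}
```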
## Type mapping

| ClickHouse type | `sea_query::Value` variant |
|---|---|
| `Bool` | `Value::Bool` |
| `Int8` / `Int16` / `Int32` / `Int64` | `Value::TinyInt` / `SmallInt` / `Int` / `BigInt` |
| `UInt8` / `UInt16` / `UInt32` / `UInt64` | `Value::TinyUnsigned` / `SmallUnsigned` / `Unsigned` / `BigUnsigned` |
| `Int128` / `Int256` / `UInt128` / `UInt256` | `Value::BigDecimal` (scale 0) |
| `Float32` | `Value::Float` |
| `Float64` | `Value::Double` |
| `String` | `Value::String` |
| `FixedString(n)` | `Value::Bytes` |
| `UUID` | `Value::Uuid` |
| `Date` / `Date32` | `Value::ChronoDate` |
| `DateTime` / `DateTime64` | `Value::ChronoDateTime` |
| `Time` / `Time64` | `Value::ChronoTime` |
| `Decimal32` / `Decimal64` | `Value::Decimal` (`rust_decimal`) |
| `Decimal128` | `Value::Decimal`, or `Value::BigDecimal` if scale > 28 |
| `Decimal256` | `Value::BigDecimal` (`bigdecimal`) |
| `IPv4` / `IPv6` | `Value::String` |
| `Enum8` / `Enum16` | `Value::String` |
| `Array(T)` / `Tuple(…)` / `Map(K,V)` | `Value::Json` |
| `Nullable(T)` null | typed `None` variant |
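The `Decimal128` fallback row follows from integer widths: `rust_decimal` stores a 96-bit mantissa and caps the scale at 28, while a ClickHouse `Decimal128` mantissa is a full 128-bit integer. A quick std-only check of the digit budget:

```rust
fn main() {
    // rust_decimal's 96-bit mantissa holds every 28-digit number,
    // but not every 29-digit one...
    let max_96: u128 = (1u128 << 96) - 1;
    assert!(max_96 >= 10u128.pow(28) - 1);
    assert!(max_96 < 10u128.pow(29) - 1);
    // ...while a 128-bit mantissa reaches 38 full digits, so
    // higher-precision values need an arbitrary-precision BigDecimal.
    assert!(u128::MAX >= 10u128.pow(38) - 1);
    println!("96-bit mantissa max: {max_96}");
}
```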
## Examples

| Example | Feature | Description |
|---|---|---|
| `data_rows` | `sea-ql` | Fetch rows; assert type mappings for all major types |
| `data_row_insert` | `sea-ql` | Insert, mutate in place, re-insert (ReplacingMergeTree pattern) |
| `data_row_schema` | `sea-ql` | Derive CREATE TABLE DDL from a `DataRow` |
| `row_batch` | `sea-ql` | Column-oriented batch streaming |
| `arrow_batch` | `arrow` | Stream query results as `RecordBatch`es |
| `arrow_batch_schema` | `arrow` | Derive CREATE TABLE DDL from an Arrow `RecordBatch` |
| `arrow_insert` | `arrow` | Arrow `RecordBatch` insert round-trip with `Decimal128` / `Decimal256` |
| `arrow_sensor_data` | `arrow` | Practical example of sensor data processing via Arrow |
| `sea-orm-arrow-example` | `arrow` | SeaORM entity -> Arrow `RecordBatch` -> ClickHouse insert |
## Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
We invite you to participate, contribute and together help build Rust's future.
## Mascot
A friend of Ferris, Terres the hermit crab is the official mascot of SeaORM. His hobby is collecting shells.