# tstable

A command-line tool for creating, managing, and querying time-series tables—without writing any Rust code.
## What is this?
This CLI lets you work with time-series tables: a structured way to store and query time-indexed data (like stock prices, sensor readings, or log events) on top of Parquet files.
Instead of manually managing scattered Parquet files, you get:
- A table abstraction that tracks all your data segments
- SQL queries powered by Apache DataFusion
- Automatic schema tracking and overlap detection
- An interactive shell for exploratory analysis
## Installation

### From crates.io

### From a local clone

### Verify installation
## Quick start

Assuming the binary is installed as `tstable` (the paths and file names below are placeholders):

```bash
# 1. Create a table for hourly stock bars
tstable create --table ./bars --time-column timestamp --bucket 1h --entity symbol

# 2. Add some data (any Parquet file with a timestamp column)
tstable append --table ./bars --parquet ./prices.parquet

# 3. Query with SQL
tstable query --table ./bars --sql "SELECT * FROM bars LIMIT 5"
```
## Commands

### create — Create a new table
Creates an empty time-series table. The schema is automatically inferred when you append the first data segment.
| Flag | Required | Description |
|---|---|---|
| `--table` | ✓ | Path where the table will be created |
| `--time-column` | ✓ | Name of the timestamp column in your data |
| `--bucket` | ✓ | Time granularity for indexing (see below) |
| `--timezone` | | IANA timezone (e.g., `America/New_York`, `UTC`) |
| `--entity` | | Entity column(s) for partitioning; repeatable |

**Bucket values:** `1s`, `1m`, `5m`, `15m`, `1h`, `1d`
**What are entity columns?** If your data has multiple "things" (like stock symbols or sensor IDs), specify them with `--entity`. This helps with coverage tracking and future optimizations.
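To illustrate what a bucket granularity means, here is a stdlib-only Python sketch that truncates a timestamp down to the start of its bucket. This is not the crate's actual indexing code, just a picture of the idea; the bucket sizes come from the values listed above.

```python
from datetime import datetime, timezone

# Bucket sizes in seconds, matching the README's allowed values.
BUCKETS = {"1s": 1, "1m": 60, "5m": 300, "15m": 900, "1h": 3600, "1d": 86400}

def bucket_start(ts: datetime, bucket: str) -> datetime:
    """Truncate a UTC timestamp down to the start of its bucket."""
    size = BUCKETS[bucket]
    epoch = int(ts.timestamp())
    return datetime.fromtimestamp(epoch - epoch % size, tz=timezone.utc)

ts = datetime(2024, 1, 2, 14, 37, 5, tzinfo=timezone.utc)
print(bucket_start(ts, "1h"))  # 2024-01-02 14:00:00+00:00
print(bucket_start(ts, "1d"))  # 2024-01-02 00:00:00+00:00
```

Because the Unix epoch starts at midnight UTC, `1d` buckets align to UTC midnight in this sketch; the CLI's `--timezone` flag exists precisely because real tables may want a different alignment.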
### append — Add data to a table

Appends a Parquet file as a new segment. The file must contain the timestamp column you specified when you created the table.
| Flag | Required | Description |
|---|---|---|
| `--table` | ✓ | Path to an existing table |
| `--parquet` | ✓ | Path to the Parquet file to append |
| `--time-column` | | Override timestamp column (default: from table metadata) |
| `--timing` | | Print elapsed time |
**Notes:**

- The Parquet file is copied or moved into the table's `data/` directory
- Overlapping time ranges with existing segments will cause an error
- The schema must be compatible with existing data (if any)
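The "overlapping time ranges" rule is the standard closed-interval overlap test. A minimal sketch of the idea (not the crate's actual implementation):

```python
def overlaps(a_start: int, a_end: int, b_start: int, b_end: int) -> bool:
    """Two closed time ranges overlap iff each one starts before the other ends."""
    return a_start <= b_end and b_start <= a_end

# A new segment covering epoch-seconds 15..25 would be rejected if an
# existing segment already covers 10..20:
print(overlaps(15, 25, 10, 20))  # True  -> append fails
print(overlaps(21, 25, 10, 20))  # False -> append succeeds
```

With closed ranges, segments that merely touch at an endpoint still count as overlapping in this sketch; whether the real check treats endpoints inclusively is not specified here.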
### query — Run SQL queries
Execute SQL queries against your table using DataFusion.
| Flag | Required | Description |
|---|---|---|
| `--table` | ✓ | Path to the table |
| `--sql` | ✓ | SQL query to execute |
| `--max-rows` | | Max rows to display (default: 10; use 0 for unlimited) |
| `--format` | | Output format: `csv` (default) or `jsonl` |
| `--output` | | Write results to a file instead of stdout |
| `--explain` | | Show the query execution plan |
| `--timing` | | Print elapsed time |
| `--pager` | | Pipe output through `less -S` for horizontal scrolling |
**Table name in SQL:** The table is registered under its directory name. For `./data/my_table`, use `my_table` in your SQL.
**Examples** (assuming a table at `./data/my_table`):

```bash
# Show all data (no row limit)
tstable query --table ./data/my_table --sql "SELECT * FROM my_table" --max-rows 0

# Export to JSON Lines
tstable query --table ./data/my_table --sql "SELECT * FROM my_table" --format jsonl --output out.jsonl

# See the query plan
tstable query --table ./data/my_table --sql "SELECT * FROM my_table" --explain
```
### shell — Interactive mode
Opens an interactive shell that keeps the table loaded in memory. Great for exploratory analysis.
If you omit --table, the shell will prompt you for a path (and can create a new table interactively).
| Flag | Description |
|---|---|
| `--table` | Path to a table (optional—will prompt if omitted) |
| `--history` | Path to command history file |
**Shell commands:**

| Command | Description |
|---|---|
| `query <sql>` | Run a SQL query |
| `query --max-rows 100 <sql>` | Query with options |
| `explain <sql>` | Show query execution plan |
| `append <parquet_path>` | Append a new segment |
| `refresh` | Reload table state from disk |
| `\timing` | Toggle elapsed time display |
| `\pager` | Toggle pager output |
| `alias <name>` | Set a shorter table name for queries |
| `alias --clear` | Reset to default table name |
| `clear` | Clear screen |
| `help` | Show all commands |
| `exit` | Exit the shell |
**Query flags in shell:**

```
query [--max-rows N] [--format csv|jsonl] [--output PATH] [--timing] [--explain] [--] <sql>
```

Use `--` before your SQL if it starts with `--` (to avoid flag-parsing issues).
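The `--` terminator follows the usual "end of options" convention. A stdlib Python sketch of the idea (this is not the shell's actual parser):

```python
def split_on_terminator(tokens: list[str]) -> tuple[list[str], str]:
    """A literal '--' token ends option parsing; the rest is the SQL text."""
    i = tokens.index("--")  # this sketch assumes the terminator is present
    return tokens[:i], " ".join(tokens[i + 1:])

# Without the terminator, a SQL query starting with a '--' comment would be
# mistaken for a flag; with it, everything after '--' passes through verbatim.
opts, sql = split_on_terminator(
    ["--max-rows", "100", "--", "-- leading comment", "SELECT 1"]
)
print(opts)  # ['--max-rows', '100']
print(sql)   # -- leading comment SELECT 1
```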
## Example: Stock market data

Here's a complete workflow for managing daily stock bars (paths and file names are illustrative):

```bash
# Create a table for daily bars, partitioned by symbol
tstable create --table ./daily_bars --time-column date --bucket 1d --entity symbol

# Append historical data
tstable append --table ./daily_bars --parquet ./2024_bars.parquet

# Query: Find the highest closing prices
tstable query --table ./daily_bars --sql "SELECT symbol, MAX(close) AS max_close FROM daily_bars GROUP BY symbol ORDER BY max_close DESC"

# Interactive exploration
tstable shell --table ./daily_bars
```
## Output formats

### CSV (default)

```csv
symbol,date,open,high,low,close,volume
AAPL,2024-01-02,185.50,186.20,184.80,185.90,50000000
AAPL,2024-01-03,186.00,187.10,185.50,186.80,48000000
```
### JSON Lines (`--format jsonl`)
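JSON Lines emits one JSON object per result row. As an illustration, here is a stdlib-only Python sketch that renders the CSV sample above as JSON Lines. Note that Python's `csv` module keeps every field as a string; the CLI's actual output may emit real numbers for numeric columns.

```python
import csv
import io
import json

# The CSV sample from the previous section.
sample = """\
symbol,date,open,high,low,close,volume
AAPL,2024-01-02,185.50,186.20,184.80,185.90,50000000
AAPL,2024-01-03,186.00,187.10,185.50,186.80,48000000
"""

# One JSON object per row, one row per line.
for row in csv.DictReader(io.StringIO(sample)):
    print(json.dumps(row))
```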
## Tips

- **Row limit:** By default, only 10 rows are displayed. Use `--max-rows 0` to see everything, or `--output file.csv` to save full results.
- **Table names with special characters:** If your table directory has spaces or hyphens, quote it in SQL: `SELECT * FROM "my-table"`.
- **Time filtering:** DataFusion supports standard SQL timestamp syntax: `WHERE timestamp > '2024-01-01T00:00:00Z'` or `WHERE timestamp BETWEEN '2024-01-01' AND '2024-06-30'`.
- **Refreshing in shell:** If another process appends data while you're in the shell, run `refresh` to see the new segments.
## Related

- `timeseries-table-core` — Core Rust library for building on this format
- `timeseries-table-datafusion` — DataFusion integration with time-based pruning