# tsdb_timon 1.1.3

Efficient local storage and Amazon S3-compatible data synchronization for time-series data, leveraging Parquet for storage and DataFusion for querying, all wrapped in a simple and intuitive API.
# Timon File & S3-Compatible Storage API

This API provides a set of functions for managing databases and tables in both local file storage and S3-compatible storage. It supports creating databases and tables, inserting data, querying using SQL, and more.

## Table of Contents

### Mobile-Based APIs
1. [File Storage Functions](#file-storage-functions)
2. [S3-Compatible Storage Functions](#s3-compatible-storage-functions)
3. [Function Descriptions](#function-descriptions)

### Utility CLI
1. [Get The Latest Utility Build](#get-the-latest-utility-build)
2. [How To Run The Utility](#how-to-run-the-utility)
---

## File Storage Functions

These functions manage databases and tables stored locally on the file system. Data can be inserted, queried, and organized using SQL-like operations.

```kotlin
// Initialize Timon with a local storage path
external fun initTimon(storagePath: String, bucketInterval: Number, userName: String): String

// Create a new database
external fun createDatabase(dbName: String): String

// Create a new table within a specific database
external fun createTable(dbName: String, tableName: String, schema: String): String

// List all available databases
external fun listDatabases(): String

// List all tables within a specific database
external fun listTables(dbName: String): String

// Delete a specific database
external fun deleteDatabase(dbName: String): String

// Delete a specific table within a database
external fun deleteTable(dbName: String, tableName: String): String

// Insert data into a table in JSON format
external fun insert(dbName: String, tableName: String, jsonData: String): String

// Query a database with SQL query
// userName: Optional user name for group user queries (null for default user)
// limitPartitions: Optional limit on the number of partitions to scan (null for all partitions)
external fun query(dbName: String, sqlQuery: String, userName: String?, limitPartitions: Number?): String

// Query a database and return DataFrame (for advanced use cases)
// userName: Optional user name for group user queries (null for default user)
// limitPartitions: Optional limit on the number of partitions to scan (null for all partitions)
external fun queryDf(dbName: String, sqlQuery: String, userName: String?, limitPartitions: Number?): DataFrame
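
// Pre-register tables with DataFusion so the first query avoids registration latency.
// NOTE: this signature is inferred from the TypeScript usage example below; treat it as an assumption.
external fun preloadTables(dbName: String, tableNames: Array<String>, userName: String?): String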
```
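Putting these together, a typical lifecycle is init, create, insert, then query. The sketch below is TypeScript against a hypothetical `TimonApi` interface that mirrors the Kotlin signatures above; the schema and row JSON shapes are illustrative assumptions, not the library's documented formats.

```typescript
// TypeScript view of the documented Kotlin bindings (names and signatures from above).
interface TimonApi {
  initTimon(storagePath: string, bucketInterval: number, userName: string): string;
  createDatabase(dbName: string): string;
  createTable(dbName: string, tableName: string, schema: string): string;
  insert(dbName: string, tableName: string, jsonData: string): string;
  query(dbName: string, sqlQuery: string, userName: string | null, limitPartitions: number | null): string;
}

// Typical lifecycle: init -> create database/table -> insert rows -> query.
// ASSUMPTION: the schema and row JSON shapes below are illustrative, not the library's spec.
function setupAndQuery(timon: TimonApi): string {
  timon.initTimon("/data/timon", 15, "alice");
  timon.createDatabase("iot");
  timon.createTable("iot", "readings", JSON.stringify({
    timestamp: "TIMESTAMP",
    device_id: "TEXT",
    temperature: "DOUBLE",
  }));
  timon.insert("iot", "readings", JSON.stringify([
    { timestamp: "2024-01-01T00:00:00Z", device_id: "sensor-1", temperature: 21.5 },
  ]));
  // Query with the default user and no partition limit.
  return timon.query("iot", "SELECT device_id, temperature FROM readings", null, null);
}
```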

### Usage Example: Pre-warming Tables

The `preloadTables` function is designed to be called at app startup to register frequently used tables with DataFusion, eliminating the latency of on-demand registration during the first query.

**React Native/TypeScript:**
```typescript
import { preloadTables } from './test-rust-module';

// At app initialization (e.g., in useEffect or App startup)
useEffect(() => {
  preloadTables("mydb", ["users", "posts", "comments"], "username");
}, []);

// Subsequent queries against these tables skip the on-demand registration step
const result = await query("mydb", "SELECT * FROM users", "username", null);
```

**Response Format:**
```json
{
  "status": 200,
  "message": "Successfully preloaded 3 table(s) in database 'mydb'",
  "json_value": ["users", "posts", "comments"]
}
```

**Benefits:**
- ✅ Eliminates first-query latency by pre-registering tables
- ✅ Parallel table loading (up to 10 concurrent registrations)
- ✅ Automatically skips already-registered tables
- ✅ Gracefully handles missing tables (logs warning, continues loading others)
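
Responses like the one above arrive as JSON strings, so callers typically parse and validate them before use. A minimal helper, assuming the `status`/`message`/`json_value` fields shown in the example and treating any status outside 200-299 as an error (that error-status convention is an assumption):

```typescript
// Parsed shape of the response envelope shown above.
interface TimonResponse {
  status: number;
  message: string;
  json_value: unknown;
}

// Parse a Timon response string; throw on non-success statuses.
// ASSUMPTION: statuses outside 200-299 indicate errors.
function parseTimonResponse(raw: string): TimonResponse {
  const res = JSON.parse(raw) as TimonResponse;
  if (res.status < 200 || res.status >= 300) {
    throw new Error(`Timon call failed (${res.status}): ${res.message}`);
  }
  return res;
}
```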

## S3-Compatible Storage Functions

These functions manage data stored in an S3-compatible bucket, allowing for querying and saving daily data as Parquet files.

```kotlin
// Initialize S3-compatible storage with endpoint and credentials
external fun initBucket(bucket_endpoint: String, bucket_name: String, access_key_id: String, secret_access_key: String, bucket_region: String): String

// Sink daily data to Parquet format in the bucket
external fun cloudSinkParquet(dbName: String, tableName: String): String

// Sync data: upload local data to cloud and fetch updates (bidirectional sync)
// userName: Optional user name for group user sync (null for default user)
external fun cloudSyncParquet(dbName: String, tableName: String, dateRange: Map<String, String>, userName: String?): String

// Fetch data from a given user and save it locally
external fun cloudFetchParquet(userName: String, dbName: String, tableName: String, dateRange: Map<String, String>): String

// Batch fetch data for multiple users, databases, and tables in parallel
// Returns summary with success/error counts and duration
external fun cloudFetchParquetBatch(usernames: Array<String>, dbNames: Array<String>, tableNames: Array<String>, dateRange: Map<String, String>): String
```
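As a sketch of a daily sync pass against these bindings: the `TimonCloudApi` interface below mirrors the Kotlin signatures above, the endpoint, bucket, and credential values are placeholders, and the `start`/`end` keys in `dateRange` are an assumption, since the document does not specify the map's keys.

```typescript
// TypeScript view of the documented S3-compatible bindings.
interface TimonCloudApi {
  initBucket(bucketEndpoint: string, bucketName: string, accessKeyId: string, secretAccessKey: string, bucketRegion: string): string;
  cloudSyncParquet(dbName: string, tableName: string, dateRange: Record<string, string>, userName: string | null): string;
}

// One bidirectional sync pass for a table over a single day.
// ASSUMPTION: dateRange uses "start"/"end" keys; the document leaves them unspecified.
function dailySync(cloud: TimonCloudApi, day: string): string {
  // Placeholder endpoint and credentials; substitute real values in practice.
  cloud.initBucket("https://s3.example.com", "timon-data", "ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1");
  return cloud.cloudSyncParquet("iot", "readings", { start: day, end: day }, null);
}
```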

## Get The Latest Utility Build

### Build the Binary
The sections below show several ways to build the utility. The CLI is gated behind the `dev_cli` feature, so include `--features dev_cli` in each build command.

#### **Cross-Compile the Binary**
Rust provides tools to cross-compile your code for different platforms. This involves building the binary for a platform different from your current one.

#### Example for Windows:
On Linux or macOS, you can compile for Windows (linking also requires the MinGW-w64 toolchain, e.g. `x86_64-w64-mingw32-gcc`):
```bash
rustup target add x86_64-pc-windows-gnu
cargo build --features dev_cli --release --target x86_64-pc-windows-gnu
```

#### Example for macOS:
On Linux, you can compile for macOS, but adding the target alone is not enough: linking requires a macOS cross-toolchain with the Apple SDK, such as [osxcross](https://github.com/tpoechtrager/osxcross):
```bash
rustup target add x86_64-apple-darwin
cargo build --features dev_cli --release --target x86_64-apple-darwin
```

#### **Build Natively on Each Platform**
If cross-compilation is not feasible, you can build the binary on each target platform natively. This ensures compatibility.

#### On macOS:
```bash
cargo build --features dev_cli --release
```

#### On Windows:
```powershell
cargo build --features dev_cli --release
```

#### **Use Cross (Simplified Cross-Compiling)**
The [`cross`](https://github.com/cross-rs/cross) tool simplifies cross-compiling by providing pre-configured Docker containers for various targets. It automatically handles dependencies and toolchains.

#### Install `cross`:
```bash
cargo install cross
```

```bash
cross build --features dev_cli --release --target x86_64-pc-windows-gnu
```

Note that `cross` relies on pre-built Docker images and does not provide images for Apple targets, so macOS binaries must still be built natively or with a macOS cross-toolchain.


#### **Consider Using Rust's MUSL for Static Linking (Linux Only)**
If you target Linux systems where shared libraries may be unavailable, you can build a statically linked binary using MUSL:
```bash
rustup target add x86_64-unknown-linux-musl
cargo build --features dev_cli --release --target x86_64-unknown-linux-musl
```
This produces a binary that works on most Linux distributions.

### Summary
- Use **cross-compilation** to build for other platforms without a native environment.
- Use **`cross`** for easier cross-compilation.
- If you have access to all platforms, build natively on each.

---

## How To Run The Utility

### Available Commands

#### 1. Convert JSON to Parquet
To convert a JSON file to a Parquet file, use the following command:  
```bash
./tsdb_timon convert <json_file_path> <parquet_file_path>
```

**Example:**  
```bash
./tsdb_timon convert test_input.json test_output.parquet
```

#### 2. Execute an SQL Query on a Parquet File
Run an SQL query against a Parquet file:  
```bash
./tsdb_timon query <parquet_file_path> "<sql_query>"
```

**Example:**  
```bash
./tsdb_timon query test_output.parquet "SELECT * FROM timon"
```

### Notes:
- The table name is always set to **`timon`**. Ensure all SQL queries reference the `timon` table explicitly.
- Replace `<json_file_path>`, `<parquet_file_path>`, and `<sql_query>` with your respective input file paths and query.