pub struct TimeSeriesTable { /* private fields */ }
High-level time-series table handle.
This is the main entry point for callers. It bundles:
- where the table is,
- how to talk to the transaction log,
- what the current committed state is,
- and the extracted time index spec.
Implementations§
impl TimeSeriesTable
pub async fn append_parquet_segment_with_id(
    &mut self,
    segment_id: SegmentId,
    relative_path: &str,
    time_column: &str,
) -> Result<u64, TableError>
Append a new Parquet segment with a caller-provided segment_id, registering it in the transaction log.
v0.1 behavior:
- Build SegmentMeta from the Parquet file (ts_min, ts_max, row_count).
- Derive the segment logical schema from the Parquet file.
- If the table has no logical_schema yet, adopt this segment schema as canonical and write an UpdateTableMeta + AddSegment commit.
- Otherwise, enforce “no schema evolution” via schema_helpers.
- Compute coverage for the segment and table; reject if coverage overlaps.
- Write the segment coverage sidecar before committing (safe to orphan on failure).
- Commit with OCC on the current version.
- Update in-memory TableState on success.
v0.1: duplicates (same segment_id/path) are allowed as long as their coverage does not overlap existing data; overlapping appends are rejected.
This wrapper reads the Parquet bytes from storage, then delegates to
append_parquet_segment_with_id_and_bytes for the core logic.
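A minimal usage sketch, in the style of the §Examples below. The table setup, the file path, and the SegmentId constructor are assumptions (this crate's SegmentId construction is not shown on this page):

```rust
// Sketch: append a segment whose id the caller controls.
// `SegmentId::new(...)` is a hypothetical constructor.
let segment_id = SegmentId::new("events-2024-01");
let version = table
    .append_parquet_segment_with_id(segment_id, "data/events-2024-01.parquet", "ts")
    .await?;
// On success, `version` is the log version of the commit that added the segment.
```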
pub async fn append_parquet_segment(
    &mut self,
    relative_path: &str,
    time_column: &str,
) -> Result<u64, TableError>
Append a Parquet segment using a deterministic, content-derived segment_id.
This wrapper reads the Parquet bytes from storage, derives segment_id
via segment_id_v1(relative_path, bytes), then delegates to
append_parquet_segment_with_id_and_bytes for the core logic.
Behavior (schema adoption/enforcement, coverage, OCC, state updates)
matches append_parquet_segment_with_id.
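A sketch of the content-addressed path, for comparison with the explicit-id variant (table setup and path are assumptions):

```rust
// Sketch: the segment_id is derived via segment_id_v1(relative_path, bytes),
// so appending the same file at the same path yields the same id.
let version = table
    .append_parquet_segment("data/events-2024-02.parquet", "ts")
    .await?;
```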
pub async fn append_parquet_segment_with_report(
    &mut self,
    relative_path: &str,
    time_column: &str,
) -> Result<(u64, AppendReport), TableError>
Append a Parquet segment and return a profiling report.
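A sketch of consuming the committed version together with the report; printing via Debug assumes AppendReport implements it, which this page does not confirm:

```rust
let (version, report) = table
    .append_parquet_segment_with_report("data/events-2024-03.parquet", "ts")
    .await?;
// The report's exact fields are not shown on this page; Debug output is a stand-in.
println!("committed v{version}, report: {report:?}");
```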
impl TimeSeriesTable
pub async fn load_table_coverage_snapshot_only(
    &self,
) -> Result<Coverage, TableError>
Load table coverage using the snapshot pointer only.
- If there is no snapshot pointer:
  - If the table has zero segments: returns empty coverage.
  - Otherwise: returns MissingTableCoveragePointer (strict mode).
- If the snapshot exists but is missing or corrupt: returns the snapshot read error.
impl TimeSeriesTable
pub async fn coverage_ratio_for_range(
    &self,
    start: DateTime<Utc>,
    end: DateTime<Utc>,
) -> Result<f64, TableError>
Coverage ratio in [0.0, 1.0] for the half-open time range [start, end).
Uses the table-level coverage snapshot (with read-only recovery from segments if needed).
§Errors
- TableError::InvalidRange if start >= end.
- TableError::BucketDomainOverflow if the derived bucket ids exceed u32::MAX.
§Examples
use chrono::{TimeZone, Utc};
let start = Utc.timestamp_opt(0, 0).single().unwrap();
let end = Utc.timestamp_opt(120, 0).single().unwrap();
let ratio = table.coverage_ratio_for_range(start, end).await?;
pub async fn max_gap_len_for_range(
    &self,
    start: DateTime<Utc>,
    end: DateTime<Utc>,
) -> Result<u64, TableError>
Maximum contiguous missing run length (in buckets) for the half-open time range [start, end).
§Errors
- TableError::InvalidRange if start >= end.
- TableError::BucketDomainOverflow if the derived bucket ids exceed u32::MAX.
§Examples
use chrono::{TimeZone, Utc};
let start = Utc.timestamp_opt(0, 0).single().unwrap();
let end = Utc.timestamp_opt(180, 0).single().unwrap();
let gap = table.max_gap_len_for_range(start, end).await?;
pub async fn last_fully_covered_window(
    &self,
    ts_end: DateTime<Utc>,
    window_len_buckets: u64,
) -> Result<Option<RangeInclusive<u32>>, TableError>
Return the last fully covered contiguous window (in bucket space) of length >= window_len_buckets, ending at or before ts_end.
Notes:
- This returns a bucket-id RangeInclusive in the v0.1 bucket domain (u32).
- Returns None when window_len_buckets == 0 or when no fully covered window is found.
§Errors
TableError::BucketDomainOverflow if ts_end maps beyond the u32 bucket domain.
§Examples
use chrono::{TimeZone, Utc};
let ts_end = Utc.timestamp_opt(360, 0).single().unwrap(); // end of bucket 5
let window = table.last_fully_covered_window(ts_end, 2).await?;
impl TimeSeriesTable
pub async fn scan_range(
    &self,
    ts_start: DateTime<Utc>,
    ts_end: DateTime<Utc>,
) -> Result<Pin<Box<dyn Stream<Item = Result<RecordBatch, TableError>> + Send>>, TableError>
Scan the time-series table for record batches overlapping [ts_start, ts_end),
returning a stream of filtered batches from the segments covering that range.
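A sketch of draining the returned stream with futures::StreamExt, in the style of the §Examples above (table setup assumed):

```rust
use chrono::{TimeZone, Utc};
use futures::StreamExt;

let start = Utc.timestamp_opt(0, 0).single().unwrap();
let end = Utc.timestamp_opt(3_600, 0).single().unwrap();
let mut batches = table.scan_range(start, end).await?;
while let Some(batch) = batches.next().await {
    // Each stream item is a Result<RecordBatch, TableError>.
    let batch = batch?;
    println!("rows: {}", batch.num_rows());
}
```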
impl TimeSeriesTable
pub fn from_state(
    location: TableLocation,
    state: TableState,
) -> Result<TimeSeriesTable, TableError>
Construct a table handle from an existing snapshot.
This does not replay the transaction log; callers must provide a state derived from the same location.
pub fn state(&self) -> &TableState
Return the current committed table state.
pub fn index_spec(&self) -> &TimeIndexSpec
Return the time index specification for this table.
pub fn location(&self) -> &TableLocation
Return the table location.
pub fn log_store(&self) -> &TransactionLogStore
Return the transaction log store handle.
pub async fn open(
    location: TableLocation,
) -> Result<TimeSeriesTable, TableError>
Open an existing time-series table at the given location.
Steps:
- Build a TransactionLogStore for the location.
- Rebuild TableState from the transaction log.
- Reject empty tables (version == 0).
- Require TableKind::TimeSeries and extract TimeIndexSpec.
pub async fn create(
    location: TableLocation,
    table_meta: TableMeta,
) -> Result<TimeSeriesTable, TableError>
Create a new time-series table at the given location.
This:
- Requires table_meta.kind to be TableKind::TimeSeries,
- Verifies that there are no existing commits (version must be 0),
- Writes an initial commit with UpdateTableMeta(table_meta.clone()),
- Returns a TimeSeriesTable with a fresh TableState.
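A create-then-reopen sketch. Construction of TableLocation and TableMeta is not shown on this page, and Clone on TableLocation is an assumption:

```rust
// Sketch: create a fresh table, then reopen it from the same location.
// `open` replays the transaction log, so it must observe the initial
// UpdateTableMeta commit that `create` wrote.
let table = TimeSeriesTable::create(location.clone(), table_meta).await?;
drop(table);
let reopened = TimeSeriesTable::open(location).await?;
```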
pub async fn current_version(&self) -> Result<u64, TableError>
Load the current log version from disk without mutating in-memory state.
pub async fn load_latest_state(&self) -> Result<TableState, TableError>
Rebuild and return the latest table state from the transaction log.
pub async fn refresh(&mut self) -> Result<bool, TableError>
Refresh in-memory state if the log has advanced; returns true if updated.
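A polling sketch built on refresh (the loop shape is illustrative, not prescribed by this crate):

```rust
// Sketch: pick up commits made by other writers without rebuilding
// state unconditionally. `refresh` returns true only if the log advanced.
if table.refresh().await? {
    let version = table.current_version().await?;
    println!("state advanced to version {version}");
}
```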
Auto Trait Implementations§
impl Freeze for TimeSeriesTable
impl RefUnwindSafe for TimeSeriesTable
impl Send for TimeSeriesTable
impl Sync for TimeSeriesTable
impl Unpin for TimeSeriesTable
impl UnsafeUnpin for TimeSeriesTable
impl UnwindSafe for TimeSeriesTable