/// Record batches + schema
/// Implementation of `BlockProcessor` for async events
/// Materializable view of async span events accessible through DataFusion
/// Write parquet files to the object store
/// BatchPartitionMerger merges multiple partitions by splitting the work into batches to reduce memory usage.
/// The batches are based on event times.
/// Materialize views on a schedule based on the time data was received from the ingestion service
/// Specification for a view partition backed by a set of telemetry blocks which can be processed out of order
/// Replicated view of the `blocks` table of the PostgreSQL metadata database.
/// Adds file content caching to object store reads
/// Catalog utilities for discovering and managing view schemas
/// Export mechanism that doubles as audit trail
/// Global LRU cache for parquet file contents
/// Fetch payload from the object store using SQL
/// Management of process-specific partitions built on demand
/// Bundles runtime resources for lakehouse query execution
/// Read access to the list of lakehouse partitions
/// Read access to view sets with their schema information
/// Implementation of `BlockProcessor` for log entries
/// SQL-based view for log statistics aggregated by process, minute, level, and target
/// Materializable view of log entries accessible through DataFusion
/// Exposes materialize_partitions as a table function
/// TableProvider implementation for the lakehouse
/// Merge consecutive parquet partitions into a single file
/// Global LRU cache for partition metadata
/// Compatibility layer for parsing legacy Arrow 56.0 metadata and upgrading to Arrow 57.0
/// Specification for a view partition backed by a table in the PostgreSQL metadata database.
/// Implementation of `BlockProcessor` for measures
/// Materializable view of measures accessible through DataFusion
/// Maintenance of the PostgreSQL tables and indices used to track the parquet files that implement the views
/// Write & delete sections of views
/// In-memory copy of a subset of the partitions listed in the db
/// Operations on the dedicated partition_metadata table
/// Describes the event blocks backing a partition
/// ExecutionPlan based on a set of parquet files
/// TableProvider based on a set of parquet files
/// ExecutionPlan for generating Perfetto trace chunks
/// Table function for generating Perfetto trace chunks
/// Table function returning thread and/or async spans from all CPU streams of a process
/// Shared utilities for discovering CPU streams of a process
/// Replicated view of the `processes` table of the PostgreSQL metadata database.
/// Support for the property_get function in SQL
/// DataFusion integration
/// Wrapper around ParquetObjectReader to provide ParquetMetaData without hitting the ObjectStore
/// Scalar UDF to retire a single partition by file path
/// Scalar UDF to retire a single partition by metadata
/// Exposes retire_partitions as a table function
/// Runtime resources
/// SessionConfigurator trait for custom session context configuration
/// SQL-defined view updated in batches
/// Specification for a view partition backed by a SQL query on the lakehouse.
/// Auto-discovery configurator for static JSON/CSV tables
/// Replicated view of the `streams` table of the PostgreSQL metadata database.
/// Rewrite table scans to take the query range into account
/// Tracking of expired partitions
/// JIT view of the call tree built from the thread events of a single stream
/// Basic interface for a set of rows that can be queried and materialized
/// Table function to query process-specific views
/// Add or remove view partitions
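Several modules above implement `BlockProcessor` for a specific event kind (async events, log entries, measures). A minimal sketch of what such a trait could look like, using only the standard library; every name here (`Event`, `Block`, `BlockProcessor`, `LogProcessor`) is an illustrative assumption, not the crate's actual API:

```rust
// Hypothetical stand-ins for a decoded telemetry event and a block of events
// from one stream. Real blocks carry binary payloads fetched from object storage.
struct Event {
    time_ns: i64,
    payload: String,
}

struct Block {
    events: Vec<Event>,
}

// A processor turns one raw block into rows for a specific view
// (log entries, measures, async span events, ...).
trait BlockProcessor {
    type Row;
    fn process(&self, block: &Block) -> Vec<Self::Row>;
}

// Example processor extracting log-like (timestamp, message) rows.
struct LogProcessor;

impl BlockProcessor for LogProcessor {
    type Row = (i64, String);
    fn process(&self, block: &Block) -> Vec<(i64, String)> {
        block
            .events
            .iter()
            .map(|e| (e.time_ns, e.payload.clone()))
            .collect()
    }
}

fn main() {
    let block = Block {
        events: vec![Event { time_ns: 1, payload: "hello".into() }],
    };
    let rows = LogProcessor.process(&block);
    println!("{}", rows.len()); // prints 1
}
```

Because each processor only sees one block at a time, blocks can be processed out of order and the resulting rows batched into parquet partitions independently, which matches the partition specifications described above.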