pub struct QueryBuilder<T: DeserializeOwned + Unpin + 'static = Value> { /* private fields */ }
Generic query builder
The type parameter T controls consumer-side deserialization only.
The type parameter defaults to serde_json::Value for backward compatibility.
§Examples
Type-safe query (recommended):
// Requires: live Postgres connection via FraiseClient.
use serde::Deserialize;
#[derive(Deserialize)]
struct Project {
    id: String,
    name: String,
}
let stream = client.query::<Project>("projects")
    .where_sql("status='active'")
    .execute()
    .await?;

Raw JSON query (debugging, forward compatibility):
// Requires: live Postgres connection via FraiseClient.
let stream = client.query::<serde_json::Value>("projects")
    .execute()
    .await?;

§Implementations
impl<T: DeserializeOwned + Unpin + 'static> QueryBuilder<T>
pub fn where_sql(self, predicate: impl Into<String>) -> Self
Add SQL WHERE clause predicate
Type T does NOT affect SQL generation. Multiple predicates are AND’ed together.
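As an illustration of the AND'ing behavior, combining predicates can be sketched as plain string assembly (a simplified, hypothetical helper, not the crate's actual SQL builder):

```rust
// Sketch: how multiple where_sql predicates might combine into one WHERE
// clause. Each predicate is parenthesized and the parts are AND'ed together.
fn build_where(predicates: &[&str]) -> String {
    if predicates.is_empty() {
        String::new()
    } else {
        format!("WHERE ({})", predicates.join(") AND ("))
    }
}

fn main() {
    let clause = build_where(&["status='active'", "priority > 3"]);
    // -> WHERE (status='active') AND (priority > 3)
    println!("{clause}");
}
```

Parenthesizing each predicate keeps operator precedence intact even when individual predicates contain OR.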
pub fn where_rust<F>(self, predicate: F) -> Self
Add Rust-side predicate
Type T does NOT affect filtering.
Applied after SQL filtering, runs on streamed JSON values.
Predicates receive &serde_json::Value regardless of T.
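A hypothetical usage sketch (the closure shape follows the documented &serde_json::Value contract; the priority field is an assumption for illustration):

```rust
// Requires: live Postgres connection via FraiseClient.
let stream = client.query::<Project>("projects")
    .where_sql("status='active'")
    .where_rust(|row: &serde_json::Value| {
        // Runs client-side on each streamed JSON value, after SQL filtering.
        row.get("priority").and_then(|p| p.as_i64()).map_or(false, |p| p > 3)
    })
    .execute()
    .await?;
```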
pub fn order_by(self, order: impl Into<String>) -> Self
Set ORDER BY clause
Type T does NOT affect ordering.
pub fn select_projection(self, projection_sql: impl Into<String>) -> Self
Set a custom SELECT clause for SQL projection optimization
When provided, this replaces the default SELECT data with a projection SQL
that filters fields at the database level, reducing network payload.
The projection SQL will be wrapped as SELECT {projection_sql} as data to maintain
the hard invariant of a single data column.
Projecting at the database level keeps the architecture consistent with PostgreSQL-side optimization and leaves room for future performance improvements.
§Arguments
projection_sql - A PostgreSQL expression, typically built from jsonb_build_object()
§Example
// Requires: live Postgres connection via FraiseClient.
let stream = client
    .query::<Project>("projects")
    .select_projection("jsonb_build_object('id', data->>'id', 'name', data->>'name')")
    .execute()
    .await?;

§Backward Compatibility
If not specified, defaults to SELECT data (original behavior).
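The wrapping rule described above can be sketched as follows (an illustrative helper, not the crate's internals):

```rust
// Sketch of the documented wrapping rule: a custom projection is embedded as
// `SELECT {projection_sql} as data`, preserving the single-data-column invariant.
fn wrap_projection(projection_sql: Option<&str>) -> String {
    match projection_sql {
        // Default: original behavior, select the whole data column.
        None => "SELECT data".to_string(),
        // Custom projection, aliased back to the single `data` column.
        Some(p) => format!("SELECT {p} as data"),
    }
}

fn main() {
    println!("{}", wrap_projection(None));
    println!("{}", wrap_projection(Some("jsonb_build_object('id', data->>'id')")));
}
```

Because the expression is always aliased to data, consumers see the same column shape whether or not a projection is set.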
pub const fn limit(self, count: usize) -> Self
Set LIMIT clause to restrict result set size
§Example
// Requires: live Postgres connection via FraiseClient.
let stream = client.query::<Project>("projects")
    .limit(10)
    .execute()
    .await?;

pub const fn offset(self, count: usize) -> Self
Set OFFSET clause to skip first N rows
§Example
// Requires: live Postgres connection via FraiseClient.
let stream = client.query::<Project>("projects")
    .limit(10)
    .offset(20) // Skip first 20, return next 10
    .execute()
    .await?;

pub const fn chunk_size(self, size: usize) -> Self
Set chunk size (default: 256)
pub const fn max_memory(self, bytes: usize) -> Self
Set maximum memory limit for buffered items (default: unbounded)
When the estimated memory usage of buffered items exceeds this limit,
the stream will return Error::MemoryLimitExceeded instead of additional items.
Memory is estimated as: items_buffered * 2048 bytes (conservative for typical JSON).
By default, max_memory() is None (unbounded), maintaining backward compatibility.
Only set if you need hard memory bounds.
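The documented estimate is simple arithmetic; a sketch of how the bound translates into a buffered-item budget (illustrative only, not the crate's accounting code):

```rust
// Documented estimate: memory ~= items_buffered * 2048 bytes.
const EST_BYTES_PER_ITEM: usize = 2048;

fn estimated_memory(items_buffered: usize) -> usize {
    items_buffered * EST_BYTES_PER_ITEM
}

fn exceeds_limit(items_buffered: usize, max_memory: usize) -> bool {
    estimated_memory(items_buffered) > max_memory
}

fn main() {
    // A 500 MB limit allows roughly 244_140 buffered items under this estimate.
    let limit: usize = 500_000_000;
    println!("{}", limit / EST_BYTES_PER_ITEM);
}
```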
§Example
// Requires: live Postgres connection via FraiseClient.
let stream = client
    .query::<Project>("projects")
    .max_memory(500_000_000) // 500 MB limit
    .execute()
    .await?;

§Interpretation
If memory limit is exceeded:
- It indicates the consumer is too slow relative to data arrival
- The error is terminal (non-retriable) — retrying won’t help
- Consider: increasing consumer throughput, reducing chunk_size, or removing the limit
pub fn memory_soft_limits(self, warn_threshold: f32, fail_threshold: f32) -> Self
Set soft memory limit thresholds for progressive degradation
Allows warning at a threshold before hitting hard limit.
Only applies if max_memory() is also set.
§Parameters
warn_threshold - Fraction (0.0-1.0) of max_memory at which to emit a warning
fail_threshold - Fraction (0.0-1.0) at which to return an error (must be > warn_threshold)
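The progressive degradation described above can be sketched as a threshold check (illustrative logic assuming the documented semantics, not the crate's implementation):

```rust
// Sketch: classify buffered-memory usage against soft/hard thresholds.
#[derive(Debug, PartialEq)]
enum MemoryState {
    Ok,
    Warn,
    Fail,
}

fn classify(used: usize, max_memory: usize, warn: f32, fail: f32) -> MemoryState {
    let ratio = used as f32 / max_memory as f32;
    if ratio >= fail {
        MemoryState::Fail
    } else if ratio >= warn {
        MemoryState::Warn
    } else {
        MemoryState::Ok
    }
}

fn main() {
    // With max_memory(500 MB) and memory_soft_limits(0.80, 1.0),
    // 450 MB of usage sits at 90%: past the warn threshold, under the fail one.
    println!("{:?}", classify(450_000_000, 500_000_000, 0.80, 1.0));
}
```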
§Example
// Requires: live Postgres connection via FraiseClient.
let stream = client
    .query::<Project>("projects")
    .max_memory(500_000_000) // 500 MB hard limit
    .memory_soft_limits(0.80, 1.0) // Warn at 80%, error at 100%
    .execute()
    .await?;

If only a hard limit is needed, skip this and just use max_memory().
pub const fn adaptive_chunking(self, enabled: bool) -> Self
Enable or disable adaptive chunk sizing (default: enabled)
Adaptive chunking automatically adjusts chunk_size based on channel occupancy:
- High occupancy (>80%): Decreases chunk size to reduce producer pressure
- Low occupancy (<20%): Increases chunk size to optimize batching efficiency
Enabled by default for zero-configuration self-tuning. Disable if you need fixed chunk sizes or encounter unexpected behavior.
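The occupancy-driven adjustment can be sketched like this (the halving/doubling factors are assumptions for illustration; only the >80% / <20% thresholds come from the documentation):

```rust
// Sketch: adjust chunk size based on channel occupancy, clamped to [min, max].
fn adjust_chunk_size(chunk_size: usize, occupancy: f32, min: usize, max: usize) -> usize {
    if occupancy > 0.80 {
        // High occupancy: the consumer is falling behind, shrink batches.
        (chunk_size / 2).max(min)
    } else if occupancy < 0.20 {
        // Low occupancy: room to batch more aggressively.
        (chunk_size * 2).min(max)
    } else {
        // Mid-range occupancy: leave the chunk size alone.
        chunk_size
    }
}

fn main() {
    println!("{}", adjust_chunk_size(256, 0.95, 16, 1024)); // shrinks
    println!("{}", adjust_chunk_size(256, 0.05, 16, 1024)); // grows
}
```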
§Example
// Requires: live Postgres connection via FraiseClient.
let stream = client
    .query::<Project>("projects")
    .adaptive_chunking(false) // Disable adaptive tuning
    .chunk_size(512) // Use fixed size
    .execute()
    .await?;

pub const fn adaptive_min_size(self, size: usize) -> Self
Override minimum chunk size for adaptive tuning (default: 16)
Adaptive chunking will never decrease chunk size below this value. Useful if you need minimum batching for performance.
Only applies if adaptive chunking is enabled.
§Example
// Requires: live Postgres connection via FraiseClient.
let stream = client
    .query::<Project>("projects")
    .adaptive_chunking(true)
    .adaptive_min_size(32) // Don't go below 32 items per batch
    .execute()
    .await?;

pub const fn adaptive_max_size(self, size: usize) -> Self
Override maximum chunk size for adaptive tuning (default: 1024)
Adaptive chunking will never increase chunk size above this value. Useful if you need memory bounds or latency guarantees.
Only applies if adaptive chunking is enabled.
§Example
// Requires: live Postgres connection via FraiseClient.
let stream = client
    .query::<Project>("projects")
    .adaptive_chunking(true)
    .adaptive_max_size(512) // Cap at 512 items per batch
    .execute()
    .await?;

pub async fn execute(self) -> Result<QueryStream<T>>
Execute query and return typed stream
Type T ONLY affects consumer-side deserialization at poll_next().
SQL, filtering, ordering, and wire protocol are identical regardless of T.
The returned stream supports pause/resume/stats for advanced stream control.
§Examples
With type-safe deserialization:
// Requires: live Postgres connection via FraiseClient.
let mut stream = client.query::<Project>("projects").execute().await?;
while let Some(result) = stream.next().await {
    let project: Project = result?;
}

With raw JSON (escape hatch):
// Requires: live Postgres connection via FraiseClient.
let mut stream = client.query::<serde_json::Value>("projects").execute().await?;
while let Some(result) = stream.next().await {
    let json: serde_json::Value = result?;
}

With stream control:
// Requires: live Postgres connection via FraiseClient.
let mut stream = client.query::<serde_json::Value>("projects").execute().await?;
stream.pause().await?; // Pause the stream
let stats = stream.stats(); // Get statistics
stream.resume().await?; // Resume the stream

§Errors
Returns Error::InvalidSchema if the SQL query cannot be built from the configured predicates.
Returns Error::Io or Error::Protocol if the streaming query fails to start.
Returns Error::Sql if the database rejects the query.