Struct ultra_batch::BatchFetcher

pub struct BatchFetcher<F>
where
    F: Fetcher,
{ /* private fields */ }
Batches and caches loads from some datastore. A BatchFetcher can be used with any type that implements Fetcher. BatchFetchers are asynchronous and designed to be passed and shared between threads or tasks. Cloning a BatchFetcher is shallow and will use the same Fetcher.
BatchFetcher is designed primarily around batching database lookups: for example, fetching a user from a user ID, where a single query to retrieve 50 users by ID is significantly faster than 50 separate queries to look up the same set of users.
A BatchFetcher is designed to be ephemeral. In the context of a web service, this means callers should most likely create a new BatchFetcher for each request, and not share a BatchFetcher across multiple requests. BatchFetchers have no concept of cache invalidation, so old values are stored indefinitely (which means callers may get stale data or may exhaust memory endlessly).
BatchFetchers introduce a small amount of latency for loads. Each time a BatchFetcher receives a key to fetch that hasn’t been cached (or a set of keys), it will first wait for more keys to build a batch. The load will only trigger after a timeout is reached or once enough keys have been queued in the batch. See BatchFetcherBuilder for options to tweak latency and batch sizes.
See also BatchExecutor for a more general type designed primarily for mutations, which can also be used for fetching with more control over how batches are fetched.
§Load semantics
If the underlying Fetcher returns an error during the batch request, then all pending load and load_many requests will fail. Subsequent calls to load or load_many with the same keys will retry.
If the underlying Fetcher succeeds but does not return a value for a given key during a batch request, then the BatchFetcher will mark that key as “not found” and an error value of NotFound will be returned to all pending load and load_many requests. The “not found” status will be preserved, so subsequent calls with the same key will fail and will not retry.
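These two failure modes can be distinguished by matching on the load result. A sketch (hypothetical: UserFetcher is assumed to be a caller-defined type implementing the Fetcher trait with u64 keys, as in the examples below; LoadError variants other than NotFound are matched with a wildcard):

```rust
use ultra_batch::{BatchFetcher, LoadError};

// `UserFetcher` is a hypothetical caller-defined Fetcher implementation.
async fn lookup_user(batch_fetcher: &BatchFetcher<UserFetcher>, id: u64) {
    match batch_fetcher.load(id).await {
        Ok(_user) => println!("found user with ID {id}"),
        // The batch succeeded but returned no value for this key. This
        // outcome is cached, so retrying the same key fails immediately.
        Err(LoadError::NotFound) => println!("no user with ID {id}"),
        // The Fetcher itself failed. Nothing is cached for this key, so
        // a later load with the same key will retry the batch request.
        Err(other) => eprintln!("fetch failed: {other}"),
    }
}
```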
Implementations§
impl<F> BatchFetcher<F>
pub fn build(fetcher: F) -> BatchFetcherBuilder<F>
Create a new BatchFetcher that uses the given Fetcher to retrieve data. Returns a BatchFetcherBuilder, which can be used to customize the BatchFetcher. Call .finish() to create the BatchFetcher.
§Examples
Creating a BatchFetcher with default options:
let user_fetcher = UserFetcher::new(db_conn);
let batch_fetcher = BatchFetcher::build(user_fetcher).finish();
Creating a BatchFetcher with custom options:
let user_fetcher = UserFetcher::new(db_conn);
let batch_fetcher = BatchFetcher::build(user_fetcher)
.eager_batch_size(Some(50))
.delay_duration(tokio::time::Duration::from_millis(5))
.finish();
pub async fn load(&self, key: F::Key) -> Result<F::Value, LoadError>
Load the value with the associated key, either by calling the Fetcher or by loading the cached value. Returns an error if the value could not be loaded or if a value for the given key was not found.

See the type-level docs for BatchFetcher for more detailed loading semantics.
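For illustration, concurrent loads through a shared BatchFetcher should be coalesced into a single batch. A sketch (assumptions: batch_fetcher wraps a Fetcher with u64 keys, the code runs inside a tokio runtime, and both calls land within the same batching window):

```rust
// Both keys arrive within the batching window, so the underlying
// Fetcher should see one batch of two keys, not two separate requests.
let (alice, bob) = tokio::join!(
    batch_fetcher.load(1),
    batch_fetcher.load(2),
);
```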
pub async fn load_many(
    &self,
    keys: &[F::Key],
) -> Result<Vec<F::Value>, LoadError>
Load all the values for the given keys, either by calling the Fetcher or by loading cached values. Values are returned in the same order as the input keys. Returns an error if any load fails.

See the type-level docs for BatchFetcher for more detailed loading semantics.
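A sketch of load_many that relies on the documented ordering guarantee (user_fetcher is the hypothetical Fetcher from the examples above, with u64 keys):

```rust
let batch_fetcher = BatchFetcher::build(user_fetcher).finish();
let ids = [3_u64, 1, 2];
// Values are returned in the same order as `ids`, so the results can
// be zipped back up with the keys that requested them.
let users = batch_fetcher.load_many(&ids).await?;
for (id, _user) in ids.iter().zip(&users) {
    println!("loaded user for ID {id}");
}
```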