pub struct JobsBuilder<'a> { /* private fields */ }
Select a set of Job instances to return from the LAVA server.
This is the way to construct a Jobs object, which can stream
the actual data. It allows customisation of which jobs to return,
and in what order.
Example:
use futures::stream::TryStreamExt;
use lava_api::{Lava, job::State, job::Ordering};
let lava = Lava::new(&service_uri, lava_token).expect("failed to make lava");
let mut lj = lava
.jobs()
.state(State::Submitted)
.ordering(Ordering::StartTime, true)
.query();
while let Some(job) = lj
.try_next()
.await
.expect("failed to get job")
{
println!("Got job {:?}", job);
}

Implementations
impl<'a> JobsBuilder<'a>
pub fn new(lava: &'a Lava) -> Self
Create a new JobsBuilder
The default query is:
- order by Ordering::Id
- no filtering
- default result pagination
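As a sketch of those defaults, given a lava client constructed as in the example above, the builder returned by JobsBuilder::new should describe the same query as the one returned by Lava::jobs (assuming JobsBuilder is re-exported from the job module alongside the other job types):

use lava_api::job::JobsBuilder;

// Both builders describe the same default query: ordered by
// Ordering::Id, with no filtering and default pagination.
let explicit = JobsBuilder::new(&lava);
let shorthand = lava.jobs();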
pub fn limit(self, limit: u32) -> Self
Set the number of jobs retrieved at a time while the query is running. The query will be processed transparently as a sequence of requests that return all matching responses. This setting governs the size of each of the (otherwise transparent) requests, so this number is really a page size.
Note that you will see artifacts on queries that are split into many requests, especially when responses are slow. This makes setting the limit much smaller than the response size unattractive when accurate data is required. However, the server will need to return records in chunks of this size, regardless of how many are consumed from the response stream, which makes setting the limit much higher than the response size wasteful. In practice, it is probably best to set this limit to the expected response size for most use cases.
Artifacts arise because paging is entirely client side. Each page contains a section of the query beginning with the job at some multiple of the limit count into the result set. However, the result set is evolving while the paging is occurring, and this is not currently compensated for, which leads to jobs being returned multiple times at the boundaries between pages, or even omitted, depending on the query. In general, query sets that can shrink are not safe to use with paging, because results can be lost rather than duplicated.
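For instance, if you expect a query to match on the order of 50 jobs, setting the limit to 50 fetches them in a single request and sidesteps paging artifacts entirely. A sketch, reusing the lava client from the example above (State::Finished is assumed to be one of the State variants):

use futures::stream::TryStreamExt;
use lava_api::job::State;

// One request of up to 50 jobs; no page boundaries, so no
// duplicated or dropped jobs from a shifting result set.
let mut jobs = lava
    .jobs()
    .state(State::Finished)
    .limit(50)
    .query();

while let Some(job) = jobs.try_next().await.expect("failed to get job") {
    println!("Got job {:?}", job);
}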
pub fn health_not(self, health: Health) -> Self
Exclude jobs with this health.
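For example, to skip jobs whose health has not yet been determined (assuming Health::Unknown is one of the Health variants, and a lava client as in the example above):

// Exclude jobs still awaiting a health verdict.
let mut jobs = lava
    .jobs()
    .health_not(Health::Unknown)
    .query();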
pub fn started_after(self, when: DateTime<Utc>) -> Self
Return only jobs whose start time is strictly after the given instant.
pub fn submitted_after(self, when: DateTime<Utc>) -> Self
Return only jobs whose submission time is strictly after the given instant.
pub fn ended_after(self, when: DateTime<Utc>) -> Self
Return only jobs which ended strictly after the given instant.
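A cutoff for these methods can be built with chrono, which provides the DateTime<Utc> type they take. A sketch selecting jobs submitted in the last 24 hours, reusing the lava client from the example above:

use chrono::{Duration, Utc};

// The bound is strict, so a job submitted exactly at `cutoff`
// is excluded.
let cutoff = Utc::now() - Duration::hours(24);
let mut jobs = lava
    .jobs()
    .submitted_after(cutoff)
    .query();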
Trait Implementations
impl<'a> Clone for JobsBuilder<'a>

fn clone(&self) -> JobsBuilder<'a>
fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.