Struct databento::historical::batch::BatchJob
pub struct BatchJob {
pub id: String,
pub user_id: Option<String>,
pub bill_id: Option<String>,
pub cost_usd: Option<f64>,
pub dataset: String,
pub symbols: Symbols,
pub stype_in: SType,
pub stype_out: SType,
pub schema: Schema,
pub start: OffsetDateTime,
pub end: OffsetDateTime,
pub limit: Option<NonZeroU64>,
pub encoding: Encoding,
pub compression: Compression,
pub pretty_px: bool,
pub pretty_ts: bool,
pub map_symbols: bool,
pub split_symbols: bool,
pub split_duration: SplitDuration,
pub split_size: Option<NonZeroU64>,
pub packaging: Option<Packaging>,
pub delivery: Delivery,
pub record_count: Option<u64>,
pub billed_size: Option<u64>,
pub actual_size: Option<u64>,
pub package_size: Option<u64>,
pub state: JobState,
pub ts_received: OffsetDateTime,
pub ts_queued: Option<OffsetDateTime>,
pub ts_process_start: Option<OffsetDateTime>,
pub ts_process_done: Option<OffsetDateTime>,
pub ts_expiration: Option<OffsetDateTime>,
}
Available on crate feature historical only.
The description of a submitted batch job.
Fields
id: String
The unique job ID.
user_id: Option<String>
The ID of the user who submitted the job.
bill_id: Option<String>
The bill ID (for internal use).
cost_usd: Option<f64>
The cost of the job in US dollars. Will be None until the job is processed.
dataset: String
The dataset code.
symbols: Symbols
The list of symbols specified in the request.
stype_in: SType
The symbology type of the input symbols.
stype_out: SType
The symbology type of the output symbols.
schema: Schema
The data record schema.
start: OffsetDateTime
The start of the request time range (inclusive).
end: OffsetDateTime
The end of the request time range (exclusive).
limit: Option<NonZeroU64>
The maximum number of records to return.
encoding: Encoding
The data encoding.
compression: Compression
The data compression mode.
pretty_px: bool
If prices are formatted to the correct scale (using the fixed-precision scalar 1e-9).
pretty_ts: bool
If timestamps are formatted as ISO 8601 strings.
map_symbols: bool
If a symbol field is included with each text-encoded record.
split_symbols: bool
If files are split by raw symbol.
split_duration: SplitDuration
The maximum time interval for an individual file before splitting into multiple files.
split_size: Option<NonZeroU64>
The maximum size for an individual file before splitting into multiple files.
packaging: Option<Packaging>
The packaging method of the batch data.
delivery: Delivery
The delivery mechanism of the batch data.
record_count: Option<u64>
The number of data records (None until the job is processed).
billed_size: Option<u64>
The size of the raw binary data used to process the batch job (used for billing purposes).
actual_size: Option<u64>
The total size of the result of the batch job after splitting and compression.
package_size: Option<u64>
The total size of the result of the batch job after any packaging (including metadata).
state: JobState
The current status of the batch job.
ts_received: OffsetDateTime
The timestamp of when Databento received the batch job.
ts_queued: Option<OffsetDateTime>
The timestamp of when the batch job was queued.
ts_process_start: Option<OffsetDateTime>
The timestamp of when the batch job began processing.
ts_process_done: Option<OffsetDateTime>
The timestamp of when the batch job finished processing.
ts_expiration: Option<OffsetDateTime>
The timestamp of when the batch job will expire from the Download center.
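Two of the fields above lend themselves to small numeric helpers: the fixed-precision price scalar (1e-9) noted under pretty_px, and the sizing fields billed_size and record_count, which are None until the job is processed. A minimal sketch follows; both helper names are hypothetical and not part of the databento crate:

```rust
/// Scale a fixed-precision price (1e-9 scalar, as noted under `pretty_px`)
/// to a floating-point value. Illustrative only; not a databento API.
fn px_to_f64(px: i64) -> f64 {
    px as f64 * 1e-9
}

/// Average billed bytes per record. Both inputs are `None` until the
/// job has been processed, so the result is `None` in that case too.
fn avg_record_size(billed_size: Option<u64>, record_count: Option<u64>) -> Option<u64> {
    match (billed_size, record_count) {
        (Some(size), Some(n)) if n > 0 => Some(size / n),
        _ => None,
    }
}

fn main() {
    // 1_234_500_000_000 fixed-precision units is roughly 1234.5.
    assert!((px_to_f64(1_234_500_000_000) - 1234.5).abs() < 1e-6);
    assert_eq!(avg_record_size(Some(4_800), Some(100)), Some(48));
    assert_eq!(avg_record_size(None, Some(100)), None);
}
```

Integer division is used in avg_record_size to stay in `u64`; a real helper might prefer floating-point for fractional averages.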