Struct google_cloud_bigquery::http::job::JobConfigurationQuery

pub struct JobConfigurationQuery {
pub query: String,
pub destination_table: Option<TableReference>,
pub table_definitions: Option<HashMap<String, ExternalDataConfiguration>>,
pub user_defined_function_resources: Option<Vec<UserDefinedFunctionResource>>,
pub create_disposition: Option<CreateDisposition>,
pub write_disposition: Option<WriteDisposition>,
pub default_dataset: Option<DatasetReference>,
pub priority: Option<Priority>,
pub allow_large_results: Option<bool>,
pub use_query_cache: Option<bool>,
pub flatten_results: Option<bool>,
pub maximum_bytes_billed: Option<i64>,
pub use_legacy_sql: Option<bool>,
pub parameter_mode: Option<String>,
pub query_parameters: Option<Vec<QueryParameter>>,
pub schema_update_options: Option<Vec<SchemaUpdateOption>>,
pub time_partitioning: Option<TimePartitioning>,
pub range_partitioning: Option<RangePartitioning>,
pub clustering: Option<Clustering>,
pub destination_encryption_configuration: Option<EncryptionConfiguration>,
pub script_options: Option<ScriptOptions>,
pub connection_properties: Option<Vec<ConnectionProperty>>,
pub create_session: Option<bool>,
}

Fields

query: String
[Required] SQL query text to execute. The useLegacySql field can be used to indicate whether the query uses legacy SQL or GoogleSQL.
destination_table: Option<TableReference>
Optional. Describes the table where the query results should be stored. This property must be set for large results that exceed the maximum response size. For queries that produce anonymous (cached) results, this field will be populated by BigQuery.

table_definitions: Option<HashMap<String, ExternalDataConfiguration>>
Optional. External table definitions, which operate as ephemeral tables that can be queried. These definitions are configured as a JSON map, where the string key is the table identifier and the value is the corresponding external data configuration object.

user_defined_function_resources: Option<Vec<UserDefinedFunctionResource>>
Describes user-defined function resources used in the query.

create_disposition: Option<CreateDisposition>
Optional. Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: if the table does not exist, BigQuery creates the table. CREATE_NEVER: the table must already exist; if it does not, a 'notFound' error is returned in the job result. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions occur as one atomic update upon job completion.

write_disposition: Option<WriteDisposition>
Optional. Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: if the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: if the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: if the table already exists and contains data, a 'duplicate' error is returned in the job result. The default value is WRITE_EMPTY. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion.
default_dataset: Option<DatasetReference>
Optional. Specifies the default dataset to use for unqualified table names in the query. This setting does not alter behavior of unqualified dataset names. Setting the system variable @@dataset_id achieves the same behavior.

priority: Option<Priority>
Optional. Specifies a priority for the query. Possible values include INTERACTIVE and BATCH. The default value is INTERACTIVE.

allow_large_results: Option<bool>
Optional. If true and the query uses the legacy SQL dialect, allows the query to produce arbitrarily large result tables at a slight cost in performance. Requires destinationTable to be set. For GoogleSQL queries, this flag is ignored and large results are always allowed. However, you must still set destinationTable when the result size exceeds the allowed maximum response size.

use_query_cache: Option<bool>
Optional. Whether to look for the result in the query cache. The query cache is a best-effort cache that is flushed whenever tables in the query are modified. Moreover, the query cache is only available when a query does not have a destination table specified. The default value is true.

flatten_results: Option<bool>
Optional. If true and the query uses the legacy SQL dialect, flattens all nested and repeated fields in the query results. allowLargeResults must be true if this is set to false. For GoogleSQL queries, this flag is ignored and results are never flattened.

maximum_bytes_billed: Option<i64>
Limits the bytes billed for this job. Queries that would have bytes billed beyond this limit fail (without incurring a charge). If unspecified, the project default is used.

use_legacy_sql: Option<bool>
Optional. Specifies whether to use BigQuery's legacy SQL dialect for this query. The default value is true. If set to false, the query uses BigQuery's GoogleSQL: https://cloud.google.com/bigquery/sql-reference/ When useLegacySql is set to false, the value of flattenResults is ignored; the query is run as if flattenResults were false.
parameter_mode: Option<String>
GoogleSQL only. Set to POSITIONAL to use positional (?) query parameters or to NAMED to use named (@myparam) query parameters in this query.

query_parameters: Option<Vec<QueryParameter>>
Query parameters for GoogleSQL queries.

schema_update_options: Option<Vec<SchemaUpdateOption>>
Allows the schema of the destination table to be updated as a side effect of the query job. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND, and when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE always overwrites the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.
time_partitioning: Option<TimePartitioning>
Time-based partitioning specification for the destination table. Only one of timePartitioning and rangePartitioning should be specified.

range_partitioning: Option<RangePartitioning>
Range partitioning specification for the destination table. Only one of timePartitioning and rangePartitioning should be specified.

clustering: Option<Clustering>
Clustering specification for the destination table.

destination_encryption_configuration: Option<EncryptionConfiguration>
Custom encryption configuration (e.g., Cloud KMS keys).

script_options: Option<ScriptOptions>
Options controlling the execution of scripts.

connection_properties: Option<Vec<ConnectionProperty>>
Connection properties which can modify the query behavior.

create_session: Option<bool>
If this property is true, the job creates a new session using a randomly generated sessionId. To continue using a created session with subsequent queries, pass the existing session identifier as a ConnectionProperty value. The session identifier is returned as part of the SessionInfo message within the query statistics. The new session's location is set to Job.JobReference.location if present; otherwise it is set to the default location based on existing routing logic.
Trait Implementations
impl Clone for JobConfigurationQuery
    fn clone(&self) -> JobConfigurationQuery
    fn clone_from(&mut self, source: &Self)

impl Debug for JobConfigurationQuery

impl Default for JobConfigurationQuery
    fn default() -> JobConfigurationQuery

impl<'de> Deserialize<'de> for JobConfigurationQuery
    fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>

impl PartialEq<JobConfigurationQuery> for JobConfigurationQuery
    fn eq(&self, other: &JobConfigurationQuery) -> bool
        Tests self and other for equality; used by ==.

impl Serialize for JobConfigurationQuery

impl StructuralPartialEq for JobConfigurationQuery
Auto Trait Implementations
impl RefUnwindSafe for JobConfigurationQuery
impl Send for JobConfigurationQuery
impl Sync for JobConfigurationQuery
impl Unpin for JobConfigurationQuery
impl UnwindSafe for JobConfigurationQuery
Blanket Implementations

impl<T> BorrowMut<T> for T where T: ?Sized
    fn borrow_mut(&mut self) -> &mut T

impl<T> Instrument for T
    fn instrument(self, span: Span) -> Instrumented<Self>
    fn in_current_span(self) -> Instrumented<Self>

impl<T> IntoRequest<T> for T
    fn into_request(self) -> Request<T>
        Wraps the value in a tonic::Request.