#[non_exhaustive]
pub struct BigQueryDestination {
    pub dataset: String,
    pub table: String,
    pub force: bool,
    pub partition_spec: Option<PartitionSpec>,
    pub separate_tables_per_asset_type: bool,
    /* private fields */
}

A BigQuery destination for exporting assets to.
Fields (Non-exhaustive)§
This struct is marked as non-exhaustive: it cannot be constructed with Struct { .. } syntax, cannot be matched against without a wildcard .., and struct update syntax will not work.

dataset: String
Required. The BigQuery dataset in format "projects/projectId/datasets/datasetId", to which the snapshot result should be exported. If this dataset does not exist, the export call returns an INVALID_ARGUMENT error. Setting the contentType for exportAssets determines the schema of the BigQuery table. Setting separateTablesPerAssetType to TRUE also influences the schema.
table: String
Required. The BigQuery table to which the snapshot result should be written. If this table does not exist, a new table with the given name will be created.
force: bool
If the destination table already exists and this flag is TRUE, the table will be overwritten by the contents of the assets snapshot. If the flag is FALSE or unset and the destination table already exists, the export call returns an INVALID_ARGUMENT error.
partition_spec: Option<PartitionSpec>
Determines whether to export to partitioned table(s) and how to partition the data.

If partition_spec is unset, or [partition_spec.partition_key] is unset or PARTITION_KEY_UNSPECIFIED, the snapshot results will be exported to non-partitioned table(s). [force] decides whether to overwrite existing table(s).

If partition_spec is specified, the snapshot results will first be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Second, if any destination table already exists, the export will first try to update the existing table's schema as necessary by appending additional columns. Then, if [force] is TRUE, the corresponding partition will be overwritten by the snapshot results (data in other partitions will remain intact); if [force] is unset or FALSE, the data will be appended. An error will be returned if the schema update or data append fails.
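The interaction between partition_spec and [force] described above reduces to a four-way decision table. The sketch below is an illustrative reimplementation of the documented semantics, not part of this crate: the enum and function names are hypothetical, and the real decision is made server-side by the export service.

```rust
/// Hypothetical summary of the documented partition_spec / force semantics.
#[derive(Debug, PartialEq)]
enum WriteBehavior {
    /// Non-partitioned, force = TRUE: overwrite the whole table.
    OverwriteTable,
    /// Non-partitioned table exists, force = FALSE: INVALID_ARGUMENT error.
    FailIfTableExists,
    /// Partitioned, force = TRUE: overwrite only the matching partition;
    /// data in other partitions remains intact.
    OverwritePartition,
    /// Partitioned, force = FALSE: append to the partition.
    AppendToPartition,
}

/// `partitioned` stands for "partition_spec is set with a valid partition_key".
fn export_behavior(partitioned: bool, force: bool) -> WriteBehavior {
    match (partitioned, force) {
        (false, true) => WriteBehavior::OverwriteTable,
        (false, false) => WriteBehavior::FailIfTableExists,
        (true, true) => WriteBehavior::OverwritePartition,
        (true, false) => WriteBehavior::AppendToPartition,
    }
}

fn main() {
    assert_eq!(export_behavior(true, false), WriteBehavior::AppendToPartition);
    assert_eq!(export_behavior(false, true), WriteBehavior::OverwriteTable);
}
```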
separate_tables_per_asset_type: bool
If this flag is TRUE, the snapshot results will be written to one or multiple tables, each of which contains results of one asset type. The [force] and partition_spec fields will apply to each of them.

Field [table] will be concatenated with "_" and the asset type names (see https://cloud.google.com/asset-inventory/docs/supported-asset-types for supported asset types) to construct per-asset-type table names, in which all non-alphanumeric characters like "." and "/" will be substituted by "_". Example: if field [table] is "mytable" and snapshot results contain "storage.googleapis.com/Bucket" assets, the corresponding table name will be "mytable_storage_googleapis_com_Bucket". If any of these tables does not exist, a new table with the concatenated name will be created.
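The naming rule above can be sketched in plain Rust. Note that `per_asset_type_table_name` is a hypothetical helper used only to illustrate the documented substitution; the actual table names are constructed server-side, and this function is not an API of this crate.

```rust
/// Illustrative reimplementation of the documented naming rule:
/// concatenate `table`, "_", and the asset type name, replacing every
/// non-alphanumeric character in the asset type with "_".
fn per_asset_type_table_name(table: &str, asset_type: &str) -> String {
    let sanitized: String = asset_type
        .chars()
        .map(|c| if c.is_ascii_alphanumeric() { c } else { '_' })
        .collect();
    format!("{table}_{sanitized}")
}

fn main() {
    // Example from the field documentation above.
    let name = per_asset_type_table_name("mytable", "storage.googleapis.com/Bucket");
    assert_eq!(name, "mytable_storage_googleapis_com_Bucket");
}
```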
When [content_type] in the ExportAssetsRequest is RESOURCE, the schema of each table will include RECORD-type columns mapped to the nested fields in the Asset.resource.data field of that asset type, up to the 15 nested levels BigQuery supports (https://cloud.google.com/bigquery/docs/nested-repeated#limitations). Fields nested more than 15 levels deep will be stored as a JSON-format string in a child column of their parent RECORD column.

If an error occurs when exporting to any table, the whole export call will return an error, but the export results that already succeeded will persist. Example: if exporting to table_type_A succeeds while exporting to table_type_B fails during one export call, the results in table_type_A will persist, and there will not be partial results persisting in a table.
Implementations§
impl BigQueryDestination

pub fn new() -> Self

pub fn set_dataset<T: Into<String>>(self, v: T) -> Self
pub fn set_partition_spec<T>(self, v: T) -> Self
where
    T: Into<PartitionSpec>,
Sets the value of partition_spec.
§Example
use google_cloud_asset_v1::model::BigQueryDestination;
use google_cloud_asset_v1::model::PartitionSpec;
let x = BigQueryDestination::new().set_partition_spec(PartitionSpec::default()/* use setters */);

pub fn set_or_clear_partition_spec<T>(self, v: Option<T>) -> Self
where
    T: Into<PartitionSpec>,
Sets or clears the value of partition_spec.
§Example
use google_cloud_asset_v1::model::BigQueryDestination;
use google_cloud_asset_v1::model::PartitionSpec;
let x = BigQueryDestination::new().set_or_clear_partition_spec(Some(PartitionSpec::default()/* use setters */));
let x = BigQueryDestination::new().set_or_clear_partition_spec(None::<PartitionSpec>);

pub fn set_separate_tables_per_asset_type<T: Into<bool>>(self, v: T) -> Self
Sets the value of separate_tables_per_asset_type.
§Example
use google_cloud_asset_v1::model::BigQueryDestination;
let x = BigQueryDestination::new().set_separate_tables_per_asset_type(true);

Trait Implementations§
impl Clone for BigQueryDestination

fn clone(&self) -> BigQueryDestination

fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.