pub struct ExternalDataConfiguration {
pub source_uris: Vec<String>,
pub schema: Option<TableSchema>,
pub source_format: SourceFormat,
pub max_bad_records: i32,
pub autodetect: bool,
pub ignore_unknown_values: Option<bool>,
pub compression: Option<bool>,
pub csv_options: Option<CsvOptions>,
pub bigtable_options: Option<BigtableOptions>,
pub google_sheets_options: Option<GoogleSheetsOptions>,
pub hive_partitioning_options: Option<HivePartitioningOptions>,
pub connection_id: Option<String>,
pub decimal_target_types: Option<Vec<DecimalTargetType>>,
pub avro_options: Option<AvroOptions>,
pub parquet_options: Option<ParquetOptions>,
pub reference_file_schema_uri: Option<String>,
pub metadata_cache_mode: Option<MetadataCacheMode>,
pub object_metadata: Option<ObjectMetadata>,
}

Fields
source_uris: Vec<String>
[Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: each URI can contain one '*' wildcard character, and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: exactly one URI can be specified, and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified; the '*' wildcard character is not allowed.
schema: Option<TableSchema>
Optional. The schema for the data. Schema is required for CSV and JSON formats if autodetect is not on. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, Avro, ORC and Parquet formats.
source_format: SourceFormat
[Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". For ORC files, specify "ORC". For Parquet files, specify "PARQUET". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".
max_bad_records: i32
Optional. The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups, Avro, ORC and Parquet formats.
autodetect: bool
Try to detect schema and format options automatically. Any option specified explicitly will be honored.
ignore_unknown_values: Option<bool>
Optional. Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value:
CSV: trailing columns.
JSON: named values that don't match any column names.
Google Cloud Bigtable: this setting is ignored.
Google Cloud Datastore backups: this setting is ignored.
Avro: this setting is ignored.
ORC: this setting is ignored.
Parquet: this setting is ignored.
compression: Option<bool>
Optional. The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups, Avro, ORC and Parquet formats. An empty string is an invalid value.
csv_options: Option<CsvOptions>
Optional. Additional properties to set if sourceFormat is set to CSV.
bigtable_options: Option<BigtableOptions>
Optional. Additional options if sourceFormat is set to BIGTABLE.
google_sheets_options: Option<GoogleSheetsOptions>
Optional. Additional options if sourceFormat is set to GOOGLE_SHEETS.
hive_partitioning_options: Option<HivePartitioningOptions>
Optional. When set, configures hive partitioning support. Not all storage formats support hive partitioning; requesting hive partitioning on an unsupported format will lead to an error, as will providing an invalid specification.
connection_id: Option<String>
Optional. The connection specifying the credentials to be used to read external storage, such as Azure Blob, Cloud Storage, or S3. The connectionId can have the form "<project_id>.<location_id>.<connection_id>" or "projects/<project_id>/locations/<location_id>/connections/<connection_id>".
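Both documented forms can be supplied verbatim; the project, location, and connection names below are placeholders:

// Short form: "<project_id>.<location_id>.<connection_id>"
let connection_id = Some("my-project.us.my-connection".to_string());
// Fully qualified form:
// "projects/<project_id>/locations/<location_id>/connections/<connection_id>"
let connection_id = Some(
    "projects/my-project/locations/us/connections/my-connection".to_string(),
);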
decimal_target_types: Option<Vec<DecimalTargetType>>
Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown.
Example: suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision, scale) is:
(38, 9) -> NUMERIC;
(39, 9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits);
(38, 10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits);
(76, 38) -> BIGNUMERIC;
(77, 38) -> BIGNUMERIC (error if value exceeds supported range).
This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"], and NUMERIC always takes precedence over BIGNUMERIC.
Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats.
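The selection rule above is mechanical enough to sketch in code. The helper below is hypothetical (not part of this crate's API); it hard-codes the documented precedence (NUMERIC, then BIGNUMERIC, then STRING) and the bounds implied by the examples: NUMERIC holds up to 29 integer digits with scale at most 9, BIGNUMERIC up to 38 integer digits with scale at most 38.

// Hypothetical illustration of the documented selection rule; not crate API.
fn pick_decimal_target<'a>(precision: u32, scale: u32, listed: &[&'a str]) -> Option<&'a str> {
    let fits = |ty: &str| match ty {
        "NUMERIC" => scale <= 9 && precision.saturating_sub(scale) <= 29,
        "BIGNUMERIC" => scale <= 38 && precision.saturating_sub(scale) <= 38,
        "STRING" => true, // STRING supports all precision and scale values.
        _ => false,
    };
    // Order in `listed` is ignored; precedence is always NUMERIC, BIGNUMERIC, STRING.
    const PRECEDENCE: [&str; 3] = ["NUMERIC", "BIGNUMERIC", "STRING"];
    for ty in PRECEDENCE {
        if listed.contains(&ty) && fits(ty) {
            return Some(ty);
        }
    }
    // Nothing fits: the widest listed type is picked; out-of-range values
    // then error when the data is read.
    PRECEDENCE.iter().rev().find(|ty| listed.contains(*ty)).copied()
}

fn main() {
    // The (precision, scale) examples from the field documentation.
    let listed = ["NUMERIC", "BIGNUMERIC"];
    assert_eq!(pick_decimal_target(38, 9, &listed), Some("NUMERIC"));
    assert_eq!(pick_decimal_target(39, 9, &listed), Some("BIGNUMERIC"));
    assert_eq!(pick_decimal_target(38, 10, &listed), Some("BIGNUMERIC"));
    assert_eq!(pick_decimal_target(77, 38, &listed), Some("BIGNUMERIC")); // may still error on read
}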
avro_options: Option<AvroOptions>
Optional. Additional properties to set if sourceFormat is set to AVRO.
parquet_options: Option<ParquetOptions>
Optional. Additional properties to set if sourceFormat is set to PARQUET.
reference_file_schema_uri: Option<String>
Optional. When creating an external table, the user can provide a reference file with the table schema. This is enabled for the following formats: AVRO, PARQUET, ORC.
metadata_cache_mode: Option<MetadataCacheMode>
Optional. Metadata cache mode for the table. Set this to enable caching of metadata from the external data source.
object_metadata: Option<ObjectMetadata>
Optional. ObjectMetadata is used to create Object Tables. Object Tables contain a listing of objects (with their metadata) found at the sourceUris. If ObjectMetadata is set, sourceFormat should be omitted. Currently, SIMPLE is the only supported Object Metadata type.
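As a quick usage sketch: configuring a Cloud Storage CSV source, relying on the Default implementation for every field not set explicitly. The SourceFormat::Csv variant name is an assumption about this crate, not verified API.

let config = ExternalDataConfiguration {
    source_uris: vec!["gs://my-bucket/data/*.csv".to_string()],
    source_format: SourceFormat::Csv, // variant name assumed
    autodetect: true,                 // let BigQuery infer the schema
    max_bad_records: 10,              // tolerate up to 10 malformed rows
    ignore_unknown_values: Some(true),
    ..Default::default()
};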
Trait Implementations

impl Clone for ExternalDataConfiguration
    fn clone(&self) -> ExternalDataConfiguration
    fn clone_from(&mut self, source: &Self)
        Performs copy-assignment from source.

impl Debug for ExternalDataConfiguration
impl Default for ExternalDataConfiguration
    fn default() -> ExternalDataConfiguration
impl<'de> Deserialize<'de> for ExternalDataConfiguration
    fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>
    where
        __D: Deserializer<'de>,
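Because Deserialize is implemented, a configuration can be decoded straight from the JSON shape used by the BigQuery REST API. A minimal sketch, assuming this crate's serde field names follow the API's camelCase convention:

let json = r#"{
    "sourceUris": ["gs://my-bucket/data/*.csv"],
    "sourceFormat": "CSV",
    "autodetect": true,
    "maxBadRecords": 0
}"#;
let config: ExternalDataConfiguration =
    serde_json::from_str(json).expect("valid configuration JSON");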
impl Eq for ExternalDataConfiguration
impl StructuralPartialEq for ExternalDataConfiguration
Auto Trait Implementations
impl Freeze for ExternalDataConfiguration
impl RefUnwindSafe for ExternalDataConfiguration
impl Send for ExternalDataConfiguration
impl Sync for ExternalDataConfiguration
impl Unpin for ExternalDataConfiguration
impl UnwindSafe for ExternalDataConfiguration
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
    fn borrow_mut(&mut self) -> &mut T

impl<T> CloneToUninit for T
where
    T: Clone,

impl<Q, K> Equivalent<K> for Q
    fn equivalent(&self, key: &K) -> bool
        Compare self to key and return true if they are equal.

impl<T> Instrument for T
    fn instrument(self, span: Span) -> Instrumented<Self>
    fn in_current_span(self) -> Instrumented<Self>

impl<T> IntoRequest<T> for T
    fn into_request(self) -> Request<T>
        Wrap the input message T in a tonic::Request.