Crate google_bigquerydatatransfer1
This documentation was generated from BigQuery Data Transfer crate version 1.0.12+20190629, where 20190629 is the exact revision of the bigquerydatatransfer:v1 schema built by the mako code generator v1.0.12.
Everything else about the BigQuery Data Transfer v1 API can be found at the official documentation site. The original source code is on github.
Features
Handle the following Resources with ease from the central hub ...
- projects
- data sources check valid creds, data sources get, data sources list, locations data sources check valid creds, locations data sources get, locations data sources list, locations get, locations list, locations transfer configs create, locations transfer configs delete, locations transfer configs get, locations transfer configs list, locations transfer configs patch, locations transfer configs runs delete, locations transfer configs runs get, locations transfer configs runs list, locations transfer configs runs transfer logs list, locations transfer configs schedule runs, locations transfer configs start manual runs, transfer configs create, transfer configs delete, transfer configs get, transfer configs list, transfer configs patch, transfer configs runs delete, transfer configs runs get, transfer configs runs list, transfer configs runs transfer logs list, transfer configs schedule runs and transfer configs start manual runs
Not what you are looking for? Find all other Google APIs in their Rust documentation index.
Structure of this Library
The API is structured into the following primary items:
- Hub
- a central object to maintain state and allow accessing all Activities
- creates Method Builders which in turn allow access to individual Call Builders
- Resources
- primary types that you can apply Activities to
- a collection of properties and Parts
- Parts
- a collection of properties
- never directly used in Activities
- Activities
- operations to apply to Resources
All structures are marked with applicable traits to further categorize them and ease browsing.
Generally speaking, you can invoke Activities like this:
let r = hub.resource().activity(...).doit()
Or specifically ...
let r = hub.projects().transfer_configs_patch(...).doit()
let r = hub.projects().locations_transfer_configs_patch(...).doit()
let r = hub.projects().transfer_configs_create(...).doit()
let r = hub.projects().locations_transfer_configs_create(...).doit()
let r = hub.projects().transfer_configs_get(...).doit()
let r = hub.projects().locations_transfer_configs_get(...).doit()
The resource() and activity(...) calls create builders. The second one, dealing with Activities, supports various methods to configure the impending operation (not shown here). It is made such that all required arguments have to be specified right away (i.e. (...)), whereas all optional ones can be built up as desired. The doit() method performs the actual communication with the server and returns the respective result.
Usage
Setting up your Project
To use this library, you would put the following lines into your Cargo.toml
file:
[dependencies]
google-bigquerydatatransfer1 = "*"
# This project intentionally uses an old version of Hyper. See
# https://github.com/Byron/google-apis-rs/issues/173 for more
# information.
hyper = "^0.10"
hyper-rustls = "^0.6"
serde = "^1.0"
serde_json = "^1.0"
yup-oauth2 = "^1.0"
A complete example
extern crate hyper;
extern crate hyper_rustls;
extern crate yup_oauth2 as oauth2;
extern crate google_bigquerydatatransfer1 as bigquerydatatransfer1;

use bigquerydatatransfer1::TransferConfig;
use bigquerydatatransfer1::{Result, Error};
use std::default::Default;
use oauth2::{Authenticator, DefaultAuthenticatorDelegate, ApplicationSecret, MemoryStorage};
use bigquerydatatransfer1::BigQueryDataTransfer;

// Get an ApplicationSecret instance by some means. It contains the `client_id` and
// `client_secret`, among other things.
let secret: ApplicationSecret = Default::default();
// Instantiate the authenticator. It will choose a suitable authentication flow for you,
// unless you replace `None` with the desired Flow.
// Provide your own `AuthenticatorDelegate` to adjust the way it operates and get feedback about
// what's going on. You probably want to bring in your own `TokenStorage` to persist tokens and
// retrieve them from storage.
let auth = Authenticator::new(&secret, DefaultAuthenticatorDelegate,
                              hyper::Client::with_connector(hyper::net::HttpsConnector::new(hyper_rustls::TlsClient::new())),
                              <MemoryStorage as Default>::default(), None);
let mut hub = BigQueryDataTransfer::new(hyper::Client::with_connector(hyper::net::HttpsConnector::new(hyper_rustls::TlsClient::new())), auth);
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable!
// Values shown here are possibly random and not representative!
let mut req = TransferConfig::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative!
let result = hub.projects().transfer_configs_patch(req, "name")
             .version_info("dolores")
             .update_mask("kasd")
             .authorization_code("accusam")
             .doit();

match result {
    Err(e) => match e {
        // The Error enum provides details about what exactly happened.
        // You can also just use its `Debug`, `Display` or `Error` traits
        Error::HttpError(_)
        |Error::MissingAPIKey
        |Error::MissingToken(_)
        |Error::Cancelled
        |Error::UploadSizeLimitExceeded(_, _)
        |Error::Failure(_)
        |Error::BadRequest(_)
        |Error::FieldClash(_)
        |Error::JsonDecodeError(_, _) => println!("{}", e),
    },
    Ok(res) => println!("Success: {:?}", res),
}
Handling Errors
All errors produced by the system are provided either as a Result enumeration returned by the doit() methods, or handed as possibly intermediate results to either the Hub Delegate, or the Authenticator Delegate.
When delegates handle errors or intermediate values, they may have a chance to instruct the system to retry. This makes the system potentially resilient to all kinds of errors.
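The retry a delegate can request boils down to a classic retry loop with backoff. A minimal, crate-independent sketch of that idea, using only the standard library (the function name `retry_with_backoff` is illustrative, not part of this crate):

```rust
use std::{thread, time::Duration};

/// Retry `op` up to `max_attempts` times, doubling the wait between tries.
/// This mirrors what a custom delegate could tell the system to do on failure.
fn retry_with_backoff<T, E>(
    max_attempts: u32,
    mut op: impl FnMut() -> std::result::Result<T, E>,
) -> std::result::Result<T, E> {
    let mut wait = Duration::from_millis(10);
    let mut attempt = 1;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            // Out of attempts: surface the last error to the caller.
            Err(e) if attempt >= max_attempts => return Err(e),
            Err(_) => {
                thread::sleep(wait);
                wait *= 2; // exponential backoff
                attempt += 1;
            }
        }
    }
}

fn main() {
    // Simulate an operation that fails twice, then succeeds.
    let mut calls = 0;
    let res = retry_with_backoff(5, || {
        calls += 1;
        if calls < 3 { Err("transient") } else { Ok(calls) }
    });
    assert_eq!(res, Ok(3));
    println!("succeeded after {} calls", calls);
}
```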
Uploads and Downloads
If a method supports downloads, the response body, which is part of the Result, should be read by you to obtain the media. If such a method also supports a Response Result, it will return that by default. You can see it as meta-data for the actual media. To trigger a media download, you will have to set up the builder by making this call: .param("alt", "media").
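Assuming a method on this hub that supports downloads, the pattern would look like the fragment below (the method name is reused from the examples above; whether it actually supports media download depends on the API, and the resource name is hypothetical):

```
// Sketch only, not a verified download-capable call.
let result = hub.projects()
    .transfer_configs_get("projects/my-project/transferConfigs/my-config")
    .param("alt", "media") // request the media instead of the meta-data
    .doit();
// On success, read the response body from the Result to obtain the media.
```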
Methods supporting uploads can do so using up to 2 different protocols: simple and resumable. The distinctiveness of each is represented by customized doit(...) methods, which are then named upload(...) and upload_resumable(...) respectively.
Customization and Callbacks
You may alter the way a doit() method is called by providing a delegate to the Method Builder before making the final doit() call. Respective methods will be called to provide progress information, as well as determine whether the system should retry on failure. The delegate trait is default-implemented, allowing you to customize it with minimal effort.
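Because the Delegate trait is default-implemented, an empty impl on a unit struct is already a valid delegate; you then override only the methods you care about. A sketch, assuming the generated builders expose a delegate setter as in the other google-apis-rs crates:

```
use bigquerydatatransfer1::Delegate;

// All trait methods have defaults, so an empty impl is valid as-is.
struct MyDelegate;
impl Delegate for MyDelegate {}

let mut delegate = MyDelegate;
let result = hub.projects()
    .transfer_configs_get("name")
    .delegate(&mut delegate) // the delegate now sees progress and errors
    .doit();
```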
Optional Parts in Server-Requests
All structures provided by this library are made to be encodable and decodable via JSON. Optionals are used to indicate that partial requests and responses are valid. Most optionals are considered Parts, which are identifiable by name and will be sent to the server to indicate either the set parts of the request or the desired parts in the response.
Builder Arguments
Using method builders, you are able to prepare an action call by repeatedly calling its methods. These will always take a single argument, for which the following statements are true.
- PODs are handed by copy
- strings are passed as &str
- request values are moved
Arguments will always be copied or cloned into the builder, to make them independent of their original life times.
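These three conventions are plain Rust ownership rules. A crate-independent sketch of a builder that follows them (all names here are illustrative, not from this crate):

```rust
// A toy builder following the same argument conventions as the generated
// Call Builders: PODs by copy, strings as &str, request values moved.
#[derive(Default, Debug)]
struct CallBuilder {
    page_size: i32,      // POD: handed by copy
    page_token: String,  // string: passed as &str, cloned into the builder
    request: Vec<u8>,    // request value: moved into the builder
}

impl CallBuilder {
    fn page_size(mut self, n: i32) -> Self {   // copied
        self.page_size = n;
        self
    }
    fn page_token(mut self, t: &str) -> Self { // borrowed, then owned
        self.page_token = t.to_string();
        self
    }
    fn request(mut self, r: Vec<u8>) -> Self { // moved
        self.request = r;
        self
    }
}

fn main() {
    let token = String::from("abc");
    let b = CallBuilder::default()
        .page_size(50)
        .page_token(&token) // the builder keeps its own copy…
        .request(vec![1, 2, 3]);
    drop(token);            // …so the original may go away at any time
    assert_eq!(b.page_size, 50);
    assert_eq!(b.page_token, "abc");
    println!("{:?}", b);
}
```

Taking `self` by value and returning `Self` is what lets the setters chain, exactly as in the `transfer_configs_patch` example above.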
Structs
BigQueryDataTransfer | Central instance to access all BigQueryDataTransfer related resource activities |
CheckValidCredsRequest | A request to determine whether the user has valid credentials. This method is used to limit the number of OAuth popups in the user interface. The user id is inferred from the API call context. If the data source has the Google+ authorization type, this method returns false, as it cannot be determined whether the credentials are already valid merely based on the user id. |
CheckValidCredsResponse | A response indicating whether the credentials exist and are valid. |
Chunk | |
ContentRange | Implements the Content-Range header, for serialization only |
DataSource | Represents data source metadata. Metadata is sufficient to render UI and request proper OAuth tokens. |
DataSourceParameter | Represents a data source parameter with validation rules, so that parameters can be rendered in the UI. These parameters are given to us by supported data sources, and include all needed information for rendering and validation. Thus, whoever uses this api can decide to generate either generic ui, or custom data source specific forms. |
DefaultDelegate | A delegate with a conservative default implementation, which is used if no other delegate is set. |
DummyNetworkStream | |
Empty | A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: |
ErrorResponse | A utility to represent detailed errors we might see in case there are BadRequests. The latter happen if the sent parameters or request structures are unsound |
JsonServerError | A utility type which can decode a server response that indicates error |
ListDataSourcesResponse | Returns list of supported data sources and their metadata. |
ListLocationsResponse | The response message for Locations.ListLocations. |
ListTransferConfigsResponse | The returned list of pipelines in the project. |
ListTransferLogsResponse | The returned list of transfer run messages. |
ListTransferRunsResponse | The returned list of pipelines in the project. |
Location | A resource that represents a Google Cloud Platform location. |
MethodInfo | Contains information about an API request. |
MultiPartReader | Provides a |
ProjectDataSourceCheckValidCredCall | Returns true if valid credentials exist for the given data source and requesting user. Some data sources don't support service accounts, so we need to talk to them on behalf of the end user. This API just checks whether we have an OAuth token for the particular user, which is a prerequisite before the user can create a transfer config. |
ProjectDataSourceGetCall | Retrieves a supported data source and returns its settings, which can be used for UI rendering. |
ProjectDataSourceListCall | Lists supported data sources and returns their settings, which can be used for UI rendering. |
ProjectLocationDataSourceCheckValidCredCall | Returns true if valid credentials exist for the given data source and requesting user. Some data sources don't support service accounts, so we need to talk to them on behalf of the end user. This API just checks whether we have an OAuth token for the particular user, which is a prerequisite before the user can create a transfer config. |
ProjectLocationDataSourceGetCall | Retrieves a supported data source and returns its settings, which can be used for UI rendering. |
ProjectLocationDataSourceListCall | Lists supported data sources and returns their settings, which can be used for UI rendering. |
ProjectLocationGetCall | Gets information about a location. |
ProjectLocationListCall | Lists information about the supported locations for this service. |
ProjectLocationTransferConfigCreateCall | Creates a new data transfer configuration. |
ProjectLocationTransferConfigDeleteCall | Deletes a data transfer configuration, including any associated transfer runs and logs. |
ProjectLocationTransferConfigGetCall | Returns information about a data transfer config. |
ProjectLocationTransferConfigListCall | Returns information about all data transfers in the project. |
ProjectLocationTransferConfigPatchCall | Updates a data transfer configuration. All fields must be set, even if they are not updated. |
ProjectLocationTransferConfigRunDeleteCall | Deletes the specified transfer run. |
ProjectLocationTransferConfigRunGetCall | Returns information about the particular transfer run. |
ProjectLocationTransferConfigRunListCall | Returns information about running and completed jobs. |
ProjectLocationTransferConfigRunTransferLogListCall | Returns user facing log messages for the data transfer run. |
ProjectLocationTransferConfigScheduleRunCall | Creates transfer runs for a time range [start_time, end_time]. For each date - or whatever granularity the data source supports - in the range, one transfer run is created. Note that runs are created per UTC time in the time range. DEPRECATED: use StartManualTransferRuns instead. |
ProjectLocationTransferConfigStartManualRunCall | Start manual transfer runs to be executed now with schedule_time equal to current time. The transfer runs can be created for a time range where the run_time is between start_time (inclusive) and end_time (exclusive), or for a specific run_time. |
ProjectMethods | A builder providing access to all methods supported on project resources. It is not used directly, but through the |
ProjectTransferConfigCreateCall | Creates a new data transfer configuration. |
ProjectTransferConfigDeleteCall | Deletes a data transfer configuration, including any associated transfer runs and logs. |
ProjectTransferConfigGetCall | Returns information about a data transfer config. |
ProjectTransferConfigListCall | Returns information about all data transfers in the project. |
ProjectTransferConfigPatchCall | Updates a data transfer configuration. All fields must be set, even if they are not updated. |
ProjectTransferConfigRunDeleteCall | Deletes the specified transfer run. |
ProjectTransferConfigRunGetCall | Returns information about the particular transfer run. |
ProjectTransferConfigRunListCall | Returns information about running and completed jobs. |
ProjectTransferConfigRunTransferLogListCall | Returns user facing log messages for the data transfer run. |
ProjectTransferConfigScheduleRunCall | Creates transfer runs for a time range [start_time, end_time]. For each date - or whatever granularity the data source supports - in the range, one transfer run is created. Note that runs are created per UTC time in the time range. DEPRECATED: use StartManualTransferRuns instead. |
ProjectTransferConfigStartManualRunCall | Start manual transfer runs to be executed now with schedule_time equal to current time. The transfer runs can be created for a time range where the run_time is between start_time (inclusive) and end_time (exclusive), or for a specific run_time. |
RangeResponseHeader | |
ResumableUploadHelper | A utility type to perform a resumable upload from start to end. |
ScheduleOptions | Options customizing the data transfer schedule. |
ScheduleTransferRunsRequest | A request to schedule transfer runs for a time range. |
ScheduleTransferRunsResponse | A response to schedule transfer runs for a time range. |
ServerError | |
ServerMessage | |
StartManualTransferRunsRequest | A request to start manual transfer runs. |
StartManualTransferRunsResponse | A response to start manual transfer runs. |
Status | The |
TimeRange | A specification for a time range, this will request transfer runs with run_time between start_time (inclusive) and end_time (exclusive). |
TransferConfig | Represents a data transfer configuration. A transfer configuration contains all metadata needed to perform a data transfer. For example, |
TransferMessage | Represents a user facing message for a particular data transfer run. |
TransferRun | Represents a data transfer run. |
XUploadContentType | The |
Enums
Error | |
Scope | Identifies an OAuth2 authorization scope. A scope is needed when requesting an authorization token. |
Traits
CallBuilder | Identifies types which represent builders for a particular resource method |
Delegate | A trait specifying functionality to help controlling any request performed by the API. The trait has a conservative default implementation. |
Hub | Identifies the Hub. There is only one per library; this trait is supposed to make intended use more explicit. The hub allows access to all resource methods more easily. |
MethodsBuilder | Identifies types for building methods of a particular resource type |
NestedType | Identifies types which are only used by other types internally. They have no special meaning, this trait just marks them for completeness. |
Part | Identifies types which are only used as part of other types, which usually are carrying the |
ReadSeek | A utility to specify reader types which provide seeking capabilities too |
RequestValue | Identifies types which are used in API requests. |
Resource | Identifies types which can be inserted and deleted. Types with this trait are most commonly used by clients of this API. |
ResponseResult | Identifies types which are used in API responses. |
ToParts | A trait for all types that can convert themselves into a parts string |
UnusedType | Identifies types which are not actually used by the API. This might be a bug within the google API schema. |
Functions
remove_json_null_values |
Type Definitions
Result | A universal result type used as return for all calls. |