Crate google_dataproc1
This documentation was generated from Dataproc crate version 1.0.14+20200703, where 20200703 is the exact revision of the dataproc:v1 schema built by the mako code generator v1.0.14.
Everything else about the Dataproc v1 API can be found at the official documentation site. The original source code is on github.
Features
Handle the following Resources with ease from the central hub ...
- projects
  - locations autoscaling policies: create, delete, get, get iam policy, list, set iam policy, test iam permissions, update
  - locations workflow templates: create, delete, get, get iam policy, instantiate, instantiate inline, list, set iam policy, test iam permissions, update
  - regions autoscaling policies: create, delete, get, get iam policy, list, set iam policy, test iam permissions, update
  - regions clusters: create, delete, diagnose, get, get iam policy, list, patch, set iam policy, test iam permissions
  - regions jobs: cancel, delete, get, get iam policy, list, patch, set iam policy, submit, submit as operation, test iam permissions
  - regions operations: cancel, delete, get, get iam policy, list, set iam policy, test iam permissions
  - regions workflow templates: create, delete, get, get iam policy, instantiate, instantiate inline, list, set iam policy, test iam permissions, update
Not what you are looking for? Find all other Google APIs in their Rust documentation index.
Structure of this Library
The API is structured into the following primary items:
- Hub
- a central object to maintain state and allow accessing all Activities
- creates Method Builders which in turn allow access to individual Call Builders
- Resources
- primary types that you can apply Activities to
- a collection of properties and Parts
- Parts
- a collection of properties
- never directly used in Activities
- Activities
- operations to apply to Resources
All structures are marked with applicable traits to further categorize them and ease browsing.
Generally speaking, you can invoke Activities like this:
let r = hub.resource().activity(...).doit()
Or specifically ...
let r = hub.projects().regions_clusters_get_iam_policy(...).doit()
let r = hub.projects().regions_workflow_templates_set_iam_policy(...).doit()
let r = hub.projects().regions_clusters_set_iam_policy(...).doit()
let r = hub.projects().regions_workflow_templates_get_iam_policy(...).doit()
let r = hub.projects().regions_jobs_get_iam_policy(...).doit()
let r = hub.projects().locations_autoscaling_policies_set_iam_policy(...).doit()
let r = hub.projects().locations_autoscaling_policies_get_iam_policy(...).doit()
let r = hub.projects().regions_autoscaling_policies_get_iam_policy(...).doit()
let r = hub.projects().regions_operations_get_iam_policy(...).doit()
let r = hub.projects().regions_operations_set_iam_policy(...).doit()
let r = hub.projects().locations_workflow_templates_set_iam_policy(...).doit()
let r = hub.projects().regions_jobs_set_iam_policy(...).doit()
let r = hub.projects().locations_workflow_templates_get_iam_policy(...).doit()
let r = hub.projects().regions_autoscaling_policies_set_iam_policy(...).doit()
The resource() and activity(...) calls create builders. The second one, dealing with Activities, supports various methods to configure the impending operation (not shown here). It is designed such that all required arguments have to be specified right away (i.e. in the (...)), whereas all optional ones can be built up as desired. The doit() method performs the actual communication with the server and returns the respective result.
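The required-now, optional-later shape of these builders can be sketched with a toy stand-in (the types and setters below are illustrative, not the crate's own):

```rust
// A minimal sketch of the builder pattern used by this crate: required
// arguments are taken when the call builder is created, optional ones
// are added via chained setters, and `doit()` performs the "call".
struct GetIamPolicyCall {
    resource: String,       // required, supplied up front
    page_size: Option<u32>, // optional, set via a setter
}

impl GetIamPolicyCall {
    fn new(resource: &str) -> Self {
        GetIamPolicyCall { resource: resource.to_string(), page_size: None }
    }

    // Each setter consumes and returns the builder, enabling chaining.
    fn page_size(mut self, n: u32) -> Self {
        self.page_size = Some(n);
        self
    }

    // In the real crate this would perform the HTTP request; here we
    // just render the request that would be sent.
    fn doit(self) -> String {
        match self.page_size {
            Some(n) => format!("GET {}?pageSize={}", self.resource, n),
            None => format!("GET {}", self.resource),
        }
    }
}

fn main() {
    let r = GetIamPolicyCall::new("projects/p/regions/r/clusters/c")
        .page_size(10)
        .doit();
    println!("{}", r);
}
```

Consuming `self` in each setter is what makes the one-expression chains shown above possible.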
Usage
Setting up your Project
To use this library, you would put the following lines into your Cargo.toml
file:
[dependencies]
google-dataproc1 = "*"
# This project intentionally uses an old version of Hyper. See
# https://github.com/Byron/google-apis-rs/issues/173 for more
# information.
hyper = "^0.10"
hyper-rustls = "^0.6"
serde = "^1.0"
serde_json = "^1.0"
yup-oauth2 = "^1.0"
A complete example
extern crate hyper;
extern crate hyper_rustls;
extern crate yup_oauth2 as oauth2;
extern crate google_dataproc1 as dataproc1;
use dataproc1::GetIamPolicyRequest;
use dataproc1::{Result, Error};
use std::default::Default;
use oauth2::{Authenticator, DefaultAuthenticatorDelegate, ApplicationSecret, MemoryStorage};
use dataproc1::Dataproc;

// Get an ApplicationSecret instance by some means. It contains the `client_id` and
// `client_secret`, among other things.
let secret: ApplicationSecret = Default::default();
// Instantiate the authenticator. It will choose a suitable authentication flow for you,
// unless you replace `None` with the desired Flow.
// Provide your own `AuthenticatorDelegate` to adjust the way it operates and get feedback about
// what's going on. You probably want to bring in your own `TokenStorage` to persist tokens and
// retrieve them from storage.
let auth = Authenticator::new(
    &secret,
    DefaultAuthenticatorDelegate,
    hyper::Client::with_connector(hyper::net::HttpsConnector::new(hyper_rustls::TlsClient::new())),
    <MemoryStorage as Default>::default(),
    None,
);
let mut hub = Dataproc::new(
    hyper::Client::with_connector(hyper::net::HttpsConnector::new(hyper_rustls::TlsClient::new())),
    auth,
);
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable!
// Values shown here are possibly random and not representative!
let mut req = GetIamPolicyRequest::default();

// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative!
let result = hub.projects().regions_clusters_get_iam_policy(req, "resource")
    .doit();

match result {
    Err(e) => match e {
        // The Error enum provides details about what exactly happened.
        // You can also just use its `Debug`, `Display` or `Error` traits.
        Error::HttpError(_)
        | Error::MissingAPIKey
        | Error::MissingToken(_)
        | Error::Cancelled
        | Error::UploadSizeLimitExceeded(_, _)
        | Error::Failure(_)
        | Error::BadRequest(_)
        | Error::FieldClash(_)
        | Error::JsonDecodeError(_, _) => println!("{}", e),
    },
    Ok(res) => println!("Success: {:?}", res),
}
Handling Errors
All errors produced by the system are provided either as the Result enumeration returned by the doit() methods, or handed as possibly intermediate results to either the Hub Delegate or the Authenticator Delegate.
When delegates handle errors or intermediate values, they may have a chance to instruct the system to retry. This makes the system potentially resilient to all kinds of errors.
Uploads and Downloads
If a method supports downloads, the response body, which is part of the Result, should be read by you to obtain the media. If such a method also supports a Response Result, it will return that by default. You can see it as meta-data for the actual media. To trigger a media download, you will have to set up the builder by making this call: .param("alt", "media").
Methods supporting uploads can do so using up to 2 different protocols: simple and resumable. The distinctiveness of each is represented by customized doit(...) methods, which are then named upload(...) and upload_resumable(...) respectively.
Customization and Callbacks
You may alter the way a doit() method is called by providing a delegate to the Method Builder before making the final doit() call. Respective methods will be called to provide progress information, as well as to determine whether the system should retry on failure.
The delegate trait is default-implemented, allowing you to customize it with minimal effort.
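A default-implemented trait means an implementor only overrides the hooks it cares about. A minimal sketch of this idea, analogous to (but not identical with) this crate's delegate trait, with illustrative method names:

```rust
// Every method has a default body, so implementing the trait requires
// overriding nothing; you opt in to exactly the behavior you want.
trait CallDelegate {
    // Called on transient failure; returning true asks the system to retry.
    fn should_retry(&mut self, _attempt: u32) -> bool {
        false // conservative default: never retry
    }
    // Called to report transfer progress.
    fn progress(&mut self, _bytes_so_far: u64) {}
}

// Overriding a single method is enough; the rest keep their defaults.
struct RetryThrice;
impl CallDelegate for RetryThrice {
    fn should_retry(&mut self, attempt: u32) -> bool {
        attempt < 3
    }
}

fn main() {
    let mut d = RetryThrice;
    assert!(d.should_retry(0));
    assert!(!d.should_retry(3));
    println!("delegate defaults work");
}
```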
Optional Parts in Server-Requests
All structures provided by this library are made to be encodable and decodable via JSON. Optionals are used to indicate that partial requests and responses are valid. Most optionals are considered Parts, which are identifiable by name and will be sent to the server to indicate either the set parts of the request or the desired parts in the response.
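The role of Option fields can be sketched with a hand-rolled serializer (the real crate uses serde; the struct and field names below are hypothetical):

```rust
// `Option` fields model "parts": only fields that are `Some` are
// serialized, so a partial request names just the set parts.
struct ClusterPatch {
    cluster_name: Option<String>,
    num_workers: Option<u32>,
}

impl ClusterPatch {
    fn to_json(&self) -> String {
        let mut parts = Vec::new();
        if let Some(ref name) = self.cluster_name {
            parts.push(format!("\"clusterName\":\"{}\"", name));
        }
        if let Some(workers) = self.num_workers {
            parts.push(format!(
                "\"config\":{{\"workerConfig\":{{\"numInstances\":{}}}}}",
                workers
            ));
        }
        format!("{{{}}}", parts.join(","))
    }
}

fn main() {
    let patch = ClusterPatch { cluster_name: None, num_workers: Some(4) };
    // Only the worker count is sent; unset parts are omitted entirely.
    println!("{}", patch.to_json());
}
```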
Builder Arguments
Using method builders, you are able to prepare an action call by repeatedly calling its methods. These will always take a single argument, for which the following statements are true.
- PODs are handed by copy
- strings are passed as
&str
- request values are moved
Arguments will always be copied or cloned into the builder, to make them independent of their original lifetimes.
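These conventions can be sketched with a toy builder (the types here are illustrative, not the crate's own):

```rust
// PODs (e.g. u32, bool) are passed by copy, strings as &str (cloned
// into the builder), and request values are moved.
struct Request {
    body: String,
}

struct CallBuilder {
    request: Request, // moved in
    resource: String, // cloned from a &str
    page_size: u32,   // copied POD
}

fn call(req: Request, resource: &str, page_size: u32) -> CallBuilder {
    CallBuilder {
        request: req,                   // ownership transferred
        resource: resource.to_string(), // independent of the caller's lifetime
        page_size,                      // `Copy` type, passed by value
    }
}

fn main() {
    let req = Request { body: "{}".into() };
    let b = call(req, "projects/p", 25);
    // `req` has been moved into the builder and can no longer be used here,
    // while the &str and the u32 left the caller's values untouched.
    assert_eq!(b.resource, "projects/p");
    assert_eq!(b.page_size, 25);
    println!("body: {}", b.request.body);
}
```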
Structs
AcceleratorConfig | Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/). |
AutoscalingConfig | Autoscaling Policy config associated with the cluster. |
AutoscalingPolicy | Describes an autoscaling policy for Dataproc cluster autoscaler. |
BasicAutoscalingAlgorithm | Basic algorithm for autoscaling. |
BasicYarnAutoscalingConfig | Basic autoscaling configurations for YARN. |
Binding | Associates members with a role. |
CancelJobRequest | A request to cancel a job. |
Chunk | |
Cluster | Describes the identifying information, config, and status of a cluster of Compute Engine instances. |
ClusterConfig | The cluster config. |
ClusterMetrics | Contains cluster daemon metrics, such as HDFS and YARN stats.Beta Feature: This report is available for testing purposes only. It may be changed before final release. |
ClusterSelector | A selector that chooses target cluster for jobs based on metadata. |
ClusterStatus | The status of a cluster and its instances. |
ContentRange | Implements the Content-Range header, for serialization only |
Dataproc | Central instance to access all Dataproc related resource activities |
DefaultDelegate | A delegate with a conservative default implementation, which is used if no other delegate is set. |
DiagnoseClusterRequest | A request to collect cluster diagnostic information. |
DiskConfig | Specifies the config of disk options for a group of VM instances. |
DummyNetworkStream | |
Empty | A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The JSON representation for Empty is empty JSON object {}. |
EncryptionConfig | Encryption settings for the cluster. |
EndpointConfig | Endpoint config for this cluster |
ErrorResponse | A utility to represent detailed errors we might see in case there are BadRequests. The latter happen if the sent parameters or request structures are unsound |
Expr | Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec.Example (Comparison): title: "Summary size limit" description: "Determines if a summary is less than 100 chars" expression: "document.summary.size() < 100" Example (Equality): title: "Requestor is owner" description: "Determines if requestor is the document owner" expression: "document.owner == request.auth.claims.email" Example (Logic): title: "Public documents" description: "Determine whether the document should be publicly visible" expression: "document.type != 'private' && document.type != 'internal'" Example (Data Manipulation): title: "Notification string" description: "Create a notification string with a timestamp." expression: "'New message received at ' + string(document.create_time)" The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information. |
GceClusterConfig | Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster. |
GetIamPolicyRequest | Request message for GetIamPolicy method. |
GetPolicyOptions | Encapsulates settings provided to GetIamPolicy. |
HadoopJob | A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). |
HiveJob | A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. |
InstanceGroupAutoscalingPolicyConfig | Configuration for the size bounds of an instance group, including its proportional size to other groups. |
InstanceGroupConfig | The config settings for Compute Engine resources in an instance group, such as a master or worker group. |
InstantiateWorkflowTemplateRequest | A request to instantiate a workflow template. |
Job | A Dataproc job resource. |
JobPlacement | Dataproc job config. |
JobReference | Encapsulates the full scoping used to reference a job. |
JobScheduling | Job scheduling options. |
JobStatus | Dataproc job status. |
JsonServerError | A utility type which can decode a server response that indicates error |
KerberosConfig | Specifies Kerberos related configuration. |
LifecycleConfig | Specifies the cluster auto-delete schedule configuration. |
ListAutoscalingPoliciesResponse | A response to a request to list autoscaling policies in a project. |
ListClustersResponse | The list of all clusters in a project. |
ListJobsResponse | A list of jobs in a project. |
ListOperationsResponse | The response message for Operations.ListOperations. |
ListWorkflowTemplatesResponse | A response to a request to list workflow templates in a project. |
LoggingConfig | The runtime logging config of the job. |
ManagedCluster | Cluster that is managed by the workflow. |
ManagedGroupConfig | Specifies the resources used to actively manage an instance group. |
MethodInfo | Contains information about an API request. |
MultiPartReader | Provides a Read interface that converts multiple parts into the protocol identified by RFC2387. |
NodeInitializationAction | Specifies an executable to run on a fully configured node and a timeout period for executable completion. |
Operation | This resource represents a long-running operation that is the result of a network API call. |
OrderedJob | A job executed by the workflow. |
ParameterValidation | Configuration for parameter validation. |
PigJob | A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. |
Policy | An Identity and Access Management (IAM) policy, which specifies access controls for Google Cloud resources.A Policy is a collection of bindings. A binding binds one or more members to a single role. Members can be user accounts, service accounts, Google groups, and domains (such as G Suite). A role is a named list of permissions; each role can be an IAM predefined role or a user-created custom role.For some types of Google Cloud resources, a binding can also specify a condition, which is a logical expression that allows access to a resource only if the expression evaluates to true. A condition can add constraints based on attributes of the request, the resource, or both. To learn which resources support conditions in their IAM policies, see the IAM documentation (https://cloud.google.com/iam/help/conditions/resource-policies).JSON example: { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@appspot.gserviceaccount.com" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": [ "user:eve@example.com" ], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", } } ], "etag": "BwWWja0YfJA=", "version": 3 } YAML example: bindings: |
PrestoJob | A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster. |
ProjectLocationAutoscalingPolicyCreateCall | Creates new autoscaling policy. |
ProjectLocationAutoscalingPolicyDeleteCall | Deletes an autoscaling policy. It is an error to delete an autoscaling policy that is in use by one or more clusters. |
ProjectLocationAutoscalingPolicyGetCall | Retrieves autoscaling policy. |
ProjectLocationAutoscalingPolicyGetIamPolicyCall | Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set. |
ProjectLocationAutoscalingPolicyListCall | Lists autoscaling policies in the project. |
ProjectLocationAutoscalingPolicySetIamPolicyCall | Sets the access control policy on the specified resource. Replaces any existing policy.Can return NOT_FOUND, INVALID_ARGUMENT, and PERMISSION_DENIED errors. |
ProjectLocationAutoscalingPolicyTestIamPermissionCall | Returns permissions that a caller has on the specified resource. If the resource does not exist, this will return an empty set of permissions, not a NOT_FOUND error.Note: This operation is designed to be used for building permission-aware UIs and command-line tools, not for authorization checking. This operation may "fail open" without warning. |
ProjectLocationAutoscalingPolicyUpdateCall | Updates (replaces) autoscaling policy.Disabled check for update_mask, because all updates will be full replacements. |
ProjectLocationWorkflowTemplateCreateCall | Creates new workflow template. |
ProjectLocationWorkflowTemplateDeleteCall | Deletes a workflow template. It does not cancel in-progress workflows. |
ProjectLocationWorkflowTemplateGetCall | Retrieves the latest workflow template.Can retrieve previously instantiated template by specifying optional version parameter. |
ProjectLocationWorkflowTemplateGetIamPolicyCall | Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set. |
ProjectLocationWorkflowTemplateInstantiateCall | Instantiates a template and begins execution.The returned Operation can be used to track execution of workflow by polling operations.get. The Operation will complete when entire workflow is finished.The running workflow can be aborted via operations.cancel. This will cause any inflight jobs to be cancelled and workflow-owned clusters to be deleted.The Operation.metadata will be WorkflowMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#workflowmetadata). Also see Using WorkflowMetadata (https://cloud.google.com/dataproc/docs/concepts/workflows/debugging#using_workflowmetadata).On successful completion, Operation.response will be Empty. |
ProjectLocationWorkflowTemplateInstantiateInlineCall | Instantiates a template and begins execution.This method is equivalent to executing the sequence CreateWorkflowTemplate, InstantiateWorkflowTemplate, DeleteWorkflowTemplate.The returned Operation can be used to track execution of workflow by polling operations.get. The Operation will complete when entire workflow is finished.The running workflow can be aborted via operations.cancel. This will cause any inflight jobs to be cancelled and workflow-owned clusters to be deleted.The Operation.metadata will be WorkflowMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#workflowmetadata). Also see Using WorkflowMetadata (https://cloud.google.com/dataproc/docs/concepts/workflows/debugging#using_workflowmetadata).On successful completion, Operation.response will be Empty. |
ProjectLocationWorkflowTemplateListCall | Lists workflows that match the specified filter in the request. |
ProjectLocationWorkflowTemplateSetIamPolicyCall | Sets the access control policy on the specified resource. Replaces any existing policy.Can return NOT_FOUND, INVALID_ARGUMENT, and PERMISSION_DENIED errors. |
ProjectLocationWorkflowTemplateTestIamPermissionCall | Returns permissions that a caller has on the specified resource. If the resource does not exist, this will return an empty set of permissions, not a NOT_FOUND error.Note: This operation is designed to be used for building permission-aware UIs and command-line tools, not for authorization checking. This operation may "fail open" without warning. |
ProjectLocationWorkflowTemplateUpdateCall | Updates (replaces) workflow template. The updated template must contain version that matches the current server version. |
ProjectMethods | A builder providing access to all methods supported on project resources. It is not used directly, but through the Dataproc hub. |
ProjectRegionAutoscalingPolicyCreateCall | Creates new autoscaling policy. |
ProjectRegionAutoscalingPolicyDeleteCall | Deletes an autoscaling policy. It is an error to delete an autoscaling policy that is in use by one or more clusters. |
ProjectRegionAutoscalingPolicyGetCall | Retrieves autoscaling policy. |
ProjectRegionAutoscalingPolicyGetIamPolicyCall | Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set. |
ProjectRegionAutoscalingPolicyListCall | Lists autoscaling policies in the project. |
ProjectRegionAutoscalingPolicySetIamPolicyCall | Sets the access control policy on the specified resource. Replaces any existing policy.Can return NOT_FOUND, INVALID_ARGUMENT, and PERMISSION_DENIED errors. |
ProjectRegionAutoscalingPolicyTestIamPermissionCall | Returns permissions that a caller has on the specified resource. If the resource does not exist, this will return an empty set of permissions, not a NOT_FOUND error.Note: This operation is designed to be used for building permission-aware UIs and command-line tools, not for authorization checking. This operation may "fail open" without warning. |
ProjectRegionAutoscalingPolicyUpdateCall | Updates (replaces) autoscaling policy.Disabled check for update_mask, because all updates will be full replacements. |
ProjectRegionClusterCreateCall | Creates a cluster in a project. The returned Operation.metadata will be ClusterOperationMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#clusteroperationmetadata). |
ProjectRegionClusterDeleteCall | Deletes a cluster in a project. The returned Operation.metadata will be ClusterOperationMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#clusteroperationmetadata). |
ProjectRegionClusterDiagnoseCall | Gets cluster diagnostic information. The returned Operation.metadata will be ClusterOperationMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#clusteroperationmetadata). After the operation completes, Operation.response contains DiagnoseClusterResults (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#diagnoseclusterresults). |
ProjectRegionClusterGetCall | Gets the resource representation for a cluster in a project. |
ProjectRegionClusterGetIamPolicyCall | Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set. |
ProjectRegionClusterListCall | Lists all regions/{region}/clusters in a project alphabetically. |
ProjectRegionClusterPatchCall | Updates a cluster in a project. The returned Operation.metadata will be ClusterOperationMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#clusteroperationmetadata). |
ProjectRegionClusterSetIamPolicyCall | Sets the access control policy on the specified resource. Replaces any existing policy.Can return NOT_FOUND, INVALID_ARGUMENT, and PERMISSION_DENIED errors. |
ProjectRegionClusterTestIamPermissionCall | Returns permissions that a caller has on the specified resource. If the resource does not exist, this will return an empty set of permissions, not a NOT_FOUND error.Note: This operation is designed to be used for building permission-aware UIs and command-line tools, not for authorization checking. This operation may "fail open" without warning. |
ProjectRegionJobCancelCall | Starts a job cancellation request. To access the job resource after cancellation, call regions/{region}/jobs.list (https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/list) or regions/{region}/jobs.get (https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/get). |
ProjectRegionJobDeleteCall | Deletes the job from the project. If the job is active, the delete fails, and the response returns FAILED_PRECONDITION. |
ProjectRegionJobGetCall | Gets the resource representation for a job in a project. |
ProjectRegionJobGetIamPolicyCall | Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set. |
ProjectRegionJobListCall | Lists regions/{region}/jobs in a project. |
ProjectRegionJobPatchCall | Updates a job in a project. |
ProjectRegionJobSetIamPolicyCall | Sets the access control policy on the specified resource. Replaces any existing policy.Can return NOT_FOUND, INVALID_ARGUMENT, and PERMISSION_DENIED errors. |
ProjectRegionJobSubmitAsOperationCall | Submits job to a cluster. |
ProjectRegionJobSubmitCall | Submits a job to a cluster. |
ProjectRegionJobTestIamPermissionCall | Returns permissions that a caller has on the specified resource. If the resource does not exist, this will return an empty set of permissions, not a NOT_FOUND error.Note: This operation is designed to be used for building permission-aware UIs and command-line tools, not for authorization checking. This operation may "fail open" without warning. |
ProjectRegionOperationCancelCall | Starts asynchronous cancellation on a long-running operation. The server makes a best effort to cancel the operation, but success is not guaranteed. If the server doesn't support this method, it returns google.rpc.Code.UNIMPLEMENTED. Clients can use Operations.GetOperation or other methods to check whether the cancellation succeeded or whether the operation completed despite cancellation. On successful cancellation, the operation is not deleted; instead, it becomes an operation with an Operation.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED. |
ProjectRegionOperationDeleteCall | Deletes a long-running operation. This method indicates that the client is no longer interested in the operation result. It does not cancel the operation. If the server doesn't support this method, it returns google.rpc.Code.UNIMPLEMENTED. |
ProjectRegionOperationGetCall | Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service. |
ProjectRegionOperationGetIamPolicyCall | Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set. |
ProjectRegionOperationListCall | Lists operations that match the specified filter in the request. If the server doesn't support this method, it returns UNIMPLEMENTED.NOTE: the name binding allows API services to override the binding to use different resource name schemes, such as users//operations. To override the binding, API services can add a binding such as "/v1/{name=users/}/operations" to their service configuration. For backwards compatibility, the default name includes the operations collection id, however overriding users must ensure the name binding is the parent resource, without the operations collection id. |
ProjectRegionOperationSetIamPolicyCall | Sets the access control policy on the specified resource. Replaces any existing policy.Can return NOT_FOUND, INVALID_ARGUMENT, and PERMISSION_DENIED errors. |
ProjectRegionOperationTestIamPermissionCall | Returns permissions that a caller has on the specified resource. If the resource does not exist, this will return an empty set of permissions, not a NOT_FOUND error.Note: This operation is designed to be used for building permission-aware UIs and command-line tools, not for authorization checking. This operation may "fail open" without warning. |
ProjectRegionWorkflowTemplateCreateCall | Creates new workflow template. |
ProjectRegionWorkflowTemplateDeleteCall | Deletes a workflow template. It does not cancel in-progress workflows. |
ProjectRegionWorkflowTemplateGetCall | Retrieves the latest workflow template.Can retrieve previously instantiated template by specifying optional version parameter. |
ProjectRegionWorkflowTemplateGetIamPolicyCall | Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set. |
ProjectRegionWorkflowTemplateInstantiateCall | Instantiates a template and begins execution.The returned Operation can be used to track execution of workflow by polling operations.get. The Operation will complete when entire workflow is finished.The running workflow can be aborted via operations.cancel. This will cause any inflight jobs to be cancelled and workflow-owned clusters to be deleted.The Operation.metadata will be WorkflowMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#workflowmetadata). Also see Using WorkflowMetadata (https://cloud.google.com/dataproc/docs/concepts/workflows/debugging#using_workflowmetadata).On successful completion, Operation.response will be Empty. |
ProjectRegionWorkflowTemplateInstantiateInlineCall | Instantiates a template and begins execution.This method is equivalent to executing the sequence CreateWorkflowTemplate, InstantiateWorkflowTemplate, DeleteWorkflowTemplate.The returned Operation can be used to track execution of workflow by polling operations.get. The Operation will complete when entire workflow is finished.The running workflow can be aborted via operations.cancel. This will cause any inflight jobs to be cancelled and workflow-owned clusters to be deleted.The Operation.metadata will be WorkflowMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#workflowmetadata). Also see Using WorkflowMetadata (https://cloud.google.com/dataproc/docs/concepts/workflows/debugging#using_workflowmetadata).On successful completion, Operation.response will be Empty. |
ProjectRegionWorkflowTemplateListCall | Lists workflows that match the specified filter in the request. |
ProjectRegionWorkflowTemplateSetIamPolicyCall | Sets the access control policy on the specified resource. Replaces any existing policy.Can return NOT_FOUND, INVALID_ARGUMENT, and PERMISSION_DENIED errors. |
ProjectRegionWorkflowTemplateTestIamPermissionCall | Returns permissions that a caller has on the specified resource. If the resource does not exist, this will return an empty set of permissions, not a NOT_FOUND error.Note: This operation is designed to be used for building permission-aware UIs and command-line tools, not for authorization checking. This operation may "fail open" without warning. |
ProjectRegionWorkflowTemplateUpdateCall | Updates (replaces) a workflow template. The updated template must contain a version that matches the current server version. |
PySparkJob | A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. |
QueryList | A list of queries to run on a cluster. |
RangeResponseHeader | |
RegexValidation | Validation based on regular expressions. |
ReservationAffinity | Reservation Affinity for consuming Zonal reservation. |
ResumableUploadHelper | A utility type to perform a resumable upload from start to end. |
SecurityConfig | Security related configuration, including Kerberos. |
ServerError | |
ServerMessage | |
SetIamPolicyRequest | Request message for SetIamPolicy method. |
SoftwareConfig | Specifies the selection and config of software inside the cluster. |
SparkJob | A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. |
SparkRJob | A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN. |
SparkSqlJob | A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. |
Status | The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC (https://github.com/grpc). Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide (https://cloud.google.com/apis/design/errors). |
SubmitJobRequest | A request to submit a job. |
TemplateParameter | A configurable parameter that replaces one or more fields in the template. Parameterizable fields: - Labels - File uris - Job properties - Job arguments - Script variables - Main class (in HadoopJob and SparkJob) - Zone (in ClusterSelector) |
TestIamPermissionsRequest | Request message for TestIamPermissions method. |
TestIamPermissionsResponse | Response message for TestIamPermissions method. |
ValueValidation | Validation based on a list of allowed values. |
WorkflowTemplate | A Dataproc workflow template resource. |
WorkflowTemplatePlacement | Specifies the workflow execution target. Either managed_cluster or cluster_selector is required. |
XUploadContentType | The |
YarnApplication | A YARN application created by a job. Application information is a subset of |
Enums
Error | |
Scope | Identifies an OAuth2 authorization scope. A scope is needed when requesting an authorization token. |
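A scope ultimately resolves to a scope URL that is sent when requesting a token. The sketch below mimics that mapping with a simplified stand-in enum; the crate's actual `Scope` enum is generated, and only the well-known cloud-platform scope URL is taken as given here.

```rust
// Simplified stand-in for the crate's generated `Scope` enum.
enum Scope {
    // Full access across Google Cloud Platform services.
    CloudPlatform,
}

impl AsRef<str> for Scope {
    fn as_ref(&self) -> &str {
        match self {
            // Well-known OAuth2 scope URL for Cloud Platform access.
            Scope::CloudPlatform => "https://www.googleapis.com/auth/cloud-platform",
        }
    }
}

fn main() {
    let scope = Scope::CloudPlatform;
    assert_eq!(scope.as_ref(), "https://www.googleapis.com/auth/cloud-platform");
    println!("requesting token for scope: {}", scope.as_ref());
}
```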
Traits
CallBuilder | Identifies types which represent builders for a particular resource method |
Delegate | A trait specifying functionality to help controlling any request performed by the API. The trait has a conservative default implementation. |
Hub | Identifies the Hub. There is only one per library; this trait is supposed to make intended use more explicit. The hub allows access to all resource methods more easily. |
MethodsBuilder | Identifies types for building methods of a particular resource type |
NestedType | Identifies types which are only used by other types internally. They have no special meaning; this trait just marks them for completeness. |
Part | Identifies types which are only used as part of other types, which usually are carrying the |
ReadSeek | A utility to specify reader types which provide seeking capabilities too |
RequestValue | Identifies types which are used in API requests. |
Resource | Identifies types which can be inserted and deleted. Types with this trait are most commonly used by clients of this API. |
ResponseResult | Identifies types which are used in API responses. |
ToParts | A trait for all types that can convert themselves into a parts string |
UnusedType | Identifies types which are not actually used by the API. This might be a bug within the Google API schema. |
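The Delegate trait's "conservative default implementation" means implementors override only the hooks they need while every method already has a safe no-op default. The sketch below shows that pattern in isolation; the trait and method names are hypothetical stand-ins, not the crate's actual Delegate API.

```rust
// Hypothetical delegate trait illustrating the default-implementation
// pattern: every hook has a conservative default, so an empty impl is valid.
trait RequestDelegate {
    // Called before a request starts; the default does nothing.
    fn begin(&mut self) {}

    // Decides whether a failed request should be retried.
    // Conservative default: never retry.
    fn should_retry(&mut self, _attempt: u32) -> bool {
        false
    }
}

// An empty impl gets all the conservative defaults for free.
struct NoOverrides;
impl RequestDelegate for NoOverrides {}

// A delegate that overrides just one hook to allow two retries.
struct RetryTwice;
impl RequestDelegate for RetryTwice {
    fn should_retry(&mut self, attempt: u32) -> bool {
        attempt < 2
    }
}

fn main() {
    let mut plain = NoOverrides;
    assert!(!plain.should_retry(0)); // conservative default: no retries

    let mut retrying = RetryTwice;
    assert!(retrying.should_retry(1));
    assert!(!retrying.should_retry(2));
    println!("delegate defaults behave as expected");
}
```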
Functions
remove_json_null_values |
Type Definitions
Result | A universal result type used as return for all calls. |