Struct google_remotebuildexecution2::BlobMethods
A builder providing access to all methods supported on blob resources.
It is not used directly, but through the RemoteBuildExecution hub.
Example
Instantiate a resource builder
extern crate hyper;
extern crate hyper_rustls;
extern crate yup_oauth2 as oauth2;
extern crate google_remotebuildexecution2 as remotebuildexecution2;

use std::default::Default;
use oauth2::{Authenticator, DefaultAuthenticatorDelegate, ApplicationSecret, MemoryStorage};
use remotebuildexecution2::RemoteBuildExecution;

let secret: ApplicationSecret = Default::default();
let auth = Authenticator::new(
    &secret,
    DefaultAuthenticatorDelegate,
    hyper::Client::with_connector(hyper::net::HttpsConnector::new(hyper_rustls::TlsClient::new())),
    <MemoryStorage as Default>::default(),
    None,
);
let mut hub = RemoteBuildExecution::new(
    hyper::Client::with_connector(hyper::net::HttpsConnector::new(hyper_rustls::TlsClient::new())),
    auth,
);
// Usually you wouldn't bind this to a variable, but keep calling *CallBuilders*
// like `batch_read(...)`, `batch_update(...)`, `find_missing(...)` and `get_tree(...)`
// to build up your call.
let rb = hub.blobs();
Methods
impl<'a, C, A> BlobMethods<'a, C, A>
pub fn find_missing(
&self,
request: BuildBazelRemoteExecutionV2FindMissingBlobsRequest,
instance_name: &str
) -> BlobFindMissingCall<'a, C, A>
Create a builder to help you perform the following task:
Determine if blobs are present in the CAS.
Clients can use this API before uploading blobs to determine which ones are already present in the CAS and do not need to be uploaded again.
There are no method-specific errors.
Arguments
request - No description provided.
instanceName - The instance of the execution system to operate against. A server may support multiple instances of the execution system (with their own workers, storage, caches, etc.). The server MAY require use of this field to select between them in an implementation-defined fashion, otherwise it can be omitted.
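A minimal sketch of using this builder, assuming the `hub` from the setup example above; the instance name and the struct field names (`blob_digests`, `hash`, `size_bytes`, `missing_blob_digests`) follow the underlying proto and may need adjusting against the generated types:

```rust
extern crate google_remotebuildexecution2 as remotebuildexecution2;
use remotebuildexecution2::{BuildBazelRemoteExecutionV2FindMissingBlobsRequest,
                            BuildBazelRemoteExecutionV2Digest};

// Ask the CAS which of these digests it is missing; only those need uploading.
let req = BuildBazelRemoteExecutionV2FindMissingBlobsRequest {
    blob_digests: Some(vec![BuildBazelRemoteExecutionV2Digest {
        // SHA-256 of the empty blob, as a 64-character lowercase hex string.
        hash: Some("e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855".to_string()),
        size_bytes: Some("0".to_string()),
    }]),
};

let result = hub.blobs()
    .find_missing(req, "projects/my-project/instances/default_instance")
    .doit();

match result {
    Ok((_response, list)) => {
        // Digests listed here are absent from the CAS and must be uploaded.
        println!("missing: {:?}", list.missing_blob_digests);
    }
    Err(e) => println!("call failed: {}", e),
}
```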
pub fn batch_update(
&self,
request: BuildBazelRemoteExecutionV2BatchUpdateBlobsRequest,
instance_name: &str
) -> BlobBatchUpdateCall<'a, C, A>
Create a builder to help you perform the following task:
Upload many blobs at once.
The server may enforce a limit of the combined total size of blobs to be uploaded using this API. This limit may be obtained using the Capabilities API. Requests exceeding the limit should either be split into smaller chunks or uploaded using the ByteStream API, as appropriate.
This request is equivalent to calling a ByteStream Write request on each individual blob, in parallel. The requests may succeed or fail independently.
Errors:
INVALID_ARGUMENT: The client attempted to upload more than the server supported limit.
Individual requests may additionally return the following errors:
RESOURCE_EXHAUSTED: There is insufficient disk quota to store the blob.
INVALID_ARGUMENT: The Digest does not match the provided data.
Arguments
request - No description provided.
instanceName - The instance of the execution system to operate against. A server may support multiple instances of the execution system (with their own workers, storage, caches, etc.). The server MAY require use of this field to select between them in an implementation-defined fashion, otherwise it can be omitted.
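Because individual uploads in a batch may succeed or fail independently, callers should inspect the per-blob statuses rather than only the overall result. A sketch, assuming the `hub` from the setup example and that the generated response type exposes `responses` entries with `digest` and `status` fields (names taken from the proto, not verified against the crate):

```rust
extern crate google_remotebuildexecution2 as remotebuildexecution2;
use remotebuildexecution2::BuildBazelRemoteExecutionV2BatchUpdateBlobsRequest;

// Populate `requests` with (digest, data) pairs before calling; left at the
// default (empty) here for brevity.
let req = BuildBazelRemoteExecutionV2BatchUpdateBlobsRequest::default();

let result = hub.blobs()
    .batch_update(req, "projects/my-project/instances/default_instance")
    .doit();

// The overall call can succeed while single uploads fail (e.g. with
// RESOURCE_EXHAUSTED), so check each entry's status individually.
if let Ok((_response, body)) = result {
    for r in body.responses.unwrap_or_default() {
        println!("digest: {:?}, status: {:?}", r.digest, r.status);
    }
}
```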
pub fn batch_read(
&self,
request: BuildBazelRemoteExecutionV2BatchReadBlobsRequest,
instance_name: &str
) -> BlobBatchReadCall<'a, C, A>
Create a builder to help you perform the following task:
Download many blobs at once.
The server may enforce a limit of the combined total size of blobs to be downloaded using this API. This limit may be obtained using the Capabilities API. Requests exceeding the limit should either be split into smaller chunks or downloaded using the ByteStream API, as appropriate.
This request is equivalent to calling a ByteStream Read request on each individual blob, in parallel. The requests may succeed or fail independently.
Errors:
INVALID_ARGUMENT: The client attempted to read more than the server supported limit.
Errors on individual reads are returned in the corresponding digest's status.
Arguments
request - No description provided.
instanceName - The instance of the execution system to operate against. A server may support multiple instances of the execution system (with their own workers, storage, caches, etc.). The server MAY require use of this field to select between them in an implementation-defined fashion, otherwise it can be omitted.
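Since read errors surface per digest rather than as an overall call failure, each response entry must be checked. A sketch under the same assumptions as above (`hub` from the setup example; `responses`, `digest`, `status` field names taken from the proto):

```rust
extern crate google_remotebuildexecution2 as remotebuildexecution2;
use remotebuildexecution2::BuildBazelRemoteExecutionV2BatchReadBlobsRequest;

// Fill `digests` with the blobs to fetch; left at the default (empty) here.
let req = BuildBazelRemoteExecutionV2BatchReadBlobsRequest::default();

let result = hub.blobs()
    .batch_read(req, "projects/my-project/instances/default_instance")
    .doit();

// A failed read of one blob does not fail the call: its error appears only
// in that entry's `status`, so inspect every response.
if let Ok((_response, body)) = result {
    for r in body.responses.unwrap_or_default() {
        println!("digest: {:?}, status: {:?}", r.digest, r.status);
    }
}
```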
pub fn get_tree(
&self,
instance_name: &str,
hash: &str,
size_bytes: &str
) -> BlobGetTreeCall<'a, C, A>
Create a builder to help you perform the following task:
Fetch the entire directory tree rooted at a node.
This request must be targeted at a Directory stored in the ContentAddressableStorage (CAS). The server will enumerate the Directory tree recursively and return every node descended from the root.
The GetTreeRequest.page_token parameter can be used to skip ahead in the stream (e.g. when retrying a partially completed and aborted request), by setting it to a value taken from the GetTreeResponse.next_page_token of the last successfully processed GetTreeResponse.
The exact traversal order is unspecified and, unless retrieving subsequent pages from an earlier request, is not guaranteed to be stable across multiple invocations of GetTree.
If part of the tree is missing from the CAS, the server will return the portion present and omit the rest.
Errors:
NOT_FOUND: The requested tree root is not present in the CAS.
Arguments
instanceName - The instance of the execution system to operate against. A server may support multiple instances of the execution system (with their own workers, storage, caches, etc.). The server MAY require use of this field to select between them in an implementation-defined fashion, otherwise it can be omitted.
hash - The hash. In the case of SHA-256, it will always be a lowercase hex string exactly 64 characters long.
sizeBytes - The size of the blob, in bytes.
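A sketch of a paged get_tree call, assuming the `hub` from the setup example, that the generated call builder exposes a `page_token(...)` setter for the GetTreeRequest.page_token query parameter (as these builders typically do), and that the response exposes `directories` and `next_page_token`; the instance name, digest, and token below are placeholders:

```rust
// The digest identifies the root Directory in the CAS; here, the SHA-256 of
// the empty blob with size 0.
let result = hub.blobs()
    .get_tree("projects/my-project/instances/default_instance",
              "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
              "0")
    // Resume a partially completed traversal from an earlier response.
    .page_token("token-from-previous-GetTreeResponse")
    .doit();

if let Ok((_response, tree)) = result {
    // `directories` holds the nodes returned in this page; a non-empty
    // `next_page_token` means more pages remain.
    println!("directories: {:?}", tree.directories);
    println!("next page: {:?}", tree.next_page_token);
}
```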
Trait Implementations
impl<'a, C, A> MethodsBuilder for BlobMethods<'a, C, A>
Auto Trait Implementations
impl<'a, C, A> !Send for BlobMethods<'a, C, A>
impl<'a, C, A> Unpin for BlobMethods<'a, C, A>
impl<'a, C, A> !Sync for BlobMethods<'a, C, A>
impl<'a, C, A> !UnwindSafe for BlobMethods<'a, C, A>
impl<'a, C, A> !RefUnwindSafe for BlobMethods<'a, C, A>
Blanket Implementations
impl<T> From<T> for T
impl<T, U> Into<U> for T where
U: From<T>,
impl<T, U> TryFrom<U> for T where
U: Into<T>,
type Error = Infallible
The type returned in the event of a conversion error.
fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
impl<T> BorrowMut<T> for T where
T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> Borrow<T> for T where
T: ?Sized,
impl<T> Any for T where
T: 'static + ?Sized,
impl<T> Typeable for T where
T: Any,