Crate google_vision1


This documentation was generated from the Vision crate, version 1.0.8+20181001, where 20181001 is the exact revision of the vision:v1 schema built by the mako code generator v1.0.8.

Everything else about the Vision v1 API can be found at the official documentation site. The original source code is on github.

Features

Handle the following Resources with ease from the central hub

Not what you are looking for? Find all other Google APIs in their Rust documentation index.

Structure of this Library

The API is structured into the following primary items:

  • Hub
    • a central object to maintain state and allow accessing all Activities
    • creates Method Builders which in turn allow access to individual Call Builders
  • Resources
    • primary types that you can apply Activities to
    • a collection of properties and Parts
    • Parts
      • a collection of properties
      • never directly used in Activities
  • Activities
    • operations to apply to Resources

All structures are marked with applicable traits to further categorize them and ease browsing.

Generally speaking, you can invoke Activities like this:

let r = hub.resource().activity(...).doit()

Or specifically …

let r = hub.operations().list(...).doit()
let r = hub.locations().operations_get(...).doit()
let r = hub.operations().get(...).doit()
let r = hub.operations().cancel(...).doit()
let r = hub.operations().delete(...).doit()
let r = hub.files().async_batch_annotate(...).doit()

The resource() and activity(...) calls create builders. The second one, dealing with Activities, supports various methods to configure the impending operation (not shown here). It is designed such that all required arguments have to be specified right away (i.e. (...)), whereas all optional ones can be built up as desired. The doit() method performs the actual communication with the server and returns the respective result.

Usage

Setting up your Project

To use this library, you would put the following lines into your Cargo.toml file:

[dependencies]
google-vision1 = "*"
hyper = "^0.10"
hyper-rustls = "^0.6"
serde = "^1.0"
serde_json = "^1.0"
yup-oauth2 = "^1.0"

A complete example

extern crate hyper;
extern crate hyper_rustls;
extern crate yup_oauth2 as oauth2;
extern crate google_vision1 as vision1;
use vision1::{Result, Error};
use std::default::Default;
use oauth2::{Authenticator, DefaultAuthenticatorDelegate, ApplicationSecret, MemoryStorage};
use vision1::Vision;
 
// Get an ApplicationSecret instance by some means. It contains the `client_id` and 
// `client_secret`, among other things.
let secret: ApplicationSecret = Default::default();
// Instantiate the authenticator. It will choose a suitable authentication flow for you, 
// unless you replace `None` with the desired Flow.
// Provide your own `AuthenticatorDelegate` to adjust the way it operates and get feedback about 
// what's going on. You probably want to bring in your own `TokenStorage` to persist tokens and
// retrieve them from storage.
let auth = Authenticator::new(&secret, DefaultAuthenticatorDelegate,
                              hyper::Client::with_connector(hyper::net::HttpsConnector::new(hyper_rustls::TlsClient::new())),
                              <MemoryStorage as Default>::default(), None);
let mut hub = Vision::new(hyper::Client::with_connector(hyper::net::HttpsConnector::new(hyper_rustls::TlsClient::new())), auth);
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative!
let result = hub.operations().list("name")
             .page_token("dolores")
             .page_size(-63)
             .filter("accusam")
             .doit();
 
match result {
    Err(e) => match e {
        // The Error enum provides details about what exactly happened.
        // You can also just use its `Debug`, `Display` or `Error` traits
         Error::HttpError(_)
        |Error::MissingAPIKey
        |Error::MissingToken(_)
        |Error::Cancelled
        |Error::UploadSizeLimitExceeded(_, _)
        |Error::Failure(_)
        |Error::BadRequest(_)
        |Error::FieldClash(_)
        |Error::JsonDecodeError(_, _) => println!("{}", e),
    },
    Ok(res) => println!("Success: {:?}", res),
}

Handling Errors

All errors produced by the system are provided either as a Result enumeration returned by the doit() methods, or handed as possibly intermediate results to either the Hub Delegate or the Authenticator Delegate.

When delegates handle errors or intermediate values, they may have a chance to instruct the system to retry. This makes the system potentially resilient to all kinds of errors.

Uploads and Downloads

If a method supports downloads, the response body, which is part of the Result, should be read by you to obtain the media. If such a method also supports a Response Result, it will return that by default. You can see it as meta-data for the actual media. To trigger a media download, you will have to set up the builder by making this call: .param("alt", "media").
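Continuing the setup from the complete example above, triggering a media download is one extra builder call. This is only a sketch: the builder method shown is taken from the earlier examples, and whether a given method actually supports media downloads depends on the API.

```rust
// `hub` is the Vision hub constructed in the complete example above.
// Adding the `alt=media` parameter asks the server for the raw media
// content instead of the JSON meta-data that is returned by default.
let result = hub.operations().get("name")
             .param("alt", "media")
             .doit();
```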

Methods supporting uploads can do so using up to two different protocols: simple and resumable. Each is represented by a customized doit(...) method, named upload(...) and upload_resumable(...) respectively.

Customization and Callbacks

You may alter the way a doit() method is called by providing a delegate to the Method Builder before making the final doit() call. Respective methods will be called to provide progress information, as well as to determine whether the system should retry on failure.

The delegate trait is default-implemented, allowing you to customize it with minimal effort.
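The idea behind a default-implemented delegate trait can be sketched in plain, self-contained Rust. The trait and method names below are illustrative stand-ins, not this crate's actual Delegate API: the point is that default method bodies let you override only the callbacks you care about, such as deciding whether a failed call should be retried.

```rust
use std::time::Duration;

// Illustrative sketch of a default-implemented delegate trait; the real
// crate's trait has its own (different) set of callback methods.
trait CallDelegate {
    // Called before the request is sent; the default does nothing.
    fn begin(&mut self) {}

    // Called on failure. Returning Some(duration) asks the system to
    // retry after that delay; None aborts. Default: never retry.
    fn on_http_failure(&mut self, _status: u16) -> Option<Duration> {
        None
    }
}

// Overriding a single method is enough; every other callback keeps
// its conservative default implementation.
struct RetryOn5xx {
    attempts_left: u32,
}

impl CallDelegate for RetryOn5xx {
    fn on_http_failure(&mut self, status: u16) -> Option<Duration> {
        if status >= 500 && self.attempts_left > 0 {
            self.attempts_left -= 1;
            Some(Duration::from_secs(1))
        } else {
            None
        }
    }
}

fn main() {
    let mut d = RetryOn5xx { attempts_left: 2 };
    assert!(d.on_http_failure(503).is_some()); // first 5xx: retry
    assert!(d.on_http_failure(503).is_some()); // second 5xx: retry
    assert!(d.on_http_failure(503).is_none()); // budget exhausted: abort
    // Client errors are never retried by this delegate.
    assert!(RetryOn5xx { attempts_left: 2 }.on_http_failure(404).is_none());
}
```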

Optional Parts in Server-Requests

All structures provided by this library are made to be encodable and decodable via JSON. Optionals are used to indicate that partial requests and responses are valid. Most optionals are considered Parts, which are identifiable by name and will be sent to the server to indicate either the set parts of the request or the desired parts in the response.

Builder Arguments

Using method builders, you are able to prepare an action call by repeatedly calling its methods. These will always take a single argument, for which the following statements are true.

Arguments will always be copied or cloned into the builder, to make them independent of their original life times.
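This convention can be illustrated with a minimal, self-contained builder. `ListCall` and its setters are hypothetical stand-ins, not this crate's types: each setter takes a single argument and clones it into the builder, so the builder owns its data independently of the caller's lifetimes.

```rust
// A minimal illustration of the builder-argument convention described above.
// `ListCall` and its setters are hypothetical, not part of this crate.
#[derive(Debug, Default)]
struct ListCall {
    name: String,
    page_token: Option<String>,
    page_size: Option<i32>,
}

impl ListCall {
    fn new(name: &str) -> Self {
        // The required argument is cloned into the builder right away.
        ListCall { name: name.to_string(), ..Default::default() }
    }

    // Optional arguments are likewise copied or cloned; the builder never
    // borrows from the caller, so it is independent of the original lifetimes.
    fn page_token(mut self, token: &str) -> Self {
        self.page_token = Some(token.to_string());
        self
    }

    fn page_size(mut self, size: i32) -> Self {
        self.page_size = Some(size);
        self
    }
}

fn build_example() -> ListCall {
    // `token` is dropped at the end of this function...
    let token = String::from("dolores");
    ListCall::new("name").page_token(&token).page_size(-63)
}

fn main() {
    // ...yet the returned builder still owns its own copy of the data.
    let call = build_example();
    assert_eq!(call.page_token.as_deref(), Some("dolores"));
    println!("{:?}", call);
}
```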

Structs

Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features.
Response to an image annotation request.
An offline file annotation request.
Multiple async file annotation requests are batched into a single service call.
Multiple image annotation requests are batched into a single service call.
Response to a batch image annotation request.
Logical element on the page.
A bounding polygon for the detected image annotation.
The request message for Operations.CancelOperation.
Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to/from color representations in various languages over compactness; for example, the fields of this representation can be trivially provided to the constructor of “java.awt.Color” in Java; it can also be trivially provided to UIColor’s “+colorWithRed:green:blue:alpha” method in iOS; and, with just a little work, it can be easily formatted into a CSS “rgba()” string in JavaScript, as well. Here are some examples:
Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image.
Single crop hint that is used to generate a new crop when serving an image.
Set of crop hints that are used to generate new crops when serving images.
Parameters for crop hints annotation request.
A delegate with a conservative default implementation, which is used if no other delegate is set.
Detected start or end of a structural component.
Detected language for a structural component.
Set of dominant colors and their corresponding scores.
A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance:
Set of detected entity features.
A utility to represent detailed errors we might see in case there are BadRequests. The latter happen if the sent parameters or request structures are unsound.
A face annotation object contains the results of face detection.
The type of Google Cloud Vision API detection to perform, and the maximum number of results to return for that type. Multiple Feature objects can be specified in the features list.
Run asynchronous image detection and annotation for a list of generic files, such as PDF files, which may contain multiple pages and multiple images per page. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains OperationMetadata (metadata). Operation.response contains AsyncBatchAnnotateFilesResponse (results).
A builder providing access to all methods supported on file resources. It is not used directly, but through the Vision hub.
The Google Cloud Storage location where the output will be written to.
The Google Cloud Storage location where the input will be read from.
Client image to perform Google Cloud Vision API tasks over.
Run image detection and annotation for a batch of images.
If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
Image context and/or feature-specific parameters.
A builder providing access to all methods supported on image resources. It is not used directly, but through the Vision hub.
Stores image properties, such as dominant colors.
External image source (Google Cloud Storage or web URL image location).
The desired input location and metadata.
A face-specific landmark (for example, a face feature).
An object representing a latitude/longitude pair. This is expressed as a pair of doubles representing degrees latitude and degrees longitude. Unless specified otherwise, this must conform to the WGS84 standard. Values must be within normalized ranges.
Rectangle determined by min and max LatLng pairs.
The response message for Operations.ListOperations.
Set of detected objects with bounding boxes.
Detected entity location information.
A builder providing access to all methods supported on location resources. It is not used directly, but through the Vision hub.
Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service.
Contains information about an API request.
Provides a Read interface that converts multiple parts into the protocol identified by RFC2387. Note: This implementation is just as rich as it needs to be to perform uploads to google APIs, and might not be a fully-featured implementation.
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
This resource represents a long-running operation that is the result of a network API call.
Starts asynchronous cancellation on a long-running operation. The server makes a best effort to cancel the operation, but success is not guaranteed. If the server doesn’t support this method, it returns google.rpc.Code.UNIMPLEMENTED. Clients can use Operations.GetOperation or other methods to check whether the cancellation succeeded or whether the operation completed despite cancellation. On successful cancellation, the operation is not deleted; instead, it becomes an operation with an Operation.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED.
Deletes a long-running operation. This method indicates that the client is no longer interested in the operation result. It does not cancel the operation. If the server doesn’t support this method, it returns google.rpc.Code.UNIMPLEMENTED.
Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service.
Lists operations that match the specified filter in the request. If the server doesn’t support this method, it returns UNIMPLEMENTED.
A builder providing access to all methods supported on operation resources. It is not used directly, but through the Vision hub.
The desired output location and metadata.
Detected page from OCR.
Structural unit of text representing a number of words in certain order.
A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
A Property consists of a user-supplied name/value pair.
Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. The error model is designed to be:
A single symbol representation.
TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is like this: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may further have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail.
Additional information detected on the structural component.
A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
Central instance to access all Vision related resource activities
Relevant information for the image from the Internet.
Parameters for web detection request.
Entity deduced from similar images on the Internet.
Metadata for online images.
Label to provide extra metadata for the web detection.
Metadata for web pages.
A word representation.

Enums

Identifies an OAuth2 authorization scope. A scope is needed when requesting an authorization token.

Traits

Identifies types which represent builders for a particular resource method
A trait specifying functionality to help controlling any request performed by the API. The trait has a conservative default implementation.
Identifies the Hub. There is only one per library; this trait is supposed to make intended use more explicit. The hub allows accessing all resource methods more easily.
Identifies types for building methods of a particular resource type
Identifies types which are only used by other types internally. They have no special meaning, this trait just marks them for completeness.
Identifies types which are only used as part of other types, which usually are carrying the Resource trait.
A utility to specify reader types which provide seeking capabilities too
Identifies types which are used in API requests.
Identifies types which can be inserted and deleted. Types with this trait are most commonly used by clients of this API.
Identifies types which are used in API responses.
A trait for all types that can convert themselves into a parts string

Functions

Type Definitions

A universal result type used as return for all calls.