Struct aws_sdk_neptunedata::client::Client

pub struct Client { /* private fields */ }

Client for Amazon NeptuneData

Client for invoking operations on Amazon NeptuneData. Each operation on Amazon NeptuneData is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.

§Constructing a Client

A Config is required to construct a client. For most use cases, the aws-config crate should be used to automatically resolve this config using aws_config::load_from_env(), since this will resolve an SdkConfig which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling aws_config::from_env() instead, which returns a ConfigLoader that uses the builder pattern to customize the default config.

In the simplest case, creating a client looks as follows:

let config = aws_config::load_from_env().await;
let client = aws_sdk_neptunedata::Client::new(&config);

Occasionally, SDKs may have additional service-specific values that can be set on the Config that are absent from SdkConfig, or slightly different settings for a specific client may be desired. The Config struct implements From<&SdkConfig>, so setting these specific settings can be done as follows:

let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_neptunedata::config::Builder::from(&sdk_config)
    .some_service_specific_setting("value")
    .build();

See the aws-config docs and Config for more information on customizing configuration.

Note: Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
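
Because of that cost, a workable pattern is to build the client once and clone the handle wherever it is needed: Client implements Clone (see the trait implementations below), and clones share the same underlying handle. A minimal sketch, assuming a Tokio runtime:

let config = aws_config::load_from_env().await;
let client = aws_sdk_neptunedata::Client::new(&config);

// Clones share the underlying handle rather than re-initializing connections.
let task_client = client.clone();
tokio::spawn(async move {
    let _status = task_client.get_engine_status().send().await;
});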

§Using the Client

A client has a function for every operation that can be performed by the service. For example, the CancelGremlinQuery operation has a Client::cancel_gremlin_query function, which returns a builder for that operation. The fluent builder ultimately has a send() function that returns an async future which returns a result, as illustrated below:

let result = client.cancel_gremlin_query()
    .query_id("example")
    .send()
    .await;

The underlying HTTP requests made by the client can be modified with the customize_operation function on the fluent builder. See the customize module for more information.
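
As a rough sketch of what such a customization can look like, written inside an async fn that returns a Result. The mutate_request call and the header mutation below are assumptions about the customization surface, not confirmed signatures; consult the customize module for the authoritative API:

let result = client.get_engine_status()
    .customize()
    .mutate_request(|req| {
        // Hypothetical: attach an extra header before the request is dispatched.
        req.headers_mut().insert("x-example-header", "example-value");
    })
    .send()
    .await;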

Implementations§

impl Client

pub fn cancel_gremlin_query(&self) -> CancelGremlinQueryFluentBuilder

Constructs a fluent builder for the CancelGremlinQuery operation.

impl Client

pub fn cancel_loader_job(&self) -> CancelLoaderJobFluentBuilder

Constructs a fluent builder for the CancelLoaderJob operation.

impl Client

pub fn cancel_ml_data_processing_job(&self) -> CancelMLDataProcessingJobFluentBuilder

Constructs a fluent builder for the CancelMLDataProcessingJob operation.

impl Client

pub fn cancel_ml_model_training_job(&self) -> CancelMLModelTrainingJobFluentBuilder

Constructs a fluent builder for the CancelMLModelTrainingJob operation.

impl Client

pub fn cancel_ml_model_transform_job(&self) -> CancelMLModelTransformJobFluentBuilder

Constructs a fluent builder for the CancelMLModelTransformJob operation.

impl Client

pub fn cancel_open_cypher_query(&self) -> CancelOpenCypherQueryFluentBuilder

Constructs a fluent builder for the CancelOpenCypherQuery operation.

impl Client

pub fn create_ml_endpoint(&self) -> CreateMLEndpointFluentBuilder

Constructs a fluent builder for the CreateMLEndpoint operation.

impl Client

pub fn delete_ml_endpoint(&self) -> DeleteMLEndpointFluentBuilder

Constructs a fluent builder for the DeleteMLEndpoint operation.

impl Client

pub fn delete_propertygraph_statistics(&self) -> DeletePropertygraphStatisticsFluentBuilder

Constructs a fluent builder for the DeletePropertygraphStatistics operation.

impl Client

pub fn delete_sparql_statistics(&self) -> DeleteSparqlStatisticsFluentBuilder

Constructs a fluent builder for the DeleteSparqlStatistics operation.

impl Client

pub fn execute_fast_reset(&self) -> ExecuteFastResetFluentBuilder

Constructs a fluent builder for the ExecuteFastReset operation. A two-step usage sketch follows the list below.

  • The fluent builder is configurable:
    • action(Action) / set_action(Option<Action>):
      required: true

      The fast reset action. One of the following values:

      • initiateDatabaseReset   –   This action generates a unique token needed to actually perform the fast reset.

      • performDatabaseReset   –   This action uses the token generated by the initiateDatabaseReset action to actually perform the fast reset.


    • token(impl Into<String>) / set_token(Option<String>):
      required: false

      The fast-reset token to initiate the reset.


  • On success, responds with ExecuteFastResetOutput with field(s):
    • status(String):

      The status is only returned for the performDatabaseReset action, and indicates whether or not the fast reset request is accepted.

    • payload(Option<FastResetToken>):

      The payload is only returned by the initiateDatabaseReset action, and contains the unique token to use with the performDatabaseReset action to make the reset occur.

  • On failure, responds with SdkError<ExecuteFastResetError>
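
As the action values above suggest, a fast reset is a two-step flow: initiate to obtain a one-time token, then perform the reset with that token. A minimal sketch inside an async fn returning a Result; the Action variant names and the FastResetToken token accessor are assumptions based on the shapes described above:

use aws_sdk_neptunedata::types::Action;

// Step 1: generate the one-time token required to actually perform the reset.
let init = client.execute_fast_reset()
    .action(Action::InitiateDatabaseReset)
    .send()
    .await?;
let token = init.payload().and_then(|p| p.token()).unwrap_or_default();

// Step 2: perform the reset using the token from step 1.
let done = client.execute_fast_reset()
    .action(Action::PerformDatabaseReset)
    .token(token)
    .send()
    .await?;
println!("fast reset status: {}", done.status());
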
impl Client

pub fn execute_gremlin_explain_query(&self) -> ExecuteGremlinExplainQueryFluentBuilder

Constructs a fluent builder for the ExecuteGremlinExplainQuery operation.

impl Client

pub fn execute_gremlin_profile_query(&self) -> ExecuteGremlinProfileQueryFluentBuilder

Constructs a fluent builder for the ExecuteGremlinProfileQuery operation.

impl Client

pub fn execute_gremlin_query(&self) -> ExecuteGremlinQueryFluentBuilder

Constructs a fluent builder for the ExecuteGremlinQuery operation.

impl Client

pub fn execute_open_cypher_explain_query(&self) -> ExecuteOpenCypherExplainQueryFluentBuilder

Constructs a fluent builder for the ExecuteOpenCypherExplainQuery operation.

impl Client

pub fn execute_open_cypher_query(&self) -> ExecuteOpenCypherQueryFluentBuilder

Constructs a fluent builder for the ExecuteOpenCypherQuery operation.
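
Query operations follow the same fluent pattern. A minimal sketch, assuming the builder exposes the operation's openCypherQuery input as an open_cypher_query setter:

let result = client.execute_open_cypher_query()
    .open_cypher_query("MATCH (n) RETURN n LIMIT 1")
    .send()
    .await;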

impl Client

pub fn get_engine_status(&self) -> GetEngineStatusFluentBuilder

Constructs a fluent builder for the GetEngineStatus operation.

impl Client

pub fn get_gremlin_query_status(&self) -> GetGremlinQueryStatusFluentBuilder

Constructs a fluent builder for the GetGremlinQueryStatus operation.

impl Client

pub fn get_loader_job_status(&self) -> GetLoaderJobStatusFluentBuilder

Constructs a fluent builder for the GetLoaderJobStatus operation.

impl Client

pub fn get_ml_data_processing_job(&self) -> GetMLDataProcessingJobFluentBuilder

Constructs a fluent builder for the GetMLDataProcessingJob operation.

impl Client

pub fn get_ml_endpoint(&self) -> GetMLEndpointFluentBuilder

Constructs a fluent builder for the GetMLEndpoint operation.

impl Client

pub fn get_ml_model_training_job(&self) -> GetMLModelTrainingJobFluentBuilder

Constructs a fluent builder for the GetMLModelTrainingJob operation.

impl Client

pub fn get_ml_model_transform_job(&self) -> GetMLModelTransformJobFluentBuilder

Constructs a fluent builder for the GetMLModelTransformJob operation.

impl Client

pub fn get_open_cypher_query_status(&self) -> GetOpenCypherQueryStatusFluentBuilder

Constructs a fluent builder for the GetOpenCypherQueryStatus operation.

impl Client

pub fn get_propertygraph_statistics(&self) -> GetPropertygraphStatisticsFluentBuilder

Constructs a fluent builder for the GetPropertygraphStatistics operation.

impl Client

pub fn get_propertygraph_stream(&self) -> GetPropertygraphStreamFluentBuilder

Constructs a fluent builder for the GetPropertygraphStream operation. A usage sketch follows the list below.

  • The fluent builder is configurable:
    • limit(i64) / set_limit(Option<i64>):
      required: false

      Specifies the maximum number of records to return. There is also a size limit of 10 MB on the response that can’t be modified and that takes precedence over the number of records specified in the limit parameter. The response does include a threshold-breaching record if the 10 MB limit was reached.

      The range for limit is 1 to 100,000, with a default of 10.


    • iterator_type(IteratorType) / set_iterator_type(Option<IteratorType>):
      required: false

      Can be one of:

      • AT_SEQUENCE_NUMBER   –   Indicates that reading should start from the event sequence number specified jointly by the commitNum and opNum parameters.

      • AFTER_SEQUENCE_NUMBER   –   Indicates that reading should start right after the event sequence number specified jointly by the commitNum and opNum parameters.

      • TRIM_HORIZON   –   Indicates that reading should start at the last untrimmed record in the system, which is the oldest unexpired (not yet deleted) record in the change-log stream.

      • LATEST   –   Indicates that reading should start at the most recent record in the system, which is the latest unexpired (not yet deleted) record in the change-log stream.


    • commit_num(i64) / set_commit_num(Option<i64>):
      required: false

      The commit number of the starting record to read from the change-log stream. This parameter is required when iteratorType is AT_SEQUENCE_NUMBER or AFTER_SEQUENCE_NUMBER, and ignored when iteratorType is TRIM_HORIZON or LATEST.


    • op_num(i64) / set_op_num(Option<i64>):
      required: false

      The operation sequence number within the specified commit to start reading from in the change-log stream data. The default is 1.


    • encoding(Encoding) / set_encoding(Option<Encoding>):
      required: false

      If set to TRUE, Neptune compresses the response using gzip encoding.


  • On success, responds with GetPropertygraphStreamOutput with field(s):
    • last_event_id(HashMap::<String, String>):

      Sequence identifier of the last change in the stream response.

      An event ID is composed of two fields: a commitNum, which identifies a transaction that changed the graph, and an opNum, which identifies a specific operation within that transaction.

    • last_trx_timestamp_in_millis(i64):

      The time at which the commit for the transaction was requested, in milliseconds from the Unix epoch.

    • format(String):

      Serialization format for the change records being returned. Currently, the only supported value is PG_JSON.

    • records(Vec::<PropertygraphRecord>):

      An array of serialized change-log stream records included in the response.

    • total_records(i32):

      The total number of records in the response.

  • On failure, responds with SdkError<GetPropertygraphStreamError>
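
Putting the parameters above together: a sketch that starts from the oldest untrimmed record and prints the cursor a follow-up call would resume from. The IteratorType variant name is an assumption matching the TRIM_HORIZON value listed above, and the example runs inside an async fn returning a Result:

use aws_sdk_neptunedata::types::IteratorType;

let stream = client.get_propertygraph_stream()
    .iterator_type(IteratorType::TrimHorizon) // start at the oldest untrimmed record
    .limit(100)
    .send()
    .await?;

// last_event_id holds the commitNum/opNum pair to resume from on the next call.
println!("records: {}, cursor: {:?}", stream.total_records(), stream.last_event_id());
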
impl Client

pub fn get_propertygraph_summary(&self) -> GetPropertygraphSummaryFluentBuilder

Constructs a fluent builder for the GetPropertygraphSummary operation.

impl Client

pub fn get_rdf_graph_summary(&self) -> GetRDFGraphSummaryFluentBuilder

Constructs a fluent builder for the GetRDFGraphSummary operation.

impl Client

pub fn get_sparql_statistics(&self) -> GetSparqlStatisticsFluentBuilder

Constructs a fluent builder for the GetSparqlStatistics operation.

impl Client

pub fn get_sparql_stream(&self) -> GetSparqlStreamFluentBuilder

Constructs a fluent builder for the GetSparqlStream operation. A usage sketch follows the list below.

  • The fluent builder is configurable:
    • limit(i64) / set_limit(Option<i64>):
      required: false

      Specifies the maximum number of records to return. There is also a size limit of 10 MB on the response that can’t be modified and that takes precedence over the number of records specified in the limit parameter. The response does include a threshold-breaching record if the 10 MB limit was reached.

      The range for limit is 1 to 100,000, with a default of 10.


    • iterator_type(IteratorType) / set_iterator_type(Option<IteratorType>):
      required: false

      Can be one of:

      • AT_SEQUENCE_NUMBER   –   Indicates that reading should start from the event sequence number specified jointly by the commitNum and opNum parameters.

      • AFTER_SEQUENCE_NUMBER   –   Indicates that reading should start right after the event sequence number specified jointly by the commitNum and opNum parameters.

      • TRIM_HORIZON   –   Indicates that reading should start at the last untrimmed record in the system, which is the oldest unexpired (not yet deleted) record in the change-log stream.

      • LATEST   –   Indicates that reading should start at the most recent record in the system, which is the latest unexpired (not yet deleted) record in the change-log stream.


    • commit_num(i64) / set_commit_num(Option<i64>):
      required: false

      The commit number of the starting record to read from the change-log stream. This parameter is required when iteratorType is AT_SEQUENCE_NUMBER or AFTER_SEQUENCE_NUMBER, and ignored when iteratorType is TRIM_HORIZON or LATEST.


    • op_num(i64) / set_op_num(Option<i64>):
      required: false

      The operation sequence number within the specified commit to start reading from in the change-log stream data. The default is 1.


    • encoding(Encoding) / set_encoding(Option<Encoding>):
      required: false

      If set to TRUE, Neptune compresses the response using gzip encoding.


  • On success, responds with GetSparqlStreamOutput with field(s):
    • last_event_id(HashMap::<String, String>):

      Sequence identifier of the last change in the stream response.

      An event ID is composed of two fields: a commitNum, which identifies a transaction that changed the graph, and an opNum, which identifies a specific operation within that transaction.

    • last_trx_timestamp_in_millis(i64):

      The time at which the commit for the transaction was requested, in milliseconds from the Unix epoch.

    • format(String):

      Serialization format for the change records being returned. Currently, the only supported value is NQUADS.

    • records(Vec::<SparqlRecord>):

      An array of serialized change-log stream records included in the response.

    • total_records(i32):

      The total number of records in the response.

  • On failure, responds with SdkError<GetSparqlStreamError>
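
The SPARQL stream is read the same way. A sketch that resumes from a previously recorded position, with the AtSequenceNumber variant name assumed from the AT_SEQUENCE_NUMBER value above (the commit/op numbers are placeholders, not real stream positions):

use aws_sdk_neptunedata::types::IteratorType;

let stream = client.get_sparql_stream()
    .iterator_type(IteratorType::AtSequenceNumber)
    .commit_num(12345) // placeholder commitNum taken from a prior last_event_id
    .op_num(1)         // placeholder opNum taken from a prior last_event_id
    .send()
    .await?;
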
impl Client

pub fn list_gremlin_queries(&self) -> ListGremlinQueriesFluentBuilder

Constructs a fluent builder for the ListGremlinQueries operation.
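
A sketch pairing this with cancel_gremlin_query, assuming the output exposes a queries() list whose entries carry a query_id() accessor (both are assumptions about the generated shapes):

let listed = client.list_gremlin_queries().send().await?;
for query in listed.queries() {
    if let Some(id) = query.query_id() {
        // Illustrative only: cancel every query that is currently listed.
        client.cancel_gremlin_query().query_id(id).send().await?;
    }
}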

impl Client

pub fn list_loader_jobs(&self) -> ListLoaderJobsFluentBuilder

Constructs a fluent builder for the ListLoaderJobs operation.

impl Client

pub fn list_ml_data_processing_jobs(&self) -> ListMLDataProcessingJobsFluentBuilder

Constructs a fluent builder for the ListMLDataProcessingJobs operation.

impl Client

pub fn list_ml_endpoints(&self) -> ListMLEndpointsFluentBuilder

Constructs a fluent builder for the ListMLEndpoints operation.

impl Client

pub fn list_ml_model_training_jobs(&self) -> ListMLModelTrainingJobsFluentBuilder

Constructs a fluent builder for the ListMLModelTrainingJobs operation.

impl Client

pub fn list_ml_model_transform_jobs(&self) -> ListMLModelTransformJobsFluentBuilder

Constructs a fluent builder for the ListMLModelTransformJobs operation.

impl Client

pub fn list_open_cypher_queries(&self) -> ListOpenCypherQueriesFluentBuilder

Constructs a fluent builder for the ListOpenCypherQueries operation.

impl Client

pub fn manage_propertygraph_statistics(&self) -> ManagePropertygraphStatisticsFluentBuilder

Constructs a fluent builder for the ManagePropertygraphStatistics operation.

impl Client

pub fn manage_sparql_statistics(&self) -> ManageSparqlStatisticsFluentBuilder

Constructs a fluent builder for the ManageSparqlStatistics operation.

impl Client

pub fn start_loader_job(&self) -> StartLoaderJobFluentBuilder

Constructs a fluent builder for the StartLoaderJob operation. A usage sketch follows the parameter list below.

  • The fluent builder is configurable:
    • source(impl Into<String>) / set_source(Option<String>):
      required: true

      The source parameter accepts an S3 URI that identifies a single file, multiple files, a folder, or multiple folders. Neptune loads every data file in any folder that is specified.

      The URI can be in any of the following formats.

      • s3://(bucket_name)/(object-key-name)

      • https://s3.amazonaws.com/(bucket_name)/(object-key-name)

      • https://s3.us-east-1.amazonaws.com/(bucket_name)/(object-key-name)

      The object-key-name element of the URI is equivalent to the prefix parameter in an S3 ListObjects API call. It identifies all the objects in the specified S3 bucket whose names begin with that prefix. That can be a single file or folder, or multiple files and/or folders.

      The specified folder or folders can contain multiple vertex files and multiple edge files.


    • format(Format) / set_format(Option<Format>):
      required: true

      The format of the data. For more information about data formats for the Neptune Loader command, see Load Data Formats.

      Allowed values


    • s3_bucket_region(S3BucketRegion) / set_s3_bucket_region(Option<S3BucketRegion>):
      required: true

      The Amazon Region of the S3 bucket. This must match the Amazon Region of the DB cluster.


    • iam_role_arn(impl Into<String>) / set_iam_role_arn(Option<String>):
      required: true

      The Amazon Resource Name (ARN) for an IAM role to be assumed by the Neptune DB instance for access to the S3 bucket. The IAM role ARN provided here should be attached to the DB cluster (see Adding the IAM Role to an Amazon Neptune Cluster).


    • mode(Mode) / set_mode(Option<Mode>):
      required: false

      The load job mode.

      Allowed values: RESUME, NEW, AUTO.

      Default value: AUTO.

      • RESUME   –   In RESUME mode, the loader looks for a previous load from this source, and if it finds one, resumes that load job. If no previous load job is found, the loader stops.

        The loader avoids reloading files that were successfully loaded in a previous job. It only tries to process failed files. If you dropped previously loaded data from your Neptune cluster, that data is not reloaded in this mode. If a previous load job loaded all files from the same source successfully, nothing is reloaded, and the loader returns success.

      • NEW   –   In NEW mode, the loader creates a new load request regardless of any previous loads. You can use this mode to reload all the data from a source after dropping previously loaded data from your Neptune cluster, or to load new data available at the same source.

      • AUTO   –   In AUTO mode, the loader looks for a previous load job from the same source, and if it finds one, resumes that job, just as in RESUME mode.

        If the loader doesn’t find a previous load job from the same source, it loads all data from the source, just as in NEW mode.


    • fail_on_error(bool) / set_fail_on_error(Option<bool>):
      required: false

      failOnError   –   A flag to toggle a complete stop on an error.

      Allowed values: “TRUE”, “FALSE”.

      Default value: “TRUE”.

      When this parameter is set to “FALSE”, the loader tries to load all the data in the location specified, skipping any entries with errors.

      When this parameter is set to “TRUE”, the loader stops as soon as it encounters an error. Data loaded up to that point persists.


    • parallelism(Parallelism) / set_parallelism(Option<Parallelism>):
      required: false

      The optional parallelism parameter can be set to reduce the number of threads used by the bulk load process.

      Allowed values:

      • LOW –   The number of threads used is the number of available vCPUs divided by 8.

      • MEDIUM –   The number of threads used is the number of available vCPUs divided by 2.

      • HIGH –   The number of threads used is the same as the number of available vCPUs.

      • OVERSUBSCRIBE –   The number of threads used is the number of available vCPUs multiplied by 2. If this value is used, the bulk loader takes up all available resources.

        This does not mean, however, that the OVERSUBSCRIBE setting results in 100% CPU utilization. Because the load operation is I/O bound, the highest CPU utilization to expect is in the 60% to 70% range.

      Default value: HIGH

      The parallelism setting can sometimes result in a deadlock between threads when loading openCypher data. When this happens, Neptune returns the LOAD_DATA_DEADLOCK error. You can generally fix the issue by setting parallelism to a lower setting and retrying the load command.


    • parser_configuration(impl Into<String>, impl Into<String>) / set_parser_configuration(Option<HashMap::<String, String>>):
      required: false

      parserConfiguration   –   An optional object with additional parser configuration values. Each of the child parameters is also optional:

      • namedGraphUri   –   The default graph for all RDF formats when no graph is specified (for non-quads formats and NQUAD entries with no graph).

        The default is https://aws.amazon.com/neptune/vocab/v01/DefaultNamedGraph.

      • baseUri   –   The base URI for RDF/XML and Turtle formats.

        The default is https://aws.amazon.com/neptune/default.

      • allowEmptyStrings   –   Gremlin users need to be able to pass empty string values (“”) as node and edge properties when loading CSV data. If allowEmptyStrings is set to false (the default), such empty strings are treated as nulls and are not loaded.

        If allowEmptyStrings is set to true, the loader treats empty strings as valid property values and loads them accordingly.


    • update_single_cardinality_properties(bool) / set_update_single_cardinality_properties(Option<bool>):
      required: false

      updateSingleCardinalityProperties is an optional parameter that controls how the bulk loader treats a new value for single-cardinality vertex or edge properties. This is not supported for loading openCypher data.

      Allowed values: “TRUE”, “FALSE”.

      Default value: “FALSE”.

      By default, or when updateSingleCardinalityProperties is explicitly set to “FALSE”, the loader treats a new value as an error, because it violates single cardinality.

      When updateSingleCardinalityProperties is set to “TRUE”, on the other hand, the bulk loader replaces the existing value with the new one. If multiple edge or single-cardinality vertex property values are provided in the source file(s) being loaded, the final value at the end of the bulk load could be any one of those new values. The loader only guarantees that the existing value has been replaced by one of the new ones.


    • queue_request(bool) / set_queue_request(Option<bool>):
      required: false

      This is an optional flag parameter that indicates whether the load request can be queued up or not.

      You don’t have to wait for one load job to complete before issuing the next one, because Neptune can queue up as many as 64 jobs at a time, provided that their queueRequest parameters are all set to “TRUE”. The queue order of the jobs will be first-in-first-out (FIFO).

      If the queueRequest parameter is omitted or set to “FALSE”, the load request will fail if another load job is already running.

      Allowed values: “TRUE”, “FALSE”.

      Default value: “FALSE”.


    • dependencies(impl Into<String>) / set_dependencies(Option<Vec::<String>>):
      required: false

      This is an optional parameter that can make a queued load request contingent on the successful completion of one or more previous jobs in the queue.

      Neptune can queue up as many as 64 load requests at a time, if their queueRequest parameters are set to “TRUE”. The dependencies parameter lets you make execution of such a queued request dependent on the successful completion of one or more specified previous requests in the queue.

      For example, if load Job-A and Job-B are independent of each other, but load Job-C needs Job-A and Job-B to be finished before it begins, proceed as follows:

      1. Submit load-job-A and load-job-B one after another in any order, and save their load-ids.

      2. Submit load-job-C with the load-ids of the two jobs in its dependencies field:

      Because of the dependencies parameter, the bulk loader will not start Job-C until Job-A and Job-B have completed successfully. If either one of them fails, Job-C will not be executed, and its status will be set to LOAD_FAILED_BECAUSE_DEPENDENCY_NOT_SATISFIED.

      You can set up multiple levels of dependency in this way, so that the failure of one job will cause all requests that are directly or indirectly dependent on it to be cancelled.


    • user_provided_edge_ids(bool) / set_user_provided_edge_ids(Option<bool>):
      required: false

      This parameter is required only when loading openCypher data that contains relationship IDs. It must be included and set to True when openCypher relationship IDs are explicitly provided in the load data (recommended).

      When userProvidedEdgeIds is absent or set to True, an :ID column must be present in every relationship file in the load.

      When userProvidedEdgeIds is present and set to False, relationship files in the load must not contain an :ID column. Instead, the Neptune loader automatically generates an ID for each relationship.

      It’s useful to provide relationship IDs explicitly so that the loader can resume loading after errors in the CSV data have been fixed, without having to reload any relationships that have already been loaded. If relationship IDs have not been explicitly assigned, the loader cannot resume a failed load if any relationship file has had to be corrected, and must instead reload all the relationships.


  • On success, responds with StartLoaderJobOutput with field(s):
  • On failure, responds with SdkError<StartLoaderJobError>
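
Tying the required parameters together: a sketch that starts a bulk load from S3 inside an async fn returning a Result. The Format, S3BucketRegion, and Mode variant names and the payload shape are assumptions based on the parameter list above:

use aws_sdk_neptunedata::types::{Format, Mode, S3BucketRegion};

let started = client.start_loader_job()
    .source("s3://example-bucket/load-prefix/")
    .format(Format::Csv)
    .s3_bucket_region(S3BucketRegion::UsEast1)
    .iam_role_arn("arn:aws:iam::123456789012:role/NeptuneLoadFromS3")
    .mode(Mode::Auto)
    .fail_on_error(true)
    .send()
    .await?;

// The payload is expected to carry the load id to poll via get_loader_job_status.
println!("loader response: {:?}", started.payload());
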
impl Client

pub fn start_ml_data_processing_job(&self) -> StartMLDataProcessingJobFluentBuilder

Constructs a fluent builder for the StartMLDataProcessingJob operation.

impl Client

pub fn start_ml_model_training_job(&self) -> StartMLModelTrainingJobFluentBuilder

Constructs a fluent builder for the StartMLModelTrainingJob operation.

impl Client

pub fn start_ml_model_transform_job(&self) -> StartMLModelTransformJobFluentBuilder

Constructs a fluent builder for the StartMLModelTransformJob operation.

impl Client

pub fn from_conf(conf: Config) -> Self

Creates a new client from the service Config.

§Panics

This method will panic in the following cases:

  • Retries or timeouts are enabled without a sleep_impl configured.
  • Identity caching is enabled without a sleep_impl and time_source configured.
  • No behavior_version is provided.

The panic message for each of these will have instructions on how to resolve them.
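
A minimal sketch of building such a Config directly; the Region and BehaviorVersion re-exports under aws_sdk_neptunedata::config follow the usual generated-SDK layout, but treat the exact paths as assumptions:

use aws_sdk_neptunedata::config::{BehaviorVersion, Config, Region};

let config = Config::builder()
    .behavior_version(BehaviorVersion::latest()) // avoids the missing-behavior_version panic
    .region(Region::new("us-east-1"))
    .build();
let client = aws_sdk_neptunedata::Client::from_conf(config);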


pub fn config(&self) -> &Config

Returns the client’s configuration.

impl Client

pub fn new(sdk_config: &SdkConfig) -> Self

Creates a new client from an SDK Config.

§Panics
  • This method will panic if the sdk_config is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
  • This method will panic if the sdk_config is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.
  • This method will panic if no BehaviorVersion is provided. If you experience this panic, set behavior_version on the Config or enable the behavior-version-latest Cargo feature.

Trait Implementations§

impl Clone for Client

fn clone(&self) -> Client

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for Client

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

Auto Trait Implementations§

impl Freeze for Client

impl !RefUnwindSafe for Client

impl Send for Client

impl Sync for Client

impl Unpin for Client

impl !UnwindSafe for Client

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true, and into a Right variant otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true, and into a Right variant otherwise.

impl<Unshared, Shared> IntoShared<Shared> for Unshared
where Shared: FromUnshared<Unshared>,

fn into_shared(self) -> Shared

Creates a shared type from an unshared type.

impl<T> Same for T

type Output = T

Should always be Self.

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.