Struct aws_sdk_timestreamwrite::client::Client
pub struct Client { /* private fields */ }
Client for Amazon Timestream Write
Client for invoking operations on Amazon Timestream Write. Each operation on Amazon Timestream Write is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.
§Constructing a Client
A Config is required to construct a client. For most use cases, the aws-config crate should be used to automatically resolve this config using aws_config::load_from_env(), since this will resolve an SdkConfig which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling aws_config::from_env() instead, which returns a ConfigLoader that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
let config = aws_config::load_from_env().await;
// You MUST call `with_endpoint_discovery_enabled` to produce a working client for this service.
let (client, _reload) = aws_sdk_timestreamwrite::Client::new(&config).with_endpoint_discovery_enabled().await?;
Occasionally, SDKs may have additional service-specific values that can be set on the Config that are absent from SdkConfig, or slightly different settings for a specific client may be desired. The Config struct implements From<&SdkConfig>, so setting these specific settings can be done as follows:
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_timestreamwrite::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
See the aws-config docs and Config for more information on customizing configuration.
Note: Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
§Using the Client
A client has a function for every operation that can be performed by the service. For example, the CreateBatchLoadTask operation has a Client::create_batch_load_task function, which returns a builder for that operation. The fluent builder ultimately has a send() function that returns an async future that returns a result, as illustrated below:
let result = client.create_batch_load_task()
.client_token("example")
.send()
.await;
The underlying HTTP requests that get made by this can be modified with the customize_operation function on the fluent builder. See the customize module for more information.
Implementations§
impl Client
pub fn create_batch_load_task(&self) -> CreateBatchLoadTaskFluentBuilder
Constructs a fluent builder for the CreateBatchLoadTask operation.
- The fluent builder is configurable:
  - client_token(impl Into<String>) / set_client_token(Option<String>): required: false
  - data_model_configuration(DataModelConfiguration) / set_data_model_configuration(Option<DataModelConfiguration>): required: false
  - data_source_configuration(DataSourceConfiguration) / set_data_source_configuration(Option<DataSourceConfiguration>): required: true. Defines configuration details about the data source for a batch load task.
  - report_configuration(ReportConfiguration) / set_report_configuration(Option<ReportConfiguration>): required: true. Report configuration for a batch load task. This contains details about where error reports are stored.
  - target_database_name(impl Into<String>) / set_target_database_name(Option<String>): required: true. Target Timestream database for a batch load task.
  - target_table_name(impl Into<String>) / set_target_table_name(Option<String>): required: true. Target Timestream table for a batch load task.
  - record_version(i64) / set_record_version(Option<i64>): required: false
- On success, responds with CreateBatchLoadTaskOutput with field(s):
  - task_id(String): The ID of the batch load task.
- On failure, responds with SdkError<CreateBatchLoadTaskError>
impl Client
pub fn create_database(&self) -> CreateDatabaseFluentBuilder
Constructs a fluent builder for the CreateDatabase operation.
- The fluent builder is configurable:
  - database_name(impl Into<String>) / set_database_name(Option<String>): required: true. The name of the Timestream database.
  - kms_key_id(impl Into<String>) / set_kms_key_id(Option<String>): required: false. The KMS key for the database. If the KMS key is not specified, the database will be encrypted with a Timestream managed KMS key located in your account. For more information, see Amazon Web Services managed keys.
  - tags(Tag) / set_tags(Option<Vec::<Tag>>): required: false. A list of key-value pairs to label the table.
- On success, responds with CreateDatabaseOutput with field(s):
  - database(Option<Database>): The newly created Timestream database.
- On failure, responds with SdkError<CreateDatabaseError>
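As a sketch, the one required field above is enough to create a database. The database name and the surrounding function are illustrative placeholders, not part of the generated API (running this requires AWS credentials):

```rust
use aws_sdk_timestreamwrite as timestream;

// Minimal sketch: CreateDatabase with only the required field.
// "example_db" is a placeholder name.
async fn create_db(client: &timestream::Client) -> Result<(), timestream::Error> {
    let out = client
        .create_database()
        .database_name("example_db")
        .send()
        .await?;
    // `database` on the output is Option<Database>.
    if let Some(db) = out.database() {
        println!("created: {:?}", db.database_name());
    }
    Ok(())
}
```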
impl Client
pub fn create_table(&self) -> CreateTableFluentBuilder
Constructs a fluent builder for the CreateTable operation.
- The fluent builder is configurable:
  - database_name(impl Into<String>) / set_database_name(Option<String>): required: true. The name of the Timestream database.
  - table_name(impl Into<String>) / set_table_name(Option<String>): required: true. The name of the Timestream table.
  - retention_properties(RetentionProperties) / set_retention_properties(Option<RetentionProperties>): required: false. The duration for which your time-series data must be stored in the memory store and the magnetic store.
  - tags(Tag) / set_tags(Option<Vec::<Tag>>): required: false. A list of key-value pairs to label the table.
  - magnetic_store_write_properties(MagneticStoreWriteProperties) / set_magnetic_store_write_properties(Option<MagneticStoreWriteProperties>): required: false. Contains properties to set on the table when enabling magnetic store writes.
  - schema(Schema) / set_schema(Option<Schema>): required: false. The schema of the table.
- On success, responds with CreateTableOutput with field(s):
  - table(Option<Table>): The newly created Timestream table.
- On failure, responds with SdkError<CreateTableError>
impl Client
pub fn delete_database(&self) -> DeleteDatabaseFluentBuilder
Constructs a fluent builder for the DeleteDatabase operation.
- The fluent builder is configurable:
  - database_name(impl Into<String>) / set_database_name(Option<String>): required: true. The name of the Timestream database to be deleted.
- On success, responds with DeleteDatabaseOutput
- On failure, responds with SdkError<DeleteDatabaseError>
impl Client
pub fn delete_table(&self) -> DeleteTableFluentBuilder
Constructs a fluent builder for the DeleteTable operation.
- The fluent builder is configurable:
  - database_name(impl Into<String>) / set_database_name(Option<String>): required: true. The name of the database where the Timestream table is to be deleted.
  - table_name(impl Into<String>) / set_table_name(Option<String>): required: true. The name of the Timestream table to be deleted.
- On success, responds with DeleteTableOutput
- On failure, responds with SdkError<DeleteTableError>
impl Client
pub fn describe_batch_load_task(&self) -> DescribeBatchLoadTaskFluentBuilder
Constructs a fluent builder for the DescribeBatchLoadTask operation.
- The fluent builder is configurable:
  - task_id(impl Into<String>) / set_task_id(Option<String>): required: true. The ID of the batch load task.
- On success, responds with DescribeBatchLoadTaskOutput with field(s):
  - batch_load_task_description(Option<BatchLoadTaskDescription>): Description of the batch load task.
- On failure, responds with SdkError<DescribeBatchLoadTaskError>
impl Client
pub fn describe_database(&self) -> DescribeDatabaseFluentBuilder
Constructs a fluent builder for the DescribeDatabase operation.
- The fluent builder is configurable:
  - database_name(impl Into<String>) / set_database_name(Option<String>): required: true. The name of the Timestream database.
- On success, responds with DescribeDatabaseOutput with field(s):
  - database(Option<Database>): The Timestream database.
- On failure, responds with SdkError<DescribeDatabaseError>
impl Client
pub fn describe_endpoints(&self) -> DescribeEndpointsFluentBuilder
Constructs a fluent builder for the DescribeEndpoints operation.
- The fluent builder takes no input; just send it.
- On success, responds with DescribeEndpointsOutput with field(s):
  - endpoints(Vec::<Endpoint>): An Endpoints object is returned when a DescribeEndpoints request is made.
- On failure, responds with SdkError<DescribeEndpointsError>
impl Client
pub fn describe_table(&self) -> DescribeTableFluentBuilder
Constructs a fluent builder for the DescribeTable operation.
- The fluent builder is configurable:
  - database_name(impl Into<String>) / set_database_name(Option<String>): required: true. The name of the Timestream database.
  - table_name(impl Into<String>) / set_table_name(Option<String>): required: true. The name of the Timestream table.
- On success, responds with DescribeTableOutput with field(s):
  - table(Option<Table>): The Timestream table.
- On failure, responds with SdkError<DescribeTableError>
impl Client
pub fn list_batch_load_tasks(&self) -> ListBatchLoadTasksFluentBuilder
Constructs a fluent builder for the ListBatchLoadTasks operation. This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - next_token(impl Into<String>) / set_next_token(Option<String>): required: false. A token to specify where to start paginating. This is the NextToken from a previously truncated response.
  - max_results(i32) / set_max_results(Option<i32>): required: false. The total number of items to return in the output. If the total number of items available is more than the value specified, a NextToken is provided in the output. To resume pagination, provide the NextToken value as argument of a subsequent API invocation.
  - task_status(BatchLoadStatus) / set_task_status(Option<BatchLoadStatus>): required: false. Status of the batch load task.
- On success, responds with ListBatchLoadTasksOutput with field(s):
  - next_token(Option<String>): A token to specify where to start paginating. Provide the next ListBatchLoadTasksRequest.
  - batch_load_tasks(Option<Vec::<BatchLoadTask>>): A list of batch load task details.
- On failure, responds with SdkError<ListBatchLoadTasksError>
impl Client
pub fn list_databases(&self) -> ListDatabasesFluentBuilder
Constructs a fluent builder for the ListDatabases operation. This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The pagination token. To resume pagination, provide the NextToken value as argument of a subsequent API invocation.
  - max_results(i32) / set_max_results(Option<i32>): required: false. The total number of items to return in the output. If the total number of items available is more than the value specified, a NextToken is provided in the output. To resume pagination, provide the NextToken value as argument of a subsequent API invocation.
- On success, responds with ListDatabasesOutput with field(s):
  - databases(Option<Vec::<Database>>): A list of database names.
  - next_token(Option<String>): The pagination token. This parameter is returned when the response is truncated.
- On failure, responds with SdkError<ListDatabasesError>
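The NextToken handling described above can be delegated entirely to the paginator. A minimal sketch (accessor names follow the crate's generated output types; the page size is an arbitrary placeholder, and running this requires AWS credentials):

```rust
use aws_sdk_timestreamwrite as timestream;

// Collect every database name across all pages; into_paginator() follows
// NextToken automatically, so no manual token plumbing is needed.
async fn all_database_names(
    client: &timestream::Client,
) -> Result<Vec<String>, timestream::Error> {
    let mut names = Vec::new();
    let mut pages = client
        .list_databases()
        .max_results(20) // arbitrary page size
        .into_paginator()
        .send();
    while let Some(page) = pages.next().await {
        let page = page?;
        for db in page.databases() {
            if let Some(name) = db.database_name() {
                names.push(name.to_owned());
            }
        }
    }
    Ok(names)
}
```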
impl Client
pub fn list_tables(&self) -> ListTablesFluentBuilder
Constructs a fluent builder for the ListTables operation. This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - database_name(impl Into<String>) / set_database_name(Option<String>): required: false. The name of the Timestream database.
  - next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The pagination token. To resume pagination, provide the NextToken value as argument of a subsequent API invocation.
  - max_results(i32) / set_max_results(Option<i32>): required: false. The total number of items to return in the output. If the total number of items available is more than the value specified, a NextToken is provided in the output. To resume pagination, provide the NextToken value as argument of a subsequent API invocation.
- On success, responds with ListTablesOutput with field(s):
  - tables(Option<Vec::<Table>>): A list of tables.
  - next_token(Option<String>): A token to specify where to start paginating. This is the NextToken from a previously truncated response.
- On failure, responds with SdkError<ListTablesError>
impl Client
pub fn list_tags_for_resource(&self) -> ListTagsForResourceFluentBuilder
Constructs a fluent builder for the ListTagsForResource operation.
- The fluent builder is configurable:
  - resource_arn(impl Into<String>) / set_resource_arn(Option<String>): required: true. The Timestream resource with tags to be listed. This value is an Amazon Resource Name (ARN).
- On success, responds with ListTagsForResourceOutput with field(s):
  - tags(Option<Vec::<Tag>>): The tags currently associated with the Timestream resource.
- On failure, responds with SdkError<ListTagsForResourceError>
impl Client
pub fn resume_batch_load_task(&self) -> ResumeBatchLoadTaskFluentBuilder
Constructs a fluent builder for the ResumeBatchLoadTask operation.
- The fluent builder is configurable:
  - task_id(impl Into<String>) / set_task_id(Option<String>): required: true. The ID of the batch load task to resume.
- On success, responds with ResumeBatchLoadTaskOutput
- On failure, responds with SdkError<ResumeBatchLoadTaskError>
impl Client
pub fn tag_resource(&self) -> TagResourceFluentBuilder
Constructs a fluent builder for the TagResource operation.
- The fluent builder is configurable:
  - resource_arn(impl Into<String>) / set_resource_arn(Option<String>): required: true. Identifies the Timestream resource to which tags should be added. This value is an Amazon Resource Name (ARN).
  - tags(Tag) / set_tags(Option<Vec::<Tag>>): required: true. The tags to be assigned to the Timestream resource.
- On success, responds with TagResourceOutput
- On failure, responds with SdkError<TagResourceError>
impl Client
pub fn untag_resource(&self) -> UntagResourceFluentBuilder
Constructs a fluent builder for the UntagResource operation.
- The fluent builder is configurable:
  - resource_arn(impl Into<String>) / set_resource_arn(Option<String>): required: true. The Timestream resource that the tags will be removed from. This value is an Amazon Resource Name (ARN).
  - tag_keys(impl Into<String>) / set_tag_keys(Option<Vec::<String>>): required: true. A list of tag keys. Existing tags of the resource whose keys are members of this list will be removed from the Timestream resource.
- On success, responds with UntagResourceOutput
- On failure, responds with SdkError<UntagResourceError>
impl Client
pub fn update_database(&self) -> UpdateDatabaseFluentBuilder
Constructs a fluent builder for the UpdateDatabase operation.
- The fluent builder is configurable:
  - database_name(impl Into<String>) / set_database_name(Option<String>): required: true. The name of the database.
  - kms_key_id(impl Into<String>) / set_kms_key_id(Option<String>): required: true. The identifier of the new KMS key (KmsKeyId) to be used to encrypt the data stored in the database. If the KmsKeyId currently registered with the database is the same as the KmsKeyId in the request, there will not be any update. You can specify the KmsKeyId using any of the following:
    - Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
    - Key ARN: arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
    - Alias name: alias/ExampleAlias
    - Alias ARN: arn:aws:kms:us-east-1:111122223333:alias/ExampleAlias
- On success, responds with UpdateDatabaseOutput with field(s):
  - database(Option<Database>): A top-level container for a table. Databases and tables are the fundamental management concepts in Amazon Timestream. All tables in a database are encrypted with the same KMS key.
- On failure, responds with SdkError<UpdateDatabaseError>
impl Client
pub fn update_table(&self) -> UpdateTableFluentBuilder
Constructs a fluent builder for the UpdateTable operation.
- The fluent builder is configurable:
  - database_name(impl Into<String>) / set_database_name(Option<String>): required: true. The name of the Timestream database.
  - table_name(impl Into<String>) / set_table_name(Option<String>): required: true. The name of the Timestream table.
  - retention_properties(RetentionProperties) / set_retention_properties(Option<RetentionProperties>): required: false. The retention duration of the memory store and the magnetic store.
  - magnetic_store_write_properties(MagneticStoreWriteProperties) / set_magnetic_store_write_properties(Option<MagneticStoreWriteProperties>): required: false. Contains properties to set on the table when enabling magnetic store writes.
  - schema(Schema) / set_schema(Option<Schema>): required: false. The schema of the table.
- On success, responds with UpdateTableOutput with field(s):
  - table(Option<Table>): The updated Timestream table.
- On failure, responds with SdkError<UpdateTableError>
impl Client
pub fn write_records(&self) -> WriteRecordsFluentBuilder
Constructs a fluent builder for the WriteRecords operation.
- The fluent builder is configurable:
  - database_name(impl Into<String>) / set_database_name(Option<String>): required: true. The name of the Timestream database.
  - table_name(impl Into<String>) / set_table_name(Option<String>): required: true. The name of the Timestream table.
  - common_attributes(Record) / set_common_attributes(Option<Record>): required: false. A record that contains the common measure, dimension, time, and version attributes shared across all the records in the request. The measure and dimension attributes specified will be merged with the measure and dimension attributes in the records object when the data is written into Timestream. Dimensions may not overlap, or a ValidationException will be thrown. In other words, a record must contain dimensions with unique names.
  - records(Record) / set_records(Option<Vec::<Record>>): required: true. An array of records that contain the unique measure, dimension, time, and version attributes for each time-series data point.
- On success, responds with WriteRecordsOutput with field(s):
  - records_ingested(Option<RecordsIngested>): Information on the records ingested by this request.
- On failure, responds with SdkError<WriteRecordsError>
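Putting the required pieces together, a single record write can be sketched as follows. Builder and enum names come from the crate's types module; the database, table, dimension, and measure values are placeholders, and running this requires AWS credentials:

```rust
use aws_sdk_timestreamwrite as timestream;
use timestream::types::{Dimension, MeasureValueType, Record, TimeUnit};

// Sketch: write one record carrying one dimension and one measure.
async fn write_cpu_sample(
    client: &timestream::Client,
    epoch_ms: i64,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Dimension has required fields, so build() is fallible.
    let dimension = Dimension::builder()
        .name("host")    // placeholder dimension name
        .value("host-1") // placeholder dimension value
        .build()?;

    let record = Record::builder()
        .dimensions(dimension)
        .measure_name("cpu_utilization") // placeholder measure
        .measure_value("13.5")
        .measure_value_type(MeasureValueType::Double)
        .time(epoch_ms.to_string())
        .time_unit(TimeUnit::Milliseconds)
        .build();

    client
        .write_records()
        .database_name("example_db") // placeholder
        .table_name("example_table") // placeholder
        .records(record)
        .send()
        .await?;
    Ok(())
}
```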
impl Client
pub async fn with_endpoint_discovery_enabled(self) -> Result<(Self, ReloadEndpoint), BoxError>
Enable endpoint discovery for this client
This method MUST be called to construct a working client.
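A sketch of the full construction sequence, including keeping the discovered endpoint fresh. Spawning the reloader on tokio is an assumption of this sketch, not a requirement of the SDK, and running it requires AWS credentials:

```rust
use aws_sdk_timestreamwrite as timestream;

// Sketch: construct a working client. with_endpoint_discovery_enabled()
// returns the client plus a ReloadEndpoint handle for refreshing the
// discovered endpoint in the background.
async fn make_client(
) -> Result<timestream::Client, Box<dyn std::error::Error + Send + Sync>> {
    let config = aws_config::load_from_env().await;
    let (client, reloader) = timestream::Client::new(&config)
        .with_endpoint_discovery_enabled()
        .await?;
    // Drive the reloader so the endpoint stays current (tokio assumed).
    tokio::spawn(async move { reloader.reload_task().await });
    Ok(client)
}
```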
impl Client
pub fn from_conf(conf: Config) -> Self
Creates a new client from the service Config.
§Panics
This method will panic in the following cases:
- Retries or timeouts are enabled without a sleep_impl configured.
- Identity caching is enabled without a sleep_impl and time_source configured.
- No behavior_version is provided.
The panic message for each of these will have instructions on how to resolve them.
impl Client
pub fn new(sdk_config: &SdkConfig) -> Self
Creates a new client from an SDK Config.
§Panics
- This method will panic if the sdk_config is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
- This method will panic if the sdk_config is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.
- This method will panic if no BehaviorVersion is provided. If you experience this panic, set behavior_version on the Config or enable the behavior-version-latest Cargo feature.
Trait Implementations§
Auto Trait Implementations§
impl Freeze for Client
impl !RefUnwindSafe for Client
impl Send for Client
impl Sync for Client
impl Unpin for Client
impl !UnwindSafe for Client
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more