pub struct Client<C = DynConnector, M = DefaultMiddleware, R = Standard> { /* private fields */ }

Client for Amazon Kinesis

Client for invoking operations on Amazon Kinesis. Each operation on Amazon Kinesis is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.

Examples

Constructing a client and invoking an operation

    // Create a shared configuration that can be reused across multiple service clients.
    let shared_config = aws_config::load_from_env().await;
    let client = aws_sdk_kinesis::Client::new(&shared_config);
    // invoke an operation
    /* let rsp = client
        .<operation_name>()
        .<param>("some value")
        .send().await; */

Constructing a client with custom configuration

    use aws_config::RetryConfig;
    let shared_config = aws_config::load_from_env().await;
    let config = aws_sdk_kinesis::config::Builder::from(&shared_config)
        .retry_config(RetryConfig::disabled())
        .build();
    let client = aws_sdk_kinesis::Client::from_conf(config);

Implementations

Creates a client with the given service configuration.

Returns the client’s configuration.

Constructs a fluent builder for the AddTagsToStream operation.

Constructs a fluent builder for the CreateStream operation.

Constructs a fluent builder for the DecreaseStreamRetentionPeriod operation.

Constructs a fluent builder for the DeleteStream operation.

Constructs a fluent builder for the DeregisterStreamConsumer operation.

Constructs a fluent builder for the DescribeLimits operation.

Constructs a fluent builder for the DescribeStream operation.

Constructs a fluent builder for the DescribeStreamConsumer operation.

Constructs a fluent builder for the DescribeStreamSummary operation.

Constructs a fluent builder for the DisableEnhancedMonitoring operation.

Constructs a fluent builder for the EnableEnhancedMonitoring operation.

Constructs a fluent builder for the GetRecords operation.

  • The fluent builder is configurable:
    • shard_iterator(impl Into<String>) / set_shard_iterator(Option<String>):

      The position in the shard from which you want to start sequentially reading data records. A shard iterator specifies this position using the sequence number of a data record in the shard.

    • limit(i32) / set_limit(Option<i32>):

      The maximum number of records to return. Specify a value of up to 10,000. If you specify a value that is greater than 10,000, GetRecords throws InvalidArgumentException. The default value is 10,000.

  • On success, responds with GetRecordsOutput with field(s):
    • records(Option<Vec<Record>>):

      The data records retrieved from the shard.

    • next_shard_iterator(Option<String>):

      The next position in the shard from which to start sequentially reading data records. If set to null, the shard has been closed and the requested iterator does not return any more data.

    • millis_behind_latest(Option<i64>):

      The number of milliseconds the GetRecords response is from the tip of the stream, indicating how far behind current time the consumer is. A value of zero indicates that record processing is caught up, and there are no new records to process at this moment.

    • child_shards(Option<Vec<ChildShard>>):

      The list of the current shard’s child shards, returned in the GetRecords API’s response only when the end of the current shard is reached.

  • On failure, responds with SdkError<GetRecordsError>
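The parameters above can be sketched as a minimal batch read. This is an illustration, not SDK-provided code: `client` is assumed to be an `aws_sdk_kinesis::Client`, and `shard_iterator` is assumed to come from a prior GetShardIterator call.

```rust
// Sketch: read one batch of records with GetRecords.
// `client` and `shard_iterator` are assumed inputs for this illustration.
async fn read_batch(
    client: &aws_sdk_kinesis::Client,
    shard_iterator: String,
) -> Result<(), aws_sdk_kinesis::Error> {
    let resp = client
        .get_records()
        .shard_iterator(shard_iterator)
        .limit(1000) // up to 10,000; larger values raise InvalidArgumentException
        .send()
        .await?;

    for record in resp.records().unwrap_or_default() {
        println!("sequence number: {:?}", record.sequence_number());
    }

    // A `None` next_shard_iterator means the shard is closed and has no more data;
    // otherwise, pass it to the next get_records call.
    if resp.next_shard_iterator().is_none() {
        println!("shard closed");
    }
    Ok(())
}
```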

Constructs a fluent builder for the GetShardIterator operation.

  • The fluent builder is configurable:
    • stream_name(impl Into<String>) / set_stream_name(Option<String>):

      The name of the Amazon Kinesis data stream.

    • shard_id(impl Into<String>) / set_shard_id(Option<String>):

      The shard ID of the Kinesis Data Streams shard to get the iterator for.

    • shard_iterator_type(ShardIteratorType) / set_shard_iterator_type(Option<ShardIteratorType>):

      Determines how the shard iterator is used to start reading data records from the shard.

      The following are the valid Amazon Kinesis shard iterator types:

      • AT_SEQUENCE_NUMBER - Start reading from the position denoted by a specific sequence number, provided in the value StartingSequenceNumber.

      • AFTER_SEQUENCE_NUMBER - Start reading right after the position denoted by a specific sequence number, provided in the value StartingSequenceNumber.

      • AT_TIMESTAMP - Start reading from the position denoted by a specific time stamp, provided in the value Timestamp.

      • TRIM_HORIZON - Start reading at the last untrimmed record in the shard in the system, which is the oldest data record in the shard.

      • LATEST - Start reading just after the most recent record in the shard, so that you always read the most recent data in the shard.

    • starting_sequence_number(impl Into<String>) / set_starting_sequence_number(Option<String>):

      The sequence number of the data record in the shard from which to start reading. Used with shard iterator type AT_SEQUENCE_NUMBER and AFTER_SEQUENCE_NUMBER.

    • timestamp(DateTime) / set_timestamp(Option<DateTime>):

      The time stamp of the data record from which to start reading. Used with shard iterator type AT_TIMESTAMP. A time stamp is the Unix epoch date with precision in milliseconds. For example, 2016-04-04T19:58:46.480-00:00 or 1459799926.480. If a record with this exact time stamp does not exist, the iterator returned is for the next (later) record. If the time stamp is older than the current trim horizon, the iterator returned is for the oldest untrimmed data record (TRIM_HORIZON).

  • On success, responds with GetShardIteratorOutput with field(s):
    • shard_iterator(Option<String>):

      The position in the shard from which to start reading data records sequentially. A shard iterator specifies this position using the sequence number of a data record in a shard.

  • On failure, responds with SdkError<GetShardIteratorError>
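A hedged sketch of the iterator types above, using TRIM_HORIZON to start from the oldest untrimmed record. The stream name and shard ID are placeholders, and the `model` module path assumes this SDK version's layout:

```rust
use aws_sdk_kinesis::model::ShardIteratorType;

// Sketch: obtain a shard iterator positioned at the oldest untrimmed record.
// "my-stream" and the shard ID are placeholder values for this illustration.
async fn oldest_iterator(
    client: &aws_sdk_kinesis::Client,
) -> Result<Option<String>, aws_sdk_kinesis::Error> {
    let resp = client
        .get_shard_iterator()
        .stream_name("my-stream")
        .shard_id("shardId-000000000000")
        .shard_iterator_type(ShardIteratorType::TrimHorizon)
        .send()
        .await?;
    // The returned iterator is then fed to GetRecords.
    Ok(resp.shard_iterator().map(str::to_string))
}
```

AT_SEQUENCE_NUMBER and AFTER_SEQUENCE_NUMBER would additionally set `.starting_sequence_number(...)`, and AT_TIMESTAMP would set `.timestamp(...)`.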

Constructs a fluent builder for the IncreaseStreamRetentionPeriod operation.

Constructs a fluent builder for the ListShards operation.

  • The fluent builder is configurable:
    • stream_name(impl Into<String>) / set_stream_name(Option<String>):

      The name of the data stream whose shards you want to list.

      You cannot specify this parameter if you specify the NextToken parameter.

    • next_token(impl Into<String>) / set_next_token(Option<String>):

      When the number of shards in the data stream is greater than the default value for the MaxResults parameter, or if you explicitly specify a value for MaxResults that is less than the number of shards in the data stream, the response includes a pagination token named NextToken. You can specify this NextToken value in a subsequent call to ListShards to list the next set of shards.

      Don’t specify StreamName or StreamCreationTimestamp if you specify NextToken because the latter unambiguously identifies the stream.

      You can optionally specify a value for the MaxResults parameter when you specify NextToken. If you specify a MaxResults value that is less than the number of shards that the operation returns if you don’t specify MaxResults, the response will contain a new NextToken value. You can use the new NextToken value in a subsequent call to the ListShards operation.

      Tokens expire after 300 seconds. When you obtain a value for NextToken in the response to a call to ListShards, you have 300 seconds to use that value. If you specify an expired token in a call to ListShards, you get ExpiredNextTokenException.

    • exclusive_start_shard_id(impl Into<String>) / set_exclusive_start_shard_id(Option<String>):

      Specify this parameter to indicate that you want to list the shards starting with the shard whose ID immediately follows ExclusiveStartShardId.

      If you don’t specify this parameter, the default behavior is for ListShards to list the shards starting with the first one in the stream.

      You cannot specify this parameter if you specify NextToken.

    • max_results(i32) / set_max_results(Option<i32>):

      The maximum number of shards to return in a single call to ListShards. The default value is 1000. If you specify a value greater than 1000, at most 1000 results are returned.

      When the number of shards to be listed is greater than the value of MaxResults, the response contains a NextToken value that you can use in a subsequent call to ListShards to list the next set of shards.

    • stream_creation_timestamp(DateTime) / set_stream_creation_timestamp(Option<DateTime>):

      Specify this input parameter to distinguish data streams that have the same name. For example, if you create a data stream and then delete it, and you later create another data stream with the same name, you can use this input parameter to specify which of the two streams you want to list the shards for.

      You cannot specify this parameter if you specify the NextToken parameter.

    • shard_filter(ShardFilter) / set_shard_filter(Option<ShardFilter>):

      Enables you to filter out the response of the ListShards API. You can only specify one filter at a time.

      If you use the ShardFilter parameter when invoking the ListShards API, the Type is the required property and must be specified. If you specify the AT_TRIM_HORIZON, FROM_TRIM_HORIZON, or AT_LATEST types, you do not need to specify either the ShardId or the Timestamp optional properties.

      If you specify the AFTER_SHARD_ID type, you must also provide the value for the optional ShardId property. The ShardId property is identical in functionality to the ExclusiveStartShardId parameter of the ListShards API. When the ShardId property is specified, the response includes the shards starting with the shard whose ID immediately follows the ShardId that you provided.

      If you specify the AT_TIMESTAMP or FROM_TIMESTAMP type, you must also provide the value for the optional Timestamp property. If you specify the AT_TIMESTAMP type, then all shards that were open at the provided timestamp are returned. If you specify the FROM_TIMESTAMP type, then all shards starting from the provided timestamp to TIP are returned.

  • On success, responds with ListShardsOutput with field(s):
    • shards(Option<Vec<Shard>>):

      An array of JSON objects. Each object represents one shard and specifies the IDs of the shard, the shard’s parent, and the shard that’s adjacent to the shard’s parent. Each object also contains the starting and ending hash keys and the starting and ending sequence numbers for the shard.

    • next_token(Option<String>):

      When the number of shards in the data stream is greater than the default value for the MaxResults parameter, or if you explicitly specify a value for MaxResults that is less than the number of shards in the data stream, the response includes a pagination token named NextToken. You can specify this NextToken value in a subsequent call to ListShards to list the next set of shards. For more information about the use of this pagination token when calling the ListShards operation, see ListShardsInput$NextToken.

      Tokens expire after 300 seconds. When you obtain a value for NextToken in the response to a call to ListShards, you have 300 seconds to use that value. If you specify an expired token in a call to ListShards, you get ExpiredNextTokenException.

  • On failure, responds with SdkError<ListShardsError>
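The StreamName/NextToken mutual exclusion described above can be sketched as a manual pagination loop. This is an illustration with placeholder names, not SDK-provided code:

```rust
// Sketch: list every shard in a stream, following NextToken manually.
// "my-stream" is a placeholder stream name for this illustration.
async fn all_shards(
    client: &aws_sdk_kinesis::Client,
) -> Result<Vec<aws_sdk_kinesis::model::Shard>, aws_sdk_kinesis::Error> {
    let mut shards = Vec::new();
    let mut next_token: Option<String> = None;
    loop {
        let mut req = client.list_shards().max_results(1000);
        // StreamName and NextToken are mutually exclusive: send the name on
        // the first call, and only the token on every subsequent call.
        req = match next_token.take() {
            Some(token) => req.next_token(token),
            None => req.stream_name("my-stream"),
        };
        let resp = req.send().await?;
        shards.extend(resp.shards().unwrap_or_default().iter().cloned());
        match resp.next_token() {
            // Tokens expire after 300 seconds, so use each one promptly.
            Some(token) => next_token = Some(token.to_string()),
            None => break,
        }
    }
    Ok(shards)
}
```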

Constructs a fluent builder for the ListStreamConsumers operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
    • stream_arn(impl Into<String>) / set_stream_arn(Option<String>):

      The ARN of the Kinesis data stream for which you want to list the registered consumers. For more information, see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces.

    • next_token(impl Into<String>) / set_next_token(Option<String>):

      When the number of consumers that are registered with the data stream is greater than the default value for the MaxResults parameter, or if you explicitly specify a value for MaxResults that is less than the number of consumers that are registered with the data stream, the response includes a pagination token named NextToken. You can specify this NextToken value in a subsequent call to ListStreamConsumers to list the next set of registered consumers.

      Don’t specify StreamName or StreamCreationTimestamp if you specify NextToken because the latter unambiguously identifies the stream.

      You can optionally specify a value for the MaxResults parameter when you specify NextToken. If you specify a MaxResults value that is less than the number of consumers that the operation returns if you don’t specify MaxResults, the response will contain a new NextToken value. You can use the new NextToken value in a subsequent call to the ListStreamConsumers operation to list the next set of consumers.

      Tokens expire after 300 seconds. When you obtain a value for NextToken in the response to a call to ListStreamConsumers, you have 300 seconds to use that value. If you specify an expired token in a call to ListStreamConsumers, you get ExpiredNextTokenException.

    • max_results(i32) / set_max_results(Option<i32>):

      The maximum number of consumers that you want a single call of ListStreamConsumers to return. The default value is 100. If you specify a value greater than 100, at most 100 results are returned.

    • stream_creation_timestamp(DateTime) / set_stream_creation_timestamp(Option<DateTime>):

      Specify this input parameter to distinguish data streams that have the same name. For example, if you create a data stream and then delete it, and you later create another data stream with the same name, you can use this input parameter to specify which of the two streams you want to list the consumers for.

      You can’t specify this parameter if you specify the NextToken parameter.

  • On success, responds with ListStreamConsumersOutput with field(s):
    • consumers(Option<Vec<Consumer>>):

      An array of JSON objects. Each object represents one registered consumer.

    • next_token(Option<String>):

      When the number of consumers that are registered with the data stream is greater than the default value for the MaxResults parameter, or if you explicitly specify a value for MaxResults that is less than the number of registered consumers, the response includes a pagination token named NextToken. You can specify this NextToken value in a subsequent call to ListStreamConsumers to list the next set of registered consumers. For more information about the use of this pagination token when calling the ListStreamConsumers operation, see ListStreamConsumersInput$NextToken.

      Tokens expire after 300 seconds. When you obtain a value for NextToken in the response to a call to ListStreamConsumers, you have 300 seconds to use that value. If you specify an expired token in a call to ListStreamConsumers, you get ExpiredNextTokenException.

  • On failure, responds with SdkError<ListStreamConsumersError>
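Because this operation supports into_paginator(), the NextToken handling described above can be delegated to the paginator. A hedged sketch, assuming a placeholder stream ARN and that the paginator's output is consumed via `tokio_stream::StreamExt` as in this SDK generation:

```rust
use tokio_stream::StreamExt;

// Sketch: list registered consumers using the built-in paginator, which
// follows NextToken automatically page by page. The ARN is a placeholder.
async fn list_consumers(
    client: &aws_sdk_kinesis::Client,
) -> Result<(), aws_sdk_kinesis::Error> {
    let mut pages = client
        .list_stream_consumers()
        .stream_arn("arn:aws:kinesis:us-east-1:111122223333:stream/my-stream")
        .into_paginator()
        .send();

    while let Some(page) = pages.next().await {
        let page = page?;
        for consumer in page.consumers().unwrap_or_default() {
            println!("consumer: {:?}", consumer.consumer_name());
        }
    }
    Ok(())
}
```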

Constructs a fluent builder for the ListStreams operation.

Constructs a fluent builder for the ListTagsForStream operation.

Constructs a fluent builder for the MergeShards operation.

Constructs a fluent builder for the PutRecord operation.

  • The fluent builder is configurable:
    • stream_name(impl Into<String>) / set_stream_name(Option<String>):

      The name of the stream to put the data record into.

    • data(Blob) / set_data(Option<Blob>):

      The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).

    • partition_key(impl Into<String>) / set_partition_key(Option<String>):

      Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.

    • explicit_hash_key(impl Into<String>) / set_explicit_hash_key(Option<String>):

      The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.

    • sequence_number_for_ordering(impl Into<String>) / set_sequence_number_for_ordering(Option<String>):

      Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the result when putting record n-1). If this parameter is not set, records are coarsely ordered based on arrival time.

  • On success, responds with PutRecordOutput with field(s):
    • shard_id(Option<String>):

      The shard ID of the shard where the data record was placed.

    • sequence_number(Option<String>):

      The sequence number identifier that was assigned to the put data record. The sequence number for the record is unique across all records in the stream. A sequence number is the identifier associated with every record put into the stream.

    • encryption_type(Option<EncryptionType>):

      The encryption type to use on the record. This parameter can be one of the following values:

      • NONE: Do not encrypt the records in the stream.

      • KMS: Use server-side encryption on the records in the stream using a customer-managed Amazon Web Services KMS key.

  • On failure, responds with SdkError<PutRecordError>
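The parameters above can be sketched as a single write. This is an illustration with placeholder values; the `types::Blob` path assumes this SDK version's re-export of the smithy Blob type:

```rust
use aws_sdk_kinesis::types::Blob;

// Sketch: put one record. The stream name, partition key, and payload are
// placeholder values for this illustration.
async fn put_one(
    client: &aws_sdk_kinesis::Client,
) -> Result<(), aws_sdk_kinesis::Error> {
    let resp = client
        .put_record()
        .stream_name("my-stream")
        // The partition key (not the payload) determines the target shard via
        // an MD5 hash: records with the same key always land on the same shard.
        .partition_key("user-1234")
        .data(Blob::new("hello kinesis".as_bytes()))
        .send()
        .await?;

    println!(
        "shard: {:?}, sequence number: {:?}",
        resp.shard_id(),
        resp.sequence_number()
    );
    Ok(())
}
```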

Constructs a fluent builder for the PutRecords operation.

Constructs a fluent builder for the RegisterStreamConsumer operation.

Constructs a fluent builder for the RemoveTagsFromStream operation.

Constructs a fluent builder for the SplitShard operation.

Constructs a fluent builder for the StartStreamEncryption operation.

Constructs a fluent builder for the StopStreamEncryption operation.

Constructs a fluent builder for the UpdateShardCount operation.

Constructs a fluent builder for the UpdateStreamMode operation.

Creates a client with the given service config and connector override.

Creates a new client from a shared config.

Creates a new client from the service Config.

Trait Implementations

Returns a copy of the value. Read more

Performs copy-assignment from source. Read more

Formats the value using the given formatter. Read more

Performs the conversion.

Auto Trait Implementations

Blanket Implementations

Gets the TypeId of self. Read more

Immutably borrows from an owned value. Read more

Mutably borrows from an owned value. Read more

Performs the conversion.

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more

Instruments this type with the current Span, returning an Instrumented wrapper. Read more

Performs the conversion.

The resulting type after obtaining ownership.

Creates owned data from borrowed data, usually by cloning. Read more

🔬 This is a nightly-only experimental API. (toowned_clone_into)

Uses borrowed data to replace owned data, usually by cloning. Read more

The type returned in the event of a conversion error.

Performs the conversion.

The type returned in the event of a conversion error.

Performs the conversion.

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more