Struct aws_sdk_lambda::client::fluent_builders::CreateEventSourceMapping
pub struct CreateEventSourceMapping<C = DynConnector, M = AwsMiddleware, R = Standard> { /* fields omitted */ }
Fluent builder constructing a request to `CreateEventSourceMapping`.
Creates a mapping between an event source and a Lambda function. Lambda reads items from the event source and triggers the function.
For details about each event source type, see the following topics.
The following error handling options are only available for stream sources (DynamoDB and Kinesis):
- `BisectBatchOnFunctionError` - If the function returns an error, split the batch in two and retry.
- `DestinationConfig` - Send discarded records to an Amazon SQS queue or Amazon SNS topic.
- `MaximumRecordAgeInSeconds` - Discard records older than the specified age. The default value is infinite (-1). When set to infinite (-1), failed records are retried until the record expires.
- `MaximumRetryAttempts` - Discard records after the specified number of retries. The default value is infinite (-1). When set to infinite (-1), failed records are retried until the record expires.
- `ParallelizationFactor` - Process multiple batches from each shard concurrently.
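Taken together, a stream mapping with these error-handling options might look like the sketch below. This is a hedged example, not a verbatim excerpt: the builder methods are the snake_case setters generated for this operation, the ARNs are placeholders, and exact names can vary between SDK versions.

```rust
// Sketch: create a Kinesis event source mapping with stream error handling.
// Assumes `client` is a configured aws_sdk_lambda::Client; the DestinationConfig
// and OnFailure builders come from aws_sdk_lambda::model.
use aws_sdk_lambda::model::{DestinationConfig, OnFailure};

async fn create_stream_mapping(
    client: &aws_sdk_lambda::Client,
) -> Result<(), Box<dyn std::error::Error>> {
    client
        .create_event_source_mapping()
        .event_source_arn("arn:aws:kinesis:us-west-2:123456789012:stream/my-stream")
        .function_name("MyFunction")
        // Split failing batches in two and retry each half.
        .bisect_batch_on_function_error(true)
        // Give up on records older than one hour or after five retries.
        .maximum_record_age_in_seconds(3600)
        .maximum_retry_attempts(5)
        // Process two batches per shard concurrently.
        .parallelization_factor(2)
        // Send discarded records to an SQS queue.
        .destination_config(
            DestinationConfig::builder()
                .on_failure(
                    OnFailure::builder()
                        .destination("arn:aws:sqs:us-west-2:123456789012:failed-records")
                        .build(),
                )
                .build(),
        )
        .send()
        .await?;
    Ok(())
}
```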
Implementations
impl<C, M, R> CreateEventSourceMapping<C, M, R> where
C: SmithyConnector,
M: SmithyMiddleware<C>,
R: NewRequestPolicy,
pub async fn send(
self
) -> Result<CreateEventSourceMappingOutput, SdkError<CreateEventSourceMappingError>> where
R::Policy: SmithyRetryPolicy<CreateEventSourceMappingInputOperationOutputAlias, CreateEventSourceMappingOutput, CreateEventSourceMappingError, CreateEventSourceMappingInputOperationRetryAlias>,
Sends the request and returns the response.
If an error occurs, an `SdkError` will be returned with additional details that can be matched against.
By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
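A call to `send` and the resulting error matching might be sketched as follows. This is an illustrative example assuming a configured client; the `SdkError` variant shapes (`ServiceError { err, raw }` here) differ across SDK versions.

```rust
// Sketch: send the request and match on the SdkError variants.
// Assumes `client` is a configured aws_sdk_lambda::Client.
use aws_sdk_lambda::SdkError;

async fn create_and_handle(client: &aws_sdk_lambda::Client) {
    match client
        .create_event_source_mapping()
        .event_source_arn("arn:aws:sqs:us-west-2:123456789012:my-queue")
        .function_name("MyFunction")
        .send()
        .await
    {
        // On success the output carries the identifier of the new mapping.
        Ok(output) => println!("mapping UUID: {:?}", output.uuid),
        // The service rejected the request (e.g. validation error).
        Err(SdkError::ServiceError { err, .. }) => eprintln!("service error: {}", err),
        // Everything else: construction, dispatch, or response-parsing failures.
        Err(other) => eprintln!("request failed: {}", other),
    }
}
```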
The Amazon Resource Name (ARN) of the event source.
- Amazon Kinesis - The ARN of the data stream or a stream consumer.
- Amazon DynamoDB Streams - The ARN of the stream.
- Amazon Simple Queue Service - The ARN of the queue.
- Amazon Managed Streaming for Apache Kafka - The ARN of the cluster.
The name of the Lambda function.
Name formats
- Function name - `MyFunction`.
- Function ARN - `arn:aws:lambda:us-west-2:123456789012:function:MyFunction`.
- Version or Alias ARN - `arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD`.
- Partial ARN - `123456789012:function:MyFunction`.
The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.
When true, the event source mapping is active. When false, Lambda pauses polling and invocation.
Default: True
The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).
- Amazon Kinesis - Default 100. Max 10,000.
- Amazon DynamoDB Streams - Default 100. Max 1,000.
- Amazon Simple Queue Service - Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.
- Amazon Managed Streaming for Apache Kafka - Default 100. Max 10,000.
- Self-Managed Apache Kafka - Default 100. Max 10,000.
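The defaults and maxima above can be summarized in a small lookup helper. This is plain Rust, independent of the SDK; the `EventSource` enum is purely illustrative and does not exist in `aws_sdk_lambda`.

```rust
/// Illustrative event source kinds for the batch-size table above.
#[derive(Clone, Copy)]
pub enum EventSource {
    Kinesis,
    DynamoDbStreams,
    SqsStandard,
    SqsFifo,
    Msk,
    SelfManagedKafka,
}

/// Returns the (default, maximum) `BatchSize` for each source,
/// per the per-service limits listed above.
pub fn batch_size_limits(source: EventSource) -> (i32, i32) {
    match source {
        EventSource::Kinesis => (100, 10_000),
        EventSource::DynamoDbStreams => (100, 1_000),
        EventSource::SqsStandard => (10, 10_000),
        EventSource::SqsFifo => (10, 10),
        EventSource::Msk | EventSource::SelfManagedKafka => (100, 10_000),
    }
}
```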
(Streams and Amazon SQS standard queues) The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function.
Default: 0
Related setting: When you set `BatchSize` to a value greater than 10, you must set `MaximumBatchingWindowInSeconds` to at least 1.
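That constraint between the two settings can be expressed as a small check (plain Rust, purely illustrative; the SDK itself does not expose such a helper):

```rust
/// Checks the documented relation between `BatchSize` and
/// `MaximumBatchingWindowInSeconds`: a batch size greater than 10
/// requires a batching window of at least 1 second.
pub fn batching_params_valid(batch_size: i32, batching_window_secs: i32) -> bool {
    if batch_size > 10 {
        batching_window_secs >= 1
    } else {
        // At or below the default SQS batch size, a zero window is fine.
        true
    }
}
```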
(Streams only) The number of batches to process from each shard concurrently.
The position in a stream from which to start reading. Required for Amazon Kinesis, Amazon DynamoDB, and Amazon MSK stream sources. `AT_TIMESTAMP` is supported only for Amazon Kinesis streams.
With `StartingPosition` set to `AT_TIMESTAMP`, the time from which to start reading.
(Streams only) An Amazon SQS queue or Amazon SNS topic destination for discarded records.
(Streams only) Discard records older than the specified age. The default value is infinite (-1).
(Streams only) If the function returns an error, split the batch in two and retry.
(Streams only) Discard records after the specified number of retries. The default value is infinite (-1). When set to infinite (-1), failed records will be retried until the record expires.
(Streams only) The duration in seconds of a processing window. The range is between 1 second and 900 seconds.
Appends an item to `Topics`. To override the contents of this collection use `set_topics`.
The name of the Kafka topic.
Appends an item to `Queues`. To override the contents of this collection use `set_queues`.
(MQ) The name of the Amazon MQ broker destination queue to consume.
Appends an item to `SourceAccessConfigurations`. To override the contents of this collection use `set_source_access_configurations`.
An array of authentication protocols or VPC components required to secure your event source.
pub fn set_source_access_configurations(
self,
input: Option<Vec<SourceAccessConfiguration>>
) -> Self
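Building one of these configurations might look like the sketch below. This is a hedged example: it assumes the `SourceAccessConfiguration` and `SourceAccessType` types from `aws_sdk_lambda::model`, and the secret ARN is a placeholder.

```rust
// Sketch: authentication config for a SASL/SCRAM-secured Kafka event source.
// Assumes aws_sdk_lambda::model types; enum variant names may differ by SDK version.
use aws_sdk_lambda::model::{SourceAccessConfiguration, SourceAccessType};

fn sasl_auth_config() -> SourceAccessConfiguration {
    SourceAccessConfiguration::builder()
        // The kind of access configuration: here, SASL/SCRAM-512 credentials.
        .r#type(SourceAccessType::SaslScram512Auth)
        // URI of the Secrets Manager secret holding the broker credentials.
        .uri("arn:aws:secretsmanager:us-west-2:123456789012:secret:kafka-creds")
        .build()
}
```

The result can then be passed to `source_access_configurations` on the fluent builder, once per item.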
The self-managed Apache Kafka cluster that Lambda reads records from.
Appends an item to `FunctionResponseTypes`. To override the contents of this collection use `set_function_response_types`.
(Streams only) A list of current response type enums applied to the event source mapping.
Trait Implementations
Auto Trait Implementations
impl<C = DynConnector, M = AwsMiddleware, R = Standard> !RefUnwindSafe for CreateEventSourceMapping<C, M, R>
impl<C, M, R> Send for CreateEventSourceMapping<C, M, R> where
C: Send + Sync,
M: Send + Sync,
R: Send + Sync,
impl<C, M, R> Sync for CreateEventSourceMapping<C, M, R> where
C: Send + Sync,
M: Send + Sync,
R: Send + Sync,
impl<C, M, R> Unpin for CreateEventSourceMapping<C, M, R>
impl<C = DynConnector, M = AwsMiddleware, R = Standard> !UnwindSafe for CreateEventSourceMapping<C, M, R>
Blanket Implementations
Mutably borrows from an owned value.
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.