pub struct CreateEventSourceMapping { /* private fields */ }

Fluent builder constructing a request to CreateEventSourceMapping.

Creates a mapping between an event source and a Lambda function. Lambda reads items from the event source and invokes the function.

For details about how to configure different event sources, see the corresponding topics in the Lambda Developer Guide.

The following error handling options are available only for stream sources (DynamoDB and Kinesis):

  • BisectBatchOnFunctionError – If the function returns an error, split the batch in two and retry.

  • DestinationConfig – Send discarded records to an Amazon SQS queue or Amazon SNS topic.

  • MaximumRecordAgeInSeconds – Discard records older than the specified age. The default value is infinite (-1). When set to infinite (-1), failed records are retried until the record expires.

  • MaximumRetryAttempts – Discard records after the specified number of retries. The default value is infinite (-1). When set to infinite (-1), failed records are retried until the record expires.

  • ParallelizationFactor – Process multiple batches from each shard concurrently.

For information about which configuration parameters apply to each event source, see the corresponding topics in the Lambda Developer Guide.
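
For orientation, here is a minimal, hedged sketch of driving this builder through the fluent client for a Kinesis source. It assumes the conventional snake_case setter names generated by the SDK; the stream ARN, account ID, and function name are placeholders, and module paths can differ between aws-sdk-lambda releases.

```rust
use aws_sdk_lambda::types::EventSourcePosition;
use aws_sdk_lambda::Client;

// Minimal sketch: map a Kinesis stream to a function. The stream ARN and
// function name are placeholders, not real resources.
async fn create_mapping(client: &Client) -> Result<(), aws_sdk_lambda::Error> {
    let output = client
        .create_event_source_mapping()
        .event_source_arn("arn:aws:kinesis:us-west-2:123456789012:stream/my-stream")
        .function_name("MyFunction")
        .starting_position(EventSourcePosition::Latest)
        .batch_size(100)
        .enabled(true)
        .send()
        .await?;
    println!("created mapping: {:?}", output.uuid());
    Ok(())
}
```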

Implementations§

Consume this builder, creating a customizable operation that can be modified before being sent. The operation’s inner http::Request can be modified as well.

Sends the request and returns the response.

If an error occurs, an SdkError will be returned with additional details that can be matched against.

By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
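
As a hedged sketch, retry behavior is usually adjusted when the shared configuration is loaded; the attempt count below is arbitrary, the queue ARN is a placeholder, and the configuration entry points may vary between SDK releases.

```rust
use aws_config::retry::RetryConfig;
use aws_sdk_lambda::Client;

// Sketch: allow up to five total attempts instead of the default, then send
// the request and report any SdkError via its Display implementation.
async fn send_with_retries() {
    let shared_config = aws_config::from_env()
        .retry_config(RetryConfig::standard().with_max_attempts(5))
        .load()
        .await;
    let client = Client::new(&shared_config);

    match client
        .create_event_source_mapping()
        .event_source_arn("arn:aws:sqs:us-west-2:123456789012:my-queue")
        .function_name("MyFunction")
        .send()
        .await
    {
        Ok(output) => println!("mapping state: {:?}", output.state()),
        Err(err) => eprintln!("request failed: {err}"),
    }
}
```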

The Amazon Resource Name (ARN) of the event source.

  • Amazon Kinesis – The ARN of the data stream or a stream consumer.

  • Amazon DynamoDB Streams – The ARN of the stream.

  • Amazon Simple Queue Service – The ARN of the queue.

  • Amazon Managed Streaming for Apache Kafka – The ARN of the cluster.

  • Amazon MQ – The ARN of the broker.

The name of the Lambda function.

Name formats

  • Function name – MyFunction.

  • Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction.

  • Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.

  • Partial ARN – 123456789012:function:MyFunction.

The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.
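
All of the formats above are accepted by the same name setter. A brief sketch of equivalent ways to reference one function follows; account ID, region, and function name are placeholders.

```rust
use aws_sdk_lambda::Client;

// Sketch: the same function referenced through the different accepted
// name formats. Account ID, region, and function name are placeholders.
fn equivalent_function_names(client: &Client) {
    let _by_name = client
        .create_event_source_mapping()
        .function_name("MyFunction");
    let _by_full_arn = client
        .create_event_source_mapping()
        .function_name("arn:aws:lambda:us-west-2:123456789012:function:MyFunction");
    let _by_partial_arn = client
        .create_event_source_mapping()
        .function_name("123456789012:function:MyFunction");
}
```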

When true, the event source mapping is active. When false, Lambda pauses polling and invocation.

Default: True

The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).

  • Amazon Kinesis – Default 100. Max 10,000.

  • Amazon DynamoDB Streams – Default 100. Max 10,000.

  • Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.

  • Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000.

  • Self-managed Apache Kafka – Default 100. Max 10,000.

  • Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000.

An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.
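
As a hedged sketch, filter criteria are assembled from Filter objects that each carry an event pattern. The pattern, queue ARN, and type paths below are illustrative assumptions; older aws-sdk-lambda releases expose these types under model rather than types.

```rust
use aws_sdk_lambda::types::{Filter, FilterCriteria};
use aws_sdk_lambda::Client;

// Sketch: only deliver records whose JSON payload has a `temperature` field
// greater than 50. The event pattern, queue ARN, and function name are
// illustrative placeholders.
async fn create_filtered_mapping(
    client: &Client,
    queue_arn: &str,
) -> Result<(), aws_sdk_lambda::Error> {
    let criteria = FilterCriteria::builder()
        .filters(
            Filter::builder()
                .pattern(r#"{ "body": { "temperature": [ { "numeric": [ ">", 50 ] } ] } }"#)
                .build(),
        )
        .build();

    client
        .create_event_source_mapping()
        .event_source_arn(queue_arn)
        .function_name("MyFunction")
        .filter_criteria(criteria)
        .send()
        .await?;
    Ok(())
}
```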

The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. You can configure MaximumBatchingWindowInSeconds to any value from 0 seconds to 300 seconds, in 1-second increments.

For streams and Amazon SQS event sources, the default batching window is 0 seconds. For Amazon MSK, Self-managed Apache Kafka, and Amazon MQ event sources, the default batching window is 500 ms. Note that because you can only change MaximumBatchingWindowInSeconds in increments of seconds, you cannot revert to the 500 ms default batching window after you have changed it. To restore the default batching window, you must create a new event source mapping.

Related setting: For streams and Amazon SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
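
A sketch of this related-setting rule for a standard Amazon SQS queue follows; the queue ARN and values are placeholders.

```rust
use aws_sdk_lambda::Client;

// Sketch: a standard SQS queue mapping. Because the batch size is greater
// than 10, a batching window of at least one second must also be set.
// The queue ARN is a placeholder.
async fn create_sqs_mapping(client: &Client) -> Result<(), aws_sdk_lambda::Error> {
    client
        .create_event_source_mapping()
        .event_source_arn("arn:aws:sqs:us-west-2:123456789012:my-queue")
        .function_name("MyFunction")
        .batch_size(100) // > 10 for a standard queue
        .maximum_batching_window_in_seconds(5) // must be at least 1 when batch size > 10
        .send()
        .await?;
    Ok(())
}
```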

(Streams only) The number of batches to process from each shard concurrently.

The position in a stream from which to start reading. Required for Amazon Kinesis, Amazon DynamoDB, and Amazon MSK Streams sources. AT_TIMESTAMP is supported only for Amazon Kinesis streams.

With StartingPosition set to AT_TIMESTAMP, the time from which to start reading.
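
A hedged sketch of reading a Kinesis stream from a fixed point in time; the epoch seconds and stream ARN are placeholders, and the DateTime re-export path (primitives vs. aws_smithy_types) differs between SDK releases.

```rust
use aws_sdk_lambda::primitives::DateTime;
use aws_sdk_lambda::types::EventSourcePosition;
use aws_sdk_lambda::Client;

// Sketch: start reading a Kinesis stream at a fixed point in time.
// AT_TIMESTAMP applies only to Kinesis; the epoch seconds and stream ARN
// are placeholders.
async fn create_mapping_from_timestamp(client: &Client) -> Result<(), aws_sdk_lambda::Error> {
    client
        .create_event_source_mapping()
        .event_source_arn("arn:aws:kinesis:us-west-2:123456789012:stream/my-stream")
        .function_name("MyFunction")
        .starting_position(EventSourcePosition::AtTimestamp)
        .starting_position_timestamp(DateTime::from_secs(1_700_000_000))
        .send()
        .await?;
    Ok(())
}
```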

(Streams only) An Amazon SQS queue or Amazon SNS topic destination for discarded records.

(Streams only) Discard records older than the specified age. The default value is infinite (-1).

(Streams only) If the function returns an error, split the batch in two and retry.

(Streams only) Discard records after the specified number of retries. The default value is infinite (-1). When set to infinite (-1), failed records are retried until the record expires.

(Streams only) The duration in seconds of a processing window. The range is between 1 second and 900 seconds.
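
The stream-only settings above can be combined on one mapping. A hedged sketch for a DynamoDB stream follows; all ARNs are placeholders and the type paths are assumptions for current SDK releases.

```rust
use aws_sdk_lambda::types::{DestinationConfig, EventSourcePosition, OnFailure};
use aws_sdk_lambda::Client;

// Sketch: a DynamoDB stream mapping using the stream-only error handling
// settings described above. All ARNs are placeholders.
async fn create_ddb_stream_mapping(client: &Client) -> Result<(), aws_sdk_lambda::Error> {
    let on_failure = DestinationConfig::builder()
        .on_failure(
            OnFailure::builder()
                .destination("arn:aws:sqs:us-west-2:123456789012:discarded-records")
                .build(),
        )
        .build();

    client
        .create_event_source_mapping()
        .event_source_arn("arn:aws:dynamodb:us-west-2:123456789012:table/my-table/stream/2024-01-01T00:00:00.000")
        .function_name("MyFunction")
        .starting_position(EventSourcePosition::TrimHorizon)
        .bisect_batch_on_function_error(true)
        .maximum_retry_attempts(3)
        .maximum_record_age_in_seconds(3600)
        .parallelization_factor(2)
        .destination_config(on_failure)
        .send()
        .await?;
    Ok(())
}
```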

Appends an item to Topics.

To override the contents of this collection use set_topics.

The name of the Kafka topic.

Appends an item to Queues.

To override the contents of this collection use set_queues.

(MQ) The name of the Amazon MQ broker destination queue to consume.

Appends an item to SourceAccessConfigurations.

To override the contents of this collection use set_source_access_configurations.

An array of authentication protocols or VPC components required to secure your event source.

The self-managed Apache Kafka cluster to receive records from.
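
A hedged sketch of a self-managed Apache Kafka source authenticated with SASL/SCRAM credentials stored in Secrets Manager; the broker addresses, topic, secret ARN, and exact enum variant names are illustrative assumptions.

```rust
use aws_sdk_lambda::types::{
    EndPointType, EventSourcePosition, SelfManagedEventSource, SourceAccessConfiguration,
    SourceAccessType,
};
use aws_sdk_lambda::Client;

// Sketch: a self-managed Apache Kafka cluster secured with SASL/SCRAM
// credentials stored in Secrets Manager. Broker addresses, topic, and the
// secret ARN are placeholders.
async fn create_self_managed_kafka_mapping(client: &Client) -> Result<(), aws_sdk_lambda::Error> {
    let cluster = SelfManagedEventSource::builder()
        .endpoints(
            EndPointType::KafkaBootstrapServers,
            vec![
                "broker-1.example.com:9092".to_string(),
                "broker-2.example.com:9092".to_string(),
            ],
        )
        .build();

    let auth = SourceAccessConfiguration::builder()
        .r#type(SourceAccessType::SaslScram512Auth)
        .uri("arn:aws:secretsmanager:us-west-2:123456789012:secret:kafka-credentials")
        .build();

    client
        .create_event_source_mapping()
        .function_name("MyFunction")
        .self_managed_event_source(cluster)
        .source_access_configurations(auth)
        .topics("my-topic")
        .starting_position(EventSourcePosition::TrimHorizon)
        .send()
        .await?;
    Ok(())
}
```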

Appends an item to FunctionResponseTypes.

To override the contents of this collection use set_function_response_types.

(Streams and Amazon SQS) A list of current response type enums applied to the event source mapping.

Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.
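
A hedged sketch of an Amazon MSK source that joins a specific Kafka consumer group; the cluster ARN, topic, and consumer group ID are placeholders.

```rust
use aws_sdk_lambda::types::{AmazonManagedKafkaEventSourceConfig, EventSourcePosition};
use aws_sdk_lambda::Client;

// Sketch: an Amazon MSK source that joins a specific Kafka consumer group.
// The cluster ARN, topic, and consumer group ID are placeholders.
async fn create_msk_mapping(client: &Client) -> Result<(), aws_sdk_lambda::Error> {
    let msk_config = AmazonManagedKafkaEventSourceConfig::builder()
        .consumer_group_id("lambda-consumers")
        .build();

    client
        .create_event_source_mapping()
        .event_source_arn("arn:aws:kafka:us-west-2:123456789012:cluster/my-cluster/abc-123")
        .function_name("MyFunction")
        .topics("my-topic")
        .starting_position(EventSourcePosition::Latest)
        .amazon_managed_kafka_event_source_config(msk_config)
        .send()
        .await?;
    Ok(())
}
```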

Specific configuration settings for a self-managed Apache Kafka event source.

(Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.
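
A hedged sketch capping the concurrency an Amazon SQS mapping can drive; the queue ARN and the limit value are placeholders.

```rust
use aws_sdk_lambda::types::ScalingConfig;
use aws_sdk_lambda::Client;

// Sketch: cap the number of concurrent function invocations an SQS mapping
// can drive. The queue ARN and limit are placeholders.
async fn create_capped_sqs_mapping(client: &Client) -> Result<(), aws_sdk_lambda::Error> {
    client
        .create_event_source_mapping()
        .event_source_arn("arn:aws:sqs:us-west-2:123456789012:my-queue")
        .function_name("MyFunction")
        .scaling_config(ScalingConfig::builder().maximum_concurrency(10).build())
        .send()
        .await?;
    Ok(())
}
```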

Trait Implementations§

Returns a copy of the value. Read more
Performs copy-assignment from source. Read more
Formats the value using the given formatter. Read more

Auto Trait Implementations§

Blanket Implementations§

Gets the TypeId of self. Read more
Immutably borrows from an owned value. Read more
Mutably borrows from an owned value. Read more

Returns the argument unchanged.

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more
Instruments this type with the current Span, returning an Instrumented wrapper. Read more

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Should always be Self
The resulting type after obtaining ownership.
Creates owned data from borrowed data, usually by cloning. Read more
Uses borrowed data to replace owned data, usually by cloning. Read more
The type returned in the event of a conversion error.
Performs the conversion.
The type returned in the event of a conversion error.
Performs the conversion.
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more