#[non_exhaustive]
pub struct CreateEventSourceMappingInput { /* private fields */ }

Implementations

Consumes the builder and constructs an Operation<CreateEventSourceMapping>

Creates a new builder-style object to manufacture CreateEventSourceMappingInput.
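A minimal sketch of the builder in use, assuming the module layout this page documents (aws_sdk_lambda::input; later SDK releases move the type under operation::create_event_source_mapping). The ARN and function name are placeholders.

```rust
use aws_sdk_lambda::input::CreateEventSourceMappingInput;

fn build_input() -> Result<CreateEventSourceMappingInput, Box<dyn std::error::Error>> {
    // Placeholder Kinesis stream ARN and function name; substitute your own resources.
    let input = CreateEventSourceMappingInput::builder()
        .event_source_arn("arn:aws:kinesis:us-west-2:123456789012:stream/my-stream")
        .function_name("MyFunction")
        .enabled(true)
        .batch_size(100)
        .build()?; // build() returns Result<_, BuildError>
    Ok(input)
}
```

The same setters are available on the fluent builder returned by Client::create_event_source_mapping.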

The Amazon Resource Name (ARN) of the event source.

  • Amazon Kinesis – The ARN of the data stream or a stream consumer.

  • Amazon DynamoDB Streams – The ARN of the stream.

  • Amazon Simple Queue Service – The ARN of the queue.

  • Amazon Managed Streaming for Apache Kafka – The ARN of the cluster.

  • Amazon MQ – The ARN of the broker.

The name of the Lambda function.

Name formats

  • Function name – MyFunction.

  • Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction.

  • Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.

  • Partial ARN – 123456789012:function:MyFunction.

The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.
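Any of these formats is a valid function_name value; a short sketch with placeholder names, assuming the aws_sdk_lambda::input path used above:

```rust
use aws_sdk_lambda::input::CreateEventSourceMappingInput;

fn name_formats() {
    // Each documented format is accepted by the same setter (placeholder values).
    for name in [
        "MyFunction",                                                      // function name
        "arn:aws:lambda:us-west-2:123456789012:function:MyFunction",      // function ARN
        "arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD", // version or alias ARN
        "123456789012:function:MyFunction",                               // partial ARN
    ] {
        let _builder = CreateEventSourceMappingInput::builder().function_name(name);
    }
}
```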

When true, the event source mapping is active. When false, Lambda pauses polling and invocation.

Default: True

The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).

  • Amazon Kinesis – Default 100. Max 10,000.

  • Amazon DynamoDB Streams – Default 100. Max 10,000.

  • Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.

  • Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000.

  • Self-managed Apache Kafka – Default 100. Max 10,000.

  • Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000.

An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.
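A sketch of attaching filter criteria, assuming the Filter and FilterCriteria types in aws_sdk_lambda::model and a made-up pattern that matches SQS message bodies with status "ERROR":

```rust
use aws_sdk_lambda::input::CreateEventSourceMappingInput;
use aws_sdk_lambda::model::{Filter, FilterCriteria};

fn filtered_input() -> Result<CreateEventSourceMappingInput, Box<dyn std::error::Error>> {
    // Hypothetical pattern: only process records whose JSON body has status = "ERROR".
    let filter = Filter::builder()
        .pattern(r#"{ "body": { "status": ["ERROR"] } }"#)
        .build();

    let input = CreateEventSourceMappingInput::builder()
        .event_source_arn("arn:aws:sqs:us-west-2:123456789012:my-queue")
        .function_name("MyFunction")
        .filter_criteria(FilterCriteria::builder().filters(filter).build())
        .build()?;
    Ok(input)
}
```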

The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. You can configure MaximumBatchingWindowInSeconds to any value from 0 seconds to 300 seconds in increments of seconds.

For streams and Amazon SQS event sources, the default batching window is 0 seconds. For Amazon MSK, Self-managed Apache Kafka, and Amazon MQ event sources, the default batching window is 500 ms. Note that because you can only change MaximumBatchingWindowInSeconds in increments of seconds, you cannot revert back to the 500 ms default batching window after you have changed it. To restore the default batching window, you must create a new event source mapping.

Related setting: For streams and Amazon SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
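For example, a standard SQS queue with a batch size above 10 must also set a batching window of at least one second; a sketch with placeholder values:

```rust
use aws_sdk_lambda::input::CreateEventSourceMappingInput;

fn batched_input() -> Result<CreateEventSourceMappingInput, Box<dyn std::error::Error>> {
    let input = CreateEventSourceMappingInput::builder()
        .event_source_arn("arn:aws:sqs:us-west-2:123456789012:my-queue")
        .function_name("MyFunction")
        .batch_size(1000)                      // greater than 10, so a window is required
        .maximum_batching_window_in_seconds(5) // must be at least 1
        .build()?;
    Ok(input)
}
```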

(Streams only) The number of batches to process from each shard concurrently.

The position in a stream from which to start reading. Required for Amazon Kinesis, Amazon DynamoDB, and Amazon MSK Streams sources. AT_TIMESTAMP is supported only for Amazon Kinesis streams.

With StartingPosition set to AT_TIMESTAMP, the time from which to start reading.
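A sketch of reading a Kinesis stream from a point in time; the epoch value is arbitrary, and DateTime is the aws_smithy_types::DateTime type that the SDK re-exports (the re-export path varies by release):

```rust
use aws_sdk_lambda::input::CreateEventSourceMappingInput;
use aws_sdk_lambda::model::EventSourcePosition;
use aws_smithy_types::DateTime; // also re-exported by the SDK

fn kinesis_from_timestamp() -> Result<CreateEventSourceMappingInput, Box<dyn std::error::Error>> {
    let input = CreateEventSourceMappingInput::builder()
        .event_source_arn("arn:aws:kinesis:us-west-2:123456789012:stream/my-stream")
        .function_name("MyFunction")
        // AT_TIMESTAMP is supported only for Kinesis streams.
        .starting_position(EventSourcePosition::AtTimestamp)
        .starting_position_timestamp(DateTime::from_secs(1_700_000_000))
        .build()?;
    Ok(input)
}
```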

(Streams only) An Amazon SQS queue or Amazon SNS topic destination for discarded records.

(Streams only) Discard records older than the specified age. The default value is infinite (-1).

(Streams only) If the function returns an error, split the batch in two and retry.

(Streams only) Discard records after the specified number of retries. The default value is infinite (-1). When set to infinite (-1), failed records are retried until the record expires.

(Streams only) The duration in seconds of a processing window. The range is between 1 second and 900 seconds.
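A sketch combining the stream error-handling settings above, with a placeholder dead-letter queue ARN and arbitrary limits in place of the infinite (-1) defaults:

```rust
use aws_sdk_lambda::input::CreateEventSourceMappingInput;
use aws_sdk_lambda::model::{DestinationConfig, OnFailure};

fn stream_error_handling() -> Result<CreateEventSourceMappingInput, Box<dyn std::error::Error>> {
    // Discarded records are sent to a (placeholder) SQS queue.
    let on_failure = OnFailure::builder()
        .destination("arn:aws:sqs:us-west-2:123456789012:my-dlq")
        .build();

    let input = CreateEventSourceMappingInput::builder()
        .event_source_arn("arn:aws:kinesis:us-west-2:123456789012:stream/my-stream")
        .function_name("MyFunction")
        .destination_config(DestinationConfig::builder().on_failure(on_failure).build())
        .maximum_record_age_in_seconds(3600)  // drop records older than one hour
        .bisect_batch_on_function_error(true) // split and retry failed batches
        .maximum_retry_attempts(3)            // cap retries instead of -1 (infinite)
        .build()?;
    Ok(input)
}
```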

The name of the Kafka topic.

(MQ) The name of the Amazon MQ broker destination queue to consume.

An array of authentication protocols or VPC components required to secure your event source.

The self-managed Apache Kafka cluster to receive records from.

(Streams and Amazon SQS) A list of current response type enums applied to the event source mapping.
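For instance, opting an SQS mapping into partial-batch responses uses the ReportBatchItemFailures variant; a sketch assuming the aws_sdk_lambda::model path:

```rust
use aws_sdk_lambda::input::CreateEventSourceMappingInput;
use aws_sdk_lambda::model::FunctionResponseType;

fn partial_batch_responses() -> Result<CreateEventSourceMappingInput, Box<dyn std::error::Error>> {
    let input = CreateEventSourceMappingInput::builder()
        .event_source_arn("arn:aws:sqs:us-west-2:123456789012:my-queue")
        .function_name("MyFunction")
        // Let the function report individual item failures instead of failing the whole batch.
        .function_response_types(FunctionResponseType::ReportBatchItemFailures)
        .build()?;
    Ok(input)
}
```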

Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.

Specific configuration settings for a self-managed Apache Kafka event source.

(Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.
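A sketch of capping concurrency for an SQS event source; the ScalingConfig module path and the concurrency value are assumptions for illustration:

```rust
use aws_sdk_lambda::input::CreateEventSourceMappingInput;
use aws_sdk_lambda::model::ScalingConfig;

fn sqs_scaling() -> Result<CreateEventSourceMappingInput, Box<dyn std::error::Error>> {
    let input = CreateEventSourceMappingInput::builder()
        .event_source_arn("arn:aws:sqs:us-west-2:123456789012:my-queue")
        .function_name("MyFunction")
        // Limit how many concurrent function instances the queue can drive.
        .scaling_config(ScalingConfig::builder().maximum_concurrency(5).build())
        .build()?;
    Ok(input)
}
```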

Trait Implementations

Returns a copy of the value. Read more
Performs copy-assignment from source. Read more
Formats the value using the given formatter. Read more
This method tests for self and other values to be equal, and is used by ==.
This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

Auto Trait Implementations

Blanket Implementations

Gets the TypeId of self. Read more
Immutably borrows from an owned value. Read more
Mutably borrows from an owned value. Read more

Returns the argument unchanged.

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more
Instruments this type with the current Span, returning an Instrumented wrapper. Read more

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Should always be Self
The resulting type after obtaining ownership.
Creates owned data from borrowed data, usually by cloning. Read more
Uses borrowed data to replace owned data, usually by cloning. Read more
The type returned in the event of a conversion error.
Performs the conversion.
The type returned in the event of a conversion error.
Performs the conversion.
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more