#[non_exhaustive]
pub struct UpdateEventSourceMappingOutput {
    pub uuid: Option<String>,
    pub starting_position: Option<EventSourcePosition>,
    pub starting_position_timestamp: Option<DateTime>,
    pub batch_size: Option<i32>,
    pub maximum_batching_window_in_seconds: Option<i32>,
    pub parallelization_factor: Option<i32>,
    pub event_source_arn: Option<String>,
    pub filter_criteria: Option<FilterCriteria>,
    pub function_arn: Option<String>,
    pub last_modified: Option<DateTime>,
    pub last_processing_result: Option<String>,
    pub state: Option<String>,
    pub state_transition_reason: Option<String>,
    pub destination_config: Option<DestinationConfig>,
    pub topics: Option<Vec<String>>,
    pub queues: Option<Vec<String>>,
    pub source_access_configurations: Option<Vec<SourceAccessConfiguration>>,
    pub self_managed_event_source: Option<SelfManagedEventSource>,
    pub maximum_record_age_in_seconds: Option<i32>,
    pub bisect_batch_on_function_error: Option<bool>,
    pub maximum_retry_attempts: Option<i32>,
    pub tumbling_window_in_seconds: Option<i32>,
    pub function_response_types: Option<Vec<FunctionResponseType>>,
    pub amazon_managed_kafka_event_source_config: Option<AmazonManagedKafkaEventSourceConfig>,
    pub self_managed_kafka_event_source_config: Option<SelfManagedKafkaEventSourceConfig>,
    pub scaling_config: Option<ScalingConfig>,
    pub document_db_event_source_config: Option<DocumentDbEventSourceConfig>,
    /* private fields */
}

A mapping between an Amazon Web Services resource and a Lambda function. For details, see CreateEventSourceMapping.
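
This output is returned by the UpdateEventSourceMapping operation. Below is a minimal, hedged sketch of retrieving it through the fluent client in aws-sdk-lambda; the mapping UUID and batch size are placeholders, and exact client construction varies by SDK version.

```rust
use aws_sdk_lambda::Client;

// Hedged sketch: update an existing event source mapping and read a few
// fields of the returned UpdateEventSourceMappingOutput. Every field is
// Option-wrapped, so accessors return Option<T> / Option<&T>.
async fn show_update(client: &Client) -> Result<(), aws_sdk_lambda::Error> {
    let output = client
        .update_event_source_mapping()
        .uuid("example-mapping-uuid") // placeholder identifier
        .batch_size(50)
        .send()
        .await?;

    println!("uuid:  {:?}", output.uuid());
    println!("state: {:?}", output.state());
    println!("batch: {:?}", output.batch_size());
    Ok(())
}
```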

Fields (Non-exhaustive)§

This struct is marked as non-exhaustive
Non-exhaustive structs may gain additional fields in the future. Therefore, non-exhaustive structs cannot be constructed in external crates using the traditional Struct { .. } syntax; cannot be matched against without a wildcard ..; and struct update syntax will not work.
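
In practice this means external code reads the struct through its public fields and accessors and constructs it only through the provided builder. A brief illustrative sketch; the import path shown is the one used by recent SDK versions and may differ in yours.

```rust
// Hedged sketch of what #[non_exhaustive] means for downstream code.
use aws_sdk_lambda::operation::update_event_source_mapping::UpdateEventSourceMappingOutput;

fn handle(output: &UpdateEventSourceMappingOutput) {
    // NOT allowed from an external crate: struct literal construction, e.g.
    // let o = UpdateEventSourceMappingOutput { uuid: None, /* ... */ };

    // Allowed: read fields directly or via accessors.
    match output.state.as_deref() {
        Some("Enabled") => println!("mapping is enabled"),
        Some(other) => println!("mapping state: {other}"),
        None => println!("state not returned"),
    }
}
```
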
§uuid: Option<String>

The identifier of the event source mapping.

§starting_position: Option<EventSourcePosition>

The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB Stream event sources. AT_TIMESTAMP is supported only for Amazon Kinesis streams, Amazon DocumentDB, Amazon MSK, and self-managed Apache Kafka.

§starting_position_timestamp: Option<DateTime>

With StartingPosition set to AT_TIMESTAMP, the time from which to start reading. StartingPositionTimestamp cannot be in the future.

§batch_size: Option<i32>

The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).

Default value: Varies by service. For Amazon SQS, the default is 10. For all other services, the default is 100.

Related setting: When you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.

§maximum_batching_window_in_seconds: Option<i32>

The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. You can configure MaximumBatchingWindowInSeconds to any value from 0 seconds to 300 seconds in increments of seconds.

For streams and Amazon SQS event sources, the default batching window is 0 seconds. For Amazon MSK, self-managed Apache Kafka, Amazon MQ, and DocumentDB event sources, the default batching window is 500 ms. Note that because you can only change MaximumBatchingWindowInSeconds in increments of seconds, you cannot revert to the 500 ms default batching window after you have changed it. To restore the default batching window, you must create a new event source mapping.

Related setting: For streams and Amazon SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
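
To make the related setting concrete, here is a hedged sketch of an update that raises BatchSize above 10 and therefore also supplies a batching window. The fluent setter names mirror the fields of this output, and the mapping UUID is a placeholder.

```rust
use aws_sdk_lambda::Client;

// Sketch only: for streams and Amazon SQS event sources, a BatchSize greater
// than 10 requires MaximumBatchingWindowInSeconds of at least 1.
async fn raise_batch_size(client: &Client) -> Result<(), aws_sdk_lambda::Error> {
    let output = client
        .update_event_source_mapping()
        .uuid("example-mapping-uuid")           // placeholder identifier
        .batch_size(25)                         // greater than 10 ...
        .maximum_batching_window_in_seconds(1)  // ... so a window is required
        .send()
        .await?;

    println!("batch size now: {:?}", output.batch_size());
    Ok(())
}
```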

§parallelization_factor: Option<i32>

(Kinesis and DynamoDB Streams only) The number of batches to process concurrently from each shard. The default value is 1.

§event_source_arn: Option<String>

The Amazon Resource Name (ARN) of the event source.

§filter_criteria: Option<FilterCriteria>

An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.

§function_arn: Option<String>

The ARN of the Lambda function.

§last_modified: Option<DateTime>

The date that the event source mapping was last updated or that its state changed.

§last_processing_result: Option<String>

The result of the last Lambda invocation of your function.

§state: Option<String>

The state of the event source mapping. It can be one of the following: Creating, Enabling, Enabled, Disabling, Disabled, Updating, or Deleting.

§state_transition_reason: Option<String>

Indicates whether a user or Lambda made the last change to the event source mapping.

§destination_config: Option<DestinationConfig>

(Kinesis, DynamoDB Streams, Amazon MSK, and self-managed Apache Kafka event sources only) A configuration object that specifies the destination of an event after Lambda processes it.

§topics: Option<Vec<String>>

The name of the Kafka topic.

§queues: Option<Vec<String>>

(Amazon MQ) The name of the Amazon MQ broker destination queue to consume.

§source_access_configurations: Option<Vec<SourceAccessConfiguration>>

An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.

§self_managed_event_source: Option<SelfManagedEventSource>

The self-managed Apache Kafka cluster for your event source.

§maximum_record_age_in_seconds: Option<i32>

(Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records.

The minimum valid value for the maximum record age is 60 seconds. Although values greater than -1 and less than 60 fall within the parameter's absolute range, they are not allowed.

§bisect_batch_on_function_error: Option<bool>

(Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.

§maximum_retry_attempts: Option<i32>

(Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, Lambda retries failed records until the record expires in the event source.

§tumbling_window_in_seconds: Option<i32>

(Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.

§function_response_types: Option<Vec<FunctionResponseType>>

(Kinesis, DynamoDB Streams, and Amazon SQS) A list of current response type enums applied to the event source mapping.

§amazon_managed_kafka_event_source_config: Option<AmazonManagedKafkaEventSourceConfig>

Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.

§self_managed_kafka_event_source_config: Option<SelfManagedKafkaEventSourceConfig>

Specific configuration settings for a self-managed Apache Kafka event source.

§scaling_config: Option<ScalingConfig>

(Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.

§document_db_event_source_config: Option<DocumentDbEventSourceConfig>

Specific configuration settings for a DocumentDB event source.

Implementations§

impl UpdateEventSourceMappingOutput

pub fn uuid(&self) -> Option<&str>

The identifier of the event source mapping.

pub fn starting_position(&self) -> Option<&EventSourcePosition>

The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB Stream event sources. AT_TIMESTAMP is supported only for Amazon Kinesis streams, Amazon DocumentDB, Amazon MSK, and self-managed Apache Kafka.

pub fn starting_position_timestamp(&self) -> Option<&DateTime>

With StartingPosition set to AT_TIMESTAMP, the time from which to start reading. StartingPositionTimestamp cannot be in the future.

pub fn batch_size(&self) -> Option<i32>

The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).

Default value: Varies by service. For Amazon SQS, the default is 10. For all other services, the default is 100.

Related setting: When you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.

pub fn maximum_batching_window_in_seconds(&self) -> Option<i32>

The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. You can configure MaximumBatchingWindowInSeconds to any value from 0 seconds to 300 seconds in increments of seconds.

For streams and Amazon SQS event sources, the default batching window is 0 seconds. For Amazon MSK, self-managed Apache Kafka, Amazon MQ, and DocumentDB event sources, the default batching window is 500 ms. Note that because you can only change MaximumBatchingWindowInSeconds in increments of seconds, you cannot revert to the 500 ms default batching window after you have changed it. To restore the default batching window, you must create a new event source mapping.

Related setting: For streams and Amazon SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.

pub fn parallelization_factor(&self) -> Option<i32>

(Kinesis and DynamoDB Streams only) The number of batches to process concurrently from each shard. The default value is 1.

pub fn event_source_arn(&self) -> Option<&str>

The Amazon Resource Name (ARN) of the event source.

pub fn filter_criteria(&self) -> Option<&FilterCriteria>

An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.

pub fn function_arn(&self) -> Option<&str>

The ARN of the Lambda function.

pub fn last_modified(&self) -> Option<&DateTime>

The date that the event source mapping was last updated or that its state changed.

pub fn last_processing_result(&self) -> Option<&str>

The result of the last Lambda invocation of your function.

pub fn state(&self) -> Option<&str>

The state of the event source mapping. It can be one of the following: Creating, Enabling, Enabled, Disabling, Disabled, Updating, or Deleting.

pub fn state_transition_reason(&self) -> Option<&str>

Indicates whether a user or Lambda made the last change to the event source mapping.

pub fn destination_config(&self) -> Option<&DestinationConfig>

(Kinesis, DynamoDB Streams, Amazon MSK, and self-managed Apache Kafka event sources only) A configuration object that specifies the destination of an event after Lambda processes it.

pub fn topics(&self) -> &[String]

The name of the Kafka topic.

If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .topics.is_none().
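
The slice accessor and the underlying Option answer different questions ("what are the topics?" versus "was the field returned at all?"). A short hedged sketch; the import path is assumed for recent SDK versions.

```rust
use aws_sdk_lambda::operation::update_event_source_mapping::UpdateEventSourceMappingOutput;

// Sketch: the accessor flattens Option<Vec<String>> to a (possibly empty)
// slice, while the public field preserves whether the value was sent at all.
fn inspect_topics(output: &UpdateEventSourceMappingOutput) {
    let topics: &[String] = output.topics(); // empty slice if unset
    if output.topics.is_none() {
        println!("no topics field in the response");
    } else {
        println!("topics: {topics:?}");
    }
}
```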

pub fn queues(&self) -> &[String]

(Amazon MQ) The name of the Amazon MQ broker destination queue to consume.

If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .queues.is_none().

pub fn source_access_configurations(&self) -> &[SourceAccessConfiguration]

An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.

If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .source_access_configurations.is_none().

pub fn self_managed_event_source(&self) -> Option<&SelfManagedEventSource>

The self-managed Apache Kafka cluster for your event source.

pub fn maximum_record_age_in_seconds(&self) -> Option<i32>

(Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records.

The minimum valid value for the maximum record age is 60 seconds. Although values greater than -1 and less than 60 fall within the parameter's absolute range, they are not allowed.

pub fn bisect_batch_on_function_error(&self) -> Option<bool>

(Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.

pub fn maximum_retry_attempts(&self) -> Option<i32>

(Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, Lambda retries failed records until the record expires in the event source.

pub fn tumbling_window_in_seconds(&self) -> Option<i32>

(Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.

pub fn function_response_types(&self) -> &[FunctionResponseType]

(Kinesis, DynamoDB Streams, and Amazon SQS) A list of current response type enums applied to the event source mapping.

If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .function_response_types.is_none().

pub fn amazon_managed_kafka_event_source_config(&self) -> Option<&AmazonManagedKafkaEventSourceConfig>

Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.

pub fn self_managed_kafka_event_source_config(&self) -> Option<&SelfManagedKafkaEventSourceConfig>

Specific configuration settings for a self-managed Apache Kafka event source.

pub fn scaling_config(&self) -> Option<&ScalingConfig>

(Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.

pub fn document_db_event_source_config(&self) -> Option<&DocumentDbEventSourceConfig>

Specific configuration settings for a DocumentDB event source.

impl UpdateEventSourceMappingOutput

pub fn builder() -> UpdateEventSourceMappingOutputBuilder

Creates a new builder-style object to manufacture UpdateEventSourceMappingOutput.
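
A hedged sketch of the builder, mainly useful for constructing this output in tests. Setter names mirror the fields above; whether build() returns the struct directly (as assumed here) can differ between SDK versions, and the import path is the one used by recent versions.

```rust
use aws_sdk_lambda::operation::update_event_source_mapping::UpdateEventSourceMappingOutput;

// Sketch only (e.g. for unit tests): construct the non-exhaustive output via
// its builder. build() is assumed to return the struct directly here.
fn sample_output() -> UpdateEventSourceMappingOutput {
    UpdateEventSourceMappingOutput::builder()
        .uuid("example-mapping-uuid") // placeholder identifier
        .state("Enabled")
        .batch_size(10)
        .build()
}
```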

Trait Implementations§

impl Clone for UpdateEventSourceMappingOutput

fn clone(&self) -> UpdateEventSourceMappingOutput

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for UpdateEventSourceMappingOutput

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl PartialEq for UpdateEventSourceMappingOutput

fn eq(&self, other: &UpdateEventSourceMappingOutput) -> bool

This method tests for self and other values to be equal, and is used by ==.

fn ne(&self, other: &Rhs) -> bool

This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl RequestId for UpdateEventSourceMappingOutput

fn request_id(&self) -> Option<&str>

Returns the request ID, or None if the service could not be reached.
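
Because the output implements RequestId, the service-assigned request ID can be surfaced for logging or support cases. A brief hedged sketch; both import paths are assumptions based on recent SDK versions.

```rust
use aws_sdk_lambda::operation::update_event_source_mapping::UpdateEventSourceMappingOutput;
use aws_sdk_lambda::operation::RequestId;

// Sketch: RequestId is implemented for this output, so the request ID
// (if the service was reached) can be logged alongside other fields.
fn log_request_id(output: &UpdateEventSourceMappingOutput) {
    match output.request_id() {
        Some(id) => println!("request id: {id}"),
        None => println!("service could not be reached; no request id"),
    }
}
```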

impl StructuralPartialEq for UpdateEventSourceMappingOutput
