Module types

Data structures used by operation inputs/outputs.

Modules§

builders
Builders
error
Error types that Amazon Kinesis Firehose can respond with.

Structs§

AmazonOpenSearchServerlessBufferingHints

Describes the buffering to perform before delivering data to the Serverless offering for Amazon OpenSearch Service destination.

AmazonOpenSearchServerlessDestinationConfiguration

Describes the configuration of a destination in the Serverless offering for Amazon OpenSearch Service.

AmazonOpenSearchServerlessDestinationDescription

The destination description in the Serverless offering for Amazon OpenSearch Service.

AmazonOpenSearchServerlessDestinationUpdate

Describes an update for a destination in the Serverless offering for Amazon OpenSearch Service.

AmazonOpenSearchServerlessRetryOptions

Configures retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service.

AmazonopensearchserviceBufferingHints

Describes the buffering to perform before delivering data to the Amazon OpenSearch Service destination.

AmazonopensearchserviceDestinationConfiguration

Describes the configuration of a destination in Amazon OpenSearch Service.

AmazonopensearchserviceDestinationDescription

The destination description in Amazon OpenSearch Service.

AmazonopensearchserviceDestinationUpdate

Describes an update for a destination in Amazon OpenSearch Service.

AmazonopensearchserviceRetryOptions

Configures retry behavior in case Firehose is unable to deliver documents to Amazon OpenSearch Service.

AuthenticationConfiguration

The authentication configuration of the Amazon MSK cluster.

BufferingHints

Describes hints for the buffering to perform before delivering data to the destination. These options are treated as hints, and therefore Firehose might choose to use different values when it is optimal. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
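
A minimal sketch of setting both hints with the SDK's fluent builder; the snake_case setter names shown (size_in_m_bs, interval_in_seconds) follow the SDK's usual naming convention and should be verified against the generated builder docs.

use aws_sdk_firehose::types::BufferingHints;

// Buffer up to 5 MiB or 300 seconds, whichever is reached first.
// Both values are set together, since specifying one requires the other.
let hints = BufferingHints::builder()
    .size_in_m_bs(5)            // setter name assumed for SizeInMBs
    .interval_in_seconds(300)   // setter name assumed for IntervalInSeconds
    .build();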

CatalogConfiguration

Describes the containers where the destination Apache Iceberg Tables are persisted.

CloudWatchLoggingOptions

Describes the Amazon CloudWatch logging options for your Firehose stream.

CopyCommand

Describes a COPY command for Amazon Redshift.

DataFormatConversionConfiguration

Specifies that you want Firehose to convert data from the JSON format to the Parquet or ORC format before writing it to Amazon S3. Firehose uses the serializer and deserializer that you specify, in addition to the column information from the Amazon Web Services Glue table, to deserialize your input data from JSON and then serialize it to the Parquet or ORC format. For more information, see Firehose Record Format Conversion.
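
A hedged sketch of wiring up record format conversion with the builders in this module, using the OpenX JSON deserializer and the Parquet serializer; the snake_case method names and the Glue database/table names are assumptions for illustration, not the definitive API.

use aws_sdk_firehose::types::{
    DataFormatConversionConfiguration, Deserializer, InputFormatConfiguration,
    OpenXJsonSerDe, OutputFormatConfiguration, ParquetSerDe, SchemaConfiguration, Serializer,
};

// Deserialize incoming JSON records with the OpenX JSON SerDe...
let input = InputFormatConfiguration::builder()
    .deserializer(
        Deserializer::builder()
            .open_x_json_ser_de(OpenXJsonSerDe::builder().build())
            .build(),
    )
    .build();

// ...and serialize them to Parquet on the way to Amazon S3.
let output = OutputFormatConfiguration::builder()
    .serializer(
        Serializer::builder()
            .parquet_ser_de(ParquetSerDe::builder().build())
            .build(),
    )
    .build();

// Column information comes from a Glue table (names here are illustrative).
let schema = SchemaConfiguration::builder()
    .database_name("my_glue_database")
    .table_name("my_glue_table")
    .build();

let conversion = DataFormatConversionConfiguration::builder()
    .enabled(true)
    .input_format_configuration(input)
    .output_format_configuration(output)
    .schema_configuration(schema)
    .build();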

DatabaseColumnList

The structure used to configure the list of column patterns in the source database endpoint for Firehose to read from.

Amazon Data Firehose is in preview release and is subject to change.

DatabaseList

The structure used to configure the list of database patterns in the source database endpoint for Firehose to read from.

Amazon Data Firehose is in preview release and is subject to change.

DatabaseSnapshotInfo

The structure that describes the snapshot information of a table in the source database endpoint that Firehose reads.

Amazon Data Firehose is in preview release and is subject to change.

DatabaseSourceAuthenticationConfiguration

The structure used to configure the authentication methods for Firehose to connect to the source database endpoint.

Amazon Data Firehose is in preview release and is subject to change.

DatabaseSourceConfiguration

The top-level object for configuring streams with a database as the source.

Amazon Data Firehose is in preview release and is subject to change.

DatabaseSourceDescription

The top-level object that describes the database source.

Amazon Data Firehose is in preview release and is subject to change.

DatabaseSourceVpcConfiguration

The structure for details of the VPC Endpoint Service which Firehose uses to create a PrivateLink to the database.

Amazon Data Firehose is in preview release and is subject to change.

DatabaseTableList

The structure used to configure the list of table patterns in the source database endpoint for Firehose to read from.

Amazon Data Firehose is in preview release and is subject to change.

DeliveryStreamDescription

Contains information about a Firehose stream.

DeliveryStreamEncryptionConfiguration

Contains information about the server-side encryption (SSE) status for the delivery stream, the type of customer master key (CMK) in use, if any, and the ARN of the CMK. You can get DeliveryStreamEncryptionConfiguration by invoking the DescribeDeliveryStream operation.

DeliveryStreamEncryptionConfigurationInput

Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).

Deserializer

The deserializer you want Firehose to use for converting the input data from JSON. Firehose then serializes the data to its final format using the Serializer. Firehose supports two types of deserializers: the Apache Hive JSON SerDe and the OpenX JSON SerDe.

DestinationDescription

Describes the destination for a Firehose stream.

DestinationTableConfiguration

Describes the configuration of a destination in Apache Iceberg Tables.

DirectPutSourceConfiguration

The structure that configures parameters such as ThroughputHintInMBs for a stream configured with Direct PUT as a source.

DirectPutSourceDescription

The structure that configures parameters such as ThroughputHintInMBs for a stream configured with Direct PUT as a source.

DocumentIdOptions

Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.

DynamicPartitioningConfiguration

The configuration of the dynamic partitioning mechanism that creates smaller data sets from the streaming data by partitioning it based on partition keys. Currently, dynamic partitioning is only supported for Amazon S3 destinations.
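
A small sketch of enabling dynamic partitioning on an S3 destination, assuming the enabled and retry_options setters shown below; RetryOptions here is the generic retry structure listed later in this module, and the 300-second duration is only an illustrative value.

use aws_sdk_firehose::types::{DynamicPartitioningConfiguration, RetryOptions};

// Enable dynamic partitioning and allow up to 300 seconds of retries
// when delivery to a partition fails.
let partitioning = DynamicPartitioningConfiguration::builder()
    .enabled(true)
    .retry_options(RetryOptions::builder().duration_in_seconds(300).build())
    .build();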

ElasticsearchBufferingHints

Describes the buffering to perform before delivering data to the Amazon OpenSearch Service destination.

ElasticsearchDestinationConfiguration

Describes the configuration of a destination in Amazon OpenSearch Service.

ElasticsearchDestinationDescription

The destination description in Amazon OpenSearch Service.

ElasticsearchDestinationUpdate

Describes an update for a destination in Amazon OpenSearch Service.

ElasticsearchRetryOptions

Configures retry behavior in case Firehose is unable to deliver documents to Amazon OpenSearch Service.

EncryptionConfiguration

Describes the encryption for a destination in Amazon S3.

ExtendedS3DestinationConfiguration

Describes the configuration of a destination in Amazon S3.

ExtendedS3DestinationDescription

Describes a destination in Amazon S3.

ExtendedS3DestinationUpdate

Describes an update for a destination in Amazon S3.

FailureDescription

Provides details in case one of the following operations fails due to an error related to KMS: CreateDeliveryStream, DeleteDeliveryStream, StartDeliveryStreamEncryption, StopDeliveryStreamEncryption.

HiveJsonSerDe

The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.

HttpEndpointBufferingHints

Describes the buffering options that can be applied before data is delivered to the HTTP endpoint destination. Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.

HttpEndpointCommonAttribute

Describes the metadata that's delivered to the specified HTTP endpoint destination.

HttpEndpointConfiguration

Describes the configuration of the HTTP endpoint to which Kinesis Firehose delivers data.

HttpEndpointDescription

Describes the HTTP endpoint selected as the destination.

HttpEndpointDestinationConfiguration

Describes the configuration of the HTTP endpoint destination.

HttpEndpointDestinationDescription

Describes the HTTP endpoint destination.

HttpEndpointDestinationUpdate

Updates the specified HTTP endpoint destination.

HttpEndpointRequestConfiguration

The configuration of the HTTP endpoint request.

HttpEndpointRetryOptions

Describes the retry behavior in case Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.

IcebergDestinationConfiguration

Specifies the destination configuration settings for Apache Iceberg Tables.

IcebergDestinationDescription

Describes a destination in Apache Iceberg Tables.

IcebergDestinationUpdate

Describes an update for a destination in Apache Iceberg Tables.

InputFormatConfiguration

Specifies the deserializer you want to use to convert the format of the input data. This parameter is required if Enabled is set to true.

KinesisStreamSourceConfiguration

The stream and role Amazon Resource Names (ARNs) for a Kinesis data stream used as the source for a Firehose stream.

KinesisStreamSourceDescription

Details about a Kinesis data stream used as the source for a Firehose stream.

KmsEncryptionConfig

Describes an encryption key for a destination in Amazon S3.

MskSourceConfiguration

The configuration for the Amazon MSK cluster to be used as the source for a delivery stream.

MskSourceDescription

Details about the Amazon MSK cluster used as the source for a Firehose stream.

OpenXJsonSerDe

The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.

OrcSerDe

A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.

OutputFormatConfiguration

Specifies the serializer that you want Firehose to use to convert the format of your data before it writes it to Amazon S3. This parameter is required if Enabled is set to true.

ParquetSerDe

A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.

PartitionField

Represents a single field in a PartitionSpec.

Amazon Data Firehose is in preview release and is subject to change.

PartitionSpec

Represents how to produce partition data for a table. Partition data is produced by transforming columns in a table. Each column transform is represented by a named PartitionField.

Here is an example of the schema in JSON.

"partitionSpec": { "identity": \[ {"sourceName": "column1"}, {"sourceName": "column2"}, {"sourceName": "column3"} \] }

Amazon Data Firehose is in preview release and is subject to change.
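
A hedged Rust equivalent of the JSON above; the identity and source_name setter names are assumed from the SDK's naming convention, and build() on PartitionField is assumed to be fallible because sourceName is required.

use aws_sdk_firehose::types::{PartitionField, PartitionSpec};

// Identity-transform partition fields for column1..column3,
// mirroring the JSON schema shown above. Each identity(...) call
// appends one PartitionField to the spec's identity list.
let spec = PartitionSpec::builder()
    .identity(PartitionField::builder().source_name("column1").build().expect("sourceName set"))
    .identity(PartitionField::builder().source_name("column2").build().expect("sourceName set"))
    .identity(PartitionField::builder().source_name("column3").build().expect("sourceName set"))
    .build();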

ProcessingConfiguration

Describes a data processing configuration.

Processor

Describes a data processor.

If you want to add a new line delimiter between records in objects that are delivered to Amazon S3, choose AppendDelimiterToRecord as a processor type. You don’t have to put a processor parameter when you select AppendDelimiterToRecord.
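
A minimal sketch of adding that processor to a processing configuration; the r#type setter (a raw identifier for the reserved word "type") and the fallible build() are assumptions based on the SDK's usual codegen.

use aws_sdk_firehose::types::{ProcessingConfiguration, Processor, ProcessorType};

// Append a newline delimiter between records delivered to Amazon S3.
// AppendDelimiterToRecord needs no ProcessorParameter entries.
let processing = ProcessingConfiguration::builder()
    .enabled(true)
    .processors(
        Processor::builder()
            .r#type(ProcessorType::AppendDelimiterToRecord)
            .build()
            .expect("processor type is set"), // build() assumed fallible: Type is required
    )
    .build();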

ProcessorParameter

Describes the processor parameter.

PutRecordBatchResponseEntry

Contains the result for an individual record from a PutRecordBatch request. If the record is successfully added to your Firehose stream, it receives a record ID. If the record fails to be added to your Firehose stream, the result includes an error code and an error message.
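
A hedged sketch of inspecting per-record results after a PutRecordBatch call; the accessors (record_id, error_code, error_message) are assumed to follow the SDK's usual Option<&str> accessor pattern.

use aws_sdk_firehose::types::PutRecordBatchResponseEntry;

// Report which records in a batch were delivered and which failed.
fn report_batch_results(entries: &[PutRecordBatchResponseEntry]) {
    for (i, entry) in entries.iter().enumerate() {
        match (entry.record_id(), entry.error_code()) {
            (Some(id), None) => println!("record {i} delivered, id = {id}"),
            (_, Some(code)) => println!(
                "record {i} failed: {code}: {}",
                entry.error_message().unwrap_or("no message")
            ),
            _ => println!("record {i}: no result reported"),
        }
    }
}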

Record

The unit of data in a Firehose stream.

RedshiftDestinationConfiguration

Describes the configuration of a destination in Amazon Redshift.

RedshiftDestinationDescription

Describes a destination in Amazon Redshift.

RedshiftDestinationUpdate

Describes an update for a destination in Amazon Redshift.

RedshiftRetryOptions

Configures retry behavior in case Firehose is unable to deliver documents to Amazon Redshift.

RetryOptions

The retry behavior in case Firehose is unable to deliver data to a destination.

S3DestinationConfiguration

Describes the configuration of a destination in Amazon S3.

S3DestinationDescription

Describes a destination in Amazon S3.

S3DestinationUpdate

Describes an update for a destination in Amazon S3.

SchemaConfiguration

Specifies the schema to which you want Firehose to configure your data before it writes it to Amazon S3. This parameter is required if Enabled is set to true.

SchemaEvolutionConfiguration

The configuration to enable schema evolution.

Amazon Data Firehose is in preview release and is subject to change.

SecretsManagerConfiguration

The structure that defines how Firehose accesses the secret.

Serializer

The serializer that you want Firehose to use to convert data to the target format before writing it to Amazon S3. Firehose supports two types of serializers: the ORC SerDe and the Parquet SerDe.

SnowflakeBufferingHints

Describes the buffering to perform before delivering data to the Snowflake destination. If you do not specify any value, Firehose uses the default values.

SnowflakeDestinationConfiguration

Describes the configuration of a destination in Snowflake.

SnowflakeDestinationDescription

Describes a destination in Snowflake.

SnowflakeDestinationUpdate

Describes an update for a destination in Snowflake.

SnowflakeRetryOptions

Specify how long Firehose retries sending data to the Snowflake endpoint. After sending data, Firehose first waits for an acknowledgment from the endpoint. If an error occurs or the acknowledgment doesn't arrive within the acknowledgment timeout period, Firehose starts the retry duration counter and keeps retrying until the retry duration expires. After that, Firehose considers it a data delivery failure and backs up the data to your Amazon S3 bucket.

Every time Firehose sends data to the endpoint (either the initial attempt or a retry), it restarts the acknowledgment timeout counter and waits for an acknowledgment. Even if the retry duration expires, Firehose still waits for the acknowledgment until it is received or the acknowledgment timeout is reached. If the acknowledgment times out, Firehose checks whether there is time left in the retry counter; if there is, it retries again and repeats this logic until it receives an acknowledgment or the retry duration expires. If you don't want Firehose to retry sending data, set this value to 0.
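
A tiny sketch of disabling retries as described above, assuming a duration_in_seconds setter on the builder.

use aws_sdk_firehose::types::SnowflakeRetryOptions;

// A retry duration of 0 tells Firehose not to retry failed deliveries.
let retry = SnowflakeRetryOptions::builder()
    .duration_in_seconds(0)
    .build();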

SnowflakeRoleConfiguration

Optionally configures a Snowflake role. If no role is specified, the default user role is used.

SnowflakeVpcConfiguration

Configures a Snowflake VPC.

SourceDescription

Details about a Kinesis data stream used as the source for a Firehose stream.

SplunkBufferingHints

The buffering options. If no value is specified, the default values for Splunk are used.

SplunkDestinationConfiguration

Describes the configuration of a destination in Splunk.

SplunkDestinationDescription

Describes a destination in Splunk.

SplunkDestinationUpdate

Describes an update for a destination in Splunk.

SplunkRetryOptions

Configures retry behavior in case Firehose is unable to deliver documents to Splunk, or if it doesn't receive an acknowledgment from Splunk.

TableCreationConfiguration

The configuration to enable automatic table creation.

Amazon Data Firehose is in preview release and is subject to change.

Tag

Metadata that you can assign to a Firehose stream, consisting of a key-value pair.

VpcConfiguration

The details of the VPC of the Amazon OpenSearch or Amazon OpenSearch Serverless destination.

VpcConfigurationDescription

The details of the VPC of the Amazon OpenSearch Service destination.

Enums§

AmazonOpenSearchServerlessS3BackupMode
When writing a match expression against AmazonOpenSearchServerlessS3BackupMode, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
AmazonopensearchserviceIndexRotationPeriod
When writing a match expression against AmazonopensearchserviceIndexRotationPeriod, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
AmazonopensearchserviceS3BackupMode
When writing a match expression against AmazonopensearchserviceS3BackupMode, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
CompressionFormat
When writing a match expression against CompressionFormat, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
Connectivity
When writing a match expression against Connectivity, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
ContentEncoding
When writing a match expression against ContentEncoding, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
DatabaseType
When writing a match expression against DatabaseType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
DefaultDocumentIdFormat
When writing a match expression against DefaultDocumentIdFormat, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
DeliveryStreamEncryptionStatus
When writing a match expression against DeliveryStreamEncryptionStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
DeliveryStreamFailureType
When writing a match expression against DeliveryStreamFailureType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
DeliveryStreamStatus
When writing a match expression against DeliveryStreamStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
DeliveryStreamType
When writing a match expression against DeliveryStreamType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
ElasticsearchIndexRotationPeriod
When writing a match expression against ElasticsearchIndexRotationPeriod, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
ElasticsearchS3BackupMode
When writing a match expression against ElasticsearchS3BackupMode, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
HecEndpointType
When writing a match expression against HecEndpointType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
HttpEndpointS3BackupMode
When writing a match expression against HttpEndpointS3BackupMode, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
IcebergS3BackupMode
When writing a match expression against IcebergS3BackupMode, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
KeyType
When writing a match expression against KeyType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
NoEncryptionConfig
When writing a match expression against NoEncryptionConfig, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
OrcCompression
When writing a match expression against OrcCompression, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
OrcFormatVersion
When writing a match expression against OrcFormatVersion, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
ParquetCompression
When writing a match expression against ParquetCompression, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
ParquetWriterVersion
When writing a match expression against ParquetWriterVersion, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
ProcessorParameterName
When writing a match expression against ProcessorParameterName, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
ProcessorType
When writing a match expression against ProcessorType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
RedshiftS3BackupMode
When writing a match expression against RedshiftS3BackupMode, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
S3BackupMode
When writing a match expression against S3BackupMode, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
SnapshotRequestedBy
When writing a match expression against SnapshotRequestedBy, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
SnapshotStatus
When writing a match expression against SnapshotStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
SnowflakeDataLoadingOption
When writing a match expression against SnowflakeDataLoadingOption, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
SnowflakeS3BackupMode
When writing a match expression against SnowflakeS3BackupMode, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
SplunkS3BackupMode
When writing a match expression against SplunkS3BackupMode, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
SslMode
When writing a match expression against SslMode, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
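
The entries above all carry the same forward-compatibility note; a minimal sketch of what that looks like in practice, using CompressionFormat as an example (the variant names shown are assumptions to be checked against the generated enum):

use aws_sdk_firehose::types::CompressionFormat;

fn describe(format: &CompressionFormat) -> &'static str {
    match format {
        CompressionFormat::Gzip => "gzip-compressed objects",
        CompressionFormat::Snappy => "snappy-compressed objects",
        CompressionFormat::Uncompressed => "uncompressed objects",
        // The wildcard arm keeps this match compiling and working when a
        // future SDK version adds variants for newly supported formats.
        _ => "another compression format",
    }
}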