Data structures used by operation inputs/outputs.
## Structs
- `AmazonOpenSearchServerlessBufferingHints`: Describes the buffering to perform before delivering data to the Serverless offering for Amazon OpenSearch Service destination.
- `AmazonOpenSearchServerlessDestinationConfiguration`: Describes the configuration of a destination in the Serverless offering for Amazon OpenSearch Service.
- `AmazonOpenSearchServerlessDestinationDescription`: The destination description in the Serverless offering for Amazon OpenSearch Service.
- `AmazonOpenSearchServerlessDestinationUpdate`: Describes an update for a destination in the Serverless offering for Amazon OpenSearch Service.
- `AmazonOpenSearchServerlessRetryOptions`: Configures retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service.
- `AmazonopensearchserviceBufferingHints`: Describes the buffering to perform before delivering data to the Amazon OpenSearch Service destination.
- `AmazonopensearchserviceDestinationConfiguration`: Describes the configuration of a destination in Amazon OpenSearch Service.
- `AmazonopensearchserviceDestinationDescription`: The destination description in Amazon OpenSearch Service.
- `AmazonopensearchserviceDestinationUpdate`: Describes an update for a destination in Amazon OpenSearch Service.
- `AmazonopensearchserviceRetryOptions`: Configures retry behavior in case Firehose is unable to deliver documents to Amazon OpenSearch Service.
- `AuthenticationConfiguration`: The authentication configuration of the Amazon MSK cluster.
- `BufferingHints`: Describes hints for the buffering to perform before delivering data to the destination. These options are treated as hints, and therefore Firehose might choose to use different values when that is optimal. The `SizeInMBs` and `IntervalInSeconds` parameters are optional; however, if you specify a value for one of them, you must also provide a value for the other.
- `CatalogConfiguration`: Describes the containers where the destination Apache Iceberg Tables are persisted.
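The both-or-neither rule on `SizeInMBs` and `IntervalInSeconds` can be sketched as a small validator. This is an illustration of the documented constraint only, not the SDK's actual `BufferingHints` type or builder API:

```rust
// Illustrative sketch (not the SDK's real type): models the rule that
// `SizeInMBs` and `IntervalInSeconds` must be set together or not at all.
#[derive(Debug)]
struct BufferingHints {
    size_in_mbs: Option<i32>,
    interval_in_seconds: Option<i32>,
}

fn validate(hints: &BufferingHints) -> Result<(), String> {
    match (hints.size_in_mbs, hints.interval_in_seconds) {
        // Exactly one of the two set: invalid per the documented constraint.
        (Some(_), None) | (None, Some(_)) => Err(
            "if you specify SizeInMBs or IntervalInSeconds, you must specify both".into(),
        ),
        // Both set, or neither set (service defaults apply): valid.
        _ => Ok(()),
    }
}

fn main() {
    let both = BufferingHints { size_in_mbs: Some(5), interval_in_seconds: Some(300) };
    let only_one = BufferingHints { size_in_mbs: Some(5), interval_in_seconds: None };
    assert!(validate(&both).is_ok());
    assert!(validate(&only_one).is_err());
}
```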
- `CloudWatchLoggingOptions`: Describes the Amazon CloudWatch logging options for your Firehose stream.
- `CopyCommand`: Describes a `COPY` command for Amazon Redshift.
- `DataFormatConversionConfiguration`: Specifies that you want Firehose to convert data from the JSON format to the Parquet or ORC format before writing it to Amazon S3. Firehose uses the serializer and deserializer that you specify, in addition to the column information from the Amazon Web Services Glue table, to deserialize your input data from JSON and then serialize it to the Parquet or ORC format. For more information, see Firehose Record Format Conversion.
- `DatabaseColumnList`: The structure used to configure the list of column patterns in the source database endpoint for Firehose to read from. Amazon Data Firehose is in preview release and is subject to change.
- `DatabaseList`: The structure used to configure the list of database patterns in the source database endpoint for Firehose to read from. Amazon Data Firehose is in preview release and is subject to change.
- `DatabaseSnapshotInfo`: The structure that describes the snapshot information of a table in the source database endpoint that Firehose reads. Amazon Data Firehose is in preview release and is subject to change.
- `DatabaseSourceAuthenticationConfiguration`: The structure to configure the authentication methods for Firehose to connect to the source database endpoint. Amazon Data Firehose is in preview release and is subject to change.
- `DatabaseSourceConfiguration`: The top-level object for configuring streams with a database as a source. Amazon Data Firehose is in preview release and is subject to change.
- `DatabaseSourceDescription`: The top-level object for the database source description. Amazon Data Firehose is in preview release and is subject to change.
- `DatabaseSourceVpcConfiguration`: The structure for details of the VPC Endpoint Service which Firehose uses to create a PrivateLink to the database. Amazon Data Firehose is in preview release and is subject to change.
- `DatabaseTableList`: The structure used to configure the list of table patterns in the source database endpoint for Firehose to read from. Amazon Data Firehose is in preview release and is subject to change.
- `DeliveryStreamDescription`: Contains information about a Firehose stream.
- `DeliveryStreamEncryptionConfiguration`: Contains information about the server-side encryption (SSE) status for the delivery stream, the type of customer master key (CMK) in use, if any, and the ARN of the CMK. You can get `DeliveryStreamEncryptionConfiguration` by invoking the `DescribeDeliveryStream` operation.
- `DeliveryStreamEncryptionConfigurationInput`: Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).
- `Deserializer`: The deserializer you want Firehose to use for converting the input data from JSON. Firehose then serializes the data to its final format using the `Serializer`. Firehose supports two types of deserializers: the Apache Hive JSON SerDe and the OpenX JSON SerDe.
- `DestinationDescription`: Describes the destination for a Firehose stream.
- `DestinationTableConfiguration`: Describes the configuration of a destination in Apache Iceberg Tables.
- `DirectPutSourceConfiguration`: The structure that configures parameters such as `ThroughputHintInMBs` for a stream configured with Direct PUT as a source.
- `DirectPutSourceDescription`: The structure that describes parameters such as `ThroughputHintInMBs` for a stream configured with Direct PUT as a source.
- `DocumentIdOptions`: Indicates the method for setting up the document ID. The supported methods are Firehose-generated document ID and OpenSearch Service-generated document ID.
- `DynamicPartitioningConfiguration`: The configuration of the dynamic partitioning mechanism that creates smaller data sets from the streaming data by partitioning it based on partition keys. Currently, dynamic partitioning is only supported for Amazon S3 destinations.
- `ElasticsearchBufferingHints`: Describes the buffering to perform before delivering data to the Amazon OpenSearch Service destination.
- `ElasticsearchDestinationConfiguration`: Describes the configuration of a destination in Amazon OpenSearch Service.
- `ElasticsearchDestinationDescription`: The destination description in Amazon OpenSearch Service.
- `ElasticsearchDestinationUpdate`: Describes an update for a destination in Amazon OpenSearch Service.
- `ElasticsearchRetryOptions`: Configures retry behavior in case Firehose is unable to deliver documents to Amazon OpenSearch Service.
- `EncryptionConfiguration`: Describes the encryption for a destination in Amazon S3.
- `ExtendedS3DestinationConfiguration`: Describes the configuration of a destination in Amazon S3.
- `ExtendedS3DestinationDescription`: Describes a destination in Amazon S3.
- `ExtendedS3DestinationUpdate`: Describes an update for a destination in Amazon S3.
- `FailureDescription`: Provides details in case one of the following operations fails due to an error related to KMS: `CreateDeliveryStream`, `DeleteDeliveryStream`, `StartDeliveryStreamEncryption`, `StopDeliveryStreamEncryption`.
- `HiveJsonSerDe`: The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
- `HttpEndpointBufferingHints`: Describes the buffering options that can be applied before data is delivered to the HTTP endpoint destination. Firehose treats these options as hints, and it might choose to use more optimal values. The `SizeInMBs` and `IntervalInSeconds` parameters are optional; however, if you specify a value for one of them, you must also provide a value for the other.
- `HttpEndpointCommonAttribute`: Describes the metadata that's delivered to the specified HTTP endpoint destination.
- `HttpEndpointConfiguration`: Describes the configuration of the HTTP endpoint to which Kinesis Firehose delivers data.
- `HttpEndpointDescription`: Describes the HTTP endpoint selected as the destination.
- `HttpEndpointDestinationConfiguration`: Describes the configuration of the HTTP endpoint destination.
- `HttpEndpointDestinationDescription`: Describes the HTTP endpoint destination.
- `HttpEndpointDestinationUpdate`: Updates the specified HTTP endpoint destination.
- `HttpEndpointRequestConfiguration`: The configuration of the HTTP endpoint request.
- `HttpEndpointRetryOptions`: Describes the retry behavior in case Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
- `IcebergDestinationConfiguration`: Specifies the destination configuration settings for Apache Iceberg Tables.
- `IcebergDestinationDescription`: Describes a destination in Apache Iceberg Tables.
- `IcebergDestinationUpdate`: Describes an update for a destination in Apache Iceberg Tables.
- `InputFormatConfiguration`: Specifies the deserializer you want to use to convert the format of the input data. This parameter is required if `Enabled` is set to true.
- `KinesisStreamSourceConfiguration`: The stream and role Amazon Resource Names (ARNs) for a Kinesis data stream used as the source for a Firehose stream.
- `KinesisStreamSourceDescription`: Details about a Kinesis data stream used as the source for a Firehose stream.
- `KmsEncryptionConfig`: Describes an encryption key for a destination in Amazon S3.
- `MskSourceConfiguration`: The configuration for the Amazon MSK cluster to be used as the source for a delivery stream.
- `MskSourceDescription`: Details about the Amazon MSK cluster used as the source for a Firehose stream.
- `OpenXJsonSerDe`: The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
- `OrcSerDe`: A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.
- `OutputFormatConfiguration`: Specifies the serializer that you want Firehose to use to convert the format of your data before it writes it to Amazon S3. This parameter is required if `Enabled` is set to true.
- `ParquetSerDe`: A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.
- `PartitionField`: Represents a single field in a `PartitionSpec`. Amazon Data Firehose is in preview release and is subject to change.
- `PartitionSpec`: Represents how to produce partition data for a table. Partition data is produced by transforming columns in a table. Each column transform is represented by a named `PartitionField`. Here is an example of the schema in JSON: `"partitionSpec": { "identity": [ {"sourceName": "column1"}, {"sourceName": "column2"}, {"sourceName": "column3"} ] }`. Amazon Data Firehose is in preview release and is subject to change.
- `ProcessingConfiguration`: Describes a data processing configuration.
- `Processor`: Describes a data processor. If you want to add a new line delimiter between records in objects that are delivered to Amazon S3, choose `AppendDelimiterToRecord` as a processor type. You don't have to put a processor parameter when you select `AppendDelimiterToRecord`.
- `ProcessorParameter`: Describes the processor parameter.
- `PutRecordBatchResponseEntry`: Contains the result for an individual record from a `PutRecordBatch` request. If the record is successfully added to your Firehose stream, it receives a record ID. If the record fails to be added to your Firehose stream, the result includes an error code and an error message.
- `Record`: The unit of data in a Firehose stream.
- `RedshiftDestinationConfiguration`: Describes the configuration of a destination in Amazon Redshift.
- `RedshiftDestinationDescription`: Describes a destination in Amazon Redshift.
- `RedshiftDestinationUpdate`: Describes an update for a destination in Amazon Redshift.
- `RedshiftRetryOptions`: Configures retry behavior in case Firehose is unable to deliver documents to Amazon Redshift.
- `RetryOptions`: The retry behavior in case Firehose is unable to deliver data to a destination.
- `S3DestinationConfiguration`: Describes the configuration of a destination in Amazon S3.
- `S3DestinationDescription`: Describes a destination in Amazon S3.
- `S3DestinationUpdate`: Describes an update for a destination in Amazon S3.
- `SchemaConfiguration`: Specifies the schema to which you want Firehose to configure your data before it writes it to Amazon S3. This parameter is required if `Enabled` is set to true.
- `SchemaEvolutionConfiguration`: The configuration to enable schema evolution. Amazon Data Firehose is in preview release and is subject to change.
- `SecretsManagerConfiguration`: The structure that defines how Firehose accesses the secret.
- `Serializer`: The serializer that you want Firehose to use to convert data to the target format before writing it to Amazon S3. Firehose supports two types of serializers: the ORC SerDe and the Parquet SerDe.
- `SnowflakeBufferingHints`: Describes the buffering to perform before delivering data to the Snowflake destination. If you do not specify any value, Firehose uses the default values.
- `SnowflakeDestinationConfiguration`: Configures the Snowflake destination.
- `SnowflakeDestinationDescription`: Optional Snowflake destination description.
- `SnowflakeDestinationUpdate`: Update to the Snowflake destination configuration settings.
- `SnowflakeRetryOptions`: Specify how long Firehose retries sending data to the Snowflake HTTP endpoint. After sending data, Firehose first waits for an acknowledgment from the endpoint. If an error occurs or the acknowledgment doesn't arrive within the acknowledgment timeout period, Firehose starts the retry duration counter and keeps retrying until the retry duration expires. After that, Firehose considers it a data delivery failure and backs up the data to your Amazon S3 bucket. Every time Firehose sends data to the endpoint (either the initial attempt or a retry), it restarts the acknowledgment timeout counter and waits for an acknowledgment. Even if the retry duration expires, Firehose still waits for the acknowledgment until it receives it or the acknowledgment timeout period is reached. If the acknowledgment times out, Firehose checks whether there's time left in the retry counter; if there is, it retries again and repeats this logic until it receives an acknowledgment or determines that the retry time has expired. If you don't want Firehose to retry sending data, set this value to 0.
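The retry/acknowledgment interplay described above can be sketched as a small simulation. This is a hypothetical model of the documented behavior, not SDK code; the function names and the one-acknowledgment-timeout-per-failed-attempt accounting are assumptions made for illustration:

```rust
// Hypothetical sketch of the documented retry logic: retry only while time
// remains in the retry duration counter.
fn should_retry(elapsed_secs: u64, retry_duration_secs: u64) -> bool {
    elapsed_secs < retry_duration_secs
}

/// Simulates delivery attempts. Each entry in `attempt_results` says whether
/// that send was acknowledged within the ack timeout. Returns true if the
/// data was delivered, false if it would be backed up to Amazon S3.
fn deliver(attempt_results: &[bool], retry_duration_secs: u64, ack_timeout_secs: u64) -> bool {
    let mut elapsed = 0; // time on the retry duration counter
    for &acked in attempt_results {
        if acked {
            return true; // acknowledgment received: delivery succeeded
        }
        elapsed += ack_timeout_secs; // timed out waiting for the acknowledgment
        if !should_retry(elapsed, retry_duration_secs) {
            return false; // retry time expired: counted as a delivery failure
        }
    }
    false
}

fn main() {
    // One failed attempt, then an acknowledged retry within the retry duration.
    assert!(deliver(&[false, true], 300, 60));
    // Retry duration of 0 means no retries at all after the first failure.
    assert!(!deliver(&[false, true], 0, 60));
}
```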
- `SnowflakeRoleConfiguration`: Optionally configure a Snowflake role. Otherwise, the default user role will be used.
- `SnowflakeVpcConfiguration`: Configure a Snowflake VPC.
- `SourceDescription`: Details about a Kinesis data stream used as the source for a Firehose stream.
- `SplunkBufferingHints`: The buffering options. If no value is specified, the default values for Splunk are used.
- `SplunkDestinationConfiguration`: Describes the configuration of a destination in Splunk.
- `SplunkDestinationDescription`: Describes a destination in Splunk.
- `SplunkDestinationUpdate`: Describes an update for a destination in Splunk.
- `SplunkRetryOptions`: Configures retry behavior in case Firehose is unable to deliver documents to Splunk, or if it doesn't receive an acknowledgment from Splunk.
- `TableCreationConfiguration`: The configuration to enable automatic table creation. Amazon Data Firehose is in preview release and is subject to change.
- `Tag`: Metadata that you can assign to a Firehose stream, consisting of a key-value pair.
- `VpcConfiguration`: The details of the VPC of the Amazon OpenSearch or Amazon OpenSearch Serverless destination.
- `VpcConfigurationDescription`: The details of the VPC of the Amazon OpenSearch Service destination.
## Enums
Each of the following enums carries the same guidance: when writing a match expression against it, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade to a future SDK version in which the enum does include a variant for that feature.

- `AmazonOpenSearchServerlessS3BackupMode`
- `AmazonopensearchserviceIndexRotationPeriod`
- `AmazonopensearchserviceS3BackupMode`
- `CompressionFormat`
- `Connectivity`
- `ContentEncoding`
- `DatabaseType`
- `DefaultDocumentIdFormat`
- `DeliveryStreamEncryptionStatus`
- `DeliveryStreamFailureType`
- `DeliveryStreamStatus`
- `DeliveryStreamType`
- `ElasticsearchIndexRotationPeriod`
- `ElasticsearchS3BackupMode`
- `HecEndpointType`
- `HttpEndpointS3BackupMode`
- `IcebergS3BackupMode`
- `KeyType`
- `NoEncryptionConfig`
- `OrcCompression`
- `OrcFormatVersion`
- `ParquetCompression`
- `ParquetWriterVersion`
- `ProcessorParameterName`
- `ProcessorType`
- `RedshiftS3BackupMode`
- `S3BackupMode`
- `SnapshotRequestedBy`
- `SnapshotStatus`
- `SnowflakeDataLoadingOption`
- `SnowflakeS3BackupMode`
- `SplunkS3BackupMode`
- `SslMode`
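The forward-compatibility guidance can be illustrated with a stand-in enum. The type and variants below are hypothetical (the real enums live in this module); the pattern is what matters: every match against a non-exhaustive enum keeps a wildcard arm so the code still compiles and behaves sensibly when a future SDK version adds variants.

```rust
// Illustrative stand-in for an SDK enum (not the real type). `#[non_exhaustive]`
// signals to downstream crates that future versions may add variants.
#[non_exhaustive]
#[derive(Debug)]
enum CompressionFormat {
    Gzip,
    Snappy,
    Uncompressed,
}

fn file_extension(format: &CompressionFormat) -> &'static str {
    match format {
        CompressionFormat::Gzip => ".gz",
        CompressionFormat::Snappy => ".snappy",
        // Forward-compatible wildcard: covers `Uncompressed` today and any
        // variant a future SDK version might add.
        _ => "",
    }
}

fn main() {
    assert_eq!(file_extension(&CompressionFormat::Gzip), ".gz");
    assert_eq!(file_extension(&CompressionFormat::Uncompressed), "");
}
```

Without the wildcard arm, a match that a downstream crate wrote over only the known variants would fail to compile against an SDK version that introduces a new variant.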