#[non_exhaustive]
pub struct CreateRecipeJobInput {
    pub dataset_name: Option<String>,
    pub encryption_key_arn: Option<String>,
    pub encryption_mode: Option<EncryptionMode>,
    pub name: Option<String>,
    pub log_subscription: Option<LogSubscription>,
    pub max_capacity: i32,
    pub max_retries: i32,
    pub outputs: Option<Vec<Output>>,
    pub data_catalog_outputs: Option<Vec<DataCatalogOutput>>,
    pub database_outputs: Option<Vec<DatabaseOutput>>,
    pub project_name: Option<String>,
    pub recipe_reference: Option<RecipeReference>,
    pub role_arn: Option<String>,
    pub tags: Option<HashMap<String, String>>,
    pub timeout: i32,
}

Fields (Non-exhaustive)

This struct is marked as non-exhaustive. Non-exhaustive structs could have additional fields added in the future. Therefore, non-exhaustive structs cannot be constructed in external crates using the traditional Struct { .. } syntax; they cannot be matched against without a wildcard ..; and struct update syntax will not work.
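As a quick illustration of what the wildcard requirement looks like in practice, here is a minimal local stand-in (illustrative only; the real CreateRecipeJobInput lives in the aws-sdk-databrew crate, where the attribute forces external crates to go through the builder):

```rust
// Minimal local stand-in for a non-exhaustive input struct. In the real SDK,
// #[non_exhaustive] only bites across crate boundaries: downstream crates must
// use the builder to construct the value and must match it with `..`.
#[non_exhaustive]
#[derive(Debug)]
pub struct JobInputSketch {
    pub name: Option<String>,
    pub timeout: i32,
}

// Pattern-matching a non-exhaustive struct requires the `..` rest pattern,
// which tolerates fields that may be added in future versions.
pub fn describe(input: &JobInputSketch) -> String {
    let JobInputSketch { name, timeout, .. } = input;
    format!("job {:?} times out after {} minutes", name, timeout)
}

fn main() {
    // Inside the defining crate, literal construction is still allowed.
    let input = JobInputSketch {
        name: Some("my-job".to_string()),
        timeout: 2880,
    };
    println!("{}", describe(&input));
}
```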
dataset_name: Option<String>

The name of the dataset that this job processes.

encryption_key_arn: Option<String>

The Amazon Resource Name (ARN) of an encryption key that is used to protect the job.

encryption_mode: Option<EncryptionMode>

The encryption mode for the job, which can be one of the following:

  • SSE-KMS - Server-side encryption with keys managed by KMS.

  • SSE-S3 - Server-side encryption with keys managed by Amazon S3.

name: Option<String>

A unique name for the job. Valid characters are alphanumeric (A-Z, a-z, 0-9), hyphen (-), period (.), and space.

log_subscription: Option<LogSubscription>

Enables or disables Amazon CloudWatch logging for the job. If logging is enabled, CloudWatch writes one log stream for each job run.

max_capacity: i32

The maximum number of nodes that DataBrew can consume when the job processes data.

max_retries: i32

The maximum number of times to retry the job after a job run fails.

outputs: Option<Vec<Output>>

One or more artifacts that represent the output from running the job.

data_catalog_outputs: Option<Vec<DataCatalogOutput>>

One or more artifacts that represent the Glue Data Catalog output from running the job.

database_outputs: Option<Vec<DatabaseOutput>>

Represents a list of JDBC database output objects that define the output destinations for a DataBrew recipe job to write to.

project_name: Option<String>

Either the name of an existing project, or a combination of a recipe and a dataset to associate with the recipe.

recipe_reference: Option<RecipeReference>

Represents the name and version of a DataBrew recipe.

role_arn: Option<String>

The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role to be assumed when DataBrew runs the job.

tags: Option<HashMap<String, String>>

Metadata tags to apply to this job.

timeout: i32

The job's timeout in minutes. A job that attempts to run longer than this timeout period ends with a status of TIMEOUT.

Implementations

  • Consumes the builder and constructs an Operation<CreateRecipeJob>.

  • Creates a new builder-style object to manufacture CreateRecipeJobInput.
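Since the struct cannot be constructed directly from outside the crate, the builder is the intended construction path. The following is a simplified, self-contained sketch of the builder shape; the names and the small field subset here are illustrative stand-ins, not the real aws-sdk-databrew API:

```rust
// Illustrative sketch of a fluent builder like the one the SDK generates.
#[non_exhaustive]
#[derive(Debug, Default)]
pub struct CreateRecipeJobInputSketch {
    pub name: Option<String>,
    pub role_arn: Option<String>,
    pub max_capacity: i32,
    pub timeout: i32,
}

#[derive(Default)]
pub struct Builder {
    name: Option<String>,
    role_arn: Option<String>,
    max_capacity: i32,
    timeout: i32,
}

impl Builder {
    // Fluent setters: each consumes the builder and returns it for chaining.
    pub fn name(mut self, v: impl Into<String>) -> Self { self.name = Some(v.into()); self }
    pub fn role_arn(mut self, v: impl Into<String>) -> Self { self.role_arn = Some(v.into()); self }
    pub fn max_capacity(mut self, v: i32) -> Self { self.max_capacity = v; self }
    pub fn timeout(mut self, v: i32) -> Self { self.timeout = v; self }

    // Consumes the builder and produces the (otherwise unconstructible) input.
    pub fn build(self) -> CreateRecipeJobInputSketch {
        CreateRecipeJobInputSketch {
            name: self.name,
            role_arn: self.role_arn,
            max_capacity: self.max_capacity,
            timeout: self.timeout,
        }
    }
}

impl CreateRecipeJobInputSketch {
    pub fn builder() -> Builder { Builder::default() }
}

fn main() {
    let input = CreateRecipeJobInputSketch::builder()
        .name("nightly-recipe-job")
        .role_arn("arn:aws:iam::123456789012:role/DataBrewRole")
        .max_capacity(5)
        .timeout(2880)
        .build();
    println!("{:?}", input);
}
```

The consuming-setter style mirrors the SDK convention: each setter takes `self` by value so calls chain without intermediate bindings.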


Trait Implementations

This struct implements the standard derived traits Clone (returns a copy of the value, with copy-assignment from a source), Debug (formats the value using the given formatter), and PartialEq (equality tests used by == and !=).
