
Data structures used by operation inputs/outputs.

Modules

Structs

An object representing a Batch array job.

An object representing the array properties of a job.

An object representing the array properties of a job.

An object representing the details of a container that's part of a job attempt.

An object representing a job attempt.

An object representing a Batch compute environment.

The order in which compute environments are tried for job placement within a queue. Compute environments are tried in ascending order. For example, if two compute environments are associated with a job queue, the compute environment with a lower order integer value is tried for job placement first. Compute environments must be in the VALID state before you can associate them with a job queue. All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT); EC2 and Fargate compute environments can't be mixed.
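
As a sketch of how this ordering is expressed with the builders in this module (assuming the pre-1.0 crate layout where these types live in aws_sdk_batch::model; the compute environment ARNs are placeholders and exact builder signatures can vary between SDK releases):

    use aws_sdk_batch::model::ComputeEnvironmentOrder;

    // Two environments attached to one job queue; the lower `order` value is tried first.
    fn queue_order() -> Vec<ComputeEnvironmentOrder> {
        vec![
            ComputeEnvironmentOrder::builder()
                .order(1) // tried first for job placement
                .compute_environment("arn:aws:batch:us-east-1:111122223333:compute-environment/primary")
                .build(),
            ComputeEnvironmentOrder::builder()
                .order(2) // used only when `primary` cannot place the job
                .compute_environment("arn:aws:batch:us-east-1:111122223333:compute-environment/overflow")
                .build(),
        ]
    }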

An object representing a Batch compute resource. For more information, see Compute Environments in the Batch User Guide.

An object representing the attributes of a compute environment that can be updated. For more information, see Compute Environments in the Batch User Guide.

An object representing the details of a container that's part of a job.

The overrides that should be sent to a container.

Container properties are used in job definitions to describe the container that's launched as part of a job.

An object representing summary details of a container within a job.

An object representing a container instance host device.

Provides information used to select Amazon Machine Images (AMIs) for instances in the compute environment. If Ec2Configuration isn't specified, the default is ECS_AL2 (Amazon Linux 2).
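
A minimal sketch of overriding that default, assuming the same builder pattern (ECS_AL2_NVIDIA is one of the image types the service documents for GPU workloads; builder details may differ by release):

    use aws_sdk_batch::model::Ec2Configuration;

    // Request the Amazon Linux 2 GPU-enabled AMI family instead of the ECS_AL2 default.
    fn gpu_image_config() -> Ec2Configuration {
        Ec2Configuration::builder()
            .image_type("ECS_AL2_NVIDIA")
            .build()
    }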

The authorization configuration details for the Amazon EFS file system.

This is used when you're using an Amazon Elastic File System (Amazon EFS) file system for job storage. For more information, see Amazon EFS Volumes in the Batch User Guide.

Specifies a set of conditions to be met, and an action to take (RETRY or EXIT) if all conditions are met.
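
For illustration, a hedged sketch of a retry strategy built from these conditions, assuming this module's EvaluateOnExit, RetryStrategy, and RetryAction types (the match patterns are placeholders; builder signatures may vary by release):

    use aws_sdk_batch::model::{EvaluateOnExit, RetryAction, RetryStrategy};

    fn retry_on_host_failure() -> RetryStrategy {
        RetryStrategy::builder()
            .attempts(3)
            // Retry when the attempt failed because the underlying host went away...
            .evaluate_on_exit(
                EvaluateOnExit::builder()
                    .on_status_reason("Host EC2*")
                    .action(RetryAction::Retry)
                    .build(),
            )
            // ...and stop retrying for any other failure reason.
            .evaluate_on_exit(
                EvaluateOnExit::builder()
                    .on_reason("*")
                    .action(RetryAction::Exit)
                    .build(),
            )
            .build()
    }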

The fair share policy for a scheduling policy.

The platform configuration for jobs that are running on Fargate resources. Jobs that run on EC2 resources must not specify this parameter.

Determines whether your data volume persists on the host container instance and where it's stored. If this parameter is empty, then the Docker daemon assigns a host path for your data volume, but the data isn't guaranteed to persist after the containers associated with it stop running.
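
Under the same builder assumptions, the difference between an ephemeral volume and one pinned to a host path looks roughly like this (the names and path are hypothetical):

    use aws_sdk_batch::model::{Host, Volume};

    // No `source_path`: the Docker daemon picks a path, and the data isn't
    // guaranteed to outlive the containers that use it.
    fn scratch_volume() -> Volume {
        Volume::builder()
            .name("scratch")
            .host(Host::builder().build())
            .build()
    }

    // With `source_path`: data persists at that location on the container instance.
    fn persistent_volume() -> Volume {
        Volume::builder()
            .name("cache")
            .host(Host::builder().source_path("/mnt/cache").build())
            .build()
    }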

An object representing a Batch job definition.

An object representing a Batch job dependency.

An object representing a Batch job.

An object representing the details of a Batch job queue.

An object representing summary details of a job.

An object representing a job timeout configuration.

A key-value pair object.

A filter name and value pair that's used to return a more specific list of results from a ListJobs API operation.

An object representing a launch template associated with a compute resource. You must specify either the launch template ID or launch template name in the request, but not both.
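
A minimal sketch of the either/or rule, again assuming this module's builders (the template ID and version are placeholders):

    use aws_sdk_batch::model::LaunchTemplateSpecification;

    // Identify the template by ID *or* by name, never both.
    fn launch_template_by_id() -> LaunchTemplateSpecification {
        LaunchTemplateSpecification::builder()
            .launch_template_id("lt-0123456789abcdef0")
            .version("$Latest")
            .build()
    }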

Linux-specific modifications that are applied to the container, such as details for device mappings.

Log configuration options to send to a custom log driver for the container.

Details on a Docker volume mount point that's used in a job's container properties. This parameter maps to Volumes in the Create a container section of the Docker Remote API and the --volume option to docker run.

The network configuration for jobs that are running on Fargate resources. Jobs that are running on EC2 resources must not specify this parameter.

An object representing the elastic network interface for a multi-node parallel job node.

An object representing the details of a multi-node parallel job node.

An object representing any node overrides to a job definition that's used in a SubmitJob API operation.

An object representing the node properties of a multi-node parallel job.

An object representing the properties of a node that's associated with a multi-node parallel job.

An object representing any node property overrides to a job definition that's used in a SubmitJob API operation.

An object representing the properties of the node range for a multi-node parallel job.

The type and amount of a resource to assign to a container. The supported resources include GPU, MEMORY, and VCPU.
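
A hedged sketch of requesting vCPU and memory for a container, assuming this module's ResourceRequirement and ResourceType names (values are passed as strings, and the `type` setter is assumed to be the raw identifier `r#type` in the generated Rust):

    use aws_sdk_batch::model::{ResourceRequirement, ResourceType};

    // 2 vCPUs and 4 GiB of memory; a GPU request would use ResourceType::Gpu.
    fn container_resources() -> Vec<ResourceRequirement> {
        vec![
            ResourceRequirement::builder()
                .r#type(ResourceType::Vcpu)
                .value("2")
                .build(),
            ResourceRequirement::builder()
                .r#type(ResourceType::Memory)
                .value("4096") // MiB
                .build(),
        ]
    }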

The retry strategy associated with a job. For more information, see Automated job retries in the Batch User Guide.

An object that represents a scheduling policy.

An object that contains the details of a scheduling policy that's returned in a ListSchedulingPolicies action.

An object representing the secret to expose to your container. Secrets can be exposed to a container either as environment variables (the secrets container definition parameter) or in the log configuration of a container (the secretOptions parameter).
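
As an illustrative sketch (the Secrets Manager ARN is a placeholder; builder details may vary by release), exposing a secret as an environment variable looks roughly like:

    use aws_sdk_batch::model::Secret;

    fn db_password_secret() -> Secret {
        Secret::builder()
            .name("DB_PASSWORD") // environment variable name inside the container
            .value_from("arn:aws:secretsmanager:us-east-1:111122223333:secret:db-password-AbCdEf")
            .build()
    }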

Specifies the weights for the fair share identifiers for the fair share policy. Fair share identifiers that aren't included have a default weight of 1.0.
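
A sketch of a fair share policy that lowers the weight of one identifier prefix while all other identifiers keep the default weight of 1.0, assuming this module's FairsharePolicy and ShareAttributes builders (the identifier and numbers are hypothetical):

    use aws_sdk_batch::model::{FairsharePolicy, ShareAttributes};

    fn fair_share_policy() -> FairsharePolicy {
        FairsharePolicy::builder()
            .share_decay_seconds(3600)
            .compute_reservation(10)
            .share_distribution(
                ShareAttributes::builder()
                    .share_identifier("batchjobs*") // matches share identifiers with this prefix
                    .weight_factor(0.5)             // a lower weight gets a larger share of resources
                    .build(),
            )
            .build()
    }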

The container path, mount options, and size of the tmpfs mount.

The ulimit settings to pass to the container.

A data volume used in a job's container properties.

Enums