Struct aws_sdk_batch::Client
pub struct Client { /* private fields */ }
Client for AWS Batch
Client for invoking operations on AWS Batch. Each operation on AWS Batch is a method on this struct. `.send()` MUST be invoked on the generated operations to dispatch the request to the service.
Constructing a Client
A Config is required to construct a client. For most use cases, the aws-config crate should be used to automatically resolve this config using `aws_config::load_from_env()`, since this will resolve an SdkConfig which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling `aws_config::from_env()` instead, which returns a ConfigLoader that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
let config = aws_config::load_from_env().await;
let client = aws_sdk_batch::Client::new(&config);
Occasionally, SDKs may have additional service-specific settings that can be set on the Config but that are absent from SdkConfig, or slightly different settings for a specific client may be desired. The Config struct implements `From<&SdkConfig>`, so setting these service-specific settings can be done as follows:
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_batch::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
See the aws-config docs and Config for more information on customizing configuration.
Note: Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
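Because construction is expensive, a common pattern is to build the client once and share it. The sketch below assumes the generated `Client` is a cheap handle (internally reference-counted, as with other AWS SDK for Rust clients), so cloning it per task is inexpensive; the operation called and the task count are illustrative only:

```rust
// Sketch: build the client once at startup, then hand a cheap clone to
// each concurrent task instead of reconstructing it.
// Assumes the `aws-config` and `aws-sdk-batch` crates and a Tokio runtime.
#[tokio::main]
async fn main() {
    let config = aws_config::load_from_env().await;
    let client = aws_sdk_batch::Client::new(&config);

    let handles: Vec<_> = (0..4)
        .map(|_| {
            // Cloning the client only bumps internal reference counts; the
            // connection pool is shared across all clones.
            let client = client.clone();
            tokio::spawn(async move {
                let _ = client.describe_job_queues().send().await;
            })
        })
        .collect();
    for h in handles {
        let _ = h.await;
    }
}
```

This is the usual alternative to wrapping the client in an `Arc` yourself, since the clone already shares the underlying connection pool.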
Using the Client
A client has a function for every operation that can be performed by the service. For example, the CancelJob operation has a `Client::cancel_job` function, which returns a builder for that operation. The fluent builder ultimately has a `send()` function that returns an async future that returns a result, as illustrated below:
let result = client.cancel_job()
.job_id("example")
.send()
.await;
The underlying HTTP requests that get made by this can be modified with the `customize_operation` function on the fluent builder. See the customize module for more information.
Implementations
impl Client
pub fn cancel_job(&self) -> CancelJobFluentBuilder
Constructs a fluent builder for the CancelJob operation.
- The fluent builder is configurable:
  - `job_id(impl ::std::convert::Into<String>)` / `set_job_id(Option<String>)`: The Batch job ID of the job to cancel.
  - `reason(impl ::std::convert::Into<String>)` / `set_reason(Option<String>)`: A message to attach to the job that explains the reason for canceling it. This message is returned by future DescribeJobs operations on the job. This message is also recorded in the Batch activity logs.
- On success, responds with CancelJobOutput
- On failure, responds with SdkError<CancelJobError>
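As a concrete sketch of the success/failure shape described above, the following cancels a job and matches on the `SdkError`. The job ID and reason are placeholders, and `client` is assumed to be an already-constructed `aws_sdk_batch::Client`:

```rust
// Sketch: invoking CancelJob and handling SdkError<CancelJobError>.
// Assumes a Tokio runtime; the job ID below is a placeholder.
async fn cancel_example(client: &aws_sdk_batch::Client) {
    match client
        .cancel_job()
        .job_id("01234567-89ab-cdef-0123-456789abcdef")
        .reason("superseded by a newer submission")
        .send()
        .await
    {
        Ok(_output) => println!("cancel request accepted"),
        // SdkError covers both service errors and dispatch/transport failures.
        Err(err) => eprintln!("CancelJob failed: {err}"),
    }
}
```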
impl Client
pub fn create_compute_environment(&self) -> CreateComputeEnvironmentFluentBuilder
Constructs a fluent builder for the CreateComputeEnvironment operation.
- The fluent builder is configurable:
  - `compute_environment_name(impl ::std::convert::Into<String>)` / `set_compute_environment_name(Option<String>)`: The name for your compute environment. It can be up to 128 characters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
  - `r#type(CeType)` / `set_type(Option<CeType>)`: The type of the compute environment: `MANAGED` or `UNMANAGED`. For more information, see Compute Environments in the Batch User Guide.
  - `state(CeState)` / `set_state(Option<CeState>)`: The state of the compute environment. If the state is `ENABLED`, then the compute environment accepts jobs from a queue and can scale out automatically based on queues. If the state is `ENABLED`, then the Batch scheduler can attempt to place jobs from an associated job queue on the compute resources within the environment. If the compute environment is managed, then it can scale its instances out or in automatically, based on the job queue demand. If the state is `DISABLED`, then the Batch scheduler doesn't attempt to place jobs within the environment. Jobs in a `STARTING` or `RUNNING` state continue to progress normally. Managed compute environments in the `DISABLED` state don't scale out. Compute environments in a `DISABLED` state may continue to incur billing charges. To prevent additional charges, turn off and then delete the compute environment. For more information, see State in the Batch User Guide. When an instance is idle, the instance scales down to the `minvCpus` value. However, the instance size doesn't change. For example, consider a `c5.8xlarge` instance with a `minvCpus` value of `4` and a `desiredvCpus` value of `36`. This instance doesn't scale down to a `c5.large` instance.
  - `unmanagedv_cpus(i32)` / `set_unmanagedv_cpus(Option<i32>)`: The maximum number of vCPUs for an unmanaged compute environment. This parameter is only used for fair share scheduling to reserve vCPU capacity for new share identifiers. If this parameter isn't provided for a fair share job queue, no vCPU capacity is reserved. This parameter is only supported when the `type` parameter is set to `UNMANAGED`.
  - `compute_resources(ComputeResource)` / `set_compute_resources(Option<ComputeResource>)`: Details about the compute resources managed by the compute environment. This parameter is required for managed compute environments. For more information, see Compute Environments in the Batch User Guide.
  - `service_role(impl ::std::convert::Into<String>)` / `set_service_role(Option<String>)`: The full Amazon Resource Name (ARN) of the IAM role that allows Batch to make calls to other Amazon Web Services services on your behalf. For more information, see Batch service IAM role in the Batch User Guide. If your account already created the Batch service-linked role, that role is used by default for your compute environment unless you specify a different role here. If the Batch service-linked role doesn't exist in your account, and no role is specified here, the service attempts to create the Batch service-linked role in your account. If your specified role has a path other than `/`, then you must specify either the full role ARN (recommended) or prefix the role name with the path. For example, if a role with the name `bar` has a path of `/foo/`, specify `/foo/bar` as the role name. For more information, see Friendly names and paths in the IAM User Guide. Depending on how you created your Batch service role, its ARN might contain the `service-role` path prefix. When you only specify the name of the service role, Batch assumes that your ARN doesn't use the `service-role` path prefix. Because of this, we recommend that you specify the full ARN of your service role when you create compute environments.
  - `tags(HashMap<String, String>)` / `set_tags(Option<HashMap<String, String>>)`: The tags that you apply to the compute environment to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging Amazon Web Services Resources in Amazon Web Services General Reference. These tags can be updated or removed using the TagResource and UntagResource API operations. These tags don't propagate to the underlying compute resources.
  - `eks_configuration(EksConfiguration)` / `set_eks_configuration(Option<EksConfiguration>)`: The details for the Amazon EKS cluster that supports the compute environment.
- On success, responds with CreateComputeEnvironmentOutput with field(s):
  - `compute_environment_name(Option<String>)`: The name of the compute environment. It can be up to 128 characters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
  - `compute_environment_arn(Option<String>)`: The Amazon Resource Name (ARN) of the compute environment.
- On failure, responds with SdkError<CreateComputeEnvironmentError>
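A minimal sketch of wiring these parameters together for a managed Fargate environment follows. The subnet and security group IDs are placeholders, managed EC2 environments need more `ComputeResource` fields (instance types, allocation strategy, and so on), and in some SDK versions `ComputeResource::builder().build()` returns a `Result` rather than the value:

```rust
use aws_sdk_batch::types::{CeType, ComputeResource, CrType};

// Sketch: CreateComputeEnvironment for a managed Fargate environment.
// Assumes an already-constructed `client`; identifiers are placeholders.
async fn create_ce(client: &aws_sdk_batch::Client) -> Result<(), aws_sdk_batch::Error> {
    let resources = ComputeResource::builder()
        .r#type(CrType::Fargate)
        .max_vcpus(16)
        .subnets("subnet-aaaabbbb")
        .security_group_ids("sg-11112222")
        .build();
    let out = client
        .create_compute_environment()
        .compute_environment_name("example-fargate-ce")
        .r#type(CeType::Managed) // MANAGED: Batch provisions the capacity
        .compute_resources(resources)
        .send()
        .await?;
    println!("created: {:?}", out.compute_environment_arn());
    Ok(())
}
```

Note the raw identifier `r#type`: `type` is a Rust keyword, which is why the builder method is escaped.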
impl Client
pub fn create_job_queue(&self) -> CreateJobQueueFluentBuilder
Constructs a fluent builder for the CreateJobQueue operation.
- The fluent builder is configurable:
  - `job_queue_name(impl ::std::convert::Into<String>)` / `set_job_queue_name(Option<String>)`: The name of the job queue. It can be up to 128 letters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
  - `state(JqState)` / `set_state(Option<JqState>)`: The state of the job queue. If the job queue state is `ENABLED`, it is able to accept jobs. If the job queue state is `DISABLED`, new jobs can't be added to the queue, but jobs already in the queue can finish.
  - `scheduling_policy_arn(impl ::std::convert::Into<String>)` / `set_scheduling_policy_arn(Option<String>)`: The Amazon Resource Name (ARN) of the fair share scheduling policy. If this parameter is specified, the job queue uses a fair share scheduling policy. If this parameter isn't specified, the job queue uses a first in, first out (FIFO) scheduling policy. After a job queue is created, you can replace but can't remove the fair share scheduling policy. The format is `aws:Partition:batch:Region:Account:scheduling-policy/Name`. An example is `aws:aws:batch:us-west-2:123456789012:scheduling-policy/MySchedulingPolicy`.
  - `priority(i32)` / `set_priority(Option<i32>)`: The priority of the job queue. Job queues with a higher priority (or a higher integer value for the `priority` parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order. For example, a job queue with a priority value of `10` is given scheduling preference over a job queue with a priority value of `1`. All of the compute environments must be either EC2 (`EC2` or `SPOT`) or Fargate (`FARGATE` or `FARGATE_SPOT`); EC2 and Fargate compute environments can't be mixed.
  - `compute_environment_order(Vec<ComputeEnvironmentOrder>)` / `set_compute_environment_order(Option<Vec<ComputeEnvironmentOrder>>)`: The set of compute environments mapped to a job queue and their order relative to each other. The job scheduler uses this parameter to determine which compute environment runs a specific job. Compute environments must be in the `VALID` state before you can associate them with a job queue. You can associate up to three compute environments with a job queue. All of the compute environments must be either EC2 (`EC2` or `SPOT`) or Fargate (`FARGATE` or `FARGATE_SPOT`); EC2 and Fargate compute environments can't be mixed. All compute environments that are associated with a job queue must share the same architecture. Batch doesn't support mixing compute environment architecture types in a single job queue.
  - `tags(HashMap<String, String>)` / `set_tags(Option<HashMap<String, String>>)`: The tags that you apply to the job queue to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging your Batch resources in the Batch User Guide.
- On success, responds with CreateJobQueueOutput with field(s):
  - `job_queue_name(Option<String>)`: The name of the job queue.
  - `job_queue_arn(Option<String>)`: The Amazon Resource Name (ARN) of the job queue.
- On failure, responds with SdkError<CreateJobQueueError>
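The `compute_environment_order` parameter above is easiest to see in code. This sketch attaches one compute environment to a new queue; the compute environment ARN is a placeholder and must reference an environment already in the `VALID` state:

```rust
use aws_sdk_batch::types::{ComputeEnvironmentOrder, JqState};

// Sketch: CreateJobQueue with a single ordered compute environment.
// Assumes an already-constructed `client`; names and ARNs are placeholders.
async fn create_queue(client: &aws_sdk_batch::Client) -> Result<(), aws_sdk_batch::Error> {
    let order = ComputeEnvironmentOrder::builder()
        .order(1) // lower order values are tried first by the scheduler
        .compute_environment(
            "arn:aws:batch:us-west-2:123456789012:compute-environment/example-ce",
        )
        .build();
    let out = client
        .create_job_queue()
        .job_queue_name("example-queue")
        .state(JqState::Enabled)
        .priority(10)
        .compute_environment_order(order) // repeat the call for up to three
        .send()
        .await?;
    println!("queue ARN: {:?}", out.job_queue_arn());
    Ok(())
}
```

The fluent `compute_environment_order` setter appends to the `Vec`, so calling it once per environment builds the ordered list.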
impl Client
pub fn create_scheduling_policy(&self) -> CreateSchedulingPolicyFluentBuilder
Constructs a fluent builder for the CreateSchedulingPolicy operation.
- The fluent builder is configurable:
  - `name(impl ::std::convert::Into<String>)` / `set_name(Option<String>)`: The name of the scheduling policy. It can be up to 128 letters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
  - `fairshare_policy(FairsharePolicy)` / `set_fairshare_policy(Option<FairsharePolicy>)`: The fair share policy of the scheduling policy.
  - `tags(HashMap<String, String>)` / `set_tags(Option<HashMap<String, String>>)`: The tags that you apply to the scheduling policy to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging Amazon Web Services Resources in Amazon Web Services General Reference. These tags can be updated or removed using the TagResource and UntagResource API operations.
- On success, responds with CreateSchedulingPolicyOutput with field(s):
  - `name(Option<String>)`: The name of the scheduling policy.
  - `arn(Option<String>)`: The Amazon Resource Name (ARN) of the scheduling policy. The format is `aws:Partition:batch:Region:Account:scheduling-policy/Name`. For example, `aws:aws:batch:us-west-2:123456789012:scheduling-policy/MySchedulingPolicy`.
- On failure, responds with SdkError<CreateSchedulingPolicyError>
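A `FairsharePolicy` is built from nested `ShareAttributes`. The sketch below creates a policy with two weighted share identifiers; the identifiers, weights, and decay window are illustrative, and `client` is assumed to be already constructed:

```rust
use aws_sdk_batch::types::{FairsharePolicy, ShareAttributes};

// Sketch: CreateSchedulingPolicy with a fair share policy containing two
// share identifiers. Field names follow the FairsharePolicy model.
async fn create_policy(client: &aws_sdk_batch::Client) -> Result<(), aws_sdk_batch::Error> {
    let policy = FairsharePolicy::builder()
        .share_decay_seconds(3600) // consider the last hour of usage
        .share_distribution(
            ShareAttributes::builder()
                .share_identifier("high")
                .weight_factor(0.5) // lower weight factor => larger share
                .build(),
        )
        .share_distribution(
            ShareAttributes::builder()
                .share_identifier("low")
                .weight_factor(2.0)
                .build(),
        )
        .build();
    let out = client
        .create_scheduling_policy()
        .name("example-policy")
        .fairshare_policy(policy)
        .send()
        .await?;
    println!("policy ARN: {:?}", out.arn());
    Ok(())
}
```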
impl Client
pub fn delete_compute_environment(&self) -> DeleteComputeEnvironmentFluentBuilder
Constructs a fluent builder for the DeleteComputeEnvironment operation.
- The fluent builder is configurable:
  - `compute_environment(impl ::std::convert::Into<String>)` / `set_compute_environment(Option<String>)`: The name or Amazon Resource Name (ARN) of the compute environment to delete.
- On success, responds with DeleteComputeEnvironmentOutput
- On failure, responds with SdkError<DeleteComputeEnvironmentError>
impl Client
pub fn delete_job_queue(&self) -> DeleteJobQueueFluentBuilder
Constructs a fluent builder for the DeleteJobQueue operation.
- The fluent builder is configurable:
  - `job_queue(impl ::std::convert::Into<String>)` / `set_job_queue(Option<String>)`: The short name or full Amazon Resource Name (ARN) of the queue to delete.
- On success, responds with DeleteJobQueueOutput
- On failure, responds with SdkError<DeleteJobQueueError>
impl Client
pub fn delete_scheduling_policy(&self) -> DeleteSchedulingPolicyFluentBuilder
Constructs a fluent builder for the DeleteSchedulingPolicy operation.
- The fluent builder is configurable:
  - `arn(impl ::std::convert::Into<String>)` / `set_arn(Option<String>)`: The Amazon Resource Name (ARN) of the scheduling policy to delete.
- On success, responds with DeleteSchedulingPolicyOutput
- On failure, responds with SdkError<DeleteSchedulingPolicyError>
impl Client
pub fn deregister_job_definition(&self) -> DeregisterJobDefinitionFluentBuilder
Constructs a fluent builder for the DeregisterJobDefinition operation.
- The fluent builder is configurable:
  - `job_definition(impl ::std::convert::Into<String>)` / `set_job_definition(Option<String>)`: The name and revision (`name:revision`) or full Amazon Resource Name (ARN) of the job definition to deregister.
- On success, responds with DeregisterJobDefinitionOutput
- On failure, responds with SdkError<DeregisterJobDefinitionError>
impl Client
pub fn describe_compute_environments(&self) -> DescribeComputeEnvironmentsFluentBuilder
Constructs a fluent builder for the DescribeComputeEnvironments operation.
This operation supports pagination; see `into_paginator()`.
- The fluent builder is configurable:
  - `compute_environments(Vec<String>)` / `set_compute_environments(Option<Vec<String>>)`: A list of up to 100 compute environment names or full Amazon Resource Name (ARN) entries.
  - `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of cluster results returned by DescribeComputeEnvironments in paginated output. When this parameter is used, DescribeComputeEnvironments only returns `maxResults` results in a single page along with a `nextToken` response element. The remaining results of the initial request can be seen by sending another DescribeComputeEnvironments request with the returned `nextToken` value. This value can be between 1 and 100. If this parameter isn't used, then DescribeComputeEnvironments returns up to 100 results and a `nextToken` value if applicable.
  - `next_token(impl ::std::convert::Into<String>)` / `set_next_token(Option<String>)`: The `nextToken` value returned from a previous paginated DescribeComputeEnvironments request where `maxResults` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value. This value is `null` when there are no more results to return. Treat this token as an opaque identifier that's only used to retrieve the next items in a list and not for other programmatic purposes.
- On success, responds with DescribeComputeEnvironmentsOutput with field(s):
  - `compute_environments(Option<Vec<ComputeEnvironmentDetail>>)`: The list of compute environments.
  - `next_token(Option<String>)`: The `nextToken` value to include in a future DescribeComputeEnvironments request. When the results of a DescribeComputeEnvironments request exceed `maxResults`, this value can be used to retrieve the next page of results. This value is `null` when there are no more results to return.
- On failure, responds with SdkError<DescribeComputeEnvironmentsError>
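The paginator handles the `maxResults`/`nextToken` loop described above for you. A sketch, assuming an already-constructed `client` (depending on the SDK version, `try_next` may require importing `tokio_stream::StreamExt`):

```rust
// Sketch: paging through DescribeComputeEnvironments via the generated
// paginator. `into_paginator().send()` yields whole pages; chaining
// `.items()` before `.send()` would yield individual
// ComputeEnvironmentDetail values instead.
async fn list_all_ces(client: &aws_sdk_batch::Client) -> Result<(), aws_sdk_batch::Error> {
    let mut pages = client
        .describe_compute_environments()
        .max_results(50)
        .into_paginator()
        .send();
    // try_next() drives pagination, re-sending with nextToken as needed.
    while let Some(page) = pages.try_next().await? {
        for ce in page.compute_environments().unwrap_or_default() {
            println!("{:?}", ce.compute_environment_name());
        }
    }
    Ok(())
}
```

Using the paginator also means you never touch the `nextToken` string directly, which matches the guidance to treat it as opaque.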
impl Client
pub fn describe_job_definitions(&self) -> DescribeJobDefinitionsFluentBuilder
Constructs a fluent builder for the DescribeJobDefinitions operation.
This operation supports pagination; see `into_paginator()`.
- The fluent builder is configurable:
  - `job_definitions(Vec<String>)` / `set_job_definitions(Option<Vec<String>>)`: A list of up to 100 job definitions. Each entry in the list can either be an ARN in the format `arn:aws:batch:${Region}:${Account}:job-definition/${JobDefinitionName}:${Revision}` or a short version using the form `${JobDefinitionName}:${Revision}`.
  - `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results returned by DescribeJobDefinitions in paginated output. When this parameter is used, DescribeJobDefinitions only returns `maxResults` results in a single page and a `nextToken` response element. The remaining results of the initial request can be seen by sending another DescribeJobDefinitions request with the returned `nextToken` value. This value can be between 1 and 100. If this parameter isn't used, then DescribeJobDefinitions returns up to 100 results and a `nextToken` value if applicable.
  - `job_definition_name(impl ::std::convert::Into<String>)` / `set_job_definition_name(Option<String>)`: The name of the job definition to describe.
  - `status(impl ::std::convert::Into<String>)` / `set_status(Option<String>)`: The status used to filter job definitions.
  - `next_token(impl ::std::convert::Into<String>)` / `set_next_token(Option<String>)`: The `nextToken` value returned from a previous paginated DescribeJobDefinitions request where `maxResults` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value. This value is `null` when there are no more results to return. Treat this token as an opaque identifier that's only used to retrieve the next items in a list and not for other programmatic purposes.
- On success, responds with DescribeJobDefinitionsOutput with field(s):
  - `job_definitions(Option<Vec<JobDefinition>>)`: The list of job definitions.
  - `next_token(Option<String>)`: The `nextToken` value to include in a future DescribeJobDefinitions request. When the results of a DescribeJobDefinitions request exceed `maxResults`, this value can be used to retrieve the next page of results. This value is `null` when there are no more results to return.
- On failure, responds with SdkError<DescribeJobDefinitionsError>
impl Client
pub fn describe_job_queues(&self) -> DescribeJobQueuesFluentBuilder
Constructs a fluent builder for the DescribeJobQueues operation.
This operation supports pagination; see `into_paginator()`.
- The fluent builder is configurable:
  - `job_queues(Vec<String>)` / `set_job_queues(Option<Vec<String>>)`: A list of up to 100 queue names or full queue Amazon Resource Name (ARN) entries.
  - `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results returned by DescribeJobQueues in paginated output. When this parameter is used, DescribeJobQueues only returns `maxResults` results in a single page and a `nextToken` response element. The remaining results of the initial request can be seen by sending another DescribeJobQueues request with the returned `nextToken` value. This value can be between 1 and 100. If this parameter isn't used, then DescribeJobQueues returns up to 100 results and a `nextToken` value if applicable.
  - `next_token(impl ::std::convert::Into<String>)` / `set_next_token(Option<String>)`: The `nextToken` value returned from a previous paginated DescribeJobQueues request where `maxResults` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value. This value is `null` when there are no more results to return. Treat this token as an opaque identifier that's only used to retrieve the next items in a list and not for other programmatic purposes.
- On success, responds with DescribeJobQueuesOutput with field(s):
  - `job_queues(Option<Vec<JobQueueDetail>>)`: The list of job queues.
  - `next_token(Option<String>)`: The `nextToken` value to include in a future DescribeJobQueues request. When the results of a DescribeJobQueues request exceed `maxResults`, this value can be used to retrieve the next page of results. This value is `null` when there are no more results to return.
- On failure, responds with SdkError<DescribeJobQueuesError>
impl Client
pub fn describe_jobs(&self) -> DescribeJobsFluentBuilder
Constructs a fluent builder for the DescribeJobs operation.
- The fluent builder is configurable:
  - `jobs(Vec<String>)` / `set_jobs(Option<Vec<String>>)`: A list of up to 100 job IDs.
- On success, responds with DescribeJobsOutput with field(s):
  - `jobs(Option<Vec<JobDetail>>)`: The list of jobs.
- On failure, responds with SdkError<DescribeJobsError>
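A quick sketch of reading a job's status from the `JobDetail` values returned above. The job ID is a placeholder and `client` is assumed to be already constructed:

```rust
// Sketch: DescribeJobs for specific job IDs, printing name and status.
// The fluent `jobs` setter appends one ID per call (up to 100 total).
async fn check_jobs(client: &aws_sdk_batch::Client) -> Result<(), aws_sdk_batch::Error> {
    let out = client
        .describe_jobs()
        .jobs("01234567-89ab-cdef-0123-456789abcdef")
        .send()
        .await?;
    for job in out.jobs().unwrap_or_default() {
        println!("{:?} -> {:?}", job.job_name(), job.status());
    }
    Ok(())
}
```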
impl Client
pub fn describe_scheduling_policies(&self) -> DescribeSchedulingPoliciesFluentBuilder
Constructs a fluent builder for the DescribeSchedulingPolicies operation.
- The fluent builder is configurable:
  - `arns(Vec<String>)` / `set_arns(Option<Vec<String>>)`: A list of up to 100 scheduling policy Amazon Resource Name (ARN) entries.
- On success, responds with DescribeSchedulingPoliciesOutput with field(s):
  - `scheduling_policies(Option<Vec<SchedulingPolicyDetail>>)`: The list of scheduling policies.
- On failure, responds with SdkError<DescribeSchedulingPoliciesError>
impl Client
pub fn list_jobs(&self) -> ListJobsFluentBuilder
Constructs a fluent builder for the ListJobs operation.
This operation supports pagination; see `into_paginator()`.
- The fluent builder is configurable:
  - `job_queue(impl ::std::convert::Into<String>)` / `set_job_queue(Option<String>)`: The name or full Amazon Resource Name (ARN) of the job queue used to list jobs.
  - `array_job_id(impl ::std::convert::Into<String>)` / `set_array_job_id(Option<String>)`: The job ID for an array job. Specifying an array job ID with this parameter lists all child jobs from within the specified array.
  - `multi_node_job_id(impl ::std::convert::Into<String>)` / `set_multi_node_job_id(Option<String>)`: The job ID for a multi-node parallel job. Specifying a multi-node parallel job ID with this parameter lists all nodes that are associated with the specified job.
  - `job_status(JobStatus)` / `set_job_status(Option<JobStatus>)`: The job status used to filter jobs in the specified queue. If the `filters` parameter is specified, the `jobStatus` parameter is ignored and jobs with any status are returned. If you don't specify a status, only `RUNNING` jobs are returned.
  - `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results returned by ListJobs in paginated output. When this parameter is used, ListJobs only returns `maxResults` results in a single page and a `nextToken` response element. The remaining results of the initial request can be seen by sending another ListJobs request with the returned `nextToken` value. This value can be between 1 and 100. If this parameter isn't used, then ListJobs returns up to 100 results and a `nextToken` value if applicable.
  - `next_token(impl ::std::convert::Into<String>)` / `set_next_token(Option<String>)`: The `nextToken` value returned from a previous paginated ListJobs request where `maxResults` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value. This value is `null` when there are no more results to return. Treat this token as an opaque identifier that's only used to retrieve the next items in a list and not for other programmatic purposes.
  - `filters(Vec<KeyValuesPair>)` / `set_filters(Option<Vec<KeyValuesPair>>)`: The filter to apply to the query. Only one filter can be used at a time. When the filter is used, `jobStatus` is ignored. The filter doesn't apply to child jobs in an array or multi-node parallel (MNP) jobs. The results are sorted by the `createdAt` field, with the most recent jobs being first.
    - JOB_NAME: The value of the filter is a case-insensitive match for the job name. If the value ends with an asterisk (*), the filter matches any job name that begins with the string before the '*'. This corresponds to the `jobName` value. For example, `test1` matches both `Test1` and `test1`, and `test1*` matches both `test1` and `Test10`. When the JOB_NAME filter is used, the results are grouped by the job name and version.
    - JOB_DEFINITION: The value for the filter is the name or Amazon Resource Name (ARN) of the job definition. This corresponds to the `jobDefinition` value. The value is case sensitive. When the value for the filter is the job definition name, the results include all the jobs that used any revision of that job definition name. If the value ends with an asterisk (*), the filter matches any job definition name that begins with the string before the '*'. For example, `jd1` matches only `jd1`, and `jd1*` matches both `jd1` and `jd1A`. The version of the job definition that's used doesn't affect the sort order. When the JOB_DEFINITION filter is used and the ARN is used (which is in the form `arn:${Partition}:batch:${Region}:${Account}:job-definition/${JobDefinitionName}:${Revision}`), the results include jobs that used the specified revision of the job definition. Asterisk (*) isn't supported when the ARN is used.
    - BEFORE_CREATED_AT: The value for the filter is the time that's before the job was created. This corresponds to the `createdAt` value. The value is a string representation of the number of milliseconds since 00:00:00 UTC (midnight) on January 1, 1970.
    - AFTER_CREATED_AT: The value for the filter is the time that's after the job was created. This corresponds to the `createdAt` value. The value is a string representation of the number of milliseconds since 00:00:00 UTC (midnight) on January 1, 1970.
- On success, responds with ListJobsOutput with field(s):
  - `job_summary_list(Option<Vec<JobSummary>>)`: A list of job summaries that match the request.
  - `next_token(Option<String>)`: The `nextToken` value to include in a future ListJobs request. When the results of a ListJobs request exceed `maxResults`, this value can be used to retrieve the next page of results. This value is `null` when there are no more results to return.
- On failure, responds with SdkError<ListJobsError>
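The interaction between `job_status` and `filters` above is worth seeing concretely: the first call below filters by status, while the second uses a JOB_NAME filter, which causes `jobStatus` to be ignored. The queue name and name prefix are placeholders, and `client` is assumed to be already constructed:

```rust
use aws_sdk_batch::types::{JobStatus, KeyValuesPair};

// Sketch: ListJobs by status, then by name prefix using the JOB_NAME filter.
async fn list_example(client: &aws_sdk_batch::Client) -> Result<(), aws_sdk_batch::Error> {
    let running = client
        .list_jobs()
        .job_queue("example-queue")
        .job_status(JobStatus::Running)
        .send()
        .await?;
    println!("running: {}", running.job_summary_list().unwrap_or_default().len());

    // "nightly-*" matches any job name beginning with "nightly-"
    // (case-insensitive); jobStatus is ignored once a filter is set.
    let filter = KeyValuesPair::builder()
        .name("JOB_NAME")
        .values("nightly-*")
        .build();
    let filtered = client
        .list_jobs()
        .job_queue("example-queue")
        .filters(filter)
        .send()
        .await?;
    println!("matched: {}", filtered.job_summary_list().unwrap_or_default().len());
    Ok(())
}
```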
impl Client
pub fn list_scheduling_policies(&self) -> ListSchedulingPoliciesFluentBuilder
Constructs a fluent builder for the ListSchedulingPolicies operation.
This operation supports pagination; see `into_paginator()`.
- The fluent builder is configurable:
  - `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results that's returned by ListSchedulingPolicies in paginated output. When this parameter is used, ListSchedulingPolicies only returns `maxResults` results in a single page and a `nextToken` response element. You can see the remaining results of the initial request by sending another ListSchedulingPolicies request with the returned `nextToken` value. This value can be between 1 and 100. If this parameter isn't used, ListSchedulingPolicies returns up to 100 results and a `nextToken` value if applicable.
  - `next_token(impl ::std::convert::Into<String>)` / `set_next_token(Option<String>)`: The `nextToken` value that's returned from a previous paginated ListSchedulingPolicies request where `maxResults` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value. This value is `null` when there are no more results to return. Treat this token as an opaque identifier that's only used to retrieve the next items in a list and not for other programmatic purposes.
- On success, responds with ListSchedulingPoliciesOutput with field(s):
  - `scheduling_policies(Option<Vec<SchedulingPolicyListingDetail>>)`: A list of scheduling policies that match the request.
  - `next_token(Option<String>)`: The `nextToken` value to include in a future ListSchedulingPolicies request. When the results of a ListSchedulingPolicies request exceed `maxResults`, this value can be used to retrieve the next page of results. This value is `null` when there are no more results to return.
- On failure, responds with SdkError<ListSchedulingPoliciesError>
impl Client
pub fn list_tags_for_resource(&self) -> ListTagsForResourceFluentBuilder
Constructs a fluent builder for the ListTagsForResource operation.
- The fluent builder is configurable:
  - `resource_arn(impl ::std::convert::Into<String>)` / `set_resource_arn(Option<String>)`: The Amazon Resource Name (ARN) that identifies the resource that tags are listed for. Batch resources that support tags are compute environments, jobs, job definitions, job queues, and scheduling policies. ARNs for child jobs of array and multi-node parallel (MNP) jobs aren't supported.
- On success, responds with ListTagsForResourceOutput with field(s):
  - `tags(Option<HashMap<String, String>>)`: The tags for the resource.
- On failure, responds with SdkError<ListTagsForResourceError>
impl Client
pub fn register_job_definition(&self) -> RegisterJobDefinitionFluentBuilder
Constructs a fluent builder for the RegisterJobDefinition
operation.
- The fluent builder is configurable:
  - job_definition_name(impl ::std::convert::Into<String>) / set_job_definition_name(Option<String>): The name of the job definition to register. It can be up to 128 letters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
  - r#type(JobDefinitionType) / set_type(Option<JobDefinitionType>): The type of job definition. For more information about multi-node parallel jobs, see Creating a multi-node parallel job definition in the Batch User Guide. If the job is run on Fargate resources, then multinode isn't supported.
  - parameters(HashMap<String, String>) / set_parameters(Option<HashMap<String, String>>): Default parameter substitution placeholders to set in the job definition. Parameters are specified as a key-value pair mapping. Parameters in a SubmitJob request override any corresponding parameter defaults from the job definition.
  - scheduling_priority(i32) / set_scheduling_priority(Option<i32>): The scheduling priority for jobs that are submitted with this job definition. This only affects jobs in job queues with a fair share policy. Jobs with a higher scheduling priority are scheduled before jobs with a lower scheduling priority. The minimum supported value is 0 and the maximum supported value is 9999.
  - container_properties(ContainerProperties) / set_container_properties(Option<ContainerProperties>): An object with various properties specific to Amazon ECS based single-node container-based jobs. If the job definition's type parameter is container, then you must specify either containerProperties or nodeProperties. This must not be specified for Amazon EKS based job definitions. If the job runs on Fargate resources, then you must not specify nodeProperties; use only containerProperties.
  - node_properties(NodeProperties) / set_node_properties(Option<NodeProperties>): An object with various properties specific to multi-node parallel jobs. If you specify node properties for a job, it becomes a multi-node parallel job. For more information, see Multi-node Parallel Jobs in the Batch User Guide. If the job definition's type parameter is container, then you must specify either containerProperties or nodeProperties. If the job runs on Fargate resources, then you must not specify nodeProperties; use containerProperties instead. If the job runs on Amazon EKS resources, then you must not specify nodeProperties.
  - retry_strategy(RetryStrategy) / set_retry_strategy(Option<RetryStrategy>): The retry strategy to use for failed jobs that are submitted with this job definition. Any retry strategy that's specified during a SubmitJob operation overrides the retry strategy defined here. If a job is terminated due to a timeout, it isn't retried.
  - propagate_tags(bool) / set_propagate_tags(Option<bool>): Specifies whether to propagate the tags from the job or job definition to the corresponding Amazon ECS task. If no value is specified, the tags are not propagated. Tags can only be propagated to the tasks during task creation. For tags with the same name, job tags are given priority over job definition tags. If the total number of combined tags from the job and job definition is over 50, the job is moved to the FAILED state. If the job runs on Amazon EKS resources, then you must not specify propagateTags.
  - timeout(JobTimeout) / set_timeout(Option<JobTimeout>): The timeout configuration for jobs that are submitted with this job definition, after which Batch terminates your jobs if they haven't finished. If a job is terminated due to a timeout, it isn't retried. The minimum value for the timeout is 60 seconds. Any timeout configuration that's specified during a SubmitJob operation overrides the timeout configuration defined here. For more information, see Job Timeouts in the Batch User Guide.
  - tags(HashMap<String, String>) / set_tags(Option<HashMap<String, String>>): The tags that you apply to the job definition to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging Amazon Web Services Resources in the Batch User Guide.
  - platform_capabilities(Vec<PlatformCapability>) / set_platform_capabilities(Option<Vec<PlatformCapability>>): The platform capabilities required by the job definition. If no value is specified, it defaults to EC2. To run the job on Fargate resources, specify FARGATE. If the job runs on Amazon EKS resources, then you must not specify platformCapabilities.
  - eks_properties(EksProperties) / set_eks_properties(Option<EksProperties>): An object with various properties that are specific to Amazon EKS based jobs. This must not be specified for Amazon ECS based job definitions.
- On success, responds with RegisterJobDefinitionOutput with field(s):
  - job_definition_name(Option<String>): The name of the job definition.
  - job_definition_arn(Option<String>): The Amazon Resource Name (ARN) of the job definition.
  - revision(Option<i32>): The revision of the job definition.
- On failure, responds with SdkError<RegisterJobDefinitionError>
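As a hedged sketch only (the job definition name, image, and command are placeholder assumptions, not values from this page), registering a minimal container job definition might look like:

```rust
use aws_sdk_batch::types::{ContainerProperties, JobDefinitionType};

// Registers a simple single-node container job definition and prints
// the revision returned by the service.
async fn register_example(client: &aws_sdk_batch::Client) -> Result<(), aws_sdk_batch::Error> {
    let resp = client
        .register_job_definition()
        .job_definition_name("example-job-def") // placeholder name
        .r#type(JobDefinitionType::Container)
        .container_properties(
            ContainerProperties::builder()
                .image("public.ecr.aws/amazonlinux/amazonlinux:latest") // placeholder image
                .command("echo") // appending setter: one call per argument
                .command("hello")
                .build(),
        )
        .send()
        .await?;
    println!("registered revision {:?}", resp.revision());
    Ok(())
}
```

Note that `command` is an appending setter generated for `Vec` fields; each call adds one element.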
impl Client
pub fn submit_job(&self) -> SubmitJobFluentBuilder
Constructs a fluent builder for the SubmitJob
operation.
- The fluent builder is configurable:
  - job_name(impl ::std::convert::Into<String>) / set_job_name(Option<String>): The name of the job. It can be up to 128 letters long. The first character must be alphanumeric; the name can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
  - job_queue(impl ::std::convert::Into<String>) / set_job_queue(Option<String>): The job queue where the job is submitted. You can specify either the name or the Amazon Resource Name (ARN) of the queue.
  - share_identifier(impl ::std::convert::Into<String>) / set_share_identifier(Option<String>): The share identifier for the job. Don't specify this parameter if the job queue doesn't have a scheduling policy. If the job queue has a scheduling policy, then this parameter must be specified. This string is limited to 255 alphanumeric characters, and can be followed by an asterisk (*).
  - scheduling_priority_override(i32) / set_scheduling_priority_override(Option<i32>): The scheduling priority for the job. This only affects jobs in job queues with a fair share policy. Jobs with a higher scheduling priority are scheduled before jobs with a lower scheduling priority. This overrides any scheduling priority in the job definition. The minimum supported value is 0 and the maximum supported value is 9999.
  - array_properties(ArrayProperties) / set_array_properties(Option<ArrayProperties>): The array properties for the submitted job, such as the size of the array. The array size can be between 2 and 10,000. If you specify array properties for a job, it becomes an array job. For more information, see Array Jobs in the Batch User Guide.
  - depends_on(Vec<JobDependency>) / set_depends_on(Option<Vec<JobDependency>>): A list of dependencies for the job. A job can depend upon a maximum of 20 jobs. You can specify a SEQUENTIAL type dependency without specifying a job ID for array jobs so that each child array job completes sequentially, starting at index 0. You can also specify an N_TO_N type dependency with a job ID for array jobs. In that case, each index child of this job must wait for the corresponding index child of each dependency to complete before it can begin.
  - job_definition(impl ::std::convert::Into<String>) / set_job_definition(Option<String>): The job definition used by this job. This value can be one of definition-name, definition-name:revision, or the Amazon Resource Name (ARN) for the job definition, with or without the revision (arn:aws:batch:region:account:job-definition/definition-name:revision, or arn:aws:batch:region:account:job-definition/definition-name). If the revision is not specified, then the latest active revision is used.
  - parameters(HashMap<String, String>) / set_parameters(Option<HashMap<String, String>>): Additional parameters passed to the job that replace parameter substitution placeholders that are set in the job definition. Parameters are specified as a key and value pair mapping. Parameters in a SubmitJob request override any corresponding parameter defaults from the job definition.
  - container_overrides(ContainerOverrides) / set_container_overrides(Option<ContainerOverrides>): An object with various properties that override the defaults for the job definition that specify the name of a container in the specified job definition and the overrides it should receive. You can override the default command for a container, which is specified in the job definition or the Docker image, with a command override. You can also override existing environment variables on a container or add new environment variables to it with an environment override.
  - node_overrides(NodeOverrides) / set_node_overrides(Option<NodeOverrides>): A list of node overrides in JSON format that specify the node range to target and the container overrides for that node range. This parameter isn't applicable to jobs that are running on Fargate resources; use containerOverrides instead.
  - retry_strategy(RetryStrategy) / set_retry_strategy(Option<RetryStrategy>): The retry strategy to use for failed jobs from this SubmitJob operation. When a retry strategy is specified here, it overrides the retry strategy defined in the job definition.
  - propagate_tags(bool) / set_propagate_tags(Option<bool>): Specifies whether to propagate the tags from the job or job definition to the corresponding Amazon ECS task. If no value is specified, the tags aren't propagated. Tags can only be propagated to the tasks during task creation. For tags with the same name, job tags are given priority over job definition tags. If the total number of combined tags from the job and job definition is over 50, the job is moved to the FAILED state. When specified, this overrides the tag propagation setting in the job definition.
  - timeout(JobTimeout) / set_timeout(Option<JobTimeout>): The timeout configuration for this SubmitJob operation. You can specify a timeout duration after which Batch terminates your jobs if they haven't finished. If a job is terminated due to a timeout, it isn't retried. The minimum value for the timeout is 60 seconds. This configuration overrides any timeout configuration specified in the job definition. For array jobs, child jobs have the same timeout configuration as the parent job. For more information, see Job Timeouts in the Amazon Elastic Container Service Developer Guide.
  - tags(HashMap<String, String>) / set_tags(Option<HashMap<String, String>>): The tags that you apply to the job request to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging Amazon Web Services Resources in the Amazon Web Services General Reference.
  - eks_properties_override(EksPropertiesOverride) / set_eks_properties_override(Option<EksPropertiesOverride>): An object that can only be specified for jobs that are run on Amazon EKS resources, with various properties that override defaults for the job definition.
- On success, responds with SubmitJobOutput with field(s):
  - job_arn(Option<String>): The Amazon Resource Name (ARN) for the job.
  - job_name(Option<String>): The name of the job.
  - job_id(Option<String>): The unique identifier for the job.
- On failure, responds with SdkError<SubmitJobError>
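A minimal sketch of submitting a job, assuming a queue and job definition already exist (all names and the parameter key/value here are placeholders, not from this page):

```rust
// Submits a job against an existing queue and job definition and
// prints the job ID assigned by Batch.
async fn submit_example(client: &aws_sdk_batch::Client) -> Result<(), aws_sdk_batch::Error> {
    let resp = client
        .submit_job()
        .job_name("example-run")        // placeholder job name
        .job_queue("example-queue")     // placeholder queue name or ARN
        .job_definition("example-job-def") // no revision: latest active revision is used
        .parameters("inputKey", "inputValue") // map adder: one call per key-value pair
        .send()
        .await?;
    println!("submitted job id {:?}", resp.job_id());
    Ok(())
}
```

`parameters` is the generated adder for `HashMap` fields; call it once per entry, or use `set_parameters` to replace the whole map.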
impl Client
pub fn tag_resource(&self) -> TagResourceFluentBuilder
Constructs a fluent builder for the TagResource
operation.
- The fluent builder is configurable:
  - resource_arn(impl ::std::convert::Into<String>) / set_resource_arn(Option<String>): The Amazon Resource Name (ARN) of the resource that tags are added to. Batch resources that support tags are compute environments, jobs, job definitions, job queues, and scheduling policies. ARNs for child jobs of array and multi-node parallel (MNP) jobs aren't supported.
  - tags(HashMap<String, String>) / set_tags(Option<HashMap<String, String>>): The tags that you apply to the resource to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging Amazon Web Services Resources in the Amazon Web Services General Reference.
- On success, responds with TagResourceOutput
- On failure, responds with SdkError<TagResourceError>
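A hedged sketch of tagging a Batch resource (the tag key and value are placeholder assumptions):

```rust
// Adds a single tag to the resource identified by `arn`.
async fn tag_example(client: &aws_sdk_batch::Client, arn: &str) -> Result<(), aws_sdk_batch::Error> {
    client
        .tag_resource()
        .resource_arn(arn)
        .tags("Stage", "test") // map adder: key, value; call once per tag
        .send()
        .await?;
    Ok(())
}
```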
impl Client
pub fn terminate_job(&self) -> TerminateJobFluentBuilder
Constructs a fluent builder for the TerminateJob
operation.
- The fluent builder is configurable:
  - job_id(impl ::std::convert::Into<String>) / set_job_id(Option<String>): The Batch job ID of the job to terminate.
  - reason(impl ::std::convert::Into<String>) / set_reason(Option<String>): A message to attach to the job that explains the reason for canceling it. This message is returned by future DescribeJobs operations on the job. This message is also recorded in the Batch activity logs.
- On success, responds with TerminateJobOutput
- On failure, responds with SdkError<TerminateJobError>
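A minimal sketch of terminating a job (the reason text is a placeholder assumption):

```rust
// Terminates the given job; the reason is recorded in the Batch
// activity logs and returned by later DescribeJobs calls.
async fn terminate_example(client: &aws_sdk_batch::Client, job_id: &str) -> Result<(), aws_sdk_batch::Error> {
    client
        .terminate_job()
        .job_id(job_id)
        .reason("Terminated by operator") // placeholder message
        .send()
        .await?;
    Ok(())
}
```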
impl Client
pub fn untag_resource(&self) -> UntagResourceFluentBuilder
Constructs a fluent builder for the UntagResource
operation.
- The fluent builder is configurable:
  - resource_arn(impl ::std::convert::Into<String>) / set_resource_arn(Option<String>): The Amazon Resource Name (ARN) of the resource from which to delete tags. Batch resources that support tags are compute environments, jobs, job definitions, job queues, and scheduling policies. ARNs for child jobs of array and multi-node parallel (MNP) jobs aren't supported.
  - tag_keys(Vec<String>) / set_tag_keys(Option<Vec<String>>): The keys of the tags to be removed.
- On success, responds with UntagResourceOutput
- On failure, responds with SdkError<UntagResourceError>
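A hedged sketch of removing a tag (the key name is a placeholder assumption):

```rust
// Removes the "Stage" tag from the resource identified by `arn`.
async fn untag_example(client: &aws_sdk_batch::Client, arn: &str) -> Result<(), aws_sdk_batch::Error> {
    client
        .untag_resource()
        .resource_arn(arn)
        .tag_keys("Stage") // appending setter: call once per key to remove
        .send()
        .await?;
    Ok(())
}
```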
impl Client
pub fn update_compute_environment(&self) -> UpdateComputeEnvironmentFluentBuilder
Constructs a fluent builder for the UpdateComputeEnvironment
operation.
- The fluent builder is configurable:
  - compute_environment(impl ::std::convert::Into<String>) / set_compute_environment(Option<String>): The name or full Amazon Resource Name (ARN) of the compute environment to update.
  - state(CeState) / set_state(Option<CeState>): The state of the compute environment. Compute environments in the ENABLED state can accept jobs from a queue and scale in or out automatically based on the workload demand of their associated queues. If the state is ENABLED, then the Batch scheduler can attempt to place jobs from an associated job queue on the compute resources within the environment. If the compute environment is managed, then it can scale its instances out or in automatically, based on the job queue demand. If the state is DISABLED, then the Batch scheduler doesn't attempt to place jobs within the environment. Jobs in a STARTING or RUNNING state continue to progress normally. Managed compute environments in the DISABLED state don't scale out. Compute environments in a DISABLED state may continue to incur billing charges. To prevent additional charges, turn off and then delete the compute environment. For more information, see State in the Batch User Guide. When an instance is idle, the instance scales down to the minvCpus value. However, the instance size doesn't change. For example, consider a c5.8xlarge instance with a minvCpus value of 4 and a desiredvCpus value of 36. This instance doesn't scale down to a c5.large instance.
  - unmanagedv_cpus(i32) / set_unmanagedv_cpus(Option<i32>): The maximum number of vCPUs expected to be used for an unmanaged compute environment. Don't specify this parameter for a managed compute environment. This parameter is only used for fair share scheduling to reserve vCPU capacity for new share identifiers. If this parameter isn't provided for a fair share job queue, no vCPU capacity is reserved.
  - compute_resources(ComputeResourceUpdate) / set_compute_resources(Option<ComputeResourceUpdate>): Details of the compute resources managed by the compute environment. Required for a managed compute environment. For more information, see Compute Environments in the Batch User Guide.
  - service_role(impl ::std::convert::Into<String>) / set_service_role(Option<String>): The full Amazon Resource Name (ARN) of the IAM role that allows Batch to make calls to other Amazon Web Services services on your behalf. For more information, see Batch service IAM role in the Batch User Guide. If the compute environment has a service-linked role, it can't be changed to use a regular IAM role. Likewise, if the compute environment has a regular IAM role, it can't be changed to use a service-linked role. To update the parameters for the compute environment that require an infrastructure update to change, the AWSServiceRoleForBatch service-linked role must be used. For more information, see Updating compute environments in the Batch User Guide. If your specified role has a path other than /, then you must either specify the full role ARN (recommended) or prefix the role name with the path. Depending on how you created your Batch service role, its ARN might contain the service-role path prefix. When you only specify the name of the service role, Batch assumes that your ARN doesn't use the service-role path prefix. Because of this, we recommend that you specify the full ARN of your service role when you create compute environments.
  - update_policy(UpdatePolicy) / set_update_policy(Option<UpdatePolicy>): Specifies the updated infrastructure update policy for the compute environment. For more information about infrastructure updates, see Updating compute environments in the Batch User Guide.
- On success, responds with UpdateComputeEnvironmentOutput with field(s):
  - compute_environment_name(Option<String>): The name of the compute environment. It can be up to 128 characters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
  - compute_environment_arn(Option<String>): The Amazon Resource Name (ARN) of the compute environment.
- On failure, responds with SdkError<UpdateComputeEnvironmentError>
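A hedged sketch of a common use of this operation, disabling a compute environment before deleting it (the environment name is a placeholder):

```rust
use aws_sdk_batch::types::CeState;

// Moves the named compute environment to the DISABLED state so the
// scheduler stops placing new jobs in it; running jobs continue.
async fn disable_ce(client: &aws_sdk_batch::Client, name: &str) -> Result<(), aws_sdk_batch::Error> {
    let resp = client
        .update_compute_environment()
        .compute_environment(name)
        .state(CeState::Disabled)
        .send()
        .await?;
    println!("updated {:?}", resp.compute_environment_arn());
    Ok(())
}
```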
impl Client
pub fn update_job_queue(&self) -> UpdateJobQueueFluentBuilder
Constructs a fluent builder for the UpdateJobQueue
operation.
- The fluent builder is configurable:
  - job_queue(impl ::std::convert::Into<String>) / set_job_queue(Option<String>): The name or the Amazon Resource Name (ARN) of the job queue.
  - state(JqState) / set_state(Option<JqState>): Describes the queue's ability to accept new jobs. If the job queue state is ENABLED, it can accept jobs. If the job queue state is DISABLED, new jobs can't be added to the queue, but jobs already in the queue can finish.
  - scheduling_policy_arn(impl ::std::convert::Into<String>) / set_scheduling_policy_arn(Option<String>): Amazon Resource Name (ARN) of the fair share scheduling policy. Once a job queue is created, the fair share scheduling policy can be replaced but not removed. The format is arn:Partition:batch:Region:Account:scheduling-policy/Name. For example, arn:aws:batch:us-west-2:123456789012:scheduling-policy/MySchedulingPolicy.
  - priority(i32) / set_priority(Option<i32>): The priority of the job queue. Job queues with a higher priority (or a higher integer value for the priority parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order. For example, a job queue with a priority value of 10 is given scheduling preference over a job queue with a priority value of 1. All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT). EC2 and Fargate compute environments can't be mixed.
  - compute_environment_order(Vec<ComputeEnvironmentOrder>) / set_compute_environment_order(Option<Vec<ComputeEnvironmentOrder>>): Details the set of compute environments mapped to a job queue and their order relative to each other. This is one of the parameters used by the job scheduler to determine which compute environment runs a given job. Compute environments must be in the VALID state before you can associate them with a job queue. All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT). EC2 and Fargate compute environments can't be mixed. All compute environments that are associated with a job queue must share the same architecture. Batch doesn't support mixing compute environment architecture types in a single job queue.
- On success, responds with UpdateJobQueueOutput with field(s):
  - job_queue_name(Option<String>): The name of the job queue.
  - job_queue_arn(Option<String>): The Amazon Resource Name (ARN) of the job queue.
- On failure, responds with SdkError<UpdateJobQueueError>
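A hedged sketch of draining a queue by disabling it (the queue name is a placeholder):

```rust
use aws_sdk_batch::types::JqState;

// Disables the queue: no new jobs are accepted, but jobs already in
// the queue are allowed to finish.
async fn drain_queue(client: &aws_sdk_batch::Client, queue: &str) -> Result<(), aws_sdk_batch::Error> {
    client
        .update_job_queue()
        .job_queue(queue)
        .state(JqState::Disabled)
        .send()
        .await?;
    Ok(())
}
```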
impl Client
pub fn update_scheduling_policy(&self) -> UpdateSchedulingPolicyFluentBuilder
Constructs a fluent builder for the UpdateSchedulingPolicy
operation.
- The fluent builder is configurable:
  - arn(impl ::std::convert::Into<String>) / set_arn(Option<String>): The Amazon Resource Name (ARN) of the scheduling policy to update.
  - fairshare_policy(FairsharePolicy) / set_fairshare_policy(Option<FairsharePolicy>): The fair share policy.
- On success, responds with UpdateSchedulingPolicyOutput
- On failure, responds with SdkError<UpdateSchedulingPolicyError>
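A hedged sketch of replacing a fair share policy (the share identifier, weight, and decay values are placeholder assumptions, not recommendations from this page):

```rust
use aws_sdk_batch::types::{FairsharePolicy, ShareAttributes};

// Replaces the fair share policy on an existing scheduling policy.
async fn update_policy(client: &aws_sdk_batch::Client, arn: &str) -> Result<(), aws_sdk_batch::Error> {
    client
        .update_scheduling_policy()
        .arn(arn)
        .fairshare_policy(
            FairsharePolicy::builder()
                .share_decay_seconds(3600) // placeholder decay window
                .share_distribution(
                    ShareAttributes::builder()
                        .share_identifier("teamA*") // placeholder share identifier
                        .weight_factor(1.0)
                        .build(),
                )
                .build(),
        )
        .send()
        .await?;
    Ok(())
}
```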
impl Client
pub fn from_conf(conf: Config) -> Self
Creates a new client from the service Config.
Panics
- This method will panic if the conf is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
- This method will panic if the conf is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.
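A sketch of constructing the client via from_conf, deriving the service Config from a shared SdkConfig so the sleep implementation and HTTP connector resolved by aws-config carry over:

```rust
// Builds a service-specific Config from the shared SdkConfig and
// constructs the client with from_conf.
async fn build_client() -> aws_sdk_batch::Client {
    let sdk_config = aws_config::load_from_env().await;
    let conf = aws_sdk_batch::config::Builder::from(&sdk_config).build();
    aws_sdk_batch::Client::from_conf(conf)
}
```

Starting from an SdkConfig this way avoids the panics described above, since aws-config supplies both a sleep implementation and an HTTP connector by default.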
impl Client
pub fn new(sdk_config: &SdkConfig) -> Self
Creates a new client from an SDK Config.
Panics
- This method will panic if the sdk_config is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
- This method will panic if the sdk_config is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.