Struct aws_sdk_batch::Client

pub struct Client { /* private fields */ }

Client for AWS Batch

Client for invoking operations on AWS Batch. Each operation on AWS Batch is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.

§Constructing a Client

A Config is required to construct a client. For most use cases, the aws-config crate should be used to automatically resolve this config using aws_config::load_from_env(), since this will resolve an SdkConfig which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling aws_config::from_env() instead, which returns a ConfigLoader that uses the builder pattern to customize the default config.

In the simplest case, creating a client looks as follows:

let config = aws_config::load_from_env().await;
let client = aws_sdk_batch::Client::new(&config);

Occasionally, SDKs may have additional service-specific values that can be set on the Config but that are absent from SdkConfig, or slightly different settings may be desired for a specific client. The Config struct implements From<&SdkConfig>, so these service-specific settings can be applied as follows:

let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_batch::config::Builder::from(&sdk_config)
    .some_service_specific_setting("value")
    .build();

See the aws-config docs and Config for more information on customizing configuration.

Note: Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.

§Using the Client

A client has a function for every operation that can be performed by the service. For example, the CancelJob operation has a Client::cancel_job function, which returns a builder for that operation. The fluent builder ultimately has a send() function that returns an async future which resolves to a result, as illustrated below:

let result = client.cancel_job()
    .job_id("example")
    .send()
    .await;

The underlying HTTP requests made by an operation can be modified with the customize_operation function on the fluent builder. See the customize module for more information.

Implementations§

impl Client

pub fn cancel_job(&self) -> CancelJobFluentBuilder

Constructs a fluent builder for the CancelJob operation.


pub fn create_compute_environment(&self) -> CreateComputeEnvironmentFluentBuilder

Constructs a fluent builder for the CreateComputeEnvironment operation.

  • The fluent builder is configurable:
    • compute_environment_name(impl Into<String>) / set_compute_environment_name(Option<String>):
      required: true

      The name for your compute environment. It can be up to 128 characters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).


    • r#type(CeType) / set_type(Option<CeType>):
      required: true

      The type of the compute environment: MANAGED or UNMANAGED. For more information, see Compute Environments in the Batch User Guide.


    • state(CeState) / set_state(Option<CeState>):
      required: false

      The state of the compute environment. If the state is ENABLED, then the compute environment accepts jobs from a queue and can scale out automatically based on queues.

      If the state is ENABLED, then the Batch scheduler can attempt to place jobs from an associated job queue on the compute resources within the environment. If the compute environment is managed, then it can scale its instances out or in automatically, based on the job queue demand.

      If the state is DISABLED, then the Batch scheduler doesn’t attempt to place jobs within the environment. Jobs in a STARTING or RUNNING state continue to progress normally. Managed compute environments in the DISABLED state don’t scale out.

      Compute environments in a DISABLED state may continue to incur billing charges. To prevent additional charges, turn off and then delete the compute environment. For more information, see State in the Batch User Guide.

      When an instance is idle, the instance scales down to the minvCpus value. However, the instance size doesn’t change. For example, consider a c5.8xlarge instance with a minvCpus value of 4 and a desiredvCpus value of 36. This instance doesn’t scale down to a c5.large instance.


    • unmanagedv_cpus(i32) / set_unmanagedv_cpus(Option<i32>):
      required: false

      The maximum number of vCPUs for an unmanaged compute environment. This parameter is only used for fair share scheduling to reserve vCPU capacity for new share identifiers. If this parameter isn’t provided for a fair share job queue, no vCPU capacity is reserved.

      This parameter is only supported when the type parameter is set to UNMANAGED.


    • compute_resources(ComputeResource) / set_compute_resources(Option<ComputeResource>):
      required: false

      Details about the compute resources managed by the compute environment. This parameter is required for managed compute environments. For more information, see Compute Environments in the Batch User Guide.


    • service_role(impl Into<String>) / set_service_role(Option<String>):
      required: false

      The full Amazon Resource Name (ARN) of the IAM role that allows Batch to make calls to other Amazon Web Services services on your behalf. For more information, see Batch service IAM role in the Batch User Guide.

      If your account already created the Batch service-linked role, that role is used by default for your compute environment unless you specify a different role here. If the Batch service-linked role doesn’t exist in your account, and no role is specified here, the service attempts to create the Batch service-linked role in your account.

      If your specified role has a path other than /, then you must specify either the full role ARN (recommended) or prefix the role name with the path. For example, if a role with the name bar has a path of /foo/, specify /foo/bar as the role name. For more information, see Friendly names and paths in the IAM User Guide.

      Depending on how you created your Batch service role, its ARN might contain the service-role path prefix. When you only specify the name of the service role, Batch assumes that your ARN doesn’t use the service-role path prefix. Because of this, we recommend that you specify the full ARN of your service role when you create compute environments.


    • tags(impl Into<String>, impl Into<String>) / set_tags(Option<HashMap::<String, String>>):
      required: false

      The tags that you apply to the compute environment to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging Amazon Web Services Resources in Amazon Web Services General Reference.

      These tags can be updated or removed using the TagResource and UntagResource API operations. These tags don’t propagate to the underlying compute resources.


    • eks_configuration(EksConfiguration) / set_eks_configuration(Option<EksConfiguration>):
      required: false

      The details for the Amazon EKS cluster that supports the compute environment.


  • On success, responds with CreateComputeEnvironmentOutput with field(s):
  • On failure, responds with SdkError<CreateComputeEnvironmentError>
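As a minimal sketch (not from the official docs), an UNMANAGED compute environment can be created with only the required setters listed above. The environment name is a placeholder, and the aws_sdk_batch::types enum paths are an assumption that may differ across SDK versions:

```rust
// Sketch: create a minimal UNMANAGED compute environment.
// Assumes `config` was loaded with aws_config::load_from_env() as shown earlier.
let client = aws_sdk_batch::Client::new(&config);
let created = client
    .create_compute_environment()
    .compute_environment_name("example-ce") // hypothetical name, up to 128 characters
    .r#type(aws_sdk_batch::types::CeType::Unmanaged)
    .state(aws_sdk_batch::types::CeState::Enabled)
    .send()
    .await?;
```

A MANAGED environment would additionally need compute_resources, which the parameter notes above mark as required for managed compute environments.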

pub fn create_job_queue(&self) -> CreateJobQueueFluentBuilder

Constructs a fluent builder for the CreateJobQueue operation.

  • The fluent builder is configurable:
    • job_queue_name(impl Into<String>) / set_job_queue_name(Option<String>):
      required: true

      The name of the job queue. It can be up to 128 letters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).


    • state(JqState) / set_state(Option<JqState>):
      required: false

      The state of the job queue. If the job queue state is ENABLED, it is able to accept jobs. If the job queue state is DISABLED, new jobs can’t be added to the queue, but jobs already in the queue can finish.


    • scheduling_policy_arn(impl Into<String>) / set_scheduling_policy_arn(Option<String>):
      required: false

      The Amazon Resource Name (ARN) of the fair share scheduling policy. If this parameter is specified, the job queue uses a fair share scheduling policy. If this parameter isn’t specified, the job queue uses a first in, first out (FIFO) scheduling policy. After a job queue is created, you can replace but can’t remove the fair share scheduling policy. The format is aws:Partition:batch:Region:Account:scheduling-policy/Name. An example is aws:aws:batch:us-west-2:123456789012:scheduling-policy/MySchedulingPolicy.


    • priority(i32) / set_priority(Option<i32>):
      required: true

      The priority of the job queue. Job queues with a higher priority (or a higher integer value for the priority parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order. For example, a job queue with a priority value of 10 is given scheduling preference over a job queue with a priority value of 1. All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT); EC2 and Fargate compute environments can’t be mixed.


    • compute_environment_order(ComputeEnvironmentOrder) / set_compute_environment_order(Option<Vec::<ComputeEnvironmentOrder>>):
      required: true

      The set of compute environments mapped to a job queue and their order relative to each other. The job scheduler uses this parameter to determine which compute environment runs a specific job. Compute environments must be in the VALID state before you can associate them with a job queue. You can associate up to three compute environments with a job queue. All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT); EC2 and Fargate compute environments can’t be mixed.

      All compute environments that are associated with a job queue must share the same architecture. Batch doesn’t support mixing compute environment architecture types in a single job queue.


    • tags(impl Into<String>, impl Into<String>) / set_tags(Option<HashMap::<String, String>>):
      required: false

      The tags that you apply to the job queue to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging your Batch resources in Batch User Guide.


  • On success, responds with CreateJobQueueOutput with field(s):
  • On failure, responds with SdkError<CreateJobQueueError>
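A hedged sketch of the required setters above, mapping a queue to one existing compute environment. The queue name and ARN are placeholders; note that ComputeEnvironmentOrder::builder().build() returns a Result in recent SDK versions (because both of its members are required), so adjust the error handling for your SDK version:

```rust
// Sketch: create a job queue associated with one compute environment.
let order = aws_sdk_batch::types::ComputeEnvironmentOrder::builder()
    .order(1) // lower order values are tried first
    .compute_environment("arn:aws:batch:us-east-1:111122223333:compute-environment/example-ce")
    .build()?; // fallible in recent SDK versions; drop the `?` on older ones
let queue = client
    .create_job_queue()
    .job_queue_name("example-queue")
    .priority(10) // higher-priority queues are evaluated first
    .compute_environment_order(order)
    .send()
    .await?;
```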

pub fn create_scheduling_policy(&self) -> CreateSchedulingPolicyFluentBuilder

Constructs a fluent builder for the CreateSchedulingPolicy operation.


pub fn delete_compute_environment(&self) -> DeleteComputeEnvironmentFluentBuilder

Constructs a fluent builder for the DeleteComputeEnvironment operation.


pub fn delete_job_queue(&self) -> DeleteJobQueueFluentBuilder

Constructs a fluent builder for the DeleteJobQueue operation.


pub fn delete_scheduling_policy(&self) -> DeleteSchedulingPolicyFluentBuilder

Constructs a fluent builder for the DeleteSchedulingPolicy operation.


pub fn deregister_job_definition(&self) -> DeregisterJobDefinitionFluentBuilder

Constructs a fluent builder for the DeregisterJobDefinition operation.


pub fn describe_compute_environments(&self) -> DescribeComputeEnvironmentsFluentBuilder

Constructs a fluent builder for the DescribeComputeEnvironments operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
    • compute_environments(impl Into<String>) / set_compute_environments(Option<Vec::<String>>):
      required: false

      A list of up to 100 compute environment names or full Amazon Resource Name (ARN) entries.


    • max_results(i32) / set_max_results(Option<i32>):
      required: false

      The maximum number of cluster results returned by DescribeComputeEnvironments in paginated output. When this parameter is used, DescribeComputeEnvironments only returns maxResults results in a single page along with a nextToken response element. The remaining results of the initial request can be seen by sending another DescribeComputeEnvironments request with the returned nextToken value. This value can be between 1 and 100. If this parameter isn’t used, then DescribeComputeEnvironments returns up to 100 results and a nextToken value if applicable.


    • next_token(impl Into<String>) / set_next_token(Option<String>):
      required: false

      The nextToken value returned from a previous paginated DescribeComputeEnvironments request where maxResults was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken value. This value is null when there are no more results to return.

      Treat this token as an opaque identifier that’s only used to retrieve the next items in a list and not for other programmatic purposes.


  • On success, responds with DescribeComputeEnvironmentsOutput with field(s):
  • On failure, responds with SdkError<DescribeComputeEnvironmentsError>
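A sketch of the paginator mentioned above, streaming every compute environment without handling nextToken by hand. The items() adapter and the accessor shapes are assumptions that vary between SDK versions:

```rust
// Sketch: iterate all compute environments via the operation's paginator.
// items() yields individual ComputeEnvironmentDetail values, each wrapped in a Result.
let mut envs = client
    .describe_compute_environments()
    .max_results(50)
    .into_paginator()
    .items()
    .send();
while let Some(env) = envs.next().await {
    let env = env?;
    println!("{:?}", env.compute_environment_name());
}
```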

pub fn describe_job_definitions(&self) -> DescribeJobDefinitionsFluentBuilder

Constructs a fluent builder for the DescribeJobDefinitions operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
    • job_definitions(impl Into<String>) / set_job_definitions(Option<Vec::<String>>):
      required: false

      A list of up to 100 job definitions. Each entry in the list can either be an ARN in the format arn:aws:batch:${Region}:${Account}:job-definition/${JobDefinitionName}:${Revision} or a short version using the form ${JobDefinitionName}:${Revision}. This parameter can’t be used with other parameters.


    • max_results(i32) / set_max_results(Option<i32>):
      required: false

      The maximum number of results returned by DescribeJobDefinitions in paginated output. When this parameter is used, DescribeJobDefinitions only returns maxResults results in a single page and a nextToken response element. The remaining results of the initial request can be seen by sending another DescribeJobDefinitions request with the returned nextToken value. This value can be between 1 and 100. If this parameter isn’t used, then DescribeJobDefinitions returns up to 100 results and a nextToken value if applicable.


    • job_definition_name(impl Into<String>) / set_job_definition_name(Option<String>):
      required: false

      The name of the job definition to describe.


    • status(impl Into<String>) / set_status(Option<String>):
      required: false

      The status used to filter job definitions.


    • next_token(impl Into<String>) / set_next_token(Option<String>):
      required: false

      The nextToken value returned from a previous paginated DescribeJobDefinitions request where maxResults was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken value. This value is null when there are no more results to return.

      Treat this token as an opaque identifier that’s only used to retrieve the next items in a list and not for other programmatic purposes.


  • On success, responds with DescribeJobDefinitionsOutput with field(s):
    • job_definitions(Option<Vec::<JobDefinition>>):

      The list of job definitions.

    • next_token(Option<String>):

      The nextToken value to include in a future DescribeJobDefinitions request. When the results of a DescribeJobDefinitions request exceed maxResults, this value can be used to retrieve the next page of results. This value is null when there are no more results to return.

  • On failure, responds with SdkError<DescribeJobDefinitionsError>

pub fn describe_job_queues(&self) -> DescribeJobQueuesFluentBuilder

Constructs a fluent builder for the DescribeJobQueues operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
    • job_queues(impl Into<String>) / set_job_queues(Option<Vec::<String>>):
      required: false

      A list of up to 100 queue names or full queue Amazon Resource Name (ARN) entries.


    • max_results(i32) / set_max_results(Option<i32>):
      required: false

      The maximum number of results returned by DescribeJobQueues in paginated output. When this parameter is used, DescribeJobQueues only returns maxResults results in a single page and a nextToken response element. The remaining results of the initial request can be seen by sending another DescribeJobQueues request with the returned nextToken value. This value can be between 1 and 100. If this parameter isn’t used, then DescribeJobQueues returns up to 100 results and a nextToken value if applicable.


    • next_token(impl Into<String>) / set_next_token(Option<String>):
      required: false

      The nextToken value returned from a previous paginated DescribeJobQueues request where maxResults was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken value. This value is null when there are no more results to return.

      Treat this token as an opaque identifier that’s only used to retrieve the next items in a list and not for other programmatic purposes.


  • On success, responds with DescribeJobQueuesOutput with field(s):
    • job_queues(Option<Vec::<JobQueueDetail>>):

      The list of job queues.

    • next_token(Option<String>):

      The nextToken value to include in a future DescribeJobQueues request. When the results of a DescribeJobQueues request exceed maxResults, this value can be used to retrieve the next page of results. This value is null when there are no more results to return.

  • On failure, responds with SdkError<DescribeJobQueuesError>

pub fn describe_jobs(&self) -> DescribeJobsFluentBuilder

Constructs a fluent builder for the DescribeJobs operation.


pub fn describe_scheduling_policies(&self) -> DescribeSchedulingPoliciesFluentBuilder

Constructs a fluent builder for the DescribeSchedulingPolicies operation.


pub fn list_jobs(&self) -> ListJobsFluentBuilder

Constructs a fluent builder for the ListJobs operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
    • job_queue(impl Into<String>) / set_job_queue(Option<String>):
      required: false

      The name or full Amazon Resource Name (ARN) of the job queue used to list jobs.


    • array_job_id(impl Into<String>) / set_array_job_id(Option<String>):
      required: false

      The job ID for an array job. Specifying an array job ID with this parameter lists all child jobs from within the specified array.


    • multi_node_job_id(impl Into<String>) / set_multi_node_job_id(Option<String>):
      required: false

      The job ID for a multi-node parallel job. Specifying a multi-node parallel job ID with this parameter lists all nodes that are associated with the specified job.


    • job_status(JobStatus) / set_job_status(Option<JobStatus>):
      required: false

      The job status used to filter jobs in the specified queue. If the filters parameter is specified, the jobStatus parameter is ignored and jobs with any status are returned. If you don’t specify a status, only RUNNING jobs are returned.


    • max_results(i32) / set_max_results(Option<i32>):
      required: false

      The maximum number of results returned by ListJobs in paginated output. When this parameter is used, ListJobs only returns maxResults results in a single page and a nextToken response element. The remaining results of the initial request can be seen by sending another ListJobs request with the returned nextToken value. This value can be between 1 and 100. If this parameter isn’t used, then ListJobs returns up to 100 results and a nextToken value if applicable.


    • next_token(impl Into<String>) / set_next_token(Option<String>):
      required: false

      The nextToken value returned from a previous paginated ListJobs request where maxResults was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken value. This value is null when there are no more results to return.

      Treat this token as an opaque identifier that’s only used to retrieve the next items in a list and not for other programmatic purposes.


    • filters(KeyValuesPair) / set_filters(Option<Vec::<KeyValuesPair>>):
      required: false

      The filter to apply to the query. Only one filter can be used at a time. When the filter is used, jobStatus is ignored. The filter doesn’t apply to child jobs in an array or multi-node parallel (MNP) jobs. The results are sorted by the createdAt field, with the most recent jobs being first.

      JOB_NAME

      The value of the filter is a case-insensitive match for the job name. If the value ends with an asterisk (*), the filter matches any job name that begins with the string before the ‘*’. This corresponds to the jobName value. For example, test1 matches both Test1 and test1, and test1* matches both test1 and Test10. When the JOB_NAME filter is used, the results are grouped by the job name and version.

      JOB_DEFINITION

      The value for the filter is the name or Amazon Resource Name (ARN) of the job definition. This corresponds to the jobDefinition value. The value is case sensitive. When the value for the filter is the job definition name, the results include all the jobs that used any revision of that job definition name. If the value ends with an asterisk (*), the filter matches any job definition name that begins with the string before the ‘*’. For example, jd1 matches only jd1, and jd1* matches both jd1 and jd1A. The version of the job definition that’s used doesn’t affect the sort order. When the JOB_DEFINITION filter is used and the ARN is used (which is in the form arn:${Partition}:batch:${Region}:${Account}:job-definition/${JobDefinitionName}:${Revision}), the results include jobs that used the specified revision of the job definition. Asterisk (*) isn’t supported when the ARN is used.

      BEFORE_CREATED_AT

      The value for the filter is the time that’s before the job was created. This corresponds to the createdAt value. The value is a string representation of the number of milliseconds since 00:00:00 UTC (midnight) on January 1, 1970.

      AFTER_CREATED_AT

      The value for the filter is the time that’s after the job was created. This corresponds to the createdAt value. The value is a string representation of the number of milliseconds since 00:00:00 UTC (midnight) on January 1, 1970.


  • On success, responds with ListJobsOutput with field(s):
    • job_summary_list(Option<Vec::<JobSummary>>):

      A list of job summaries that match the request.

    • next_token(Option<String>):

      The nextToken value to include in a future ListJobs request. When the results of a ListJobs request exceed maxResults, this value can be used to retrieve the next page of results. This value is null when there are no more results to return.

  • On failure, responds with SdkError<ListJobsError>
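A hedged sketch of the JOB_NAME filter described above; the queue name and name pattern are placeholders. KeyValuesPair has no required members, so its build() is infallible:

```rust
// Sketch: list jobs in a queue whose names begin with "nightly-".
let filter = aws_sdk_batch::types::KeyValuesPair::builder()
    .name("JOB_NAME")
    .values("nightly-*") // trailing '*' matches any job name with this prefix
    .build();
let listed = client
    .list_jobs()
    .job_queue("example-queue")
    .filters(filter) // note: jobStatus is ignored when filters is set
    .send()
    .await?;
```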

pub fn list_scheduling_policies(&self) -> ListSchedulingPoliciesFluentBuilder

Constructs a fluent builder for the ListSchedulingPolicies operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
    • max_results(i32) / set_max_results(Option<i32>):
      required: false

      The maximum number of results that’s returned by ListSchedulingPolicies in paginated output. When this parameter is used, ListSchedulingPolicies only returns maxResults results in a single page and a nextToken response element. You can see the remaining results of the initial request by sending another ListSchedulingPolicies request with the returned nextToken value. This value can be between 1 and 100. If this parameter isn’t used, ListSchedulingPolicies returns up to 100 results and a nextToken value if applicable.


    • next_token(impl Into<String>) / set_next_token(Option<String>):
      required: false

      The nextToken value that’s returned from a previous paginated ListSchedulingPolicies request where maxResults was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken value. This value is null when there are no more results to return.

      Treat this token as an opaque identifier that’s only used to retrieve the next items in a list and not for other programmatic purposes.


  • On success, responds with ListSchedulingPoliciesOutput with field(s):
  • On failure, responds with SdkError<ListSchedulingPoliciesError>

pub fn list_tags_for_resource(&self) -> ListTagsForResourceFluentBuilder

Constructs a fluent builder for the ListTagsForResource operation.


pub fn register_job_definition(&self) -> RegisterJobDefinitionFluentBuilder

Constructs a fluent builder for the RegisterJobDefinition operation.


pub fn submit_job(&self) -> SubmitJobFluentBuilder

Constructs a fluent builder for the SubmitJob operation.

  • The fluent builder is configurable:
    • job_name(impl Into<String>) / set_job_name(Option<String>):
      required: true

      The name of the job. It can be up to 128 letters long. The first character must be alphanumeric, can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).


    • job_queue(impl Into<String>) / set_job_queue(Option<String>):
      required: true

      The job queue where the job is submitted. You can specify either the name or the Amazon Resource Name (ARN) of the queue.


    • share_identifier(impl Into<String>) / set_share_identifier(Option<String>):
      required: false

      The share identifier for the job. Don’t specify this parameter if the job queue doesn’t have a scheduling policy. If the job queue has a scheduling policy, then this parameter must be specified.

      This string is limited to 255 alphanumeric characters, and can be followed by an asterisk (*).


    • scheduling_priority_override(i32) / set_scheduling_priority_override(Option<i32>):
      required: false

      The scheduling priority for the job. This only affects jobs in job queues with a fair share policy. Jobs with a higher scheduling priority are scheduled before jobs with a lower scheduling priority. This overrides any scheduling priority in the job definition and works only within a single share identifier.

      The minimum supported value is 0 and the maximum supported value is 9999.


    • array_properties(ArrayProperties) / set_array_properties(Option<ArrayProperties>):
      required: false

      The array properties for the submitted job, such as the size of the array. The array size can be between 2 and 10,000. If you specify array properties for a job, it becomes an array job. For more information, see Array Jobs in the Batch User Guide.


    • depends_on(JobDependency) / set_depends_on(Option<Vec::<JobDependency>>):
      required: false

      A list of dependencies for the job. A job can depend upon a maximum of 20 jobs. You can specify a SEQUENTIAL type dependency without specifying a job ID for array jobs so that each child array job completes sequentially, starting at index 0. You can also specify an N_TO_N type dependency with a job ID for array jobs. In that case, each index child of this job must wait for the corresponding index child of each dependency to complete before it can begin.


    • job_definition(impl Into<String>) / set_job_definition(Option<String>):
      required: true

      The job definition used by this job. This value can be one of definition-name, definition-name:revision, or the Amazon Resource Name (ARN) for the job definition, with or without the revision (arn:aws:batch:region:account:job-definition/definition-name:revision, or arn:aws:batch:region:account:job-definition/definition-name).

      If the revision is not specified, then the latest active revision is used.


    • parameters(impl Into<String>, impl Into<String>) / set_parameters(Option<HashMap::<String, String>>):
      required: false

      Additional parameters passed to the job that replace parameter substitution placeholders that are set in the job definition. Parameters are specified as a key and value pair mapping. Parameters in a SubmitJob request override any corresponding parameter defaults from the job definition.


    • container_overrides(ContainerOverrides) / set_container_overrides(Option<ContainerOverrides>):
      required: false

      An object with various properties that override the defaults for the job definition that specify the name of a container in the specified job definition and the overrides it should receive. You can override the default command for a container, which is specified in the job definition or the Docker image, with a command override. You can also override existing environment variables on a container or add new environment variables to it with an environment override.


    • node_overrides(NodeOverrides) / set_node_overrides(Option<NodeOverrides>):
      required: false

      A list of node overrides in JSON format that specify the node range to target and the container overrides for that node range.

      This parameter isn’t applicable to jobs that are running on Fargate resources; use containerOverrides instead.


    • retry_strategy(RetryStrategy) / set_retry_strategy(Option<RetryStrategy>):
      required: false

      The retry strategy to use for failed jobs from this SubmitJob operation. When a retry strategy is specified here, it overrides the retry strategy defined in the job definition.


    • propagate_tags(bool) / set_propagate_tags(Option<bool>):
      required: false

      Specifies whether to propagate the tags from the job or job definition to the corresponding Amazon ECS task. If no value is specified, the tags aren’t propagated. Tags can only be propagated to the tasks during task creation. For tags with the same name, job tags are given priority over job definitions tags. If the total number of combined tags from the job and job definition is over 50, the job is moved to the FAILED state. When specified, this overrides the tag propagation setting in the job definition.


    • timeout(JobTimeout) / set_timeout(Option<JobTimeout>):
      required: false

      The timeout configuration for this SubmitJob operation. You can specify a timeout duration after which Batch terminates your jobs if they haven’t finished. If a job is terminated due to a timeout, it isn’t retried. The minimum value for the timeout is 60 seconds. This configuration overrides any timeout configuration specified in the job definition. For array jobs, child jobs have the same timeout configuration as the parent job. For more information, see Job Timeouts in the Amazon Elastic Container Service Developer Guide.


    • tags(impl Into<String>, impl Into<String>) / set_tags(Option<HashMap::<String, String>>):
      required: false

      The tags that you apply to the job request to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging Amazon Web Services Resources in Amazon Web Services General Reference.


    • eks_properties_override(EksPropertiesOverride) / set_eks_properties_override(Option<EksPropertiesOverride>):
      required: false

      An object with properties that override the corresponding defaults in the job definition. It can only be specified for jobs that run on Amazon EKS resources.


  • On success, responds with SubmitJobOutput
  • On failure, responds with SdkError<SubmitJobError>
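
The builder options above can be combined in one fluent call. The sketch below is illustrative only: it assumes a configured `client`, and the job name, queue, and job definition ("my-job", "my-queue", "my-job-def:1") are hypothetical placeholders that must exist in your account.

```rust
use aws_sdk_batch::types::{JobTimeout, RetryStrategy};

async fn submit_example(client: &aws_sdk_batch::Client) -> Result<(), aws_sdk_batch::Error> {
    let resp = client
        .submit_job()
        .job_name("my-job")
        .job_queue("my-queue")
        .job_definition("my-job-def:1")
        // Override the job definition's retry strategy for this submission only.
        .retry_strategy(RetryStrategy::builder().attempts(3).build())
        // Terminate the job if it runs longer than 10 minutes (minimum is 60 seconds).
        .timeout(JobTimeout::builder().attempt_duration_seconds(600).build())
        // Tags categorize the job; they aren't propagated to ECS tasks unless
        // propagate_tags(true) is also set.
        .tags("team", "data-eng")
        .send()
        .await?;
    println!("submitted job: {:?}", resp.job_id());
    Ok(())
}
```

Note that `retry_strategy` and `timeout` here override whatever the job definition specifies, as described above.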
impl Client

pub fn tag_resource(&self) -> TagResourceFluentBuilder

Constructs a fluent builder for the TagResource operation.
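
A minimal sketch of the fluent builder, assuming a configured `client`; the job queue ARN and tag values are hypothetical:

```rust
async fn tag_example(client: &aws_sdk_batch::Client) -> Result<(), aws_sdk_batch::Error> {
    client
        .tag_resource()
        .resource_arn("arn:aws:batch:us-west-2:123456789012:job-queue/my-queue")
        // `tags` can be called repeatedly to add key/value pairs.
        .tags("env", "prod")
        .send()
        .await?;
    Ok(())
}
```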

impl Client

pub fn terminate_job(&self) -> TerminateJobFluentBuilder

Constructs a fluent builder for the TerminateJob operation.
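
A minimal sketch, assuming a configured `client` and a hypothetical job ID; the reason string is recorded in the job's status history:

```rust
async fn terminate_example(client: &aws_sdk_batch::Client) -> Result<(), aws_sdk_batch::Error> {
    client
        .terminate_job()
        .job_id("example-job-id")
        // A short explanation stored with the job's AWS Batch activity.
        .reason("Cancelled by operator")
        .send()
        .await?;
    Ok(())
}
```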

impl Client

pub fn untag_resource(&self) -> UntagResourceFluentBuilder

Constructs a fluent builder for the UntagResource operation.

impl Client

pub fn update_compute_environment(&self) -> UpdateComputeEnvironmentFluentBuilder

Constructs a fluent builder for the UpdateComputeEnvironment operation.

  • The fluent builder is configurable:
    • compute_environment(impl Into<String>) / set_compute_environment(Option<String>):
      required: true

      The name or full Amazon Resource Name (ARN) of the compute environment to update.


    • state(CeState) / set_state(Option<CeState>):
      required: false

      The state of the compute environment. Compute environments in the ENABLED state can accept jobs from a queue and scale in or out automatically based on the workload demand of its associated queues.

      If the state is ENABLED, then the Batch scheduler can attempt to place jobs from an associated job queue on the compute resources within the environment. If the compute environment is managed, then it can scale its instances out or in automatically, based on the job queue demand.

      If the state is DISABLED, then the Batch scheduler doesn’t attempt to place jobs within the environment. Jobs in a STARTING or RUNNING state continue to progress normally. Managed compute environments in the DISABLED state don’t scale out.

      Compute environments in a DISABLED state may continue to incur billing charges. To prevent additional charges, turn off and then delete the compute environment. For more information, see State in the Batch User Guide.

      When an instance is idle, the instance scales down to the minvCpus value. However, the instance size doesn’t change. For example, consider a c5.8xlarge instance with a minvCpus value of 4 and a desiredvCpus value of 36. This instance doesn’t scale down to a c5.large instance.


    • unmanagedv_cpus(i32) / set_unmanagedv_cpus(Option<i32>):
      required: false

      The maximum number of vCPUs expected to be used for an unmanaged compute environment. Don’t specify this parameter for a managed compute environment. This parameter is only used for fair share scheduling to reserve vCPU capacity for new share identifiers. If this parameter isn’t provided for a fair share job queue, no vCPU capacity is reserved.


    • compute_resources(ComputeResourceUpdate) / set_compute_resources(Option<ComputeResourceUpdate>):
      required: false

      Details of the compute resources managed by the compute environment. Required for a managed compute environment. For more information, see Compute Environments in the Batch User Guide.


    • service_role(impl Into<String>) / set_service_role(Option<String>):
      required: false

      The full Amazon Resource Name (ARN) of the IAM role that allows Batch to make calls to other Amazon Web Services services on your behalf. For more information, see Batch service IAM role in the Batch User Guide.

      If the compute environment has a service-linked role, it can’t be changed to use a regular IAM role. Likewise, if the compute environment has a regular IAM role, it can’t be changed to use a service-linked role. To update the parameters for the compute environment that require an infrastructure update to change, the AWSServiceRoleForBatch service-linked role must be used. For more information, see Updating compute environments in the Batch User Guide.

      If your specified role has a path other than /, then you must either specify the full role ARN (recommended) or prefix the role name with the path.

      Depending on how you created your Batch service role, its ARN might contain the service-role path prefix. When you only specify the name of the service role, Batch assumes that your ARN doesn’t use the service-role path prefix. Because of this, we recommend that you specify the full ARN of your service role when you create compute environments.


    • update_policy(UpdatePolicy) / set_update_policy(Option<UpdatePolicy>):
      required: false

      Specifies the updated infrastructure update policy for the compute environment. For more information about infrastructure updates, see Updating compute environments in the Batch User Guide.


  • On success, responds with UpdateComputeEnvironmentOutput
  • On failure, responds with SdkError<UpdateComputeEnvironmentError>
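
As a sketch of the options above (the compute environment name is a hypothetical placeholder), this disables an environment and caps its managed capacity so it scales in:

```rust
use aws_sdk_batch::types::{CeState, ComputeResourceUpdate};

async fn update_ce_example(client: &aws_sdk_batch::Client) -> Result<(), aws_sdk_batch::Error> {
    client
        .update_compute_environment()
        .compute_environment("my-compute-env")
        // DISABLED: the scheduler stops placing new jobs; STARTING/RUNNING jobs continue.
        .state(CeState::Disabled)
        // Cap the managed environment at zero vCPUs so no new instances launch.
        .compute_resources(ComputeResourceUpdate::builder().maxv_cpus(0).build())
        .send()
        .await?;
    Ok(())
}
```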
impl Client

pub fn update_job_queue(&self) -> UpdateJobQueueFluentBuilder

Constructs a fluent builder for the UpdateJobQueue operation.

  • The fluent builder is configurable:
    • job_queue(impl Into<String>) / set_job_queue(Option<String>):
      required: true

      The name or the Amazon Resource Name (ARN) of the job queue.


    • state(JqState) / set_state(Option<JqState>):
      required: false

      Describes the queue’s ability to accept new jobs. If the job queue state is ENABLED, it can accept jobs. If the job queue state is DISABLED, new jobs can’t be added to the queue, but jobs already in the queue can finish.


    • scheduling_policy_arn(impl Into<String>) / set_scheduling_policy_arn(Option<String>):
      required: false

      Amazon Resource Name (ARN) of the fair share scheduling policy. Once a job queue is created, the fair share scheduling policy can be replaced but not removed. The format is arn:Partition:batch:Region:Account:scheduling-policy/Name. For example, arn:aws:batch:us-west-2:123456789012:scheduling-policy/MySchedulingPolicy.


    • priority(i32) / set_priority(Option<i32>):
      required: false

      The priority of the job queue. Job queues with a higher priority (or a higher integer value for the priority parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order. For example, a job queue with a priority value of 10 is given scheduling preference over a job queue with a priority value of 1. All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT). EC2 and Fargate compute environments can’t be mixed.


    • compute_environment_order(ComputeEnvironmentOrder) / set_compute_environment_order(Option<Vec::<ComputeEnvironmentOrder>>):
      required: false

      Details the set of compute environments mapped to a job queue and their order relative to each other. This is one of the parameters used by the job scheduler to determine which compute environment runs a given job. Compute environments must be in the VALID state before you can associate them with a job queue. All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT). EC2 and Fargate compute environments can’t be mixed.

      All compute environments that are associated with a job queue must share the same architecture. Batch doesn’t support mixing compute environment architecture types in a single job queue.


  • On success, responds with UpdateJobQueueOutput
  • On failure, responds with SdkError<UpdateJobQueueError>
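
A hedged sketch of the builder above, assuming a configured `client` and a hypothetical queue name:

```rust
use aws_sdk_batch::types::JqState;

async fn update_queue_example(client: &aws_sdk_batch::Client) -> Result<(), aws_sdk_batch::Error> {
    client
        .update_job_queue()
        .job_queue("my-queue")
        // ENABLED: the queue accepts new jobs again.
        .state(JqState::Enabled)
        // Higher integer = scheduled ahead of lower-priority queues on the same
        // compute environments.
        .priority(10)
        .send()
        .await?;
    Ok(())
}
```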
impl Client

pub fn update_scheduling_policy(&self) -> UpdateSchedulingPolicyFluentBuilder

Constructs a fluent builder for the UpdateSchedulingPolicy operation.
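
A sketch under stated assumptions: the policy ARN is a hypothetical placeholder, and the fair share settings shown (decay window, share weight) are illustrative values, not recommendations:

```rust
use aws_sdk_batch::types::{FairsharePolicy, ShareAttributes};

async fn update_policy_example(client: &aws_sdk_batch::Client) -> Result<(), aws_sdk_batch::Error> {
    client
        .update_scheduling_policy()
        .arn("arn:aws:batch:us-west-2:123456789012:scheduling-policy/MySchedulingPolicy")
        .fairshare_policy(
            FairsharePolicy::builder()
                // Consider the last hour of usage when computing fair shares.
                .share_decay_seconds(3600)
                .share_distribution(
                    ShareAttributes::builder()
                        .share_identifier("high")
                        .weight_factor(0.5)
                        .build(),
                )
                .build(),
        )
        .send()
        .await?;
    Ok(())
}
```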

impl Client

pub fn from_conf(conf: Config) -> Self

Creates a new client from the service Config.

§Panics

This method will panic in the following cases:

  • Retries or timeouts are enabled without a sleep_impl configured.
  • Identity caching is enabled without a sleep_impl and time_source configured.
  • No behavior_version is provided.

The panic message for each of these will have instructions on how to resolve them.
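
The panics above can be avoided by setting the relevant options on the Config explicitly. A minimal sketch, assuming region and credentials are resolved by aws-config from the environment:

```rust
async fn client_from_conf() -> aws_sdk_batch::Client {
    let sdk_config = aws_config::load_from_env().await;
    let conf = aws_sdk_batch::config::Builder::from(&sdk_config)
        // Pin the behavior major version explicitly instead of relying on the
        // behavior-version-latest Cargo feature.
        .behavior_version(aws_sdk_batch::config::BehaviorVersion::latest())
        .build();
    aws_sdk_batch::Client::from_conf(conf)
}
```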

pub fn config(&self) -> &Config

Returns the client’s configuration.

impl Client

pub fn new(sdk_config: &SdkConfig) -> Self

Creates a new client from an SDK Config.

§Panics
  • This method will panic if the sdk_config is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
  • This method will panic if the sdk_config is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.
  • This method will panic if no BehaviorVersion is provided. If you experience this panic, set behavior_version on the Config or enable the behavior-version-latest Cargo feature.

§Trait Implementations

impl Clone for Client

fn clone(&self) -> Client

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for Client

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

§Auto Trait Implementations

impl !RefUnwindSafe for Client

impl Send for Client

impl Sync for Client

impl Unpin for Client

impl !UnwindSafe for Client

§Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<Unshared, Shared> IntoShared<Shared> for Unshared
where Shared: FromUnshared<Unshared>,

fn into_shared(self) -> Shared

Creates a shared type from an unshared type.

impl<T> Same for T

type Output = T

Should always be Self.

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.