Struct aws_sdk_batch::Client

pub struct Client { /* private fields */ }

Client for AWS Batch

Client for invoking operations on AWS Batch. Each operation on AWS Batch is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.

Constructing a Client

A Config is required to construct a client. For most use cases, the aws-config crate should be used to automatically resolve this config using aws_config::load_from_env(), since this will resolve an SdkConfig which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling aws_config::from_env() instead, which returns a ConfigLoader that uses the builder pattern to customize the default config.

In the simplest case, creating a client looks as follows:

let config = aws_config::load_from_env().await;
let client = aws_sdk_batch::Client::new(&config);

Occasionally, SDKs may have additional service-specific settings that can be set on the Config that are absent from SdkConfig, or slightly different settings for a specific client may be desired. The Config struct implements From<&SdkConfig>, so setting these specific settings can be done as follows:

let sdk_config = aws_config::load_from_env().await;
let config = aws_sdk_batch::config::Builder::from(&sdk_config)
    .some_service_specific_setting("value")
    .build();

See the aws-config docs and Config for more information on customizing configuration.

Note: Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
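
Because Client implements Clone (see the trait implementations below) and a clone shares the already-initialized internals, one way to follow this advice is to build the client once and hand clones to the tasks that need it. A minimal sketch, assuming a Tokio runtime:

let config = aws_config::load_from_env().await;
let client = aws_sdk_batch::Client::new(&config);

// Clones are cheap and reuse the same connection pool.
let task_client = client.clone();
let handle = tokio::spawn(async move {
    task_client.describe_job_queues().send().await
});
let _ = handle.await;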

Using the Client

A client has a function for every operation that can be performed by the service. For example, the CancelJob operation has a Client::cancel_job function, which returns a builder for that operation. The fluent builder ultimately has a send() function that returns an async future that resolves to a result, as illustrated below:

let result = client.cancel_job()
    .job_id("example")
    .send()
    .await;
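
Because the future resolves to a Result, ordinary Rust error handling applies; the error type for each operation is the SdkError listed under that operation below. A minimal sketch:

match client.cancel_job().job_id("example").send().await {
    Ok(_output) => println!("cancel request accepted"),
    // SdkError covers connection failures, timeouts, and service errors alike.
    Err(err) => eprintln!("CancelJob failed: {err}"),
}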

The underlying HTTP requests that get made by this client can be modified with the customize_operation function on the fluent builder. See the customize module for more information.

Implementations§

source§

impl Client

source

pub fn cancel_job(&self) -> CancelJobFluentBuilder

Constructs a fluent builder for the CancelJob operation.

source§

impl Client

source

pub fn create_compute_environment(&self) -> CreateComputeEnvironmentFluentBuilder

Constructs a fluent builder for the CreateComputeEnvironment operation.

  • The fluent builder is configurable:
    • compute_environment_name(impl Into<String>) / set_compute_environment_name(Option<String>):

      The name for your compute environment. It can be up to 128 characters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).

    • r#type(CeType) / set_type(Option<CeType>):

      The type of the compute environment: MANAGED or UNMANAGED. For more information, see Compute Environments in the Batch User Guide.

    • state(CeState) / set_state(Option<CeState>):

      The state of the compute environment. If the state is ENABLED, then the compute environment accepts jobs from a queue and can scale out automatically based on queues.

      If the state is ENABLED, then the Batch scheduler can attempt to place jobs from an associated job queue on the compute resources within the environment. If the compute environment is managed, then it can scale its instances out or in automatically, based on the job queue demand.

      If the state is DISABLED, then the Batch scheduler doesn’t attempt to place jobs within the environment. Jobs in a STARTING or RUNNING state continue to progress normally. Managed compute environments in the DISABLED state don’t scale out.

      Compute environments in a DISABLED state may continue to incur billing charges. To prevent additional charges, turn off and then delete the compute environment. For more information, see State in the Batch User Guide.

      When an instance is idle, the instance scales down to the minvCpus value. However, the instance size doesn’t change. For example, consider a c5.8xlarge instance with a minvCpus value of 4 and a desiredvCpus value of 36. This instance doesn’t scale down to a c5.large instance.

    • unmanagedv_cpus(i32) / set_unmanagedv_cpus(Option<i32>):

      The maximum number of vCPUs for an unmanaged compute environment. This parameter is only used for fair share scheduling to reserve vCPU capacity for new share identifiers. If this parameter isn’t provided for a fair share job queue, no vCPU capacity is reserved.

      This parameter is only supported when the type parameter is set to UNMANAGED.

    • compute_resources(ComputeResource) / set_compute_resources(Option<ComputeResource>):

      Details about the compute resources managed by the compute environment. This parameter is required for managed compute environments. For more information, see Compute Environments in the Batch User Guide.

    • service_role(impl Into<String>) / set_service_role(Option<String>):

      The full Amazon Resource Name (ARN) of the IAM role that allows Batch to make calls to other Amazon Web Services services on your behalf. For more information, see Batch service IAM role in the Batch User Guide.

      If your account already created the Batch service-linked role, that role is used by default for your compute environment unless you specify a different role here. If the Batch service-linked role doesn’t exist in your account, and no role is specified here, the service attempts to create the Batch service-linked role in your account.

      If your specified role has a path other than /, then you must specify either the full role ARN (recommended) or prefix the role name with the path. For example, if a role with the name bar has a path of /foo/, specify /foo/bar as the role name. For more information, see Friendly names and paths in the IAM User Guide.

      Depending on how you created your Batch service role, its ARN might contain the service-role path prefix. When you only specify the name of the service role, Batch assumes that your ARN doesn’t use the service-role path prefix. Because of this, we recommend that you specify the full ARN of your service role when you create compute environments.

    • tags(HashMap<String, String>) / set_tags(Option<HashMap<String, String>>):

      The tags that you apply to the compute environment to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging Amazon Web Services Resources in Amazon Web Services General Reference.

      These tags can be updated or removed using the TagResource and UntagResource API operations. These tags don’t propagate to the underlying compute resources.

    • eks_configuration(EksConfiguration) / set_eks_configuration(Option<EksConfiguration>):

      The details for the Amazon EKS cluster that supports the compute environment.

  • On success, responds with CreateComputeEnvironmentOutput with field(s):
  • On failure, responds with SdkError<CreateComputeEnvironmentError>
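
Putting the parameters above together, here is a hedged sketch that creates an UNMANAGED compute environment. The CeType and CeState enums are assumed to live under aws_sdk_batch::types (older releases expose them under aws_sdk_batch::model), and the service role ARN is a placeholder:

use std::collections::HashMap;
use aws_sdk_batch::types::{CeState, CeType};

let response = client
    .create_compute_environment()
    .compute_environment_name("my-unmanaged-ce")
    .r#type(CeType::Unmanaged)
    .state(CeState::Enabled)
    // Reserve vCPU capacity for fair share scheduling on this unmanaged environment.
    .unmanagedv_cpus(16)
    .service_role("arn:aws:iam::123456789012:role/AWSBatchServiceRole") // placeholder ARN
    .set_tags(Some(HashMap::from([
        ("team".to_string(), "data-eng".to_string()),
    ])))
    .send()
    .await;
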
source§

impl Client

source

pub fn create_job_queue(&self) -> CreateJobQueueFluentBuilder

Constructs a fluent builder for the CreateJobQueue operation.

  • The fluent builder is configurable:
    • job_queue_name(impl Into<String>) / set_job_queue_name(Option<String>):

      The name of the job queue. It can be up to 128 characters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).

    • state(JqState) / set_state(Option<JqState>):

      The state of the job queue. If the job queue state is ENABLED, it is able to accept jobs. If the job queue state is DISABLED, new jobs can’t be added to the queue, but jobs already in the queue can finish.

    • scheduling_policy_arn(impl Into<String>) / set_scheduling_policy_arn(Option<String>):

      The Amazon Resource Name (ARN) of the fair share scheduling policy. If this parameter is specified, the job queue uses a fair share scheduling policy. If this parameter isn’t specified, the job queue uses a first in, first out (FIFO) scheduling policy. After a job queue is created, you can replace but can’t remove the fair share scheduling policy. The format is aws:Partition:batch:Region:Account:scheduling-policy/Name. An example is aws:aws:batch:us-west-2:123456789012:scheduling-policy/MySchedulingPolicy.

    • priority(i32) / set_priority(Option<i32>):

      The priority of the job queue. Job queues with a higher priority (or a higher integer value for the priority parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order. For example, a job queue with a priority value of 10 is given scheduling preference over a job queue with a priority value of 1. All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT); EC2 and Fargate compute environments can’t be mixed.

    • compute_environment_order(Vec<ComputeEnvironmentOrder>) / set_compute_environment_order(Option<Vec<ComputeEnvironmentOrder>>):

      The set of compute environments mapped to a job queue and their order relative to each other. The job scheduler uses this parameter to determine which compute environment runs a specific job. Compute environments must be in the VALID state before you can associate them with a job queue. You can associate up to three compute environments with a job queue. All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT); EC2 and Fargate compute environments can’t be mixed.

      All compute environments that are associated with a job queue must share the same architecture. Batch doesn’t support mixing compute environment architecture types in a single job queue.

    • tags(HashMap<String, String>) / set_tags(Option<HashMap<String, String>>):

      The tags that you apply to the job queue to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging your Batch resources in Batch User Guide.

  • On success, responds with CreateJobQueueOutput with field(s):
  • On failure, responds with SdkError<CreateJobQueueError>
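
Tying the parameters above together, a hedged sketch that creates a FIFO job queue backed by a single compute environment. The ComputeEnvironmentOrder builder and its order/compute_environment setters mirror the Batch API and are assumptions here, as is the aws_sdk_batch::types path (aws_sdk_batch::model on older releases, where build() returns the struct directly rather than a Result); the compute environment ARN is a placeholder:

use aws_sdk_batch::types::{ComputeEnvironmentOrder, JqState};

// Map the queue to one compute environment, tried first (order 1).
let order = ComputeEnvironmentOrder::builder()
    .order(1)
    .compute_environment("arn:aws:batch:us-west-2:123456789012:compute-environment/my-ce") // placeholder ARN
    .build();

let response = client
    .create_job_queue()
    .job_queue_name("my-queue")
    .state(JqState::Enabled)
    .priority(10)
    .set_compute_environment_order(Some(vec![order]))
    .send()
    .await;
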
source§

impl Client

source

pub fn create_scheduling_policy(&self) -> CreateSchedulingPolicyFluentBuilder

Constructs a fluent builder for the CreateSchedulingPolicy operation.

source§

impl Client

source

pub fn delete_compute_environment(&self) -> DeleteComputeEnvironmentFluentBuilder

Constructs a fluent builder for the DeleteComputeEnvironment operation.

source§

impl Client

source

pub fn delete_job_queue(&self) -> DeleteJobQueueFluentBuilder

Constructs a fluent builder for the DeleteJobQueue operation.

source§

impl Client

source

pub fn delete_scheduling_policy(&self) -> DeleteSchedulingPolicyFluentBuilder

Constructs a fluent builder for the DeleteSchedulingPolicy operation.

source§

impl Client

source

pub fn deregister_job_definition(&self) -> DeregisterJobDefinitionFluentBuilder

Constructs a fluent builder for the DeregisterJobDefinition operation.

source§

impl Client

source

pub fn describe_compute_environments(&self) -> DescribeComputeEnvironmentsFluentBuilder

Constructs a fluent builder for the DescribeComputeEnvironments operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
    • compute_environments(Vec<String>) / set_compute_environments(Option<Vec<String>>):

      A list of up to 100 compute environment names or full Amazon Resource Name (ARN) entries.

    • max_results(i32) / set_max_results(Option<i32>):

      The maximum number of compute environment results returned by DescribeComputeEnvironments in paginated output. When this parameter is used, DescribeComputeEnvironments only returns maxResults results in a single page along with a nextToken response element. The remaining results of the initial request can be seen by sending another DescribeComputeEnvironments request with the returned nextToken value. This value can be between 1 and 100. If this parameter isn’t used, then DescribeComputeEnvironments returns up to 100 results and a nextToken value if applicable.

    • next_token(impl Into<String>) / set_next_token(Option<String>):

      The nextToken value returned from a previous paginated DescribeComputeEnvironments request where maxResults was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken value. This value is null when there are no more results to return.

      Treat this token as an opaque identifier that’s only used to retrieve the next items in a list and not for other programmatic purposes.

  • On success, responds with DescribeComputeEnvironmentsOutput with field(s):
  • On failure, responds with SdkError<DescribeComputeEnvironmentsError>
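
Because this operation is paginated, into_paginator() can drive the nextToken handling for you. A hedged sketch; the stream is consumed with tokio_stream::StreamExt here, which matches SDK versions where the paginator implements Stream (newer releases expose next()/try_next() on the pagination stream directly):

use tokio_stream::StreamExt;

let mut pages = client
    .describe_compute_environments()
    .max_results(50)
    .into_paginator()
    .send();

while let Some(page) = pages.next().await {
    match page {
        // Each Ok value is one DescribeComputeEnvironmentsOutput page.
        Ok(output) => println!("page: {:?}", output),
        Err(err) => {
            eprintln!("DescribeComputeEnvironments failed: {err}");
            break;
        }
    }
}
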
source§

impl Client

source

pub fn describe_job_definitions(&self) -> DescribeJobDefinitionsFluentBuilder

Constructs a fluent builder for the DescribeJobDefinitions operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
    • job_definitions(Vec<String>) / set_job_definitions(Option<Vec<String>>):

      A list of up to 100 job definitions. Each entry in the list can either be an ARN in the format arn:aws:batch:${Region}:${Account}:job-definition/${JobDefinitionName}:${Revision} or a short version using the form ${JobDefinitionName}:${Revision}.

    • max_results(i32) / set_max_results(Option<i32>):

      The maximum number of results returned by DescribeJobDefinitions in paginated output. When this parameter is used, DescribeJobDefinitions only returns maxResults results in a single page and a nextToken response element. The remaining results of the initial request can be seen by sending another DescribeJobDefinitions request with the returned nextToken value. This value can be between 1 and 100. If this parameter isn’t used, then DescribeJobDefinitions returns up to 100 results and a nextToken value if applicable.

    • job_definition_name(impl Into<String>) / set_job_definition_name(Option<String>):

      The name of the job definition to describe.

    • status(impl Into<String>) / set_status(Option<String>):

      The status used to filter job definitions.

    • next_token(impl Into<String>) / set_next_token(Option<String>):

      The nextToken value returned from a previous paginated DescribeJobDefinitions request where maxResults was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken value. This value is null when there are no more results to return.

      Treat this token as an opaque identifier that’s only used to retrieve the next items in a list and not for other programmatic purposes.

  • On success, responds with DescribeJobDefinitionsOutput with field(s):
    • job_definitions(Option<Vec<JobDefinition>>):

      The list of job definitions.

    • next_token(Option<String>):

      The nextToken value to include in a future DescribeJobDefinitions request. When the results of a DescribeJobDefinitions request exceed maxResults, this value can be used to retrieve the next page of results. This value is null when there are no more results to return.

  • On failure, responds with SdkError<DescribeJobDefinitionsError>
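
If you would rather drive pagination by hand than use into_paginator(), the next_token output field documented above can be fed back through set_next_token. A sketch, assuming the output accessors job_definitions() and next_token() mirror the fields listed above and return Option values on this SDK version:

let mut next_token: Option<String> = None;
loop {
    let output = match client
        .describe_job_definitions()
        .status("ACTIVE") // only list active revisions
        .max_results(100)
        .set_next_token(next_token.clone())
        .send()
        .await
    {
        Ok(output) => output,
        Err(err) => {
            eprintln!("DescribeJobDefinitions failed: {err}");
            break;
        }
    };

    for job_definition in output.job_definitions().unwrap_or_default() {
        println!("{:?}", job_definition);
    }

    match output.next_token() {
        Some(token) => next_token = Some(token.to_string()),
        None => break,
    }
}
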
source§

impl Client

source

pub fn describe_job_queues(&self) -> DescribeJobQueuesFluentBuilder

Constructs a fluent builder for the DescribeJobQueues operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
    • job_queues(Vec<String>) / set_job_queues(Option<Vec<String>>):

      A list of up to 100 queue names or full queue Amazon Resource Name (ARN) entries.

    • max_results(i32) / set_max_results(Option<i32>):

      The maximum number of results returned by DescribeJobQueues in paginated output. When this parameter is used, DescribeJobQueues only returns maxResults results in a single page and a nextToken response element. The remaining results of the initial request can be seen by sending another DescribeJobQueues request with the returned nextToken value. This value can be between 1 and 100. If this parameter isn’t used, then DescribeJobQueues returns up to 100 results and a nextToken value if applicable.

    • next_token(impl Into<String>) / set_next_token(Option<String>):

      The nextToken value returned from a previous paginated DescribeJobQueues request where maxResults was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken value. This value is null when there are no more results to return.

      Treat this token as an opaque identifier that’s only used to retrieve the next items in a list and not for other programmatic purposes.

  • On success, responds with DescribeJobQueuesOutput with field(s):
    • job_queues(Option<Vec<JobQueueDetail>>):

      The list of job queues.

    • next_token(Option<String>):

      The nextToken value to include in a future DescribeJobQueues request. When the results of a DescribeJobQueues request exceed maxResults, this value can be used to retrieve the next page of results. This value is null when there are no more results to return.

  • On failure, responds with SdkError<DescribeJobQueuesError>
source§

impl Client

source

pub fn describe_jobs(&self) -> DescribeJobsFluentBuilder

Constructs a fluent builder for the DescribeJobs operation.

source§

impl Client

source

pub fn describe_scheduling_policies(&self) -> DescribeSchedulingPoliciesFluentBuilder

Constructs a fluent builder for the DescribeSchedulingPolicies operation.

source§

impl Client

source

pub fn list_jobs(&self) -> ListJobsFluentBuilder

Constructs a fluent builder for the ListJobs operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
    • job_queue(impl Into<String>) / set_job_queue(Option<String>):

      The name or full Amazon Resource Name (ARN) of the job queue used to list jobs.

    • array_job_id(impl Into<String>) / set_array_job_id(Option<String>):

      The job ID for an array job. Specifying an array job ID with this parameter lists all child jobs from within the specified array.

    • multi_node_job_id(impl Into<String>) / set_multi_node_job_id(Option<String>):

      The job ID for a multi-node parallel job. Specifying a multi-node parallel job ID with this parameter lists all nodes that are associated with the specified job.

    • job_status(JobStatus) / set_job_status(Option<JobStatus>):

      The job status used to filter jobs in the specified queue. If the filters parameter is specified, the jobStatus parameter is ignored and jobs with any status are returned. If you don’t specify a status, only RUNNING jobs are returned.

    • max_results(i32) / set_max_results(Option<i32>):

      The maximum number of results returned by ListJobs in paginated output. When this parameter is used, ListJobs only returns maxResults results in a single page and a nextToken response element. The remaining results of the initial request can be seen by sending another ListJobs request with the returned nextToken value. This value can be between 1 and 100. If this parameter isn’t used, then ListJobs returns up to 100 results and a nextToken value if applicable.

    • next_token(impl Into<String>) / set_next_token(Option<String>):

      The nextToken value returned from a previous paginated ListJobs request where maxResults was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken value. This value is null when there are no more results to return.

      Treat this token as an opaque identifier that’s only used to retrieve the next items in a list and not for other programmatic purposes.

    • filters(Vec<KeyValuesPair>) / set_filters(Option<Vec<KeyValuesPair>>):

      The filter to apply to the query. Only one filter can be used at a time. When the filter is used, jobStatus is ignored. The filter doesn’t apply to child jobs in an array or multi-node parallel (MNP) jobs. The results are sorted by the createdAt field, with the most recent jobs being first.

      JOB_NAME

      The value of the filter is a case-insensitive match for the job name. If the value ends with an asterisk (*), the filter matches any job name that begins with the string before the ‘*’. This corresponds to the jobName value. For example, test1 matches both Test1 and test1, and test1* matches both test1 and Test10. When the JOB_NAME filter is used, the results are grouped by the job name and version.

      JOB_DEFINITION

      The value for the filter is the name or Amazon Resource Name (ARN) of the job definition. This corresponds to the jobDefinition value. The value is case sensitive. When the value for the filter is the job definition name, the results include all the jobs that used any revision of that job definition name. If the value ends with an asterisk (*), the filter matches any job definition name that begins with the string before the ‘*’. For example, jd1 matches only jd1, and jd1* matches both jd1 and jd1A. The version of the job definition that’s used doesn’t affect the sort order. When the JOB_DEFINITION filter is used and the ARN is used (which is in the form arn:${Partition}:batch:${Region}:${Account}:job-definition/${JobDefinitionName}:${Revision}), the results include jobs that used the specified revision of the job definition. Asterisk (*) isn’t supported when the ARN is used.

      BEFORE_CREATED_AT

      The value for the filter is the time that’s before the job was created. This corresponds to the createdAt value. The value is a string representation of the number of milliseconds since 00:00:00 UTC (midnight) on January 1, 1970.

      AFTER_CREATED_AT

      The value for the filter is the time that’s after the job was created. This corresponds to the createdAt value. The value is a string representation of the number of milliseconds since 00:00:00 UTC (midnight) on January 1, 1970.

  • On success, responds with ListJobsOutput with field(s):
    • job_summary_list(Option<Vec<JobSummary>>):

      A list of job summaries that match the request.

    • next_token(Option<String>):

      The nextToken value to include in a future ListJobs request. When the results of a ListJobs request exceed maxResults, this value can be used to retrieve the next page of results. This value is null when there are no more results to return.

  • On failure, responds with SdkError<ListJobsError>
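
As an example of the JOB_NAME filter described above, a hedged sketch that lists jobs in a queue whose names start with “nightly-”. The KeyValuesPair builder and its name/values setters mirror the Batch API and are assumptions here, as is the aws_sdk_batch::types path (aws_sdk_batch::model on older releases):

use aws_sdk_batch::types::KeyValuesPair;

let filter = KeyValuesPair::builder()
    .name("JOB_NAME")
    .values("nightly-*") // the trailing '*' makes this a prefix match
    .build();

let result = client
    .list_jobs()
    .job_queue("my-queue")
    .set_filters(Some(vec![filter]))
    .send()
    .await;
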
source§

impl Client

source

pub fn list_scheduling_policies(&self) -> ListSchedulingPoliciesFluentBuilder

Constructs a fluent builder for the ListSchedulingPolicies operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
    • max_results(i32) / set_max_results(Option<i32>):

      The maximum number of results that’s returned by ListSchedulingPolicies in paginated output. When this parameter is used, ListSchedulingPolicies only returns maxResults results in a single page and a nextToken response element. You can see the remaining results of the initial request by sending another ListSchedulingPolicies request with the returned nextToken value. This value can be between 1 and 100. If this parameter isn’t used, ListSchedulingPolicies returns up to 100 results and a nextToken value if applicable.

    • next_token(impl Into<String>) / set_next_token(Option<String>):

      The nextToken value that’s returned from a previous paginated ListSchedulingPolicies request where maxResults was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken value. This value is null when there are no more results to return.

      Treat this token as an opaque identifier that’s only used to retrieve the next items in a list and not for other programmatic purposes.

  • On success, responds with ListSchedulingPoliciesOutput with field(s):
  • On failure, responds with SdkError<ListSchedulingPoliciesError>
source§

impl Client

source

pub fn list_tags_for_resource(&self) -> ListTagsForResourceFluentBuilder

Constructs a fluent builder for the ListTagsForResource operation.

source§

impl Client

source

pub fn register_job_definition(&self) -> RegisterJobDefinitionFluentBuilder

Constructs a fluent builder for the RegisterJobDefinition operation.

source§

impl Client

source

pub fn submit_job(&self) -> SubmitJobFluentBuilder

Constructs a fluent builder for the SubmitJob operation.
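
The SubmitJob parameters aren’t expanded on this page; a typical call mirrors the Batch API’s jobName, jobQueue, and jobDefinition parameters, so treat the setter names below as assumptions based on that API:

let result = client
    .submit_job()
    .job_name("nightly-report")
    .job_queue("my-queue")
    .job_definition("my-job-definition:1") // name:revision, or a full ARN
    .send()
    .await;
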

source§

impl Client

source

pub fn tag_resource(&self) -> TagResourceFluentBuilder

Constructs a fluent builder for the TagResource operation.

source§

impl Client

source

pub fn terminate_job(&self) -> TerminateJobFluentBuilder

Constructs a fluent builder for the TerminateJob operation.

source§

impl Client

source

pub fn untag_resource(&self) -> UntagResourceFluentBuilder

Constructs a fluent builder for the UntagResource operation.

source§

impl Client

source

pub fn update_compute_environment(&self) -> UpdateComputeEnvironmentFluentBuilder

Constructs a fluent builder for the UpdateComputeEnvironment operation.

  • The fluent builder is configurable:
    • compute_environment(impl Into<String>) / set_compute_environment(Option<String>):

      The name or full Amazon Resource Name (ARN) of the compute environment to update.

    • state(CeState) / set_state(Option<CeState>):

      The state of the compute environment. Compute environments in the ENABLED state can accept jobs from a queue and scale in or out automatically based on the workload demand of its associated queues.

      If the state is ENABLED, then the Batch scheduler can attempt to place jobs from an associated job queue on the compute resources within the environment. If the compute environment is managed, then it can scale its instances out or in automatically, based on the job queue demand.

      If the state is DISABLED, then the Batch scheduler doesn’t attempt to place jobs within the environment. Jobs in a STARTING or RUNNING state continue to progress normally. Managed compute environments in the DISABLED state don’t scale out.

      Compute environments in a DISABLED state may continue to incur billing charges. To prevent additional charges, turn off and then delete the compute environment. For more information, see State in the Batch User Guide.

      When an instance is idle, the instance scales down to the minvCpus value. However, the instance size doesn’t change. For example, consider a c5.8xlarge instance with a minvCpus value of 4 and a desiredvCpus value of 36. This instance doesn’t scale down to a c5.large instance.

    • unmanagedv_cpus(i32) / set_unmanagedv_cpus(Option<i32>):

      The maximum number of vCPUs expected to be used for an unmanaged compute environment. Don’t specify this parameter for a managed compute environment. This parameter is only used for fair share scheduling to reserve vCPU capacity for new share identifiers. If this parameter isn’t provided for a fair share job queue, no vCPU capacity is reserved.

    • compute_resources(ComputeResourceUpdate) / set_compute_resources(Option<ComputeResourceUpdate>):

      Details of the compute resources managed by the compute environment. Required for a managed compute environment. For more information, see Compute Environments in the Batch User Guide.

    • service_role(impl Into<String>) / set_service_role(Option<String>):

      The full Amazon Resource Name (ARN) of the IAM role that allows Batch to make calls to other Amazon Web Services services on your behalf. For more information, see Batch service IAM role in the Batch User Guide.

      If the compute environment has a service-linked role, it can’t be changed to use a regular IAM role. Likewise, if the compute environment has a regular IAM role, it can’t be changed to use a service-linked role. To update the parameters for the compute environment that require an infrastructure update to change, the AWSServiceRoleForBatch service-linked role must be used. For more information, see Updating compute environments in the Batch User Guide.

      If your specified role has a path other than /, then you must either specify the full role ARN (recommended) or prefix the role name with the path.

      Depending on how you created your Batch service role, its ARN might contain the service-role path prefix. When you only specify the name of the service role, Batch assumes that your ARN doesn’t use the service-role path prefix. Because of this, we recommend that you specify the full ARN of your service role when you create compute environments.

    • update_policy(UpdatePolicy) / set_update_policy(Option<UpdatePolicy>):

      Specifies the updated infrastructure update policy for the compute environment. For more information about infrastructure updates, see Updating compute environments in the Batch User Guide.

  • On success, responds with UpdateComputeEnvironmentOutput with field(s):
  • On failure, responds with SdkError<UpdateComputeEnvironmentError>
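
For example, to stop the scheduler from placing new jobs in a compute environment before deleting it (as the state documentation above suggests), a sketch that sets the state to DISABLED; the CeState path is assumed as in the earlier examples:

use aws_sdk_batch::types::CeState;

let result = client
    .update_compute_environment()
    .compute_environment("my-unmanaged-ce") // name or full ARN
    .state(CeState::Disabled)
    .send()
    .await;
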
source§

impl Client

source

pub fn update_job_queue(&self) -> UpdateJobQueueFluentBuilder

Constructs a fluent builder for the UpdateJobQueue operation.

  • The fluent builder is configurable:
    • job_queue(impl Into<String>) / set_job_queue(Option<String>):

      The name or the Amazon Resource Name (ARN) of the job queue.

    • state(JqState) / set_state(Option<JqState>):

      Describes the queue’s ability to accept new jobs. If the job queue state is ENABLED, it can accept jobs. If the job queue state is DISABLED, new jobs can’t be added to the queue, but jobs already in the queue can finish.

    • scheduling_policy_arn(impl Into<String>) / set_scheduling_policy_arn(Option<String>):

      Amazon Resource Name (ARN) of the fair share scheduling policy. Once a job queue is created, the fair share scheduling policy can be replaced but not removed. The format is aws:Partition:batch:Region:Account:scheduling-policy/Name. For example, aws:aws:batch:us-west-2:123456789012:scheduling-policy/MySchedulingPolicy.

    • priority(i32) / set_priority(Option<i32>):

      The priority of the job queue. Job queues with a higher priority (or a higher integer value for the priority parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order. For example, a job queue with a priority value of 10 is given scheduling preference over a job queue with a priority value of 1. All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT). EC2 and Fargate compute environments can’t be mixed.

    • compute_environment_order(Vec<ComputeEnvironmentOrder>) / set_compute_environment_order(Option<Vec<ComputeEnvironmentOrder>>):

      Details the set of compute environments mapped to a job queue and their order relative to each other. This is one of the parameters used by the job scheduler to determine which compute environment runs a given job. Compute environments must be in the VALID state before you can associate them with a job queue. All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT). EC2 and Fargate compute environments can’t be mixed.

      All compute environments that are associated with a job queue must share the same architecture. Batch doesn’t support mixing compute environment architecture types in a single job queue.

  • On success, responds with UpdateJobQueueOutput with field(s):
  • On failure, responds with SdkError<UpdateJobQueueError>
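
Similarly, a sketch that lowers a queue’s scheduling priority while leaving it enabled; the JqState path is assumed as in the earlier examples:

use aws_sdk_batch::types::JqState;

let result = client
    .update_job_queue()
    .job_queue("my-queue") // name or full ARN
    .state(JqState::Enabled)
    .priority(1)
    .send()
    .await;
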
source§

impl Client

source

pub fn update_scheduling_policy(&self) -> UpdateSchedulingPolicyFluentBuilder

Constructs a fluent builder for the UpdateSchedulingPolicy operation.

source§

impl Client

source

pub fn with_config(client: Client<DynConnector, DynMiddleware<DynConnector>>, conf: Config) -> Self

Creates a client with the given service configuration.

source

pub fn conf(&self) -> &Config

Returns the client’s configuration.

source§

impl Client

source

pub fn new(sdk_config: &SdkConfig) -> Self

Creates a new client from an SDK Config.

Panics
  • This method will panic if the sdk_config is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
  • This method will panic if the sdk_config is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.
source

pub fn from_conf(conf: Config) -> Self

Creates a new client from the service Config.

Panics
  • This method will panic if the conf is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
  • This method will panic if the conf is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.

Trait Implementations§

source§

impl Clone for Client

source§

fn clone(&self) -> Self

Returns a copy of the value. Read more
1.0.0 · source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
source§

impl Debug for Client

source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
source§

impl From<Client<DynConnector, DynMiddleware<DynConnector>, Standard>> for Client

source§

fn from(client: Client<DynConnector, DynMiddleware<DynConnector>>) -> Self

Converts to this type from the input type.

Auto Trait Implementations§

§

impl !RefUnwindSafe for Client

§

impl Send for Client

§

impl Sync for Client

§

impl Unpin for Client

§

impl !UnwindSafe for Client

Blanket Implementations§

source§

impl<T> Any for T where T: 'static + ?Sized

source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
source§

impl<T> Borrow<T> for T where T: ?Sized

const: unstable · source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
source§

impl<T> BorrowMut<T> for T where T: ?Sized

const: unstable · source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
source§

impl<T> From<T> for T

const: unstable · source§

fn from(t: T) -> T

Returns the argument unchanged.

source§

impl<T> Instrument for T

source§

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more
source§

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper. Read more
source§

impl<T, U> Into<U> for T where U: From<T>

const: unstable · source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

source§

impl<T> Same<T> for T

§

type Output = T

Should always be Self
source§

impl<T> ToOwned for T where T: Clone

§

type Owned = T

The resulting type after obtaining ownership.
source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
source§

impl<T, U> TryFrom<U> for T where U: Into<T>

§

type Error = Infallible

The type returned in the event of a conversion error.
const: unstable · source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
source§

impl<T, U> TryInto<U> for T where U: TryFrom<T>

§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
const: unstable · source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
source§

impl<T> WithSubscriber for T

source§

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
source§

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more