pub struct Client { /* private fields */ }
Client for Amazon Bedrock
Client for invoking operations on Amazon Bedrock. Each operation on Amazon Bedrock is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.
§Constructing a Client
A Config
is required to construct a client. For most use cases, the aws-config
crate should be used to automatically resolve this config using
aws_config::load_from_env()
, since this will resolve an SdkConfig
which can be shared
across multiple different AWS SDK clients. This config resolution process can be customized
by calling aws_config::from_env()
instead, which returns a ConfigLoader
that uses
the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
let config = aws_config::load_from_env().await;
let client = aws_sdk_bedrock::Client::new(&config);
Occasionally, SDKs may have additional service-specific values that can be set on the Config that are absent from SdkConfig, or slightly different settings for a specific client may be desired. The Builder struct implements From<&SdkConfig>, so these service-specific settings can be set as follows:
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_bedrock::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
See the aws-config docs and Config for more information on customizing configuration.
Note: Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
§Using the Client
A client has a function for every operation that can be performed by the service.
For example, the CreateEvaluationJob operation has a Client::create_evaluation_job function, which returns a builder for that operation. The fluent builder ultimately has a send() function that returns an async future that returns a result, as illustrated below:
let result = client.create_evaluation_job()
.job_name("example")
.send()
.await;
The underlying HTTP requests that get made by this can be modified with the customize_operation function on the fluent builder. See the customize module for more information.
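The result returned from send() is either the operation's output or an SdkError wrapping the service-level error. A minimal sketch of handling both arms (assumes `client` is an aws_sdk_bedrock::Client constructed as shown above; the remaining required job fields are elided for brevity):

```rust
// Sketch: job_name alone is not a complete request; the other required
// setters (role_arn, evaluation_config, ...) are omitted here.
match client.create_evaluation_job().job_name("example").send().await {
    Ok(output) => {
        // job_arn is a required field on CreateEvaluationJobOutput.
        println!("started evaluation job: {}", output.job_arn());
    }
    Err(sdk_err) => {
        // into_service_error() converts the SdkError into the modeled
        // CreateEvaluationJobError for inspection or display.
        eprintln!("CreateEvaluationJob failed: {}", sdk_err.into_service_error());
    }
}
```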
§Implementations
impl Client
pub fn batch_delete_evaluation_job(&self) -> BatchDeleteEvaluationJobFluentBuilder
Constructs a fluent builder for the BatchDeleteEvaluationJob operation.
- The fluent builder is configurable:
  - job_identifiers(impl Into<String>) / set_job_identifiers(Option<Vec::<String>>) (required: true): A list of one or more evaluation job Amazon Resource Names (ARNs) you want to delete.
- On success, responds with BatchDeleteEvaluationJobOutput with field(s):
  - errors(Vec::<BatchDeleteEvaluationJobError>): A JSON object containing the HTTP status codes and the ARNs of evaluation jobs that failed to be deleted.
  - evaluation_jobs(Vec::<BatchDeleteEvaluationJobItem>): The list of evaluation jobs for deletion.
- On failure, responds with SdkError<BatchDeleteEvaluationJobError>
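The job_identifiers setter appends one ARN per call. A minimal sketch (assumes `client` is an aws_sdk_bedrock::Client; the ARN is a placeholder):

```rust
// Sketch: call job_identifiers() repeatedly to add more evaluation job ARNs.
let result = client
    .batch_delete_evaluation_job()
    .job_identifiers("arn:aws:bedrock:us-east-1:111122223333:evaluation-job/abcdef012345")
    .send()
    .await;
```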
impl Client
pub fn create_evaluation_job(&self) -> CreateEvaluationJobFluentBuilder
Constructs a fluent builder for the CreateEvaluationJob operation.
- The fluent builder is configurable:
  - job_name(impl Into<String>) / set_job_name(Option<String>) (required: true): A name for the evaluation job. Names must be unique within your Amazon Web Services account and your account's Amazon Web Services Region.
  - job_description(impl Into<String>) / set_job_description(Option<String>) (required: false): A description of the evaluation job.
  - client_request_token(impl Into<String>) / set_client_request_token(Option<String>) (required: false): A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
  - role_arn(impl Into<String>) / set_role_arn(Option<String>) (required: true): The Amazon Resource Name (ARN) of an IAM service role that Amazon Bedrock can assume to perform tasks on your behalf. To learn more about the required permissions, see Required permissions for model evaluations.
  - customer_encryption_key_id(impl Into<String>) / set_customer_encryption_key_id(Option<String>) (required: false): Specify the Amazon Resource Name (ARN) of the customer managed encryption key that will be used to encrypt your evaluation job.
  - job_tags(Tag) / set_job_tags(Option<Vec::<Tag>>) (required: false): Tags to attach to the model evaluation job.
  - application_type(ApplicationType) / set_application_type(Option<ApplicationType>) (required: false): Specifies whether the evaluation job is for evaluating a model or evaluating a knowledge base (retrieval and response generation).
  - evaluation_config(EvaluationConfig) / set_evaluation_config(Option<EvaluationConfig>) (required: true): Contains the configuration details of either an automated or human-based evaluation job.
  - inference_config(EvaluationInferenceConfig) / set_inference_config(Option<EvaluationInferenceConfig>) (required: true): Contains the configuration details of the inference model for the evaluation job. For model evaluation jobs, automated jobs support a single model or inference profile, and jobs that use human workers support two models or inference profiles.
  - output_data_config(EvaluationOutputDataConfig) / set_output_data_config(Option<EvaluationOutputDataConfig>) (required: true): Contains the configuration details of the Amazon S3 bucket for storing the results of the evaluation job.
- On success, responds with CreateEvaluationJobOutput with field(s):
  - job_arn(String): The Amazon Resource Name (ARN) of the evaluation job.
- On failure, responds with SdkError<CreateEvaluationJobError>
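A minimal sketch of wiring the required setters together (assumes `eval_config`, `inference_config`, and `output_config` are pre-built EvaluationConfig, EvaluationInferenceConfig, and EvaluationOutputDataConfig values; the name and role ARN are placeholders):

```rust
// Sketch: constructing the three config values is model-specific and omitted.
let result = client
    .create_evaluation_job()
    .job_name("my-evaluation-job")
    .role_arn("arn:aws:iam::111122223333:role/BedrockEvalRole")
    .evaluation_config(eval_config)
    .inference_config(inference_config)
    .output_data_config(output_config)
    .send()
    .await;
```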
impl Client
pub fn create_guardrail(&self) -> CreateGuardrailFluentBuilder
Constructs a fluent builder for the CreateGuardrail operation.
- The fluent builder is configurable:
  - name(impl Into<String>) / set_name(Option<String>) (required: true): The name to give the guardrail.
  - description(impl Into<String>) / set_description(Option<String>) (required: false): A description of the guardrail.
  - topic_policy_config(GuardrailTopicPolicyConfig) / set_topic_policy_config(Option<GuardrailTopicPolicyConfig>) (required: false): The topic policies to configure for the guardrail.
  - content_policy_config(GuardrailContentPolicyConfig) / set_content_policy_config(Option<GuardrailContentPolicyConfig>) (required: false): The content filter policies to configure for the guardrail.
  - word_policy_config(GuardrailWordPolicyConfig) / set_word_policy_config(Option<GuardrailWordPolicyConfig>) (required: false): The word policy you configure for the guardrail.
  - sensitive_information_policy_config(GuardrailSensitiveInformationPolicyConfig) / set_sensitive_information_policy_config(Option<GuardrailSensitiveInformationPolicyConfig>) (required: false): The sensitive information policy to configure for the guardrail.
  - contextual_grounding_policy_config(GuardrailContextualGroundingPolicyConfig) / set_contextual_grounding_policy_config(Option<GuardrailContextualGroundingPolicyConfig>) (required: false): The contextual grounding policy configuration used to create a guardrail.
  - cross_region_config(GuardrailCrossRegionConfig) / set_cross_region_config(Option<GuardrailCrossRegionConfig>) (required: false): The system-defined guardrail profile that you're using with your guardrail. Guardrail profiles define the destination Amazon Web Services Regions where guardrail inference requests can be automatically routed. For more information, see the Amazon Bedrock User Guide.
  - blocked_input_messaging(impl Into<String>) / set_blocked_input_messaging(Option<String>) (required: true): The message to return when the guardrail blocks a prompt.
  - blocked_outputs_messaging(impl Into<String>) / set_blocked_outputs_messaging(Option<String>) (required: true): The message to return when the guardrail blocks a model response.
  - kms_key_id(impl Into<String>) / set_kms_key_id(Option<String>) (required: false): The ARN of the KMS key that you use to encrypt the guardrail.
  - tags(Tag) / set_tags(Option<Vec::<Tag>>) (required: false): The tags that you want to attach to the guardrail.
  - client_request_token(impl Into<String>) / set_client_request_token(Option<String>) (required: false): A unique, case-sensitive identifier to ensure that the API request completes no more than once. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency in the Amazon S3 User Guide.
- On success, responds with CreateGuardrailOutput with field(s):
  - guardrail_id(String): The unique identifier of the guardrail that was created.
  - guardrail_arn(String): The ARN of the guardrail.
  - version(String): The version of the guardrail that was created. This value will always be DRAFT.
  - created_at(DateTime): The time at which the guardrail was created.
- On failure, responds with SdkError<CreateGuardrailError>
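A minimal sketch setting only the required fields (the policy configs are optional and omitted; name and messages are placeholders):

```rust
// Sketch: topic/content/word/sensitive-information policy configs are
// optional and not set here.
let result = client
    .create_guardrail()
    .name("my-guardrail")
    .blocked_input_messaging("Sorry, I can't respond to that prompt.")
    .blocked_outputs_messaging("Sorry, the model response was blocked.")
    .send()
    .await;
```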
impl Client
pub fn create_guardrail_version(&self) -> CreateGuardrailVersionFluentBuilder
Constructs a fluent builder for the CreateGuardrailVersion operation.
- The fluent builder is configurable:
  - guardrail_identifier(impl Into<String>) / set_guardrail_identifier(Option<String>) (required: true): The unique identifier of the guardrail. This can be an ID or the ARN.
  - description(impl Into<String>) / set_description(Option<String>) (required: false): A description of the guardrail version.
  - client_request_token(impl Into<String>) / set_client_request_token(Option<String>) (required: false): A unique, case-sensitive identifier to ensure that the API request completes no more than once. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency in the Amazon S3 User Guide.
- On success, responds with CreateGuardrailVersionOutput with field(s):
  - guardrail_id(String): The unique identifier of the guardrail.
  - version(String): The number of the version of the guardrail.
- On failure, responds with SdkError<CreateGuardrailVersionError>
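A minimal sketch (the identifier and description are placeholders):

```rust
// Sketch: the identifier can be a guardrail ID or its ARN.
let result = client
    .create_guardrail_version()
    .guardrail_identifier("gr-1234567890ab")
    .description("first stable version")
    .send()
    .await;
```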
impl Client
pub fn create_inference_profile(&self) -> CreateInferenceProfileFluentBuilder
Constructs a fluent builder for the CreateInferenceProfile operation.
- The fluent builder is configurable:
  - inference_profile_name(impl Into<String>) / set_inference_profile_name(Option<String>) (required: true): A name for the inference profile.
  - description(impl Into<String>) / set_description(Option<String>) (required: false): A description for the inference profile.
  - client_request_token(impl Into<String>) / set_client_request_token(Option<String>) (required: false): A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
  - model_source(InferenceProfileModelSource) / set_model_source(Option<InferenceProfileModelSource>) (required: true): The foundation model or system-defined inference profile that the inference profile will track metrics and costs for.
  - tags(Tag) / set_tags(Option<Vec::<Tag>>) (required: false): An array of objects, each of which contains a tag and its value. For more information, see Tagging resources in the Amazon Bedrock User Guide.
- On success, responds with CreateInferenceProfileOutput with field(s):
  - inference_profile_arn(String): The ARN of the inference profile that you created.
  - status(Option<InferenceProfileStatus>): The status of the inference profile. ACTIVE means that the inference profile is ready to be used.
- On failure, responds with SdkError<CreateInferenceProfileError>
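A minimal sketch, assuming InferenceProfileModelSource exposes a CopyFrom variant holding the source ARN (the profile name and model ARN are placeholders):

```rust
use aws_sdk_bedrock::types::InferenceProfileModelSource;

// Sketch: CopyFrom names the foundation model or system-defined inference
// profile whose metrics and costs this profile will track.
let result = client
    .create_inference_profile()
    .inference_profile_name("my-application-profile")
    .model_source(InferenceProfileModelSource::CopyFrom(
        "arn:aws:bedrock:us-east-1::foundation-model/example-model-id".to_string(),
    ))
    .send()
    .await;
```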
impl Client
pub fn create_marketplace_model_endpoint(&self) -> CreateMarketplaceModelEndpointFluentBuilder
Constructs a fluent builder for the CreateMarketplaceModelEndpoint operation.
- The fluent builder is configurable:
  - model_source_identifier(impl Into<String>) / set_model_source_identifier(Option<String>) (required: true): The ARN of the model from Amazon Bedrock Marketplace that you want to deploy to the endpoint.
  - endpoint_config(EndpointConfig) / set_endpoint_config(Option<EndpointConfig>) (required: true): The configuration for the endpoint, including the number and type of instances to use.
  - accept_eula(bool) / set_accept_eula(Option<bool>) (required: false): Indicates whether you accept the end-user license agreement (EULA) for the model. Set to true to accept the EULA.
  - endpoint_name(impl Into<String>) / set_endpoint_name(Option<String>) (required: true): The name of the endpoint. This name must be unique within your Amazon Web Services account and Region.
  - client_request_token(impl Into<String>) / set_client_request_token(Option<String>) (required: false): A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. This token is listed as not required because Amazon Web Services SDKs automatically generate it for you and set this parameter. If you're not using the Amazon Web Services SDK or the CLI, you must provide this token or the action will fail.
  - tags(Tag) / set_tags(Option<Vec::<Tag>>) (required: false): An array of key-value pairs to apply to the underlying Amazon SageMaker endpoint. You can use these tags to organize and identify your Amazon Web Services resources.
- On success, responds with CreateMarketplaceModelEndpointOutput with field(s):
  - marketplace_model_endpoint(Option<MarketplaceModelEndpoint>): Details about the created endpoint.
- On failure, responds with SdkError<CreateMarketplaceModelEndpointError>
impl Client
pub fn create_model_copy_job(&self) -> CreateModelCopyJobFluentBuilder
Constructs a fluent builder for the CreateModelCopyJob operation.
- The fluent builder is configurable:
  - source_model_arn(impl Into<String>) / set_source_model_arn(Option<String>) (required: true): The Amazon Resource Name (ARN) of the model to be copied.
  - target_model_name(impl Into<String>) / set_target_model_name(Option<String>) (required: true): A name for the copied model.
  - model_kms_key_id(impl Into<String>) / set_model_kms_key_id(Option<String>) (required: false): The ARN of the KMS key that you use to encrypt the model copy.
  - target_model_tags(Tag) / set_target_model_tags(Option<Vec::<Tag>>) (required: false): Tags to associate with the target model. For more information, see Tag resources in the Amazon Bedrock User Guide.
  - client_request_token(impl Into<String>) / set_client_request_token(Option<String>) (required: false): A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
- On success, responds with CreateModelCopyJobOutput with field(s):
  - job_arn(String): The Amazon Resource Name (ARN) of the model copy job.
- On failure, responds with SdkError<CreateModelCopyJobError>
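A minimal sketch with only the two required fields (both the ARN and the name are placeholders):

```rust
// Sketch: model_kms_key_id and tags are optional and omitted.
let result = client
    .create_model_copy_job()
    .source_model_arn("arn:aws:bedrock:us-east-1:111122223333:custom-model/source-model")
    .target_model_name("my-copied-model")
    .send()
    .await;
```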
impl Client
pub fn create_model_customization_job(&self) -> CreateModelCustomizationJobFluentBuilder
Constructs a fluent builder for the CreateModelCustomizationJob operation.
- The fluent builder is configurable:
  - job_name(impl Into<String>) / set_job_name(Option<String>) (required: true): A name for the fine-tuning job.
  - custom_model_name(impl Into<String>) / set_custom_model_name(Option<String>) (required: true): A name for the resulting custom model.
  - role_arn(impl Into<String>) / set_role_arn(Option<String>) (required: true): The Amazon Resource Name (ARN) of an IAM service role that Amazon Bedrock can assume to perform tasks on your behalf. For example, during model training, Amazon Bedrock needs your permission to read input data from an S3 bucket and write model artifacts to an S3 bucket. To pass this role to Amazon Bedrock, the caller of this API must have the iam:PassRole permission.
  - client_request_token(impl Into<String>) / set_client_request_token(Option<String>) (required: false): A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
  - base_model_identifier(impl Into<String>) / set_base_model_identifier(Option<String>) (required: true): Name of the base model.
  - customization_type(CustomizationType) / set_customization_type(Option<CustomizationType>) (required: false): The customization type.
  - custom_model_kms_key_id(impl Into<String>) / set_custom_model_kms_key_id(Option<String>) (required: false): The custom model is encrypted at rest using this key.
  - job_tags(Tag) / set_job_tags(Option<Vec::<Tag>>) (required: false): Tags to attach to the job.
  - custom_model_tags(Tag) / set_custom_model_tags(Option<Vec::<Tag>>) (required: false): Tags to attach to the resulting custom model.
  - training_data_config(TrainingDataConfig) / set_training_data_config(Option<TrainingDataConfig>) (required: true): Information about the training dataset.
  - validation_data_config(ValidationDataConfig) / set_validation_data_config(Option<ValidationDataConfig>) (required: false): Information about the validation dataset.
  - output_data_config(OutputDataConfig) / set_output_data_config(Option<OutputDataConfig>) (required: true): S3 location for the output data.
  - hyper_parameters(impl Into<String>, impl Into<String>) / set_hyper_parameters(Option<HashMap::<String, String>>) (required: false): Parameters related to tuning the model. For details on the format for different models, see Custom model hyperparameters.
  - vpc_config(VpcConfig) / set_vpc_config(Option<VpcConfig>) (required: false): The configuration of the Virtual Private Cloud (VPC) that contains the resources that you're using for this job. For more information, see Protect your model customization jobs using a VPC.
  - customization_config(CustomizationConfig) / set_customization_config(Option<CustomizationConfig>) (required: false): The customization configuration for the model customization job.
- On success, responds with CreateModelCustomizationJobOutput with field(s):
  - job_arn(String): The Amazon Resource Name (ARN) of the fine-tuning job.
- On failure, responds with SdkError<CreateModelCustomizationJobError>
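A minimal sketch (assumes `training_config` and `output_config` are pre-built TrainingDataConfig and OutputDataConfig values; the names, role ARN, base model identifier, and hyperparameter key are placeholders, since valid keys depend on the base model):

```rust
// Sketch: hyper_parameters() is a map setter, inserting one key-value
// pair per call.
let result = client
    .create_model_customization_job()
    .job_name("my-finetune-job")
    .custom_model_name("my-custom-model")
    .role_arn("arn:aws:iam::111122223333:role/BedrockCustomizationRole")
    .base_model_identifier("example-base-model-id")
    .training_data_config(training_config)
    .output_data_config(output_config)
    .hyper_parameters("epochCount", "2")
    .send()
    .await;
```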
impl Client
pub fn create_model_import_job(&self) -> CreateModelImportJobFluentBuilder
Constructs a fluent builder for the CreateModelImportJob operation.
- The fluent builder is configurable:
  - job_name(impl Into<String>) / set_job_name(Option<String>) (required: true): The name of the import job.
  - imported_model_name(impl Into<String>) / set_imported_model_name(Option<String>) (required: true): The name of the imported model.
  - role_arn(impl Into<String>) / set_role_arn(Option<String>) (required: true): The Amazon Resource Name (ARN) of the IAM service role that Amazon Bedrock assumes to run the model import job.
  - model_data_source(ModelDataSource) / set_model_data_source(Option<ModelDataSource>) (required: true): The data source for the imported model.
  - job_tags(Tag) / set_job_tags(Option<Vec::<Tag>>) (required: false): Tags to attach to this import job.
  - imported_model_tags(Tag) / set_imported_model_tags(Option<Vec::<Tag>>) (required: false): Tags to attach to the imported model.
  - client_request_token(impl Into<String>) / set_client_request_token(Option<String>) (required: false): A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
  - vpc_config(VpcConfig) / set_vpc_config(Option<VpcConfig>) (required: false): VPC configuration parameters for the private Virtual Private Cloud (VPC) that contains the resources you are using for the import job.
  - imported_model_kms_key_id(impl Into<String>) / set_imported_model_kms_key_id(Option<String>) (required: false): The imported model is encrypted at rest using this key.
- On success, responds with CreateModelImportJobOutput with field(s):
  - job_arn(String): The Amazon Resource Name (ARN) of the model import job.
- On failure, responds with SdkError<CreateModelImportJobError>
impl Client
pub fn create_model_invocation_job(&self) -> CreateModelInvocationJobFluentBuilder
Constructs a fluent builder for the CreateModelInvocationJob operation.
- The fluent builder is configurable:
  - job_name(impl Into<String>) / set_job_name(Option<String>) (required: true): A name to give the batch inference job.
  - role_arn(impl Into<String>) / set_role_arn(Option<String>) (required: true): The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference. You can use the console to create a default service role or follow the steps at Create a service role for batch inference.
  - client_request_token(impl Into<String>) / set_client_request_token(Option<String>) (required: false): A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
  - model_id(impl Into<String>) / set_model_id(Option<String>) (required: true): The unique identifier of the foundation model to use for the batch inference job.
  - input_data_config(ModelInvocationJobInputDataConfig) / set_input_data_config(Option<ModelInvocationJobInputDataConfig>) (required: true): Details about the location of the input to the batch inference job.
  - output_data_config(ModelInvocationJobOutputDataConfig) / set_output_data_config(Option<ModelInvocationJobOutputDataConfig>) (required: true): Details about the location of the output of the batch inference job.
  - vpc_config(VpcConfig) / set_vpc_config(Option<VpcConfig>) (required: false): The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job. For more information, see Protect batch inference jobs using a VPC.
  - timeout_duration_in_hours(i32) / set_timeout_duration_in_hours(Option<i32>) (required: false): The number of hours after which to force the batch inference job to time out.
  - tags(Tag) / set_tags(Option<Vec::<Tag>>) (required: false): Any tags to associate with the batch inference job. For more information, see Tagging Amazon Bedrock resources.
- On success, responds with CreateModelInvocationJobOutput with field(s):
  - job_arn(String): The Amazon Resource Name (ARN) of the batch inference job.
- On failure, responds with SdkError<CreateModelInvocationJobError>
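A minimal sketch (assumes `input_config` and `output_config` are pre-built ModelInvocationJobInputDataConfig and ModelInvocationJobOutputDataConfig values, typically S3 locations; the job name, role ARN, and model ID are placeholders):

```rust
// Sketch: S3 input/output configuration is built separately and omitted here.
let result = client
    .create_model_invocation_job()
    .job_name("my-batch-job")
    .role_arn("arn:aws:iam::111122223333:role/BedrockBatchRole")
    .model_id("example-foundation-model-id")
    .input_data_config(input_config)
    .output_data_config(output_config)
    .send()
    .await;
```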
impl Client
pub fn create_prompt_router(&self) -> CreatePromptRouterFluentBuilder
Constructs a fluent builder for the CreatePromptRouter operation.
- The fluent builder is configurable:
  - client_request_token(impl Into<String>) / set_client_request_token(Option<String>) (required: false): A unique, case-sensitive identifier that you provide to ensure idempotency of your requests. If not specified, the Amazon Web Services SDK automatically generates one for you.
  - prompt_router_name(impl Into<String>) / set_prompt_router_name(Option<String>) (required: true): The name of the prompt router. The name must be unique within your Amazon Web Services account in the current Region.
  - models(PromptRouterTargetModel) / set_models(Option<Vec::<PromptRouterTargetModel>>) (required: true): A list of foundation models that the prompt router can route requests to. At least one model must be specified.
  - description(impl Into<String>) / set_description(Option<String>) (required: false): An optional description of the prompt router to help identify its purpose.
  - routing_criteria(RoutingCriteria) / set_routing_criteria(Option<RoutingCriteria>) (required: true): The criteria, which is the response quality difference, used to determine how incoming requests are routed to different models.
  - fallback_model(PromptRouterTargetModel) / set_fallback_model(Option<PromptRouterTargetModel>) (required: true): The default model to use when the routing criteria is not met.
  - tags(Tag) / set_tags(Option<Vec::<Tag>>) (required: false): An array of key-value pairs to apply to this resource as tags. You can use tags to categorize and manage your Amazon Web Services resources.
- On success, responds with CreatePromptRouterOutput with field(s):
  - prompt_router_arn(Option<String>): The Amazon Resource Name (ARN) that uniquely identifies the prompt router.
- On failure, responds with SdkError<CreatePromptRouterError>
impl Client
pub fn create_provisioned_model_throughput(&self) -> CreateProvisionedModelThroughputFluentBuilder
Constructs a fluent builder for the CreateProvisionedModelThroughput operation.
- The fluent builder is configurable:
  - client_request_token(impl Into<String>) / set_client_request_token(Option<String>) (required: false): A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency in the Amazon S3 User Guide.
  - model_units(i32) / set_model_units(Option<i32>) (required: true): Number of model units to allocate. A model unit delivers a specific throughput level for the specified model. The throughput level of a model unit specifies the total number of input and output tokens that it can process and generate within a span of one minute. By default, your account has no model units for purchasing Provisioned Throughputs with commitment. You must first visit the Amazon Web Services support center to request MUs. For model unit quotas, see Provisioned Throughput quotas in the Amazon Bedrock User Guide. For more information about what an MU specifies, contact your Amazon Web Services account manager.
  - provisioned_model_name(impl Into<String>) / set_provisioned_model_name(Option<String>) (required: true): The name for this Provisioned Throughput.
  - model_id(impl Into<String>) / set_model_id(Option<String>) (required: true): The Amazon Resource Name (ARN) or name of the model to associate with this Provisioned Throughput. For a list of models for which you can purchase Provisioned Throughput, see Amazon Bedrock model IDs for purchasing Provisioned Throughput in the Amazon Bedrock User Guide.
  - commitment_duration(CommitmentDuration) / set_commitment_duration(Option<CommitmentDuration>) (required: false): The commitment duration requested for the Provisioned Throughput. Billing occurs hourly and is discounted for longer commitment terms. To request a no-commit Provisioned Throughput, omit this field. Custom models support all levels of commitment. To see which base models support no commitment, see Supported regions and models for Provisioned Throughput in the Amazon Bedrock User Guide.
  - tags(Tag) / set_tags(Option<Vec::<Tag>>) (required: false): Tags to associate with this Provisioned Throughput.
- On success, responds with CreateProvisionedModelThroughputOutput with field(s):
  - provisioned_model_arn(String): The Amazon Resource Name (ARN) for this Provisioned Throughput.
- On failure, responds with SdkError<CreateProvisionedModelThroughputError>
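A minimal sketch (the name, model ID, and unit count are placeholders; omitting commitment_duration requests a no-commit Provisioned Throughput, per the field description above):

```rust
// Sketch: commitment_duration is intentionally omitted (no-commit request).
let result = client
    .create_provisioned_model_throughput()
    .provisioned_model_name("my-provisioned-model")
    .model_id("example-model-id")
    .model_units(1)
    .send()
    .await;
```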
impl Client
pub fn delete_custom_model(&self) -> DeleteCustomModelFluentBuilder
Constructs a fluent builder for the DeleteCustomModel operation.
- The fluent builder is configurable:
  - model_identifier(impl Into<String>) / set_model_identifier(Option<String>) (required: true): Name of the model to delete.
- On success, responds with DeleteCustomModelOutput
- On failure, responds with SdkError<DeleteCustomModelError>
impl Client
pub fn delete_guardrail(&self) -> DeleteGuardrailFluentBuilder
Constructs a fluent builder for the DeleteGuardrail operation.
- The fluent builder is configurable:
  - guardrail_identifier(impl Into<String>) / set_guardrail_identifier(Option<String>) (required: true): The unique identifier of the guardrail. This can be an ID or the ARN.
  - guardrail_version(impl Into<String>) / set_guardrail_version(Option<String>) (required: false): The version of the guardrail.
- On success, responds with DeleteGuardrailOutput
- On failure, responds with SdkError<DeleteGuardrailError>
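A minimal sketch (the identifier is a placeholder; guardrail_version is optional and omitted here):

```rust
// Sketch: the identifier can be a guardrail ID or its ARN.
let result = client
    .delete_guardrail()
    .guardrail_identifier("gr-1234567890ab")
    .send()
    .await;
```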
impl Client
pub fn delete_imported_model(&self) -> DeleteImportedModelFluentBuilder
Constructs a fluent builder for the DeleteImportedModel operation.
- The fluent builder is configurable:
  - model_identifier(impl Into<String>) / set_model_identifier(Option<String>) (required: true): Name of the imported model to delete.
- On success, responds with DeleteImportedModelOutput
- On failure, responds with SdkError<DeleteImportedModelError>
impl Client
pub fn delete_inference_profile(&self) -> DeleteInferenceProfileFluentBuilder
Constructs a fluent builder for the DeleteInferenceProfile operation.
- The fluent builder is configurable:
  - inference_profile_identifier(impl Into<String>) / set_inference_profile_identifier(Option<String>) (required: true): The Amazon Resource Name (ARN) or ID of the application inference profile to delete.
- On success, responds with DeleteInferenceProfileOutput
- On failure, responds with SdkError<DeleteInferenceProfileError>
impl Client
pub fn delete_marketplace_model_endpoint(&self) -> DeleteMarketplaceModelEndpointFluentBuilder
Constructs a fluent builder for the DeleteMarketplaceModelEndpoint operation.
- The fluent builder is configurable:
  - endpoint_arn(impl Into<String>) / set_endpoint_arn(Option<String>) (required: true): The Amazon Resource Name (ARN) of the endpoint you want to delete.
- On success, responds with DeleteMarketplaceModelEndpointOutput
- On failure, responds with SdkError<DeleteMarketplaceModelEndpointError>
impl Client
pub fn delete_model_invocation_logging_configuration(&self) -> DeleteModelInvocationLoggingConfigurationFluentBuilder
Constructs a fluent builder for the DeleteModelInvocationLoggingConfiguration operation.
- The fluent builder takes no input, just send it.
- On success, responds with DeleteModelInvocationLoggingConfigurationOutput
- On failure, responds with SdkError<DeleteModelInvocationLoggingConfigurationError>
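Since this operation takes no input, the builder can be sent directly; a minimal sketch (assumes `client` is an aws_sdk_bedrock::Client):

```rust
// Sketch: no setters are needed before send().
let result = client
    .delete_model_invocation_logging_configuration()
    .send()
    .await;
```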
impl Client
pub fn delete_prompt_router(&self) -> DeletePromptRouterFluentBuilder
Constructs a fluent builder for the DeletePromptRouter operation.
- The fluent builder is configurable:
  - prompt_router_arn(impl Into<String>) / set_prompt_router_arn(Option<String>) (required: true): The Amazon Resource Name (ARN) of the prompt router to delete.
- On success, responds with DeletePromptRouterOutput
- On failure, responds with SdkError<DeletePromptRouterError>
impl Client
pub fn delete_provisioned_model_throughput(&self) -> DeleteProvisionedModelThroughputFluentBuilder
Constructs a fluent builder for the DeleteProvisionedModelThroughput operation.
- The fluent builder is configurable:
  - provisioned_model_id(impl Into<String>) / set_provisioned_model_id(Option<String>): required: true. The Amazon Resource Name (ARN) or name of the Provisioned Throughput.
- On success, responds with DeleteProvisionedModelThroughputOutput
- On failure, responds with SdkError<DeleteProvisionedModelThroughputError>
impl Client
pub fn deregister_marketplace_model_endpoint(&self) -> DeregisterMarketplaceModelEndpointFluentBuilder
Constructs a fluent builder for the DeregisterMarketplaceModelEndpoint operation.
- The fluent builder is configurable:
  - endpoint_arn(impl Into<String>) / set_endpoint_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the endpoint you want to deregister.
- On success, responds with DeregisterMarketplaceModelEndpointOutput
- On failure, responds with SdkError<DeregisterMarketplaceModelEndpointError>
impl Client
pub fn get_custom_model(&self) -> GetCustomModelFluentBuilder
Constructs a fluent builder for the GetCustomModel operation.
- The fluent builder is configurable:
  - model_identifier(impl Into<String>) / set_model_identifier(Option<String>): required: true. Name or Amazon Resource Name (ARN) of the custom model.
- On success, responds with GetCustomModelOutput with field(s):
  - model_arn(String): Amazon Resource Name (ARN) associated with this model.
  - model_name(String): Model name associated with this model.
  - job_name(Option<String>): Job name associated with this model.
  - job_arn(String): Job Amazon Resource Name (ARN) associated with this model.
  - base_model_arn(String): Amazon Resource Name (ARN) of the base model.
  - customization_type(Option<CustomizationType>): The type of model customization.
  - model_kms_key_arn(Option<String>): The custom model is encrypted at rest using this key.
  - hyper_parameters(Option<HashMap::<String, String>>): Hyperparameter values associated with this model. For details on the format for different models, see Custom model hyperparameters.
  - training_data_config(Option<TrainingDataConfig>): Contains information about the training dataset.
  - validation_data_config(Option<ValidationDataConfig>): Contains information about the validation dataset.
  - output_data_config(Option<OutputDataConfig>): Output data configuration associated with this custom model.
  - training_metrics(Option<TrainingMetrics>): Contains training metrics from the job creation.
  - validation_metrics(Option<Vec::<ValidatorMetric>>): The validation metrics from the job creation.
  - creation_time(DateTime): Creation time of the model.
  - customization_config(Option<CustomizationConfig>): The customization configuration for the custom model.
- On failure, responds with SdkError<GetCustomModelError>
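A sketch of fetching a custom model and reading a few output fields; the model name is a placeholder, and the accessor methods shown follow the SDK's usual generated-output conventions:

```rust
// Sketch: look up a custom model by name and inspect selected fields
// (assumes an async context and a `?`-compatible error type).
let output = client
    .get_custom_model()
    .model_identifier("my-custom-model") // placeholder name
    .send()
    .await?;
println!("model ARN:  {}", output.model_arn());
println!("base model: {}", output.base_model_arn());
if let Some(metrics) = output.training_metrics() {
    println!("training metrics: {metrics:?}");
}
```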
impl Client
pub fn get_evaluation_job(&self) -> GetEvaluationJobFluentBuilder
Constructs a fluent builder for the GetEvaluationJob operation.
- The fluent builder is configurable:
  - job_identifier(impl Into<String>) / set_job_identifier(Option<String>): required: true. The Amazon Resource Name (ARN) of the evaluation job you want to get information on.
- On success, responds with GetEvaluationJobOutput with field(s):
  - job_name(String): The name for the evaluation job.
  - status(EvaluationJobStatus): The current status of the evaluation job.
  - job_arn(String): The Amazon Resource Name (ARN) of the evaluation job.
  - job_description(Option<String>): The description of the evaluation job.
  - role_arn(String): The Amazon Resource Name (ARN) of the IAM service role used in the evaluation job.
  - customer_encryption_key_id(Option<String>): The Amazon Resource Name (ARN) of the customer managed encryption key specified when the evaluation job was created.
  - job_type(EvaluationJobType): Specifies whether the evaluation job is automated or human-based.
  - application_type(Option<ApplicationType>): Specifies whether the evaluation job is for evaluating a model or evaluating a knowledge base (retrieval and response generation).
  - evaluation_config(Option<EvaluationConfig>): Contains the configuration details of either an automated or human-based evaluation job.
  - inference_config(Option<EvaluationInferenceConfig>): Contains the configuration details of the inference model used for the evaluation job.
  - output_data_config(Option<EvaluationOutputDataConfig>): Contains the configuration details of the Amazon S3 bucket for storing the results of the evaluation job.
  - creation_time(DateTime): The time the evaluation job was created.
  - last_modified_time(Option<DateTime>): The time the evaluation job was last modified.
  - failure_messages(Option<Vec::<String>>): A list of strings that specify why the evaluation job failed to create.
- On failure, responds with SdkError<GetEvaluationJobError>
impl Client
pub fn get_foundation_model(&self) -> GetFoundationModelFluentBuilder
Constructs a fluent builder for the GetFoundationModel operation.
- The fluent builder is configurable:
  - model_identifier(impl Into<String>) / set_model_identifier(Option<String>): required: true. The model identifier.
- On success, responds with GetFoundationModelOutput with field(s):
  - model_details(Option<FoundationModelDetails>): Information about the foundation model.
- On failure, responds with SdkError<GetFoundationModelError>
impl Client
pub fn get_guardrail(&self) -> GetGuardrailFluentBuilder
Constructs a fluent builder for the GetGuardrail operation.
- The fluent builder is configurable:
  - guardrail_identifier(impl Into<String>) / set_guardrail_identifier(Option<String>): required: true. The unique identifier of the guardrail for which to get details. This can be an ID or the ARN.
  - guardrail_version(impl Into<String>) / set_guardrail_version(Option<String>): required: false. The version of the guardrail for which to get details. If you don’t specify a version, the response returns details for the DRAFT version.
- On success, responds with GetGuardrailOutput with field(s):
  - name(String): The name of the guardrail.
  - description(Option<String>): The description of the guardrail.
  - guardrail_id(String): The unique identifier of the guardrail.
  - guardrail_arn(String): The ARN of the guardrail.
  - version(String): The version of the guardrail.
  - status(GuardrailStatus): The status of the guardrail.
  - topic_policy(Option<GuardrailTopicPolicy>): The topic policy that was configured for the guardrail.
  - content_policy(Option<GuardrailContentPolicy>): The content policy that was configured for the guardrail.
  - word_policy(Option<GuardrailWordPolicy>): The word policy that was configured for the guardrail.
  - sensitive_information_policy(Option<GuardrailSensitiveInformationPolicy>): The sensitive information policy that was configured for the guardrail.
  - contextual_grounding_policy(Option<GuardrailContextualGroundingPolicy>): The contextual grounding policy used in the guardrail.
  - cross_region_details(Option<GuardrailCrossRegionDetails>): Details about the system-defined guardrail profile that you’re using with your guardrail, including the guardrail profile ID and Amazon Resource Name (ARN).
  - created_at(DateTime): The date and time at which the guardrail was created.
  - updated_at(DateTime): The date and time at which the guardrail was updated.
  - status_reasons(Option<Vec::<String>>): Appears if the status is FAILED. A list of reasons for why the guardrail failed to be created, updated, versioned, or deleted.
  - failure_recommendations(Option<Vec::<String>>): Appears if the status of the guardrail is FAILED. A list of recommendations to carry out before retrying the request.
  - blocked_input_messaging(String): The message that the guardrail returns when it blocks a prompt.
  - blocked_outputs_messaging(String): The message that the guardrail returns when it blocks a model response.
  - kms_key_arn(Option<String>): The ARN of the KMS key that encrypts the guardrail.
- On failure, responds with SdkError<GetGuardrailError>
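A sketch of looking up a specific guardrail version; the identifier and version are placeholders, and omitting guardrail_version would return the DRAFT version instead:

```rust
// Sketch: fetch details for version "1" of a guardrail (identifier is a placeholder).
let output = client
    .get_guardrail()
    .guardrail_identifier("gr-example-id")
    .guardrail_version("1") // optional; defaults to the DRAFT version when unset
    .send()
    .await?;
println!("guardrail {} has status {:?}", output.name(), output.status());
```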
impl Client
pub fn get_imported_model(&self) -> GetImportedModelFluentBuilder
Constructs a fluent builder for the GetImportedModel operation.
- The fluent builder is configurable:
  - model_identifier(impl Into<String>) / set_model_identifier(Option<String>): required: true. Name or Amazon Resource Name (ARN) of the imported model.
- On success, responds with GetImportedModelOutput with field(s):
  - model_arn(Option<String>): The Amazon Resource Name (ARN) associated with this imported model.
  - model_name(Option<String>): The name of the imported model.
  - job_name(Option<String>): Job name associated with the imported model.
  - job_arn(Option<String>): Job Amazon Resource Name (ARN) associated with the imported model.
  - model_data_source(Option<ModelDataSource>): The data source for this imported model.
  - creation_time(Option<DateTime>): Creation time of the imported model.
  - model_architecture(Option<String>): The architecture of the imported model.
  - model_kms_key_arn(Option<String>): The imported model is encrypted at rest using this key.
  - instruct_supported(Option<bool>): Specifies if the imported model supports converse.
  - custom_model_units(Option<CustomModelUnits>): Information about the hardware utilization for a single copy of the model.
- On failure, responds with SdkError<GetImportedModelError>
impl Client
pub fn get_inference_profile(&self) -> GetInferenceProfileFluentBuilder
Constructs a fluent builder for the GetInferenceProfile operation.
- The fluent builder is configurable:
  - inference_profile_identifier(impl Into<String>) / set_inference_profile_identifier(Option<String>): required: true. The ID or Amazon Resource Name (ARN) of the inference profile.
- On success, responds with GetInferenceProfileOutput with field(s):
  - inference_profile_name(String): The name of the inference profile.
  - description(Option<String>): The description of the inference profile.
  - created_at(Option<DateTime>): The time at which the inference profile was created.
  - updated_at(Option<DateTime>): The time at which the inference profile was last updated.
  - inference_profile_arn(String): The Amazon Resource Name (ARN) of the inference profile.
  - models(Vec::<InferenceProfileModel>): A list of information about each model in the inference profile.
  - inference_profile_id(String): The unique identifier of the inference profile.
  - status(InferenceProfileStatus): The status of the inference profile. ACTIVE means that the inference profile is ready to be used.
  - r#type(InferenceProfileType): The type of the inference profile. The following types are possible:
    - SYSTEM_DEFINED: The inference profile is defined by Amazon Bedrock. You can route inference requests across regions with these inference profiles.
    - APPLICATION: The inference profile was created by a user. This type of inference profile can track metrics and costs when invoking the model in it. The inference profile may route requests to one or multiple regions.
- On failure, responds with SdkError<GetInferenceProfileError>
impl Client
pub fn get_marketplace_model_endpoint(&self) -> GetMarketplaceModelEndpointFluentBuilder
Constructs a fluent builder for the GetMarketplaceModelEndpoint operation.
- The fluent builder is configurable:
  - endpoint_arn(impl Into<String>) / set_endpoint_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the endpoint you want to get information about.
- On success, responds with GetMarketplaceModelEndpointOutput with field(s):
  - marketplace_model_endpoint(Option<MarketplaceModelEndpoint>): Details about the requested endpoint.
- On failure, responds with SdkError<GetMarketplaceModelEndpointError>
impl Client
pub fn get_model_copy_job(&self) -> GetModelCopyJobFluentBuilder
Constructs a fluent builder for the GetModelCopyJob operation.
- The fluent builder is configurable:
  - job_arn(impl Into<String>) / set_job_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the model copy job.
- On success, responds with GetModelCopyJobOutput with field(s):
  - job_arn(String): The Amazon Resource Name (ARN) of the model copy job.
  - status(ModelCopyJobStatus): The status of the model copy job.
  - creation_time(DateTime): The time at which the model copy job was created.
  - target_model_arn(String): The Amazon Resource Name (ARN) of the copied model.
  - target_model_name(Option<String>): The name of the copied model.
  - source_account_id(String): The unique identifier of the account that the model being copied originated from.
  - source_model_arn(String): The Amazon Resource Name (ARN) of the original model being copied.
  - target_model_kms_key_arn(Option<String>): The Amazon Resource Name (ARN) of the KMS key encrypting the copied model.
  - target_model_tags(Option<Vec::<Tag>>): The tags associated with the copied model.
  - failure_message(Option<String>): An error message for why the model copy job failed.
  - source_model_name(Option<String>): The name of the original model being copied.
- On failure, responds with SdkError<GetModelCopyJobError>
impl Client
pub fn get_model_customization_job(&self) -> GetModelCustomizationJobFluentBuilder
Constructs a fluent builder for the GetModelCustomizationJob operation.
- The fluent builder is configurable:
  - job_identifier(impl Into<String>) / set_job_identifier(Option<String>): required: true. Identifier for the customization job.
- On success, responds with GetModelCustomizationJobOutput with field(s):
  - job_arn(String): The Amazon Resource Name (ARN) of the customization job.
  - job_name(String): The name of the customization job.
  - output_model_name(String): The name of the output model.
  - output_model_arn(Option<String>): The Amazon Resource Name (ARN) of the output model.
  - client_request_token(Option<String>): The token that you specified in the CreateCustomizationJob request.
  - role_arn(String): The Amazon Resource Name (ARN) of the IAM role.
  - status(Option<ModelCustomizationJobStatus>): The status of the job. A successful job transitions from in-progress to completed when the output model is ready to use. If the job failed, the failure message contains information about why the job failed.
  - failure_message(Option<String>): Information about why the job failed.
  - status_details(Option<StatusDetails>): For a Distillation job, the details about the statuses of the sub-tasks of the customization job.
  - creation_time(DateTime): Time that the resource was created.
  - last_modified_time(Option<DateTime>): Time that the resource was last modified.
  - end_time(Option<DateTime>): Time that the resource transitioned to terminal state.
  - base_model_arn(String): Amazon Resource Name (ARN) of the base model.
  - hyper_parameters(Option<HashMap::<String, String>>): The hyperparameter values for the job. For details on the format for different models, see Custom model hyperparameters.
  - training_data_config(Option<TrainingDataConfig>): Contains information about the training dataset.
  - validation_data_config(Option<ValidationDataConfig>): Contains information about the validation dataset.
  - output_data_config(Option<OutputDataConfig>): Output data configuration.
  - customization_type(Option<CustomizationType>): The type of model customization.
  - output_model_kms_key_arn(Option<String>): The custom model is encrypted at rest using this key.
  - training_metrics(Option<TrainingMetrics>): Contains training metrics from the job creation.
  - validation_metrics(Option<Vec::<ValidatorMetric>>): The loss metric for each validator that you provided in the createjob request.
  - vpc_config(Option<VpcConfig>): VPC configuration for the custom model job.
  - customization_config(Option<CustomizationConfig>): The customization configuration for the model customization job.
- On failure, responds with SdkError<GetModelCustomizationJobError>
impl Client
pub fn get_model_import_job(&self) -> GetModelImportJobFluentBuilder
Constructs a fluent builder for the GetModelImportJob operation.
- The fluent builder is configurable:
  - job_identifier(impl Into<String>) / set_job_identifier(Option<String>): required: true. The identifier of the import job.
- On success, responds with GetModelImportJobOutput with field(s):
  - job_arn(Option<String>): The Amazon Resource Name (ARN) of the import job.
  - job_name(Option<String>): The name of the import job.
  - imported_model_name(Option<String>): The name of the imported model.
  - imported_model_arn(Option<String>): The Amazon Resource Name (ARN) of the imported model.
  - role_arn(Option<String>): The Amazon Resource Name (ARN) of the IAM role associated with this job.
  - model_data_source(Option<ModelDataSource>): The data source for the imported model.
  - status(Option<ModelImportJobStatus>): The status of the job. A successful job transitions from in-progress to completed when the imported model is ready to use. If the job failed, the failure message contains information about why the job failed.
  - failure_message(Option<String>): Information about why the import job failed.
  - creation_time(Option<DateTime>): The time the resource was created.
  - last_modified_time(Option<DateTime>): Time the resource was last modified.
  - end_time(Option<DateTime>): Time that the resource transitioned to terminal state.
  - vpc_config(Option<VpcConfig>): The Virtual Private Cloud (VPC) configuration of the import model job.
  - imported_model_kms_key_arn(Option<String>): The imported model is encrypted at rest using this key.
- On failure, responds with SdkError<GetModelImportJobError>
impl Client
pub fn get_model_invocation_job(&self) -> GetModelInvocationJobFluentBuilder
Constructs a fluent builder for the GetModelInvocationJob operation.
- The fluent builder is configurable:
  - job_identifier(impl Into<String>) / set_job_identifier(Option<String>): required: true. The Amazon Resource Name (ARN) of the batch inference job.
- On success, responds with GetModelInvocationJobOutput with field(s):
  - job_arn(String): The Amazon Resource Name (ARN) of the batch inference job.
  - job_name(Option<String>): The name of the batch inference job.
  - model_id(String): The unique identifier of the foundation model used for model inference.
  - client_request_token(Option<String>): A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
  - role_arn(String): The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference. You can use the console to create a default service role or follow the steps at Create a service role for batch inference.
  - status(Option<ModelInvocationJobStatus>): The status of the batch inference job. The following statuses are possible:
    - Submitted: This job has been submitted to a queue for validation.
    - Validating: This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:
      - Your IAM service role has access to the Amazon S3 buckets containing your files.
      - Your files are .jsonl files and each individual record is a JSON object in the correct format. Note that validation doesn’t check if the modelInput value matches the request body for the model.
      - Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.
    - Scheduled: This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.
    - Expired: This job timed out because it was scheduled but didn’t begin before the set timeout duration. Submit a new job request.
    - InProgress: This job has begun. You can start viewing the results in the output S3 location.
    - Completed: This job has successfully completed. View the output files in the output S3 location.
    - PartiallyCompleted: This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.
    - Failed: This job has failed. Check the failure message for any further details. For further assistance, reach out to the Amazon Web Services Support Center.
    - Stopped: This job was stopped by a user.
    - Stopping: This job is being stopped by a user.
  - message(Option<String>): If the batch inference job failed, this field contains a message describing why the job failed.
  - submit_time(DateTime): The time at which the batch inference job was submitted.
  - last_modified_time(Option<DateTime>): The time at which the batch inference job was last modified.
  - end_time(Option<DateTime>): The time at which the batch inference job ended.
  - input_data_config(Option<ModelInvocationJobInputDataConfig>): Details about the location of the input to the batch inference job.
  - output_data_config(Option<ModelInvocationJobOutputDataConfig>): Details about the location of the output of the batch inference job.
  - vpc_config(Option<VpcConfig>): The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job. For more information, see Protect batch inference jobs using a VPC.
  - timeout_duration_in_hours(Option<i32>): The number of hours after which the batch inference job was set to time out.
  - job_expiration_time(Option<DateTime>): The time at which the batch inference job times or timed out.
- On failure, responds with SdkError<GetModelInvocationJobError>
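A sketch of checking a batch inference job's status; the job ARN is a placeholder, and the match assumes the ModelInvocationJobStatus enum generated in aws_sdk_bedrock::types:

```rust
// Sketch: inspect the status of a batch inference job (the ARN is a placeholder).
use aws_sdk_bedrock::types::ModelInvocationJobStatus;

let output = client
    .get_model_invocation_job()
    .job_identifier("arn:aws:bedrock:us-east-1:111122223333:model-invocation-job/example")
    .send()
    .await?;
match output.status() {
    Some(ModelInvocationJobStatus::Completed) => {
        println!("job finished; view the output files in the output S3 location")
    }
    Some(ModelInvocationJobStatus::Failed) => {
        eprintln!("job failed: {:?}", output.message())
    }
    other => println!("job status: {other:?}"),
}
```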
impl Client
pub fn get_model_invocation_logging_configuration(&self) -> GetModelInvocationLoggingConfigurationFluentBuilder
Constructs a fluent builder for the GetModelInvocationLoggingConfiguration operation.
- The fluent builder takes no input, just send it.
- On success, responds with GetModelInvocationLoggingConfigurationOutput with field(s):
  - logging_config(Option<LoggingConfig>): The current configuration values.
- On failure, responds with SdkError<GetModelInvocationLoggingConfigurationError>
impl Client
pub fn get_prompt_router(&self) -> GetPromptRouterFluentBuilder
Constructs a fluent builder for the GetPromptRouter operation.
- The fluent builder is configurable:
  - prompt_router_arn(impl Into<String>) / set_prompt_router_arn(Option<String>): required: true. The prompt router’s ARN.
- On success, responds with GetPromptRouterOutput with field(s):
  - prompt_router_name(String): The router’s name.
  - routing_criteria(Option<RoutingCriteria>): The router’s routing criteria.
  - description(Option<String>): The router’s description.
  - created_at(Option<DateTime>): When the router was created.
  - updated_at(Option<DateTime>): When the router was updated.
  - prompt_router_arn(String): The prompt router’s ARN.
  - models(Vec::<PromptRouterTargetModel>): The router’s models.
  - fallback_model(Option<PromptRouterTargetModel>): The router’s fallback model.
  - status(PromptRouterStatus): The router’s status.
  - r#type(PromptRouterType): The router’s type.
- On failure, responds with SdkError<GetPromptRouterError>
impl Client
pub fn get_provisioned_model_throughput(&self) -> GetProvisionedModelThroughputFluentBuilder
Constructs a fluent builder for the GetProvisionedModelThroughput operation.
- The fluent builder is configurable:
  - provisioned_model_id(impl Into<String>) / set_provisioned_model_id(Option<String>): required: true. The Amazon Resource Name (ARN) or name of the Provisioned Throughput.
- On success, responds with GetProvisionedModelThroughputOutput with field(s):
  - model_units(i32): The number of model units allocated to this Provisioned Throughput.
  - desired_model_units(i32): The number of model units that was requested for this Provisioned Throughput.
  - provisioned_model_name(String): The name of the Provisioned Throughput.
  - provisioned_model_arn(String): The Amazon Resource Name (ARN) of the Provisioned Throughput.
  - model_arn(String): The Amazon Resource Name (ARN) of the model associated with this Provisioned Throughput.
  - desired_model_arn(String): The Amazon Resource Name (ARN) of the model requested to be associated to this Provisioned Throughput. This value differs from the modelArn if updating hasn’t completed.
  - foundation_model_arn(String): The Amazon Resource Name (ARN) of the base model for which the Provisioned Throughput was created, or of the base model that the custom model for which the Provisioned Throughput was created was customized.
  - status(ProvisionedModelStatus): The status of the Provisioned Throughput.
  - creation_time(DateTime): The timestamp of the creation time for this Provisioned Throughput.
  - last_modified_time(DateTime): The timestamp of the last time that this Provisioned Throughput was modified.
  - failure_message(Option<String>): A failure message for any issues that occurred during creation, updating, or deletion of the Provisioned Throughput.
  - commitment_duration(Option<CommitmentDuration>): Commitment duration of the Provisioned Throughput.
  - commitment_expiration_time(Option<DateTime>): The timestamp for when the commitment term for the Provisioned Throughput expires.
- On failure, responds with SdkError<GetProvisionedModelThroughputError>
impl Client
pub fn list_custom_models(&self) -> ListCustomModelsFluentBuilder
Constructs a fluent builder for the ListCustomModels operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - creation_time_before(DateTime) / set_creation_time_before(Option<DateTime>): required: false. Return custom models created before the specified time.
  - creation_time_after(DateTime) / set_creation_time_after(Option<DateTime>): required: false. Return custom models created after the specified time.
  - name_contains(impl Into<String>) / set_name_contains(Option<String>): required: false. Return custom models only if the job name contains these characters.
  - base_model_arn_equals(impl Into<String>) / set_base_model_arn_equals(Option<String>): required: false. Return custom models only if the base model Amazon Resource Name (ARN) matches this parameter.
  - foundation_model_arn_equals(impl Into<String>) / set_foundation_model_arn_equals(Option<String>): required: false. Return custom models only if the foundation model Amazon Resource Name (ARN) matches this parameter.
  - max_results(i32) / set_max_results(Option<i32>): required: false. The maximum number of results to return in the response. If the total number of results is greater than this value, use the token returned in the response in the nextToken field when making another request to return the next batch of results.
  - next_token(impl Into<String>) / set_next_token(Option<String>): required: false. If the total number of results is greater than the maxResults value provided in the request, enter the token returned in the nextToken field in the response in this field to return the next batch of results.
  - sort_by(SortModelsBy) / set_sort_by(Option<SortModelsBy>): required: false. The field to sort by in the returned list of models.
  - sort_order(SortOrder) / set_sort_order(Option<SortOrder>): required: false. The sort order of the results.
  - is_owned(bool) / set_is_owned(Option<bool>): required: false. Return custom models depending on if the current account owns them (true) or if they were shared with the current account (false).
- On success, responds with ListCustomModelsOutput with field(s):
  - next_token(Option<String>): If the total number of results is greater than the maxResults value provided in the request, use this token when making another request in the nextToken field to return the next batch of results.
  - model_summaries(Option<Vec::<CustomModelSummary>>): Model summaries.
- On failure, responds with SdkError<ListCustomModelsError>
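A pagination sketch: rather than threading nextToken by hand, into_paginator() can drive the requests. The stream and accessor shapes below follow the SDK's usual generated-paginator conventions and may differ slightly between SDK versions:

```rust
// Sketch: iterate over every page of custom models owned by this account.
let mut pages = client
    .list_custom_models()
    .is_owned(true)
    .into_paginator()
    .send();
while let Some(page) = pages.next().await {
    let page = page?; // each item is a Result<ListCustomModelsOutput, SdkError<...>>
    for summary in page.model_summaries() {
        println!("{:?}", summary.model_name());
    }
}
```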
impl Client
pub fn list_evaluation_jobs(&self) -> ListEvaluationJobsFluentBuilder
Constructs a fluent builder for the ListEvaluationJobs operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - creation_time_after(DateTime) / set_creation_time_after(Option<DateTime>): required: false. A filter to only list evaluation jobs created after a specified time.
  - creation_time_before(DateTime) / set_creation_time_before(Option<DateTime>): required: false. A filter to only list evaluation jobs created before a specified time.
  - status_equals(EvaluationJobStatus) / set_status_equals(Option<EvaluationJobStatus>): required: false. A filter to only list evaluation jobs that are of a certain status.
  - application_type_equals(ApplicationType) / set_application_type_equals(Option<ApplicationType>): required: false. A filter to only list evaluation jobs that are either model evaluations or knowledge base evaluations.
  - name_contains(impl Into<String>) / set_name_contains(Option<String>): required: false. A filter to only list evaluation jobs that contain a specified string in the job name.
  - max_results(i32) / set_max_results(Option<i32>): required: false. The maximum number of results to return.
  - next_token(impl Into<String>) / set_next_token(Option<String>): required: false. Continuation token from the previous response, for Amazon Bedrock to list the next set of results.
  - sort_by(SortJobsBy) / set_sort_by(Option<SortJobsBy>): required: false. Specifies a creation time to sort the list of evaluation jobs by when they were created.
  - sort_order(SortOrder) / set_sort_order(Option<SortOrder>): required: false. Specifies whether to sort the list of evaluation jobs by either ascending or descending order.
- On success, responds with ListEvaluationJobsOutput with field(s):
  - next_token(Option<String>): Continuation token from the previous response, for Amazon Bedrock to list the next set of results.
  - job_summaries(Option<Vec::<EvaluationSummary>>): A list of summaries of the evaluation jobs.
- On failure, responds with SdkError<ListEvaluationJobsError>
Source§impl Client
impl Client
Sourcepub fn list_foundation_models(&self) -> ListFoundationModelsFluentBuilder
pub fn list_foundation_models(&self) -> ListFoundationModelsFluentBuilder
Constructs a fluent builder for the ListFoundationModels
operation.
- The fluent builder is configurable:
by_provider(impl Into<String>)
/set_by_provider(Option<String>)
:
required: falseReturn models belonging to the model provider that you specify.
by_customization_type(ModelCustomization)
/set_by_customization_type(Option<ModelCustomization>)
:
required: falseReturn models that support the customization type that you specify. For more information, see Custom models in the Amazon Bedrock User Guide.
by_output_modality(ModelModality)
/set_by_output_modality(Option<ModelModality>)
:
required: falseReturn models that support the output modality that you specify.
by_inference_type(InferenceType)
/set_by_inference_type(Option<InferenceType>)
:
required: falseReturn models that support the inference type that you specify. For more information, see Provisioned Throughput in the Amazon Bedrock User Guide.
- On success, responds with
ListFoundationModelsOutput
with field(s):model_summaries(Option<Vec::<FoundationModelSummary>>)
:A list of Amazon Bedrock foundation models.
- On failure, responds with
SdkError<ListFoundationModelsError>
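As a concrete illustration of the filter builders above, the following sketch lists on-demand, text-output foundation models. It assumes a `client` constructed as shown in the introduction and a Tokio runtime; the accessor shapes (for example, `model_summaries()` returning a slice) follow recent releases of the crate and may differ in older versions.

```rust
use aws_sdk_bedrock::types::{InferenceType, ModelModality};

// Sketch: list foundation models that emit text and support
// on-demand inference, then print their model IDs.
async fn show_text_models(
    client: &aws_sdk_bedrock::Client,
) -> Result<(), aws_sdk_bedrock::Error> {
    let response = client
        .list_foundation_models()
        .by_output_modality(ModelModality::Text)
        .by_inference_type(InferenceType::OnDemand)
        .send()
        .await?;

    // model_summaries() flattens the Option into an (possibly empty) slice.
    for summary in response.model_summaries() {
        println!("{}", summary.model_id());
    }
    Ok(())
}
```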
impl Client
pub fn list_guardrails(&self) -> ListGuardrailsFluentBuilder
Constructs a fluent builder for the ListGuardrails
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
guardrail_identifier(impl Into<String>)
/set_guardrail_identifier(Option<String>)
:
required: falseThe unique identifier of the guardrail. This can be an ID or the ARN.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseThe maximum number of results to return in the response.
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf there are more results than were returned in the response, the response returns a
nextToken
that you can send in anotherListGuardrails
request to see the next batch of results.
- On success, responds with
ListGuardrailsOutput
with field(s):guardrails(Vec::<GuardrailSummary>)
:A list of objects, each of which contains details about a guardrail.
next_token(Option<String>)
:If there are more results than were returned in the response, the response returns a
nextToken
that you can send in anotherListGuardrails
request to see the next batch of results.
- On failure, responds with
SdkError<ListGuardrailsError>
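The `into_paginator()` support mentioned above removes the need to thread `nextToken` by hand. A sketch, assuming an existing `client` and a Tokio runtime; `name()` and `id()` are the `GuardrailSummary` accessors as exposed in recent crate versions:

```rust
use aws_sdk_bedrock::Client;

// Sketch: stream every page of guardrail summaries. The paginator
// fetches additional pages lazily as the stream is polled.
async fn print_all_guardrails(client: &Client) -> Result<(), aws_sdk_bedrock::Error> {
    let mut pages = client
        .list_guardrails()
        .max_results(10) // page size per request
        .into_paginator()
        .send();

    // Each item is a Result<ListGuardrailsOutput, SdkError<...>>.
    while let Some(page) = pages.next().await {
        for guardrail in page?.guardrails() {
            println!("{} ({})", guardrail.name(), guardrail.id());
        }
    }
    Ok(())
}
```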
impl Client
pub fn list_imported_models(&self) -> ListImportedModelsFluentBuilder
Constructs a fluent builder for the ListImportedModels
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
creation_time_before(DateTime)
/set_creation_time_before(Option<DateTime>)
:
required: falseReturn imported models that were created before the specified time.
creation_time_after(DateTime)
/set_creation_time_after(Option<DateTime>)
:
required: falseReturn imported models that were created after the specified time.
name_contains(impl Into<String>)
/set_name_contains(Option<String>)
:
required: falseReturn imported models only if the model name contains these characters.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseThe maximum number of results to return in the response. If the total number of results is greater than this value, use the token returned in the response in the
nextToken
field when making another request to return the next batch of results.next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the total number of results is greater than the
maxResults
value provided in the request, enter the token returned in thenextToken
field in the response in this field to return the next batch of results.sort_by(SortModelsBy)
/set_sort_by(Option<SortModelsBy>)
:
required: falseThe field to sort by in the returned list of imported models.
sort_order(SortOrder)
/set_sort_order(Option<SortOrder>)
:
required: falseSpecifies whether to sort the results in ascending or descending order.
- On success, responds with
ListImportedModelsOutput
with field(s):next_token(Option<String>)
:If the total number of results is greater than the
maxResults
value provided in the request, use this token when making another request in thenextToken
field to return the next batch of results.model_summaries(Option<Vec::<ImportedModelSummary>>)
:Model summaries.
- On failure, responds with
SdkError<ListImportedModelsError>
impl Client
pub fn list_inference_profiles(&self) -> ListInferenceProfilesFluentBuilder
Constructs a fluent builder for the ListInferenceProfiles
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseThe maximum number of results to return in the response. If the total number of results is greater than this value, use the token returned in the response in the
nextToken
field when making another request to return the next batch of results.next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the total number of results is greater than the
maxResults
value provided in the request, enter the token returned in thenextToken
field in the response in this field to return the next batch of results.type_equals(InferenceProfileType)
/set_type_equals(Option<InferenceProfileType>)
:
required: falseFilters for inference profiles that match the type you specify.
-
SYSTEM_DEFINED
– The inference profile is defined by Amazon Bedrock. You can route inference requests across regions with these inference profiles. -
APPLICATION
– The inference profile was created by a user. This type of inference profile can track metrics and costs when invoking the model in it. The inference profile may route requests to one or multiple regions.
-
- On success, responds with
ListInferenceProfilesOutput
with field(s):inference_profile_summaries(Option<Vec::<InferenceProfileSummary>>)
:A list of information about each inference profile that you can use.
next_token(Option<String>)
:If the total number of results is greater than the
maxResults
value provided in the request, use this token when making another request in thenextToken
field to return the next batch of results.
- On failure, responds with
SdkError<ListInferenceProfilesError>
impl Client
pub fn list_marketplace_model_endpoints(&self) -> ListMarketplaceModelEndpointsFluentBuilder
Constructs a fluent builder for the ListMarketplaceModelEndpoints
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseThe maximum number of results to return in a single call. If more results are available, the operation returns a
NextToken
value.next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseThe token for the next set of results. You receive this token from a previous
ListMarketplaceModelEndpoints
call.model_source_equals(impl Into<String>)
/set_model_source_equals(Option<String>)
:
required: falseIf specified, only endpoints for the given model source identifier are returned.
- On success, responds with
ListMarketplaceModelEndpointsOutput
with field(s):marketplace_model_endpoints(Option<Vec::<MarketplaceModelEndpointSummary>>)
:An array of endpoint summaries.
next_token(Option<String>)
:The token for the next set of results. Use this token to get the next set of results.
- On failure, responds with
SdkError<ListMarketplaceModelEndpointsError>
impl Client
pub fn list_model_copy_jobs(&self) -> ListModelCopyJobsFluentBuilder
Constructs a fluent builder for the ListModelCopyJobs
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
creation_time_after(DateTime)
/set_creation_time_after(Option<DateTime>)
:
required: falseFilters for model copy jobs created after the specified time.
creation_time_before(DateTime)
/set_creation_time_before(Option<DateTime>)
:
required: falseFilters for model copy jobs created before the specified time.
status_equals(ModelCopyJobStatus)
/set_status_equals(Option<ModelCopyJobStatus>)
:
required: falseFilters for model copy jobs whose status matches the value that you specify.
source_account_equals(impl Into<String>)
/set_source_account_equals(Option<String>)
:
required: falseFilters for model copy jobs in which the account that the source model belongs to is equal to the value that you specify.
source_model_arn_equals(impl Into<String>)
/set_source_model_arn_equals(Option<String>)
:
required: falseFilters for model copy jobs in which the Amazon Resource Name (ARN) of the source model is equal to the value that you specify.
target_model_name_contains(impl Into<String>)
/set_target_model_name_contains(Option<String>)
:
required: falseFilters for model copy jobs in which the name of the copied model contains the string that you specify.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseThe maximum number of results to return in the response. If the total number of results is greater than this value, use the token returned in the response in the
nextToken
field when making another request to return the next batch of results.next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the total number of results is greater than the
maxResults
value provided in the request, enter the token returned in thenextToken
field in the response in this field to return the next batch of results.sort_by(SortJobsBy)
/set_sort_by(Option<SortJobsBy>)
:
required: falseThe field to sort by in the returned list of model copy jobs.
sort_order(SortOrder)
/set_sort_order(Option<SortOrder>)
:
required: falseSpecifies whether to sort the results in ascending or descending order.
- On success, responds with
ListModelCopyJobsOutput
with field(s):next_token(Option<String>)
:If the total number of results is greater than the
maxResults
value provided in the request, use this token when making another request in thenextToken
field to return the next batch of results.model_copy_job_summaries(Option<Vec::<ModelCopyJobSummary>>)
:A list of information about each model copy job.
- On failure, responds with
SdkError<ListModelCopyJobsError>
impl Client
pub fn list_model_customization_jobs(&self) -> ListModelCustomizationJobsFluentBuilder
Constructs a fluent builder for the ListModelCustomizationJobs
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
creation_time_after(DateTime)
/set_creation_time_after(Option<DateTime>)
:
required: falseReturn customization jobs created after the specified time.
creation_time_before(DateTime)
/set_creation_time_before(Option<DateTime>)
:
required: falseReturn customization jobs created before the specified time.
status_equals(FineTuningJobStatus)
/set_status_equals(Option<FineTuningJobStatus>)
:
required: falseReturn customization jobs with the specified status.
name_contains(impl Into<String>)
/set_name_contains(Option<String>)
:
required: falseReturn customization jobs only if the job name contains these characters.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseThe maximum number of results to return in the response. If the total number of results is greater than this value, use the token returned in the response in the
nextToken
field when making another request to return the next batch of results.next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the total number of results is greater than the
maxResults
value provided in the request, enter the token returned in thenextToken
field in the response in this field to return the next batch of results.sort_by(SortJobsBy)
/set_sort_by(Option<SortJobsBy>)
:
required: falseThe field to sort by in the returned list of jobs.
sort_order(SortOrder)
/set_sort_order(Option<SortOrder>)
:
required: falseThe sort order of the results.
- On success, responds with
ListModelCustomizationJobsOutput
with field(s):next_token(Option<String>)
:If the total number of results is greater than the
maxResults
value provided in the request, use this token when making another request in thenextToken
field to return the next batch of results.model_customization_job_summaries(Option<Vec::<ModelCustomizationJobSummary>>)
:Job summaries.
- On failure, responds with
SdkError<ListModelCustomizationJobsError>
impl Client
pub fn list_model_import_jobs(&self) -> ListModelImportJobsFluentBuilder
Constructs a fluent builder for the ListModelImportJobs
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
creation_time_after(DateTime)
/set_creation_time_after(Option<DateTime>)
:
required: falseReturn import jobs that were created after the specified time.
creation_time_before(DateTime)
/set_creation_time_before(Option<DateTime>)
:
required: falseReturn import jobs that were created before the specified time.
status_equals(ModelImportJobStatus)
/set_status_equals(Option<ModelImportJobStatus>)
:
required: falseReturn import jobs with the specified status.
name_contains(impl Into<String>)
/set_name_contains(Option<String>)
:
required: falseReturn import jobs only if the job name contains these characters.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseThe maximum number of results to return in the response. If the total number of results is greater than this value, use the token returned in the response in the
nextToken
field when making another request to return the next batch of results.next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the total number of results is greater than the
maxResults
value provided in the request, enter the token returned in thenextToken
field in the response in this field to return the next batch of results.sort_by(SortJobsBy)
/set_sort_by(Option<SortJobsBy>)
:
required: falseThe field to sort by in the returned list of imported jobs.
sort_order(SortOrder)
/set_sort_order(Option<SortOrder>)
:
required: falseSpecifies whether to sort the results in ascending or descending order.
- On success, responds with
ListModelImportJobsOutput
with field(s):next_token(Option<String>)
:If the total number of results is greater than the
maxResults
value provided in the request, use this token when making another request in thenextToken
field to return the next batch of results.model_import_job_summaries(Option<Vec::<ModelImportJobSummary>>)
:Import job summaries.
- On failure, responds with
SdkError<ListModelImportJobsError>
impl Client
pub fn list_model_invocation_jobs(&self) -> ListModelInvocationJobsFluentBuilder
Constructs a fluent builder for the ListModelInvocationJobs
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
submit_time_after(DateTime)
/set_submit_time_after(Option<DateTime>)
:
required: falseSpecify a time to filter for batch inference jobs that were submitted after the time you specify.
submit_time_before(DateTime)
/set_submit_time_before(Option<DateTime>)
:
required: falseSpecify a time to filter for batch inference jobs that were submitted before the time you specify.
status_equals(ModelInvocationJobStatus)
/set_status_equals(Option<ModelInvocationJobStatus>)
:
required: falseSpecify a status to filter for batch inference jobs whose statuses match the string you specify.
The following statuses are possible:
-
Submitted – This job has been submitted to a queue for validation.
-
Validating – This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:
-
Your IAM service role has access to the Amazon S3 buckets containing your files.
-
Your files are .jsonl files and each individual record is a JSON object in the correct format. Note that validation doesn’t check if the
modelInput
value matches the request body for the model. -
Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.
-
-
Scheduled – This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.
-
Expired – This job timed out because it was scheduled but didn’t begin before the set timeout duration. Submit a new job request.
-
InProgress – This job has begun. You can start viewing the results in the output S3 location.
-
Completed – This job has successfully completed. View the output files in the output S3 location.
-
PartiallyCompleted – This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.
-
Failed – This job has failed. Check the failure message for any further details. For further assistance, reach out to the Amazon Web Services Support Center.
-
Stopped – This job was stopped by a user.
-
Stopping – This job is being stopped by a user.
-
name_contains(impl Into<String>)
/set_name_contains(Option<String>)
:
required: falseSpecify a string to filter for batch inference jobs whose names contain the string.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseThe maximum number of results to return. If there are more results than the number that you specify, a
nextToken
value is returned. Use thenextToken
in a request to return the next batch of results.next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf there were more results than the value you specified in the
maxResults
field in a previousListModelInvocationJobs
request, the response would have returned anextToken
value. To see the next batch of results, send thenextToken
value in another request.sort_by(SortJobsBy)
/set_sort_by(Option<SortJobsBy>)
:
required: falseAn attribute by which to sort the results.
sort_order(SortOrder)
/set_sort_order(Option<SortOrder>)
:
required: falseSpecifies whether to sort the results by ascending or descending order.
- On success, responds with
ListModelInvocationJobsOutput
with field(s):next_token(Option<String>)
:If there are more results than can fit in the response, a
nextToken
is returned. Use thenextToken
in a request to return the next batch of results.invocation_job_summaries(Option<Vec::<ModelInvocationJobSummary>>)
:A list of items, each of which contains a summary about a batch inference job.
- On failure, responds with
SdkError<ListModelInvocationJobsError>
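When `into_paginator()` is not a good fit, the `nextToken` contract described above can be driven by hand. A sketch, assuming an existing `client` and a Tokio runtime; `Failed` is one of the statuses listed above, and the summary accessors (`job_name()`, `message()`) follow recent crate versions:

```rust
use aws_sdk_bedrock::types::ModelInvocationJobStatus;

// Sketch: manually page through failed batch inference jobs,
// forwarding each response's next_token into the next request.
async fn list_failed_jobs(
    client: &aws_sdk_bedrock::Client,
) -> Result<(), aws_sdk_bedrock::Error> {
    let mut next_token: Option<String> = None;
    loop {
        let response = client
            .list_model_invocation_jobs()
            .status_equals(ModelInvocationJobStatus::Failed)
            .set_next_token(next_token) // None on the first request
            .send()
            .await?;

        for job in response.invocation_job_summaries() {
            println!("{}: {:?}", job.job_name(), job.message());
        }

        // Stop once the service no longer returns a continuation token.
        next_token = response.next_token().map(str::to_string);
        if next_token.is_none() {
            break;
        }
    }
    Ok(())
}
```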
impl Client
pub fn list_prompt_routers(&self) -> ListPromptRoutersFluentBuilder
Constructs a fluent builder for the ListPromptRouters
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseThe maximum number of prompt routers to return in one page of results.
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseSpecify the pagination token from a previous request to retrieve the next page of results.
r#type(PromptRouterType)
/set_type(Option<PromptRouterType>)
:
required: falseThe type of prompt routers to list, such as whether they are default or custom.
- On success, responds with
ListPromptRoutersOutput
with field(s):prompt_router_summaries(Option<Vec::<PromptRouterSummary>>)
:A list of prompt router summaries.
next_token(Option<String>)
:The pagination token to include in a subsequent request to retrieve the next page of results.
- On failure, responds with
SdkError<ListPromptRoutersError>
impl Client
pub fn list_provisioned_model_throughputs(&self) -> ListProvisionedModelThroughputsFluentBuilder
Constructs a fluent builder for the ListProvisionedModelThroughputs
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
creation_time_after(DateTime)
/set_creation_time_after(Option<DateTime>)
:
required: falseA filter that returns Provisioned Throughputs created after the specified time.
creation_time_before(DateTime)
/set_creation_time_before(Option<DateTime>)
:
required: falseA filter that returns Provisioned Throughputs created before the specified time.
status_equals(ProvisionedModelStatus)
/set_status_equals(Option<ProvisionedModelStatus>)
:
required: falseA filter that returns Provisioned Throughputs if their status matches the value that you specify.
model_arn_equals(impl Into<String>)
/set_model_arn_equals(Option<String>)
:
required: falseA filter that returns Provisioned Throughputs whose model Amazon Resource Name (ARN) is equal to the value that you specify.
name_contains(impl Into<String>)
/set_name_contains(Option<String>)
:
required: falseA filter that returns Provisioned Throughputs if their name contains the expression that you specify.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseThe maximum number of results to return in the response. If there are more results than the number you specified, the response returns a
nextToken
value. To see the next batch of results, send thenextToken
value in another list request.next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf there are more results than the number you specified in the
maxResults
field, the response returns anextToken
value. To see the next batch of results, specify thenextToken
value in this field.sort_by(SortByProvisionedModels)
/set_sort_by(Option<SortByProvisionedModels>)
:
required: falseThe field by which to sort the returned list of Provisioned Throughputs.
sort_order(SortOrder)
/set_sort_order(Option<SortOrder>)
:
required: falseThe sort order of the results.
- On success, responds with
ListProvisionedModelThroughputsOutput
with field(s):next_token(Option<String>)
:If there are more results than the number you specified in the
maxResults
field, this value is returned. To see the next batch of results, include this value in thenextToken
field in another list request.provisioned_model_summaries(Option<Vec::<ProvisionedModelSummary>>)
:A list of summaries, one for each Provisioned Throughput in the response.
- On failure, responds with
SdkError<ListProvisionedModelThroughputsError>
impl Client
pub fn list_tags_for_resource(&self) -> ListTagsForResourceFluentBuilder
Constructs a fluent builder for the ListTagsForResource
operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>)
/set_resource_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the resource.
- On success, responds with
ListTagsForResourceOutput
with field(s):tags(Option<Vec::<Tag>>)
:An array of the tags associated with this resource.
- On failure, responds with
SdkError<ListTagsForResourceError>
impl Client
pub fn put_model_invocation_logging_configuration(&self) -> PutModelInvocationLoggingConfigurationFluentBuilder
Constructs a fluent builder for the PutModelInvocationLoggingConfiguration
operation.
- The fluent builder is configurable:
logging_config(LoggingConfig)
/set_logging_config(Option<LoggingConfig>)
:
required: trueThe logging configuration values to set.
- On success, responds with
PutModelInvocationLoggingConfigurationOutput
- On failure, responds with
SdkError<PutModelInvocationLoggingConfigurationError>
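The `LoggingConfig` value above is built from nested builders in the crate's `types` module. A hedged sketch that routes invocation logs to CloudWatch Logs; the builder and field names (`CloudWatchConfig`, `log_group_name`, `role_arn`, `text_data_delivery_enabled`) are taken from recent crate versions and may differ in yours, and the log group and role ARN are placeholders:

```rust
use aws_sdk_bedrock::types::{CloudWatchConfig, LoggingConfig};

// Sketch: enable model invocation logging to CloudWatch Logs.
async fn enable_invocation_logging(
    client: &aws_sdk_bedrock::Client,
) -> Result<(), aws_sdk_bedrock::Error> {
    // log_group_name and role_arn are required, so build() validates them.
    let cloudwatch = CloudWatchConfig::builder()
        .log_group_name("/bedrock/invocation-logs") // placeholder log group
        .role_arn("arn:aws:iam::111122223333:role/BedrockLogging") // placeholder role
        .build()
        .expect("log group name and role ARN are set");

    let logging = LoggingConfig::builder()
        .cloud_watch_config(cloudwatch)
        .text_data_delivery_enabled(true)
        .build();

    client
        .put_model_invocation_logging_configuration()
        .logging_config(logging)
        .send()
        .await?;
    Ok(())
}
```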
impl Client
pub fn register_marketplace_model_endpoint(&self) -> RegisterMarketplaceModelEndpointFluentBuilder
Constructs a fluent builder for the RegisterMarketplaceModelEndpoint
operation.
- The fluent builder is configurable:
endpoint_identifier(impl Into<String>)
/set_endpoint_identifier(Option<String>)
:
required: trueThe ARN of the Amazon SageMaker endpoint you want to register with Amazon Bedrock Marketplace.
model_source_identifier(impl Into<String>)
/set_model_source_identifier(Option<String>)
:
required: trueThe ARN of the model from Amazon Bedrock Marketplace that is deployed on the endpoint.
- On success, responds with
RegisterMarketplaceModelEndpointOutput
with field(s):marketplace_model_endpoint(Option<MarketplaceModelEndpoint>)
:Details about the registered endpoint.
- On failure, responds with
SdkError<RegisterMarketplaceModelEndpointError>
impl Client
pub fn stop_evaluation_job(&self) -> StopEvaluationJobFluentBuilder
Constructs a fluent builder for the StopEvaluationJob
operation.
- The fluent builder is configurable:
job_identifier(impl Into<String>)
/set_job_identifier(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the evaluation job you want to stop.
- On success, responds with
StopEvaluationJobOutput
- On failure, responds with
SdkError<StopEvaluationJobError>
impl Client
pub fn stop_model_customization_job(&self) -> StopModelCustomizationJobFluentBuilder
Constructs a fluent builder for the StopModelCustomizationJob
operation.
- The fluent builder is configurable:
job_identifier(impl Into<String>)
/set_job_identifier(Option<String>)
:
required: trueJob identifier of the job to stop.
- On success, responds with
StopModelCustomizationJobOutput
- On failure, responds with
SdkError<StopModelCustomizationJobError>
impl Client
pub fn stop_model_invocation_job(&self) -> StopModelInvocationJobFluentBuilder
Constructs a fluent builder for the StopModelInvocationJob
operation.
- The fluent builder is configurable:
job_identifier(impl Into<String>)
/set_job_identifier(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the batch inference job to stop.
- On success, responds with
StopModelInvocationJobOutput
- On failure, responds with
SdkError<StopModelInvocationJobError>
impl Client
pub fn tag_resource(&self) -> TagResourceFluentBuilder
Constructs a fluent builder for the TagResource
operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>)
/set_resource_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the resource to tag.
tags(Tag)
/set_tags(Option<Vec::<Tag>>)
:
required: trueTags to associate with the resource.
- On success, responds with
TagResourceOutput
- On failure, responds with
SdkError<TagResourceError>
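Note that the `tags(Tag)` builder method appends one `Tag` per call, while `set_tags` replaces the whole list. A sketch, with a placeholder resource ARN; `Tag`'s `key` and `value` are required, so its `build()` returns a `Result`:

```rust
use aws_sdk_bedrock::types::Tag;

// Sketch: attach a single tag to a Bedrock resource.
async fn tag_model(client: &aws_sdk_bedrock::Client) -> Result<(), aws_sdk_bedrock::Error> {
    let team_tag = Tag::builder()
        .key("team")
        .value("ml-platform")
        .build()
        .expect("key and value are set");

    client
        .tag_resource()
        .resource_arn("arn:aws:bedrock:us-east-1:111122223333:custom-model/example") // placeholder ARN
        .tags(team_tag) // call .tags(...) again to add more tags
        .send()
        .await?;
    Ok(())
}
```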
impl Client
pub fn untag_resource(&self) -> UntagResourceFluentBuilder
Constructs a fluent builder for the UntagResource
operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>)
/set_resource_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the resource to untag.
tag_keys(impl Into<String>)
/set_tag_keys(Option<Vec::<String>>)
:
required: trueTag keys of the tags to remove from the resource.
- On success, responds with
UntagResourceOutput
- On failure, responds with
SdkError<UntagResourceError>
impl Client
pub fn update_guardrail(&self) -> UpdateGuardrailFluentBuilder
Constructs a fluent builder for the UpdateGuardrail
operation.
- The fluent builder is configurable:
guardrail_identifier(impl Into<String>)
/set_guardrail_identifier(Option<String>)
:
required: trueThe unique identifier of the guardrail. This can be an ID or the ARN.
name(impl Into<String>)
/set_name(Option<String>)
:
required: trueA name for the guardrail.
description(impl Into<String>)
/set_description(Option<String>)
:
required: falseA description of the guardrail.
topic_policy_config(GuardrailTopicPolicyConfig)
/set_topic_policy_config(Option<GuardrailTopicPolicyConfig>)
:
required: falseThe topic policy to configure for the guardrail.
content_policy_config(GuardrailContentPolicyConfig)
/set_content_policy_config(Option<GuardrailContentPolicyConfig>)
:
required: falseThe content policy to configure for the guardrail.
word_policy_config(GuardrailWordPolicyConfig)
/set_word_policy_config(Option<GuardrailWordPolicyConfig>)
:
required: falseThe word policy to configure for the guardrail.
sensitive_information_policy_config(GuardrailSensitiveInformationPolicyConfig)
/set_sensitive_information_policy_config(Option<GuardrailSensitiveInformationPolicyConfig>)
:
required: falseThe sensitive information policy to configure for the guardrail.
contextual_grounding_policy_config(GuardrailContextualGroundingPolicyConfig)
/set_contextual_grounding_policy_config(Option<GuardrailContextualGroundingPolicyConfig>)
:
required: falseThe contextual grounding policy configuration used to update a guardrail.
cross_region_config(GuardrailCrossRegionConfig)
/set_cross_region_config(Option<GuardrailCrossRegionConfig>)
:
required: falseThe system-defined guardrail profile that you’re using with your guardrail. Guardrail profiles define the destination Amazon Web Services Regions where guardrail inference requests can be automatically routed.
For more information, see the Amazon Bedrock User Guide.
blocked_input_messaging(impl Into<String>)
/set_blocked_input_messaging(Option<String>)
:
required: trueThe message to return when the guardrail blocks a prompt.
blocked_outputs_messaging(impl Into<String>)
/set_blocked_outputs_messaging(Option<String>)
:
required: trueThe message to return when the guardrail blocks a model response.
kms_key_id(impl Into<String>)
/set_kms_key_id(Option<String>)
:
required: falseThe ARN of the KMS key with which to encrypt the guardrail.
- On success, responds with
UpdateGuardrailOutput
with field(s):guardrail_id(String)
:The unique identifier of the guardrail
guardrail_arn(String)
:The ARN of the guardrail.
version(String)
:The version of the guardrail.
updated_at(DateTime)
:The date and time at which the guardrail was updated.
- On failure, responds with
SdkError<UpdateGuardrailError>
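Since only four of the fields above are required, a minimal update call can be quite short. A sketch with placeholder identifier and messages, assuming an existing `client` and a Tokio runtime:

```rust
// Sketch: update a guardrail, supplying only the required fields
// (identifier, name, and the two blocked-content messages).
async fn rename_guardrail(
    client: &aws_sdk_bedrock::Client,
) -> Result<(), aws_sdk_bedrock::Error> {
    let output = client
        .update_guardrail()
        .guardrail_identifier("gr-example") // placeholder ID or ARN
        .name("renamed-guardrail")
        .blocked_input_messaging("Sorry, I can't help with that request.")
        .blocked_outputs_messaging("Sorry, I can't provide that response.")
        .send()
        .await?;

    println!("guardrail now at version {}", output.version());
    Ok(())
}
```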
impl Client
pub fn update_marketplace_model_endpoint(&self) -> UpdateMarketplaceModelEndpointFluentBuilder
Constructs a fluent builder for the UpdateMarketplaceModelEndpoint
operation.
- The fluent builder is configurable:
endpoint_arn(impl Into<String>)
/set_endpoint_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the endpoint you want to update.
endpoint_config(EndpointConfig)
/set_endpoint_config(Option<EndpointConfig>)
:
required: trueThe new configuration for the endpoint, including the number and type of instances to use.
client_request_token(impl Into<String>)
/set_client_request_token(Option<String>)
:
required: falseA unique, case-sensitive identifier that you provide to ensure the idempotency of the request. This token is listed as not required because Amazon Web Services SDKs automatically generate it for you and set this parameter. If you’re not using the Amazon Web Services SDK or the CLI, you must provide this token or the action will fail.
- On success, responds with
UpdateMarketplaceModelEndpointOutput
with field(s):marketplace_model_endpoint(Option<MarketplaceModelEndpoint>)
:Details about the updated endpoint.
- On failure, responds with
SdkError<UpdateMarketplaceModelEndpointError>
impl Client
pub fn update_provisioned_model_throughput(&self) -> UpdateProvisionedModelThroughputFluentBuilder
Constructs a fluent builder for the UpdateProvisionedModelThroughput
operation.
- The fluent builder is configurable:
provisioned_model_id(impl Into<String>) / set_provisioned_model_id(Option<String>):
required: true. The Amazon Resource Name (ARN) or name of the Provisioned Throughput to update.
desired_provisioned_model_name(impl Into<String>) / set_desired_provisioned_model_name(Option<String>):
required: false. The new name for this Provisioned Throughput.
desired_model_id(impl Into<String>) / set_desired_model_id(Option<String>):
required: false. The Amazon Resource Name (ARN) of the new model to associate with this Provisioned Throughput. You can’t specify this field if this Provisioned Throughput is associated with a base model.
If this Provisioned Throughput is associated with a custom model, you can specify one of the following options:
- The base model from which the custom model was customized.
- Another custom model that was customized from the same base model as the custom model.
- On success, responds with UpdateProvisionedModelThroughputOutput
- On failure, responds with SdkError<UpdateProvisionedModelThroughputError>
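A minimal sketch of the rename case, with placeholder identifier and name. desired_model_id is omitted because it only applies when switching between compatible custom models:

```rust
use aws_sdk_bedrock::{Client, Error};

// Hedged sketch: rename an existing Provisioned Throughput.
async fn rename_provisioned_model(client: &Client) -> Result<(), Error> {
    client
        .update_provisioned_model_throughput()
        .provisioned_model_id("example-provisioned-model") // ARN or name
        .desired_provisioned_model_name("renamed-provisioned-model")
        .send()
        .await?;
    Ok(())
}
```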
impl Client
pub fn from_conf(conf: Config) -> Self
Creates a new client from the service Config.
§Panics
This method will panic in the following cases:
- Retries or timeouts are enabled without a sleep_impl configured.
- Identity caching is enabled without a sleep_impl and time_source configured.
- No behavior_version is provided.
The panic message for each of these will have instructions on how to resolve them.
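A minimal sketch of building a service-specific Config by hand and passing it to from_conf. The region is a placeholder, and BehaviorVersion::latest() is one way to satisfy the behavior_version requirement described above:

```rust
use aws_sdk_bedrock::config::{BehaviorVersion, Config, Region};

// Hedged sketch: construct a client without going through aws-config.
fn make_client() -> aws_sdk_bedrock::Client {
    let conf = Config::builder()
        .behavior_version(BehaviorVersion::latest())
        .region(Region::new("us-east-1")) // placeholder region
        .build();
    aws_sdk_bedrock::Client::from_conf(conf)
}
```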
impl Client
pub fn new(sdk_config: &SdkConfig) -> Self
Creates a new client from an SDK Config.
§Panics
- This method will panic if the sdk_config is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
- This method will panic if the sdk_config is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.
- This method will panic if no BehaviorVersion is provided. If you experience this panic, set behavior_version on the Config or enable the behavior-version-latest Cargo feature.
Trait Implementations§
Auto Trait Implementations§
impl Freeze for Client
impl !RefUnwindSafe for Client
impl Send for Client
impl Sync for Client
impl Unpin for Client
impl !UnwindSafe for Client
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T where T: Clone
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
impl<T> Paint for T where T: ?Sized
fn fg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the foreground set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like red() and green(), which have the same functionality but are pithier.
§Example
Set foreground color to white using fg():
use yansi::{Paint, Color};
painted.fg(Color::White);
Set foreground color to white using white():
use yansi::Paint;
painted.white();
fn bright_black(&self) -> Painted<&T>
fn bright_red(&self) -> Painted<&T>
fn bright_green(&self) -> Painted<&T>
fn bright_yellow(&self) -> Painted<&T>
fn bright_blue(&self) -> Painted<&T>
fn bright_magenta(&self) -> Painted<&T>
fn bright_cyan(&self) -> Painted<&T>
fn bright_white(&self) -> Painted<&T>
fn bg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the background set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like on_red() and on_green(), which have the same functionality but are pithier.
§Example
Set background color to red using bg():
use yansi::{Paint, Color};
painted.bg(Color::Red);
Set background color to red using on_red():
use yansi::Paint;
painted.on_red();
fn on_primary(&self) -> Painted<&T>
fn on_magenta(&self) -> Painted<&T>
fn on_bright_black(&self) -> Painted<&T>
fn on_bright_red(&self) -> Painted<&T>
fn on_bright_green(&self) -> Painted<&T>
fn on_bright_yellow(&self) -> Painted<&T>
fn on_bright_blue(&self) -> Painted<&T>
fn on_bright_magenta(&self) -> Painted<&T>
fn on_bright_cyan(&self) -> Painted<&T>
fn on_bright_white(&self) -> Painted<&T>
fn attr(&self, value: Attribute) -> Painted<&T>
Enables the styling Attribute value.
This method should be used rarely. Instead, prefer to use attribute-specific builder methods like bold() and underline(), which have the same functionality but are pithier.
§Example
Make text bold using attr():
use yansi::{Paint, Attribute};
painted.attr(Attribute::Bold);
Make text bold using bold():
use yansi::Paint;
painted.bold();
fn rapid_blink(&self) -> Painted<&T>
fn quirk(&self, value: Quirk) -> Painted<&T>
Enables the yansi Quirk value.
This method should be used rarely. Instead, prefer to use quirk-specific builder methods like mask() and wrap(), which have the same functionality but are pithier.
§Example
Enable wrapping using quirk():
use yansi::{Paint, Quirk};
painted.quirk(Quirk::Wrap);
Enable wrapping using wrap():
use yansi::Paint;
painted.wrap();
fn clear(&self) -> Painted<&T>
👎Deprecated since 1.0.1: renamed to resetting() due to conflicts with Vec::clear(). The clear() method will be removed in a future release.
method will be removed in a future release.Source§fn whenever(&self, value: Condition) -> Painted<&T>
fn whenever(&self, value: Condition) -> Painted<&T>
Conditionally enable styling based on whether the Condition
value
applies. Replaces any previous condition.
See the crate level docs for more details.
§Example
Enable styling painted
only when both stdout
and stderr
are TTYs:
use yansi::{Paint, Condition};
painted.red().on_yellow().whenever(Condition::STDOUTERR_ARE_TTY);