pub struct Client { /* private fields */ }
Client for Amazon Bedrock Runtime
Client for invoking operations on Amazon Bedrock Runtime. Each operation on Amazon Bedrock Runtime is a method on
this struct. .send() must be invoked on the generated operations to dispatch the request to the service.
§Constructing a Client
A Config is required to construct a client. For most use cases, the aws-config
crate should be used to automatically resolve this config using
aws_config::load_from_env(), since this will resolve an SdkConfig which can be shared
across multiple different AWS SDK clients. This config resolution process can be customized
by calling aws_config::from_env() instead, which returns a ConfigLoader that uses
the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
let config = aws_config::load_from_env().await;
let client = aws_sdk_bedrockruntime::Client::new(&config);
Occasionally, SDKs may have additional service-specific values that can be set on the Config that
are absent from SdkConfig, or slightly different settings for a specific client may be desired.
The Builder struct implements From<&SdkConfig>, so setting these specific settings can be
done as follows:
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_bedrockruntime::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
    .build();
See the aws-config docs and Config for more information on customizing configuration.
Note: Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
§Using the Client
A client has a function for every operation that can be performed by the service.
For example, the ApplyGuardrail operation has
a Client::apply_guardrail function, which returns a builder for that operation.
The fluent builder ultimately has a send() function that returns an async future that
returns a result, as illustrated below:
let result = client.apply_guardrail()
.guardrail_identifier("example")
.send()
    .await;
The underlying HTTP requests made by the client can be modified with the customize_operation
function on the fluent builder. See the customize module for more
information.
§Implementations
impl Client
pub fn from_conf(conf: Config) -> Client
Creates a new client from the service Config.
§Panics
This method will panic in the following cases:
- Retries or timeouts are enabled without a sleep_impl configured.
- Identity caching is enabled without a sleep_impl and time_source configured.
- No behavior_version is provided.
The panic message for each of these will have instructions on how to resolve them.
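For example, a minimal sketch of constructing a client from a hand-built Config (the region value is a placeholder, and credentials configuration is omitted; setting behavior_version avoids the panic described above):
let config = aws_sdk_bedrockruntime::Config::builder()
    .behavior_version(aws_sdk_bedrockruntime::config::BehaviorVersion::latest())
    .region(aws_sdk_bedrockruntime::config::Region::new("us-east-1"))
    .build();
let client = aws_sdk_bedrockruntime::Client::from_conf(config);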
impl Client
pub fn new(sdk_config: &SdkConfig) -> Client
Creates a new client from an SDK Config.
§Panics
- This method will panic if the sdk_config is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
- This method will panic if the sdk_config is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.
- This method will panic if no BehaviorVersion is provided. If you experience this panic, set behavior_version on the Config or enable the behavior-version-latest Cargo feature.
impl Client
pub fn apply_guardrail(&self) -> ApplyGuardrailFluentBuilder
Constructs a fluent builder for the ApplyGuardrail operation.
- The fluent builder is configurable:
  - guardrail_identifier(impl Into<String>) / set_guardrail_identifier(Option<String>) (required: true): The guardrail identifier used in the request to apply the guardrail.
  - guardrail_version(impl Into<String>) / set_guardrail_version(Option<String>) (required: true): The guardrail version used in the request to apply the guardrail.
  - source(GuardrailContentSource) / set_source(Option<GuardrailContentSource>) (required: true): The source of data used in the request to apply the guardrail.
  - content(GuardrailContentBlock) / set_content(Option<Vec<GuardrailContentBlock>>) (required: true): The content details used in the request to apply the guardrail.
  - output_scope(GuardrailOutputScope) / set_output_scope(Option<GuardrailOutputScope>) (required: false): Specifies the scope of the output that you get in the response. Set to FULL to return the entire output, including any detected and non-detected entries in the response, for enhanced debugging. Note that the full output scope doesn't apply to word filters or regex in sensitive information filters. It does apply to all other filtering policies, including sensitive information filters that can detect personally identifiable information (PII).
- On success, responds with ApplyGuardrailOutput with field(s):
  - usage(Option<GuardrailUsage>): The usage details in the response from the guardrail.
  - action(GuardrailAction): The action taken in the response from the guardrail.
  - action_reason(Option<String>): The reason for the action taken when harmful content is detected.
  - outputs(Vec<GuardrailOutputContent>): The output details in the response from the guardrail.
  - assessments(Vec<GuardrailAssessment>): The assessment details in the response from the guardrail.
  - guardrail_coverage(Option<GuardrailCoverage>): The guardrail coverage details in the apply guardrail response.
- On failure, responds with SdkError<ApplyGuardrailError>.
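A minimal sketch, assuming a guardrail already exists (the identifier and version values below are placeholders):
use aws_sdk_bedrockruntime::types::{
    GuardrailContentBlock, GuardrailContentSource, GuardrailTextBlock,
};

let response = client
    .apply_guardrail()
    .guardrail_identifier("gr-example") // placeholder guardrail ID
    .guardrail_version("1")
    .source(GuardrailContentSource::Input)
    .content(GuardrailContentBlock::Text(
        GuardrailTextBlock::builder().text("Text to evaluate").build()?,
    ))
    .send()
    .await?;
println!("guardrail action: {:?}", response.action());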
impl Client
pub fn converse(&self) -> ConverseFluentBuilder
Constructs a fluent builder for the Converse operation.
- The fluent builder is configurable:
  - model_id(impl Into<String>) / set_model_id(Option<String>) (required: true): Specifies the model or throughput with which to run inference, or the prompt resource to use in inference. The value depends on the resource that you use:
    - If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
    - If you use an inference profile, specify the inference profile ID or its ARN. For a list of inference profile IDs, see Supported Regions and models for cross-region inference in the Amazon Bedrock User Guide.
    - If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
    - If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
    - To include a prompt that was defined in Prompt management, specify the ARN of the prompt version to use.
    The Converse API doesn't support imported models.
  - messages(Message) / set_messages(Option<Vec<Message>>) (required: false): The messages that you want to send to the model.
  - system(SystemContentBlock) / set_system(Option<Vec<SystemContentBlock>>) (required: false): A prompt that provides instructions or context to the model about the task it should perform, or the persona it should adopt during the conversation.
  - inference_config(InferenceConfiguration) / set_inference_config(Option<InferenceConfiguration>) (required: false): Inference parameters to pass to the model. Converse and ConverseStream support a base set of inference parameters. If you need to pass additional parameters that the model supports, use the additionalModelRequestFields request field.
  - tool_config(ToolConfiguration) / set_tool_config(Option<ToolConfiguration>) (required: false): Configuration information for the tools that the model can use when generating a response. For information about models that support tool use, see Supported models and model features.
  - guardrail_config(GuardrailConfiguration) / set_guardrail_config(Option<GuardrailConfiguration>) (required: false): Configuration information for a guardrail that you want to use in the request. If you include guardContent blocks in the content field in the messages field, the guardrail operates only on those messages. If you include no guardContent blocks, the guardrail operates on all messages in the request body and in any included prompt resource.
  - additional_model_request_fields(Document) / set_additional_model_request_fields(Option<Document>) (required: false): Additional inference parameters that the model supports, beyond the base set of inference parameters that Converse and ConverseStream support in the inferenceConfig field. For more information, see Model parameters.
  - prompt_variables(impl Into<String>, PromptVariableValues) / set_prompt_variables(Option<HashMap<String, PromptVariableValues>>) (required: false): Contains a map of variables in a prompt from Prompt management to objects containing the values to fill in for them when running model invocation. This field is ignored if you don't specify a prompt resource in the modelId field.
  - additional_model_response_field_paths(impl Into<String>) / set_additional_model_response_field_paths(Option<Vec<String>>) (required: false): Additional model parameters field paths to return in the response. Converse and ConverseStream return the requested fields as a JSON Pointer object in the additionalModelResponseFields field. The following is example JSON for additionalModelResponseFieldPaths: [ "/stop_sequence" ]. For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation. Converse and ConverseStream reject an empty JSON Pointer or an incorrectly structured JSON Pointer with a 400 error code. If the JSON Pointer is valid but the requested field is not in the model response, it is ignored by Converse.
  - request_metadata(impl Into<String>, impl Into<String>) / set_request_metadata(Option<HashMap<String, String>>) (required: false): Key-value pairs that you can use to filter invocation logs.
  - performance_config(PerformanceConfiguration) / set_performance_config(Option<PerformanceConfiguration>) (required: false): Model performance settings for the request.
- On success, responds with ConverseOutput with field(s):
  - output(Option<ConverseOutput>): The result from the call to Converse.
  - stop_reason(StopReason): The reason why the model stopped generating output.
  - usage(Option<TokenUsage>): The total number of tokens used in the call to Converse. The total includes the tokens input to the model and the tokens generated by the model.
  - metrics(Option<ConverseMetrics>): Metrics for the call to Converse.
  - additional_model_response_fields(Option<Document>): Additional fields in the response that are unique to the model.
  - trace(Option<ConverseTrace>): A trace object that contains information about the Guardrail behavior.
  - performance_config(Option<PerformanceConfiguration>): Model performance settings for the request.
- On failure, responds with SdkError<ConverseError>.
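A minimal single-turn sketch (the model ID below is a placeholder; any Converse-capable model ID or inference profile works):
use aws_sdk_bedrockruntime::types::{ContentBlock, ConversationRole, Message};

let message = Message::builder()
    .role(ConversationRole::User)
    .content(ContentBlock::Text("Hello!".to_string()))
    .build()?;
let response = client
    .converse()
    .model_id("anthropic.claude-3-haiku-20240307-v1:0") // placeholder model ID
    .messages(message)
    .send()
    .await?;
println!("stop_reason: {:?}", response.stop_reason());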
impl Client
pub fn converse_stream(&self) -> ConverseStreamFluentBuilder
Constructs a fluent builder for the ConverseStream operation.
- The fluent builder is configurable:
  - model_id(impl Into<String>) / set_model_id(Option<String>) (required: true): Specifies the model or throughput with which to run inference, or the prompt resource to use in inference. The value depends on the resource that you use:
    - If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
    - If you use an inference profile, specify the inference profile ID or its ARN. For a list of inference profile IDs, see Supported Regions and models for cross-region inference in the Amazon Bedrock User Guide.
    - If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
    - If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
    - To include a prompt that was defined in Prompt management, specify the ARN of the prompt version to use.
    The Converse API doesn't support imported models.
  - messages(Message) / set_messages(Option<Vec<Message>>) (required: false): The messages that you want to send to the model.
  - system(SystemContentBlock) / set_system(Option<Vec<SystemContentBlock>>) (required: false): A prompt that provides instructions or context to the model about the task it should perform, or the persona it should adopt during the conversation.
  - inference_config(InferenceConfiguration) / set_inference_config(Option<InferenceConfiguration>) (required: false): Inference parameters to pass to the model. Converse and ConverseStream support a base set of inference parameters. If you need to pass additional parameters that the model supports, use the additionalModelRequestFields request field.
  - tool_config(ToolConfiguration) / set_tool_config(Option<ToolConfiguration>) (required: false): Configuration information for the tools that the model can use when generating a response. For information about models that support streaming tool use, see Supported models and model features.
  - guardrail_config(GuardrailStreamConfiguration) / set_guardrail_config(Option<GuardrailStreamConfiguration>) (required: false): Configuration information for a guardrail that you want to use in the request. If you include guardContent blocks in the content field in the messages field, the guardrail operates only on those messages. If you include no guardContent blocks, the guardrail operates on all messages in the request body and in any included prompt resource.
  - additional_model_request_fields(Document) / set_additional_model_request_fields(Option<Document>) (required: false): Additional inference parameters that the model supports, beyond the base set of inference parameters that Converse and ConverseStream support in the inferenceConfig field. For more information, see Model parameters.
  - prompt_variables(impl Into<String>, PromptVariableValues) / set_prompt_variables(Option<HashMap<String, PromptVariableValues>>) (required: false): Contains a map of variables in a prompt from Prompt management to objects containing the values to fill in for them when running model invocation. This field is ignored if you don't specify a prompt resource in the modelId field.
  - additional_model_response_field_paths(impl Into<String>) / set_additional_model_response_field_paths(Option<Vec<String>>) (required: false): Additional model parameters field paths to return in the response. Converse and ConverseStream return the requested fields as a JSON Pointer object in the additionalModelResponseFields field. The following is example JSON for additionalModelResponseFieldPaths: [ "/stop_sequence" ]. For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation. Converse and ConverseStream reject an empty JSON Pointer or an incorrectly structured JSON Pointer with a 400 error code. If the JSON Pointer is valid but the requested field is not in the model response, it is ignored by Converse.
  - request_metadata(impl Into<String>, impl Into<String>) / set_request_metadata(Option<HashMap<String, String>>) (required: false): Key-value pairs that you can use to filter invocation logs.
  - performance_config(PerformanceConfiguration) / set_performance_config(Option<PerformanceConfiguration>) (required: false): Model performance settings for the request.
- On success, responds with ConverseStreamOutput with field(s):
  - stream(EventReceiver<ConverseStreamOutput, ConverseStreamOutputError>): The output stream that the model generated.
- On failure, responds with SdkError<ConverseStreamError>.
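A minimal sketch of draining the stream (message construction as in the Converse example above; the model ID is a placeholder):
let mut response = client
    .converse_stream()
    .model_id("anthropic.claude-3-haiku-20240307-v1:0") // placeholder model ID
    .messages(message)
    .send()
    .await?;
// recv() yields ConverseStreamOutput events (e.g. ContentBlockDelta) until the
// stream is exhausted.
while let Some(event) = response.stream.recv().await? {
    println!("{event:?}");
}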
impl Client
pub fn count_tokens(&self) -> CountTokensFluentBuilder
Constructs a fluent builder for the CountTokens operation.
- The fluent builder is configurable:
  - model_id(impl Into<String>) / set_model_id(Option<String>) (required: true): The unique identifier or ARN of the foundation model to use for token counting. Each model processes tokens differently, so the token count is specific to the model you specify.
  - input(CountTokensInput) / set_input(Option<CountTokensInput>) (required: true): The input for which to count tokens. The structure of this parameter depends on whether you're counting tokens for an InvokeModel or Converse request:
    - For InvokeModel requests, provide the request body in the invokeModel field.
    - For Converse requests, provide the messages and system content in the converse field.
    The input format must be compatible with the model specified in the modelId parameter.
- On success, responds with CountTokensOutput with field(s):
  - input_tokens(i32): The number of tokens in the provided input according to the specified model's tokenization rules. This count represents the number of input tokens that would be processed if the same input were sent to the model in an inference request. Use this value to estimate costs and ensure your inputs stay within model token limits.
- On failure, responds with SdkError<CountTokensError>.
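A minimal sketch; here count_tokens_input stands in for a CountTokensInput union value built for a Converse- or InvokeModel-style request (see the field docs above), and the model ID is a placeholder:
let response = client
    .count_tokens()
    .model_id("anthropic.claude-3-haiku-20240307-v1:0") // placeholder model ID
    .input(count_tokens_input) // a CountTokensInput built elsewhere
    .send()
    .await?;
println!("input_tokens: {}", response.input_tokens());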
impl Client
pub fn get_async_invoke(&self) -> GetAsyncInvokeFluentBuilder
Constructs a fluent builder for the GetAsyncInvoke operation.
- The fluent builder is configurable:
  - invocation_arn(impl Into<String>) / set_invocation_arn(Option<String>) (required: true): The invocation's ARN.
- On success, responds with GetAsyncInvokeOutput with field(s):
  - invocation_arn(String): The invocation's ARN.
  - model_arn(String): The invocation's model ARN.
  - client_request_token(Option<String>): The invocation's idempotency token.
  - status(AsyncInvokeStatus): The invocation's status.
  - failure_message(Option<String>): An error message.
  - submit_time(DateTime): When the invocation request was submitted.
  - last_modified_time(Option<DateTime>): The invocation's last modified time.
  - end_time(Option<DateTime>): When the invocation ended.
  - output_data_config(Option<AsyncInvokeOutputDataConfig>): Output data settings.
- On failure, responds with SdkError<GetAsyncInvokeError>.
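A minimal polling sketch; invocation_arn stands in for an ARN returned by StartAsyncInvoke:
let response = client
    .get_async_invoke()
    .invocation_arn(invocation_arn) // ARN returned by start_async_invoke
    .send()
    .await?;
println!("status: {:?}", response.status());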
impl Client
pub fn invoke_model(&self) -> InvokeModelFluentBuilder
Constructs a fluent builder for the InvokeModel operation.
- The fluent builder is configurable:
  - body(Blob) / set_body(Option<Blob>) (required: false): The prompt and inference parameters in the format specified by the contentType in the header. You must provide the body in JSON format. To see the format and content of the request and response bodies for different models, refer to Inference parameters. For more information, see Run inference in the Bedrock User Guide.
  - content_type(impl Into<String>) / set_content_type(Option<String>) (required: false): The MIME type of the input data in the request. You must specify application/json.
  - accept(impl Into<String>) / set_accept(Option<String>) (required: false): The desired MIME type of the inference body in the response. The default value is application/json.
  - model_id(impl Into<String>) / set_model_id(Option<String>) (required: true): The unique identifier of the model to invoke to run inference. The modelId to provide depends on the type of model or throughput that you use:
    - If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
    - If you use an inference profile, specify the inference profile ID or its ARN. For a list of inference profile IDs, see Supported Regions and models for cross-region inference in the Amazon Bedrock User Guide.
    - If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
    - If you use a custom model, specify the ARN of the custom model deployment (for on-demand inference) or the ARN of your provisioned model (for Provisioned Throughput). For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
    - If you use an imported model, specify the ARN of the imported model. You can get the model ARN from a successful call to CreateModelImportJob or from the Imported models page in the Amazon Bedrock console.
  - trace(Trace) / set_trace(Option<Trace>) (required: false): Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.
  - guardrail_identifier(impl Into<String>) / set_guardrail_identifier(Option<String>) (required: false): The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation. An error will be thrown in the following situations:
    - You don't provide a guardrail identifier but you specify the amazon-bedrock-guardrailConfig field in the request body.
    - You enable the guardrail but the contentType isn't application/json.
    - You provide a guardrail identifier, but guardrailVersion isn't specified.
  - guardrail_version(impl Into<String>) / set_guardrail_version(Option<String>) (required: false): The version number for the guardrail. The value can also be DRAFT.
  - performance_config_latency(PerformanceConfigLatency) / set_performance_config_latency(Option<PerformanceConfigLatency>) (required: false): Model performance settings for the request.
- On success, responds with InvokeModelOutput with field(s):
  - body(Blob): Inference response from the model in the format specified in the contentType header. To see the format and content of the request and response bodies for different models, refer to Inference parameters.
  - content_type(String): The MIME type of the inference result.
  - performance_config_latency(Option<PerformanceConfigLatency>): Model performance settings for the request.
- On failure, responds with SdkError<InvokeModelError>.
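A minimal sketch, assuming serde_json as a dependency; the model ID and the body shape (Titan-style inputText) are placeholders and must match the model you invoke:
use aws_sdk_bedrockruntime::primitives::Blob;

let body = serde_json::json!({ "inputText": "Hello!" }); // model-specific format
let response = client
    .invoke_model()
    .model_id("amazon.titan-text-express-v1") // placeholder model ID
    .content_type("application/json")
    .body(Blob::new(serde_json::to_vec(&body)?))
    .send()
    .await?;
println!("{}", String::from_utf8_lossy(response.body().as_ref()));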
impl Client
pub fn invoke_model_with_bidirectional_stream(&self) -> InvokeModelWithBidirectionalStreamFluentBuilder
Constructs a fluent builder for the InvokeModelWithBidirectionalStream operation.
- The fluent builder is configurable:
  - model_id(impl Into<String>) / set_model_id(Option<String>) (required: true): The model ID or ARN of the model to use. Currently, only amazon.nova-sonic-v1:0 is supported.
  - body(EventStreamSender<InvokeModelWithBidirectionalStreamInput, InvokeModelWithBidirectionalStreamInputError>) / set_body(EventStreamSender<InvokeModelWithBidirectionalStreamInput, InvokeModelWithBidirectionalStreamInputError>) (required: true): The prompt and inference parameters in the format specified in the BidirectionalInputPayloadPart in the header. You must provide the body in JSON format. To see the format and content of the request and response bodies for different models, refer to Inference parameters. For more information, see Run inference in the Bedrock User Guide.
- On success, responds with InvokeModelWithBidirectionalStreamOutput with field(s):
  - body(EventReceiver<InvokeModelWithBidirectionalStreamOutput, InvokeModelWithBidirectionalStreamOutputError>): Streaming response from the model in the format specified by the BidirectionalOutputPayloadPart header.
- On failure, responds with SdkError<InvokeModelWithBidirectionalStreamError>.
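A rough sketch only: the sender side can be built from a futures Stream of input events (the futures crate is an assumed dependency, and the empty JSON payload is a placeholder for the Nova Sonic event format):
use aws_sdk_bedrockruntime::primitives::Blob;
use aws_sdk_bedrockruntime::types::{
    BidirectionalInputPayloadPart, InvokeModelWithBidirectionalStreamInput,
};

let events = futures::stream::iter(vec![Ok(
    InvokeModelWithBidirectionalStreamInput::Chunk(
        BidirectionalInputPayloadPart::builder()
            .bytes(Blob::new(b"{}".to_vec())) // placeholder event payload
            .build(),
    ),
)]);
let mut response = client
    .invoke_model_with_bidirectional_stream()
    .model_id("amazon.nova-sonic-v1:0")
    .body(events.into())
    .send()
    .await?;
while let Some(event) = response.body.recv().await? {
    println!("{event:?}");
}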
impl Client
pub fn invoke_model_with_response_stream(&self) -> InvokeModelWithResponseStreamFluentBuilder
Constructs a fluent builder for the InvokeModelWithResponseStream operation.
- The fluent builder is configurable:
  - body(Blob) / set_body(Option<Blob>) (required: false): The prompt and inference parameters in the format specified by the contentType in the header. You must provide the body in JSON format. To see the format and content of the request and response bodies for different models, refer to Inference parameters. For more information, see Run inference in the Bedrock User Guide.
  - content_type(impl Into<String>) / set_content_type(Option<String>) (required: false): The MIME type of the input data in the request. You must specify application/json.
  - accept(impl Into<String>) / set_accept(Option<String>) (required: false): The desired MIME type of the inference body in the response. The default value is application/json.
  - model_id(impl Into<String>) / set_model_id(Option<String>) (required: true): The unique identifier of the model to invoke to run inference. The modelId to provide depends on the type of model or throughput that you use:
    - If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
    - If you use an inference profile, specify the inference profile ID or its ARN. For a list of inference profile IDs, see Supported Regions and models for cross-region inference in the Amazon Bedrock User Guide.
    - If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
    - If you use a custom model, specify the ARN of the custom model deployment (for on-demand inference) or the ARN of your provisioned model (for Provisioned Throughput). For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
    - If you use an imported model, specify the ARN of the imported model. You can get the model ARN from a successful call to CreateModelImportJob or from the Imported models page in the Amazon Bedrock console.
  - trace(Trace) / set_trace(Option<Trace>) (required: false): Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.
  - guardrail_identifier(impl Into<String>) / set_guardrail_identifier(Option<String>) (required: false): The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation. An error is thrown in the following situations:
    - You don't provide a guardrail identifier but you specify the amazon-bedrock-guardrailConfig field in the request body.
    - You enable the guardrail but the contentType isn't application/json.
    - You provide a guardrail identifier, but guardrailVersion isn't specified.
  - guardrail_version(impl Into<String>) / set_guardrail_version(Option<String>) (required: false): The version number for the guardrail. The value can also be DRAFT.
  - performance_config_latency(PerformanceConfigLatency) / set_performance_config_latency(Option<PerformanceConfigLatency>) (required: false): Model performance settings for the request.
- On success, responds with InvokeModelWithResponseStreamOutput with field(s):
  - body(EventReceiver<ResponseStream, ResponseStreamError>): Inference response from the model in the format specified by the contentType header. To see the format and content of this field for different models, refer to Inference parameters.
  - content_type(String): The MIME type of the inference result.
  - performance_config_latency(Option<PerformanceConfigLatency>): Model performance settings for the request.
- On failure, responds with SdkError<InvokeModelWithResponseStreamError>.
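A minimal sketch of consuming the stream (request construction as in the InvokeModel example above; serde_json is an assumed dependency and the model ID is a placeholder):
let mut response = client
    .invoke_model_with_response_stream()
    .model_id("amazon.titan-text-express-v1") // placeholder model ID
    .content_type("application/json")
    .body(Blob::new(serde_json::to_vec(&body)?))
    .send()
    .await?;
// recv() yields ResponseStream events; Chunk variants carry payload parts.
while let Some(event) = response.body.recv().await? {
    println!("{event:?}");
}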
impl Client
pub fn list_async_invokes(&self) -> ListAsyncInvokesFluentBuilder
Constructs a fluent builder for the ListAsyncInvokes operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - submit_time_after(DateTime) / set_submit_time_after(Option<DateTime>) (required: false): Include invocations submitted after this time.
  - submit_time_before(DateTime) / set_submit_time_before(Option<DateTime>) (required: false): Include invocations submitted before this time.
  - status_equals(AsyncInvokeStatus) / set_status_equals(Option<AsyncInvokeStatus>) (required: false): Filter invocations by status.
  - max_results(i32) / set_max_results(Option<i32>) (required: false): The maximum number of invocations to return in one page of results.
  - next_token(impl Into<String>) / set_next_token(Option<String>) (required: false): Specify the pagination token from a previous request to retrieve the next page of results.
  - sort_by(SortAsyncInvocationBy) / set_sort_by(Option<SortAsyncInvocationBy>) (required: false): How to sort the response.
  - sort_order(SortOrder) / set_sort_order(Option<SortOrder>) (required: false): The sorting order for the response.
- On success, responds with ListAsyncInvokesOutput with field(s):
  - next_token(Option<String>): The pagination token to include in a subsequent request to retrieve the next page of results.
  - async_invoke_summaries(Option<Vec<AsyncInvokeSummary>>): A list of invocation summaries.
- On failure, responds with SdkError<ListAsyncInvokesError>.
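A minimal pagination sketch using into_paginator():
let mut pages = client
    .list_async_invokes()
    .max_results(10)
    .into_paginator()
    .send();
// Each page is a Result<ListAsyncInvokesOutput, SdkError<ListAsyncInvokesError>>.
while let Some(page) = pages.next().await {
    let page = page?;
    for summary in page.async_invoke_summaries() {
        println!("{:?}", summary.invocation_arn());
    }
}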
impl Client
pub fn start_async_invoke(&self) -> StartAsyncInvokeFluentBuilder
Constructs a fluent builder for the StartAsyncInvoke operation.
- The fluent builder is configurable:
  - client_request_token(impl Into<String>) / set_client_request_token(Option<String>) (required: false): Specify an idempotency token to ensure that requests are not duplicated.
  - model_id(impl Into<String>) / set_model_id(Option<String>) (required: true): The model to invoke.
  - model_input(Document) / set_model_input(Option<Document>) (required: true): Input to send to the model.
  - output_data_config(AsyncInvokeOutputDataConfig) / set_output_data_config(Option<AsyncInvokeOutputDataConfig>) (required: true): Where to store the output.
  - tags(Tag) / set_tags(Option<Vec<Tag>>) (required: false): Tags to apply to the invocation.
- On success, responds with StartAsyncInvokeOutput with field(s):
  - invocation_arn(String): The ARN of the invocation.
- On failure, responds with SdkError<StartAsyncInvokeError>.
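A minimal sketch; the model ID and S3 URI are placeholders, and model_input stands in for a Document built with the model-specific request fields:
use aws_sdk_bedrockruntime::types::{
    AsyncInvokeOutputDataConfig, AsyncInvokeS3OutputDataConfig,
};

let output_config = AsyncInvokeOutputDataConfig::S3OutputDataConfig(
    AsyncInvokeS3OutputDataConfig::builder()
        .s3_uri("s3://amzn-s3-demo-bucket/output/") // placeholder bucket
        .build()?,
);
let response = client
    .start_async_invoke()
    .model_id("amazon.nova-reel-v1:0") // placeholder model ID
    .model_input(model_input) // Document with model-specific fields
    .output_data_config(output_config)
    .send()
    .await?;
println!("invocation ARN: {}", response.invocation_arn());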
§Trait Implementations
§Auto Trait Implementations
impl Freeze for Client
impl !RefUnwindSafe for Client
impl Send for Client
impl Sync for Client
impl Unpin for Client
impl !UnwindSafe for Client
§Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T where T: Clone
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true; otherwise converts self into a Right variant.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; otherwise converts self into a Right variant.