pub struct Client { /* private fields */ }
Client for Amazon Bedrock Runtime
Client for invoking operations on Amazon Bedrock Runtime. Each operation on Amazon Bedrock Runtime is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.
§Constructing a Client
A Config is required to construct a client. For most use cases, the aws-config crate should be used to automatically resolve this config using aws_config::load_from_env(), since this will resolve an SdkConfig which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling aws_config::from_env() instead, which returns a ConfigLoader that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
let config = aws_config::load_from_env().await;
let client = aws_sdk_bedrockruntime::Client::new(&config);
Occasionally, SDKs may have additional service-specific values that can be set on the Config that are absent from SdkConfig, or slightly different settings for a specific client may be desired. The Builder struct implements From<&SdkConfig>, so setting these specific settings can be done as follows:
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_bedrockruntime::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
See the aws-config docs and Config for more information on customizing configuration.
Note: Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
§Using the Client
A client has a function for every operation that can be performed by the service.
For example, the ApplyGuardrail operation has a Client::apply_guardrail function, which returns a builder for that operation. The fluent builder ultimately has a send() function that returns an async future that returns a result, as illustrated below:
let result = client.apply_guardrail()
.guardrail_identifier("example")
.send()
.await;
The underlying HTTP requests that get made by this can be modified with the customize_operation function on the fluent builder. See the customize module for more information.
Implementations§
impl Client
pub fn apply_guardrail(&self) -> ApplyGuardrailFluentBuilder
Constructs a fluent builder for the ApplyGuardrail operation.
- The fluent builder is configurable:
  - guardrail_identifier(impl Into<String>) / set_guardrail_identifier(Option<String>) (required): The guardrail identifier used in the request to apply the guardrail.
  - guardrail_version(impl Into<String>) / set_guardrail_version(Option<String>) (required): The guardrail version used in the request to apply the guardrail.
  - source(GuardrailContentSource) / set_source(Option<GuardrailContentSource>) (required): The source of data used in the request to apply the guardrail.
  - content(GuardrailContentBlock) / set_content(Option<Vec<GuardrailContentBlock>>) (required): The content details used in the request to apply the guardrail.
  - output_scope(GuardrailOutputScope) / set_output_scope(Option<GuardrailOutputScope>) (optional): Specifies the scope of the output that you get in the response. Set to FULL to return the entire output, including any detected and non-detected entries in the response, for enhanced debugging. Note that the full output scope doesn’t apply to word filters or regex in sensitive information filters. It does apply to all other filtering policies, including sensitive information filters that can detect personally identifiable information (PII).
- On success, responds with ApplyGuardrailOutput with field(s):
  - usage(Option<GuardrailUsage>): The usage details in the response from the guardrail.
  - action(GuardrailAction): The action taken in the response from the guardrail.
  - action_reason(Option<String>): The reason for the action taken when harmful content is detected.
  - outputs(Vec<GuardrailOutputContent>): The output details in the response from the guardrail.
  - assessments(Vec<GuardrailAssessment>): The assessment details in the response from the guardrail.
  - guardrail_coverage(Option<GuardrailCoverage>): The guardrail coverage details in the apply guardrail response.
- On failure, responds with SdkError<ApplyGuardrailError>
impl Client
pub fn converse(&self) -> ConverseFluentBuilder
Constructs a fluent builder for the Converse operation.
- The fluent builder is configurable:
  - model_id(impl Into<String>) / set_model_id(Option<String>) (required): Specifies the model or throughput with which to run inference, or the prompt resource to use in inference. The value depends on the resource that you use:
    - If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
    - If you use an inference profile, specify the inference profile ID or its ARN. For a list of inference profile IDs, see Supported Regions and models for cross-region inference in the Amazon Bedrock User Guide.
    - If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
    - If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
    - To include a prompt that was defined in Prompt management, specify the ARN of the prompt version to use.

    The Converse API doesn’t support imported models.
  - messages(Message) / set_messages(Option<Vec<Message>>) (optional): The messages that you want to send to the model.
  - system(SystemContentBlock) / set_system(Option<Vec<SystemContentBlock>>) (optional): A prompt that provides instructions or context to the model about the task it should perform, or the persona it should adopt during the conversation.
  - inference_config(InferenceConfiguration) / set_inference_config(Option<InferenceConfiguration>) (optional): Inference parameters to pass to the model. Converse and ConverseStream support a base set of inference parameters. If you need to pass additional parameters that the model supports, use the additionalModelRequestFields request field.
  - tool_config(ToolConfiguration) / set_tool_config(Option<ToolConfiguration>) (optional): Configuration information for the tools that the model can use when generating a response. For information about models that support tool use, see Supported models and model features.
  - guardrail_config(GuardrailConfiguration) / set_guardrail_config(Option<GuardrailConfiguration>) (optional): Configuration information for a guardrail that you want to use in the request. If you include guardContent blocks in the content field in the messages field, the guardrail operates only on those messages. If you include no guardContent blocks, the guardrail operates on all messages in the request body and in any included prompt resource.
  - additional_model_request_fields(Document) / set_additional_model_request_fields(Option<Document>) (optional): Additional inference parameters that the model supports, beyond the base set of inference parameters that Converse and ConverseStream support in the inferenceConfig field. For more information, see Model parameters.
  - prompt_variables(impl Into<String>, PromptVariableValues) / set_prompt_variables(Option<HashMap<String, PromptVariableValues>>) (optional): Contains a map of variables in a prompt from Prompt management to objects containing the values to fill in for them when running model invocation. This field is ignored if you don’t specify a prompt resource in the modelId field.
  - additional_model_response_field_paths(impl Into<String>) / set_additional_model_response_field_paths(Option<Vec<String>>) (optional): Additional model parameters field paths to return in the response. Converse and ConverseStream return the requested fields as a JSON Pointer object in the additionalModelResponseFields field. The following is example JSON for additionalModelResponseFieldPaths: [ "/stop_sequence" ]. For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation. Converse and ConverseStream reject an empty JSON Pointer or an incorrectly structured JSON Pointer with a 400 error code. If the JSON Pointer is valid but the requested field is not in the model response, it is ignored by Converse.
  - request_metadata(impl Into<String>, impl Into<String>) / set_request_metadata(Option<HashMap<String, String>>) (optional): Key-value pairs that you can use to filter invocation logs.
  - performance_config(PerformanceConfiguration) / set_performance_config(Option<PerformanceConfiguration>) (optional): Model performance settings for the request.
- On success, responds with ConverseOutput with field(s):
  - output(Option<ConverseOutput>): The result from the call to Converse.
  - stop_reason(StopReason): The reason why the model stopped generating output.
  - usage(Option<TokenUsage>): The total number of tokens used in the call to Converse. The total includes the tokens input to the model and the tokens generated by the model.
  - metrics(Option<ConverseMetrics>): Metrics for the call to Converse.
  - additional_model_response_fields(Option<Document>): Additional fields in the response that are unique to the model.
  - trace(Option<ConverseTrace>): A trace object that contains information about the Guardrail behavior.
  - performance_config(Option<PerformanceConfiguration>): Model performance settings for the request.
- On failure, responds with SdkError<ConverseError>
impl Client
pub fn converse_stream(&self) -> ConverseStreamFluentBuilder
Constructs a fluent builder for the ConverseStream operation.
- The fluent builder is configurable:
  - model_id(impl Into<String>) / set_model_id(Option<String>) (required): Specifies the model or throughput with which to run inference, or the prompt resource to use in inference. The value depends on the resource that you use:
    - If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
    - If you use an inference profile, specify the inference profile ID or its ARN. For a list of inference profile IDs, see Supported Regions and models for cross-region inference in the Amazon Bedrock User Guide.
    - If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
    - If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
    - To include a prompt that was defined in Prompt management, specify the ARN of the prompt version to use.

    The Converse API doesn’t support imported models.
  - messages(Message) / set_messages(Option<Vec<Message>>) (optional): The messages that you want to send to the model.
  - system(SystemContentBlock) / set_system(Option<Vec<SystemContentBlock>>) (optional): A prompt that provides instructions or context to the model about the task it should perform, or the persona it should adopt during the conversation.
  - inference_config(InferenceConfiguration) / set_inference_config(Option<InferenceConfiguration>) (optional): Inference parameters to pass to the model. Converse and ConverseStream support a base set of inference parameters. If you need to pass additional parameters that the model supports, use the additionalModelRequestFields request field.
  - tool_config(ToolConfiguration) / set_tool_config(Option<ToolConfiguration>) (optional): Configuration information for the tools that the model can use when generating a response. For information about models that support streaming tool use, see Supported models and model features.
  - guardrail_config(GuardrailStreamConfiguration) / set_guardrail_config(Option<GuardrailStreamConfiguration>) (optional): Configuration information for a guardrail that you want to use in the request. If you include guardContent blocks in the content field in the messages field, the guardrail operates only on those messages. If you include no guardContent blocks, the guardrail operates on all messages in the request body and in any included prompt resource.
  - additional_model_request_fields(Document) / set_additional_model_request_fields(Option<Document>) (optional): Additional inference parameters that the model supports, beyond the base set of inference parameters that Converse and ConverseStream support in the inferenceConfig field. For more information, see Model parameters.
  - prompt_variables(impl Into<String>, PromptVariableValues) / set_prompt_variables(Option<HashMap<String, PromptVariableValues>>) (optional): Contains a map of variables in a prompt from Prompt management to objects containing the values to fill in for them when running model invocation. This field is ignored if you don’t specify a prompt resource in the modelId field.
  - additional_model_response_field_paths(impl Into<String>) / set_additional_model_response_field_paths(Option<Vec<String>>) (optional): Additional model parameters field paths to return in the response. Converse and ConverseStream return the requested fields as a JSON Pointer object in the additionalModelResponseFields field. The following is example JSON for additionalModelResponseFieldPaths: [ "/stop_sequence" ]. For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation. Converse and ConverseStream reject an empty JSON Pointer or an incorrectly structured JSON Pointer with a 400 error code. If the JSON Pointer is valid but the requested field is not in the model response, it is ignored by Converse.
  - request_metadata(impl Into<String>, impl Into<String>) / set_request_metadata(Option<HashMap<String, String>>) (optional): Key-value pairs that you can use to filter invocation logs.
  - performance_config(PerformanceConfiguration) / set_performance_config(Option<PerformanceConfiguration>) (optional): Model performance settings for the request.
- On success, responds with ConverseStreamOutput with field(s):
  - stream(EventReceiver<ConverseStreamOutput, ConverseStreamOutputError>): The output stream that the model generated.
- On failure, responds with SdkError<ConverseStreamError>
impl Client
pub fn get_async_invoke(&self) -> GetAsyncInvokeFluentBuilder
Constructs a fluent builder for the GetAsyncInvoke operation.
- The fluent builder is configurable:
  - invocation_arn(impl Into<String>) / set_invocation_arn(Option<String>) (required): The invocation’s ARN.
- On success, responds with GetAsyncInvokeOutput with field(s):
  - invocation_arn(String): The invocation’s ARN.
  - model_arn(String): The invocation’s model ARN.
  - client_request_token(Option<String>): The invocation’s idempotency token.
  - status(AsyncInvokeStatus): The invocation’s status.
  - failure_message(Option<String>): An error message.
  - submit_time(DateTime): When the invocation request was submitted.
  - last_modified_time(Option<DateTime>): The invocation’s last modified time.
  - end_time(Option<DateTime>): When the invocation ended.
  - output_data_config(Option<AsyncInvokeOutputDataConfig>): Output data settings.
- On failure, responds with SdkError<GetAsyncInvokeError>
impl Client
pub fn invoke_model(&self) -> InvokeModelFluentBuilder
Constructs a fluent builder for the InvokeModel operation.
- The fluent builder is configurable:
  - body(Blob) / set_body(Option<Blob>) (optional): The prompt and inference parameters in the format specified in the contentType in the header. You must provide the body in JSON format. To see the format and content of the request and response bodies for different models, refer to Inference parameters. For more information, see Run inference in the Bedrock User Guide.
  - content_type(impl Into<String>) / set_content_type(Option<String>) (optional): The MIME type of the input data in the request. You must specify application/json.
  - accept(impl Into<String>) / set_accept(Option<String>) (optional): The desired MIME type of the inference body in the response. The default value is application/json.
  - model_id(impl Into<String>) / set_model_id(Option<String>) (required): The unique identifier of the model to invoke to run inference. The modelId to provide depends on the type of model or throughput that you use:
    - If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
    - If you use an inference profile, specify the inference profile ID or its ARN. For a list of inference profile IDs, see Supported Regions and models for cross-region inference in the Amazon Bedrock User Guide.
    - If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
    - If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
    - If you use an imported model, specify the ARN of the imported model. You can get the model ARN from a successful call to CreateModelImportJob or from the Imported models page in the Amazon Bedrock console.
  - trace(Trace) / set_trace(Option<Trace>) (optional): Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.
  - guardrail_identifier(impl Into<String>) / set_guardrail_identifier(Option<String>) (optional): The unique identifier of the guardrail that you want to use. If you don’t provide a value, no guardrail is applied to the invocation. An error will be thrown in the following situations:
    - You don’t provide a guardrail identifier but you specify the amazon-bedrock-guardrailConfig field in the request body.
    - You enable the guardrail but the contentType isn’t application/json.
    - You provide a guardrail identifier, but guardrailVersion isn’t specified.
  - guardrail_version(impl Into<String>) / set_guardrail_version(Option<String>) (optional): The version number for the guardrail. The value can also be DRAFT.
  - performance_config_latency(PerformanceConfigLatency) / set_performance_config_latency(Option<PerformanceConfigLatency>) (optional): Model performance settings for the request.
- On success, responds with InvokeModelOutput with field(s):
  - body(Blob): Inference response from the model in the format specified in the contentType header. To see the format and content of the request and response bodies for different models, refer to Inference parameters.
  - content_type(String): The MIME type of the inference result.
  - performance_config_latency(Option<PerformanceConfigLatency>): Model performance settings for the request.
- On failure, responds with SdkError<InvokeModelError>
impl Client
pub fn invoke_model_with_bidirectional_stream(&self) -> InvokeModelWithBidirectionalStreamFluentBuilder
Constructs a fluent builder for the InvokeModelWithBidirectionalStream operation.
- The fluent builder is configurable:
  - model_id(impl Into<String>) / set_model_id(Option<String>) (required): The model ID or ARN of the model to use. Currently, only amazon.nova-sonic-v1:0 is supported.
  - body(EventStreamSender<InvokeModelWithBidirectionalStreamInput, InvokeModelWithBidirectionalStreamInputError>) / set_body(EventStreamSender<InvokeModelWithBidirectionalStreamInput, InvokeModelWithBidirectionalStreamInputError>) (required): The prompt and inference parameters in the format specified in the BidirectionalInputPayloadPart in the header. You must provide the body in JSON format. To see the format and content of the request and response bodies for different models, refer to Inference parameters. For more information, see Run inference in the Bedrock User Guide.
- On success, responds with InvokeModelWithBidirectionalStreamOutput with field(s):
  - body(EventReceiver<InvokeModelWithBidirectionalStreamOutput, InvokeModelWithBidirectionalStreamOutputError>): Streaming response from the model in the format specified by the BidirectionalOutputPayloadPart header.
- On failure, responds with SdkError<InvokeModelWithBidirectionalStreamError>
impl Client
pub fn invoke_model_with_response_stream(&self) -> InvokeModelWithResponseStreamFluentBuilder
Constructs a fluent builder for the InvokeModelWithResponseStream operation.
- The fluent builder is configurable:
  - body(Blob) / set_body(Option<Blob>) (optional): The prompt and inference parameters in the format specified in the contentType in the header. You must provide the body in JSON format. To see the format and content of the request and response bodies for different models, refer to Inference parameters. For more information, see Run inference in the Bedrock User Guide.
  - content_type(impl Into<String>) / set_content_type(Option<String>) (optional): The MIME type of the input data in the request. You must specify application/json.
  - accept(impl Into<String>) / set_accept(Option<String>) (optional): The desired MIME type of the inference body in the response. The default value is application/json.
  - model_id(impl Into<String>) / set_model_id(Option<String>) (required): The unique identifier of the model to invoke to run inference. The modelId to provide depends on the type of model or throughput that you use:
    - If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
    - If you use an inference profile, specify the inference profile ID or its ARN. For a list of inference profile IDs, see Supported Regions and models for cross-region inference in the Amazon Bedrock User Guide.
    - If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
    - If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
    - If you use an imported model, specify the ARN of the imported model. You can get the model ARN from a successful call to CreateModelImportJob or from the Imported models page in the Amazon Bedrock console.
  - trace(Trace) / set_trace(Option<Trace>) (optional): Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.
  - guardrail_identifier(impl Into<String>) / set_guardrail_identifier(Option<String>) (optional): The unique identifier of the guardrail that you want to use. If you don’t provide a value, no guardrail is applied to the invocation. An error is thrown in the following situations:
    - You don’t provide a guardrail identifier but you specify the amazon-bedrock-guardrailConfig field in the request body.
    - You enable the guardrail but the contentType isn’t application/json.
    - You provide a guardrail identifier, but guardrailVersion isn’t specified.
  - guardrail_version(impl Into<String>) / set_guardrail_version(Option<String>) (optional): The version number for the guardrail. The value can also be DRAFT.
  - performance_config_latency(PerformanceConfigLatency) / set_performance_config_latency(Option<PerformanceConfigLatency>) (optional): Model performance settings for the request.
- On success, responds with InvokeModelWithResponseStreamOutput with field(s):
  - body(EventReceiver<ResponseStream, ResponseStreamError>): Inference response from the model in the format specified by the contentType header. To see the format and content of this field for different models, refer to Inference parameters.
  - content_type(String): The MIME type of the inference result.
  - performance_config_latency(Option<PerformanceConfigLatency>): Model performance settings for the request.
- On failure, responds with SdkError<InvokeModelWithResponseStreamError>
impl Client
pub fn list_async_invokes(&self) -> ListAsyncInvokesFluentBuilder
Constructs a fluent builder for the ListAsyncInvokes operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - submit_time_after(DateTime) / set_submit_time_after(Option<DateTime>) (optional): Include invocations submitted after this time.
  - submit_time_before(DateTime) / set_submit_time_before(Option<DateTime>) (optional): Include invocations submitted before this time.
  - status_equals(AsyncInvokeStatus) / set_status_equals(Option<AsyncInvokeStatus>) (optional): Filter invocations by status.
  - max_results(i32) / set_max_results(Option<i32>) (optional): The maximum number of invocations to return in one page of results.
  - next_token(impl Into<String>) / set_next_token(Option<String>) (optional): Specify the pagination token from a previous request to retrieve the next page of results.
  - sort_by(SortAsyncInvocationBy) / set_sort_by(Option<SortAsyncInvocationBy>) (optional): How to sort the response.
  - sort_order(SortOrder) / set_sort_order(Option<SortOrder>) (optional): The sorting order for the response.
- On success, responds with ListAsyncInvokesOutput with field(s):
  - next_token(Option<String>): The pagination token to pass in a subsequent request to retrieve the next page of results.
  - async_invoke_summaries(Option<Vec<AsyncInvokeSummary>>): A list of invocation summaries.
- On failure, responds with SdkError<ListAsyncInvokesError>
impl Client
pub fn start_async_invoke(&self) -> StartAsyncInvokeFluentBuilder
Constructs a fluent builder for the StartAsyncInvoke operation.
- The fluent builder is configurable:
  - client_request_token(impl Into<String>) / set_client_request_token(Option<String>) (optional): Specify an idempotency token to ensure that requests are not duplicated.
  - model_id(impl Into<String>) / set_model_id(Option<String>) (required): The model to invoke.
  - model_input(Document) / set_model_input(Option<Document>) (required): Input to send to the model.
  - output_data_config(AsyncInvokeOutputDataConfig) / set_output_data_config(Option<AsyncInvokeOutputDataConfig>) (required): Where to store the output.
  - tags(Tag) / set_tags(Option<Vec<Tag>>) (optional): Tags to apply to the invocation.
- On success, responds with StartAsyncInvokeOutput with field(s):
  - invocation_arn(String): The ARN of the invocation.
- On failure, responds with SdkError<StartAsyncInvokeError>
impl Client
pub fn from_conf(conf: Config) -> Self
Creates a new client from the service Config.
§Panics
This method will panic in the following cases:
- Retries or timeouts are enabled without a sleep_impl configured.
- Identity caching is enabled without a sleep_impl and time_source configured.
- No behavior_version is provided.
The panic message for each of these will have instructions on how to resolve them.
impl Client
pub fn new(sdk_config: &SdkConfig) -> Self
Creates a new client from an SDK Config.
§Panics
- This method will panic if the sdk_config is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
- This method will panic if the sdk_config is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.
- This method will panic if no BehaviorVersion is provided. If you experience this panic, set behavior_version on the Config or enable the behavior-version-latest Cargo feature.
Trait Implementations§
Auto Trait Implementations§
impl Freeze for Client
impl !RefUnwindSafe for Client
impl Send for Client
impl Sync for Client
impl Unpin for Client
impl !UnwindSafe for Client
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
impl<T> CloneToUninit for T where T: Clone
impl<T> Instrument for T
impl<T> IntoEither for T
impl<T> Paint for T where T: ?Sized