pub struct Client { /* private fields */ }
Client for AWS Clean Rooms ML
Client for invoking operations on AWS Clean Rooms ML. Each operation on AWS Clean Rooms ML is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.
§Constructing a Client
A Config is required to construct a client. For most use cases, the aws-config crate should be used to automatically resolve this config using aws_config::load_from_env(), since this will resolve an SdkConfig which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling aws_config::from_env() instead, which returns a ConfigLoader that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
let config = aws_config::load_from_env().await;
let client = aws_sdk_cleanroomsml::Client::new(&config);
Occasionally, SDKs may have additional service-specific values that can be set on the Config that are absent from SdkConfig, or slightly different settings for a specific client may be desired. The Builder struct implements From<&SdkConfig>, so setting these specific settings can be done as follows:
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_cleanroomsml::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
See the aws-config docs and Config for more information on customizing configuration.
Note: Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
§Using the Client
A client has a function for every operation that can be performed by the service. For example, the CancelTrainedModel operation has a Client::cancel_trained_model function, which returns a builder for that operation. The fluent builder ultimately has a send() function that returns an async future that returns a result, as illustrated below:
let result = client.cancel_trained_model()
.membership_identifier("example")
.send()
.await;
The underlying HTTP requests that get made by this can be modified with the customize_operation function on the fluent builder. See the customize module for more information.
Implementations§
impl Client
pub fn cancel_trained_model(&self) -> CancelTrainedModelFluentBuilder
Constructs a fluent builder for the CancelTrainedModel operation.
- The fluent builder is configurable:
  - membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>): required: true. The membership ID of the trained model job that you want to cancel.
  - trained_model_arn(impl Into<String>) / set_trained_model_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the trained model job that you want to cancel.
  - version_identifier(impl Into<String>) / set_version_identifier(Option<String>): required: false. The version identifier of the trained model to cancel. This parameter allows you to specify which version of the trained model you want to cancel when multiple versions exist. If versionIdentifier is not specified, the base model will be cancelled.
- On success, responds with CancelTrainedModelOutput
- On failure, responds with SdkError<CancelTrainedModelError>
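As a sketch of the fluent-builder pattern described above (the membership ID, ARN, and version identifier below are placeholder values, and running this requires valid AWS credentials, an async context such as a tokio main function, and the aws-config and aws-sdk-cleanroomsml crates):

```rust
// Sketch only: all identifiers are placeholders, not real resources.
let config = aws_config::load_from_env().await;
let client = aws_sdk_cleanroomsml::Client::new(&config);

let result = client
    .cancel_trained_model()
    .membership_identifier("00000000-0000-0000-0000-000000000000")
    .trained_model_arn("arn:aws:cleanrooms-ml:us-east-1:111122223333:trained-model/example")
    // Optional: omit version_identifier to cancel the base model instead.
    .version_identifier("example-version")
    .send()
    .await;

match result {
    Ok(_) => println!("Cancel request accepted."),
    Err(err) => eprintln!("CancelTrainedModel failed: {err}"),
}
```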
impl Client
pub fn cancel_trained_model_inference_job(&self) -> CancelTrainedModelInferenceJobFluentBuilder
Constructs a fluent builder for the CancelTrainedModelInferenceJob operation.
- The fluent builder is configurable:
  - membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>): required: true. The membership ID of the trained model inference job that you want to cancel.
  - trained_model_inference_job_arn(impl Into<String>) / set_trained_model_inference_job_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the trained model inference job that you want to cancel.
- On success, responds with CancelTrainedModelInferenceJobOutput
- On failure, responds with SdkError<CancelTrainedModelInferenceJobError>
impl Client
pub fn create_audience_model(&self) -> CreateAudienceModelFluentBuilder
Constructs a fluent builder for the CreateAudienceModel operation.
- The fluent builder is configurable:
  - training_data_start_time(DateTime) / set_training_data_start_time(Option<DateTime>): required: false. The start date and time of the training window.
  - training_data_end_time(DateTime) / set_training_data_end_time(Option<DateTime>): required: false. The end date and time of the training window.
  - name(impl Into<String>) / set_name(Option<String>): required: true. The name of the audience model resource.
  - training_dataset_arn(impl Into<String>) / set_training_dataset_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the training dataset for this audience model.
  - kms_key_arn(impl Into<String>) / set_kms_key_arn(Option<String>): required: false. The Amazon Resource Name (ARN) of the KMS key. This key is used to encrypt and decrypt customer-owned data in the trained ML model and the associated data.
  - tags(impl Into<String>, impl Into<String>) / set_tags(Option<HashMap::<String, String>>): required: false. The optional metadata that you apply to the resource to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. The following basic restrictions apply to tags:
    - Maximum number of tags per resource - 50.
    - For each resource, each tag key must be unique, and each tag key can have only one value.
    - Maximum key length - 128 Unicode characters in UTF-8.
    - Maximum value length - 256 Unicode characters in UTF-8.
    - If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
    - Tag keys and values are case sensitive.
    - Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
  - description(impl Into<String>) / set_description(Option<String>): required: false. The description of the audience model.
- On success, responds with CreateAudienceModelOutput with field(s):
  - audience_model_arn(String): The Amazon Resource Name (ARN) of the audience model.
- On failure, responds with SdkError<CreateAudienceModelError>
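A hedged sketch of a CreateAudienceModel call using the setters listed above. The ARN, tag values, and epoch timestamps are placeholders; the DateTime import path is an assumption based on how the Rust SDK re-exports aws-smithy-types primitives, and this must run in an async context with valid credentials:

```rust
// Sketch only: placeholder identifiers; DateTime path is an assumption.
use aws_sdk_cleanroomsml::primitives::DateTime;

let config = aws_config::load_from_env().await;
let client = aws_sdk_cleanroomsml::Client::new(&config);

let result = client
    .create_audience_model()
    .name("example-audience-model")
    .training_dataset_arn("arn:aws:cleanrooms-ml:us-east-1:111122223333:training-dataset/example")
    // Optional training window, expressed here as epoch seconds.
    .training_data_start_time(DateTime::from_secs(1_700_000_000))
    .training_data_end_time(DateTime::from_secs(1_702_600_000))
    // Repeat .tags(key, value) for each tag pair.
    .tags("team", "analytics")
    .send()
    .await;

if let Ok(output) = result {
    println!("Created audience model: {:?}", output.audience_model_arn());
}
```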
impl Client
pub fn create_configured_audience_model(&self) -> CreateConfiguredAudienceModelFluentBuilder
Constructs a fluent builder for the CreateConfiguredAudienceModel operation.
- The fluent builder is configurable:
  - name(impl Into<String>) / set_name(Option<String>): required: true. The name of the configured audience model.
  - audience_model_arn(impl Into<String>) / set_audience_model_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the audience model to use for the configured audience model.
  - output_config(ConfiguredAudienceModelOutputConfig) / set_output_config(Option<ConfiguredAudienceModelOutputConfig>): required: true. Configure the Amazon S3 location and IAM Role for audiences created using this configured audience model. Each audience will have a unique location. The IAM Role must have s3:PutObject permission on the destination Amazon S3 location. If the destination is protected with Amazon S3 KMS-SSE, then the Role must also have the required KMS permissions.
  - description(impl Into<String>) / set_description(Option<String>): required: false. The description of the configured audience model.
  - shared_audience_metrics(SharedAudienceMetrics) / set_shared_audience_metrics(Option<Vec::<SharedAudienceMetrics>>): required: true. Whether audience metrics are shared.
  - min_matching_seed_size(i32) / set_min_matching_seed_size(Option<i32>): required: false. The minimum number of users from the seed audience that must match with users in the training data of the audience model. The default value is 500.
  - audience_size_config(AudienceSizeConfig) / set_audience_size_config(Option<AudienceSizeConfig>): required: false. Configure the list of output sizes of audiences that can be created using this configured audience model. A request to StartAudienceGenerationJob that uses this configured audience model must have an audienceSize selected from this list. You can use the ABSOLUTE AudienceSize to configure output audience sizes using the count of identifiers in the output. You can use the Percentage AudienceSize to configure sizes in the range 1-100 percent.
  - tags(impl Into<String>, impl Into<String>) / set_tags(Option<HashMap::<String, String>>): required: false. The optional metadata that you apply to the resource to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. The following basic restrictions apply to tags:
    - Maximum number of tags per resource - 50.
    - For each resource, each tag key must be unique, and each tag key can have only one value.
    - Maximum key length - 128 Unicode characters in UTF-8.
    - Maximum value length - 256 Unicode characters in UTF-8.
    - If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
    - Tag keys and values are case sensitive.
    - Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
  - child_resource_tag_on_create_policy(TagOnCreatePolicy) / set_child_resource_tag_on_create_policy(Option<TagOnCreatePolicy>): required: false. Configure how the service tags audience generation jobs created using this configured audience model. If you specify NONE, the tags from the StartAudienceGenerationJob request determine the tags of the audience generation job. If you specify FROM_PARENT_RESOURCE, the audience generation job inherits the tags from the configured audience model, by default. Tags in the StartAudienceGenerationJob will override the default. When the client is in a different account than the configured audience model, the tags from the client are never applied to a resource in the caller’s account.
- On success, responds with CreateConfiguredAudienceModelOutput with field(s):
  - configured_audience_model_arn(String): The Amazon Resource Name (ARN) of the configured audience model.
- On failure, responds with SdkError<CreateConfiguredAudienceModelError>
impl Client
pub fn create_configured_model_algorithm(&self) -> CreateConfiguredModelAlgorithmFluentBuilder
Constructs a fluent builder for the CreateConfiguredModelAlgorithm operation.
- The fluent builder is configurable:
  - name(impl Into<String>) / set_name(Option<String>): required: true. The name of the configured model algorithm.
  - description(impl Into<String>) / set_description(Option<String>): required: false. The description of the configured model algorithm.
  - role_arn(impl Into<String>) / set_role_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the role that is used to access the repository.
  - training_container_config(ContainerConfig) / set_training_container_config(Option<ContainerConfig>): required: false. Configuration information for the training container, including entrypoints and arguments.
  - inference_container_config(InferenceContainerConfig) / set_inference_container_config(Option<InferenceContainerConfig>): required: false. Configuration information for the inference container that is used when you run an inference job on a configured model algorithm.
  - tags(impl Into<String>, impl Into<String>) / set_tags(Option<HashMap::<String, String>>): required: false. The optional metadata that you apply to the resource to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. The following basic restrictions apply to tags:
    - Maximum number of tags per resource - 50.
    - For each resource, each tag key must be unique, and each tag key can have only one value.
    - Maximum key length - 128 Unicode characters in UTF-8.
    - Maximum value length - 256 Unicode characters in UTF-8.
    - If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
    - Tag keys and values are case sensitive.
    - Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
  - kms_key_arn(impl Into<String>) / set_kms_key_arn(Option<String>): required: false. The Amazon Resource Name (ARN) of the KMS key. This key is used to encrypt and decrypt customer-owned data in the configured ML model algorithm and associated data.
- On success, responds with CreateConfiguredModelAlgorithmOutput with field(s):
  - configured_model_algorithm_arn(String): The Amazon Resource Name (ARN) of the configured model algorithm.
- On failure, responds with SdkError<CreateConfiguredModelAlgorithmError>
impl Client
pub fn create_configured_model_algorithm_association(&self) -> CreateConfiguredModelAlgorithmAssociationFluentBuilder
Constructs a fluent builder for the CreateConfiguredModelAlgorithmAssociation operation.
- The fluent builder is configurable:
  - membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>): required: true. The membership ID of the member who is associating this configured model algorithm.
  - configured_model_algorithm_arn(impl Into<String>) / set_configured_model_algorithm_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the configured model algorithm that you want to associate.
  - name(impl Into<String>) / set_name(Option<String>): required: true. The name of the configured model algorithm association.
  - description(impl Into<String>) / set_description(Option<String>): required: false. The description of the configured model algorithm association.
  - privacy_configuration(PrivacyConfiguration) / set_privacy_configuration(Option<PrivacyConfiguration>): required: false. Specifies the privacy configuration information for the configured model algorithm association. This information includes the maximum data size that can be exported.
  - tags(impl Into<String>, impl Into<String>) / set_tags(Option<HashMap::<String, String>>): required: false. The optional metadata that you apply to the resource to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. The following basic restrictions apply to tags:
    - Maximum number of tags per resource - 50.
    - For each resource, each tag key must be unique, and each tag key can have only one value.
    - Maximum key length - 128 Unicode characters in UTF-8.
    - Maximum value length - 256 Unicode characters in UTF-8.
    - If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
    - Tag keys and values are case sensitive.
    - Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
- On success, responds with CreateConfiguredModelAlgorithmAssociationOutput with field(s):
  - configured_model_algorithm_association_arn(String): The Amazon Resource Name (ARN) of the configured model algorithm association.
- On failure, responds with SdkError<CreateConfiguredModelAlgorithmAssociationError>
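Because every required setter for this operation takes a string, a complete hedged sketch is straightforward (all identifiers below are placeholders, and this assumes an async context with valid credentials):

```rust
// Sketch only: placeholder membership ID and ARN, not real resources.
let config = aws_config::load_from_env().await;
let client = aws_sdk_cleanroomsml::Client::new(&config);

let result = client
    .create_configured_model_algorithm_association()
    .membership_identifier("00000000-0000-0000-0000-000000000000")
    .configured_model_algorithm_arn(
        "arn:aws:cleanrooms-ml:us-east-1:111122223333:configured-model-algorithm/example",
    )
    .name("example-association")
    // Optional human-readable description.
    .description("Associates the example algorithm with this membership")
    .send()
    .await;

match result {
    Ok(output) => println!("Association ARN: {:?}", output.configured_model_algorithm_association_arn()),
    Err(err) => eprintln!("CreateConfiguredModelAlgorithmAssociation failed: {err}"),
}
```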
impl Client
pub fn create_ml_input_channel(&self) -> CreateMLInputChannelFluentBuilder
Constructs a fluent builder for the CreateMLInputChannel operation.
- The fluent builder is configurable:
  - membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>): required: true. The membership ID of the member that is creating the ML input channel.
  - configured_model_algorithm_associations(impl Into<String>) / set_configured_model_algorithm_associations(Option<Vec::<String>>): required: true. The associated configured model algorithms that are necessary to create this ML input channel.
  - input_channel(InputChannel) / set_input_channel(Option<InputChannel>): required: true. The input data that is used to create this ML input channel.
  - name(impl Into<String>) / set_name(Option<String>): required: true. The name of the ML input channel.
  - retention_in_days(i32) / set_retention_in_days(Option<i32>): required: true. The number of days that the data in the ML input channel is retained.
  - description(impl Into<String>) / set_description(Option<String>): required: false. The description of the ML input channel.
  - kms_key_arn(impl Into<String>) / set_kms_key_arn(Option<String>): required: false. The Amazon Resource Name (ARN) of the KMS key that is used to access the input channel.
  - tags(impl Into<String>, impl Into<String>) / set_tags(Option<HashMap::<String, String>>): required: false. The optional metadata that you apply to the resource to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. The following basic restrictions apply to tags:
    - Maximum number of tags per resource - 50.
    - For each resource, each tag key must be unique, and each tag key can have only one value.
    - Maximum key length - 128 Unicode characters in UTF-8.
    - Maximum value length - 256 Unicode characters in UTF-8.
    - If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
    - Tag keys and values are case sensitive.
    - Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
- On success, responds with CreateMlInputChannelOutput with field(s):
  - ml_input_channel_arn(String): The Amazon Resource Name (ARN) of the ML input channel.
- On failure, responds with SdkError<CreateMLInputChannelError>
impl Client
pub fn create_trained_model(&self) -> CreateTrainedModelFluentBuilder
Constructs a fluent builder for the CreateTrainedModel operation.
- The fluent builder is configurable:
  - membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>): required: true. The membership ID of the member that is creating the trained model.
  - name(impl Into<String>) / set_name(Option<String>): required: true. The name of the trained model.
  - configured_model_algorithm_association_arn(impl Into<String>) / set_configured_model_algorithm_association_arn(Option<String>): required: true. The associated configured model algorithm used to train this model.
  - hyperparameters(impl Into<String>, impl Into<String>) / set_hyperparameters(Option<HashMap::<String, String>>): required: false. Algorithm-specific parameters that influence the quality of the model. You set hyperparameters before you start the learning process.
  - environment(impl Into<String>, impl Into<String>) / set_environment(Option<HashMap::<String, String>>): required: false. The environment variables to set in the Docker container.
  - resource_config(ResourceConfig) / set_resource_config(Option<ResourceConfig>): required: true. Information about the EC2 resources that are used to train this model.
  - stopping_condition(StoppingCondition) / set_stopping_condition(Option<StoppingCondition>): required: false. The criteria that is used to stop model training.
  - incremental_training_data_channels(IncrementalTrainingDataChannel) / set_incremental_training_data_channels(Option<Vec::<IncrementalTrainingDataChannel>>): required: false. Specifies the incremental training data channels for the trained model. Incremental training allows you to create a new trained model with updates without retraining from scratch. You can specify up to one incremental training data channel that references a previously trained model and its version. Limit: Maximum of 20 channels total (including both incrementalTrainingDataChannels and dataChannels).
  - data_channels(ModelTrainingDataChannel) / set_data_channels(Option<Vec::<ModelTrainingDataChannel>>): required: true. Defines the data channels that are used as input for the trained model request. Limit: Maximum of 20 channels total (including both dataChannels and incrementalTrainingDataChannels).
  - training_input_mode(TrainingInputMode) / set_training_input_mode(Option<TrainingInputMode>): required: false. The input mode for accessing the training data. This parameter determines how the training data is made available to the training algorithm. Valid values are:
    - File - The training data is downloaded to the training instance and made available as files.
    - FastFile - The training data is streamed directly from Amazon S3 to the training algorithm, providing faster access for large datasets.
    - Pipe - The training data is streamed to the training algorithm using named pipes, which can improve performance for certain algorithms.
  - description(impl Into<String>) / set_description(Option<String>): required: false. The description of the trained model.
  - kms_key_arn(impl Into<String>) / set_kms_key_arn(Option<String>): required: false. The Amazon Resource Name (ARN) of the KMS key. This key is used to encrypt and decrypt customer-owned data in the trained ML model and the associated data.
  - tags(impl Into<String>, impl Into<String>) / set_tags(Option<HashMap::<String, String>>): required: false. The optional metadata that you apply to the resource to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. The following basic restrictions apply to tags:
    - Maximum number of tags per resource - 50.
    - For each resource, each tag key must be unique, and each tag key can have only one value.
    - Maximum key length - 128 Unicode characters in UTF-8.
    - Maximum value length - 256 Unicode characters in UTF-8.
    - If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
    - Tag keys and values are case sensitive.
    - Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
- On success, responds with CreateTrainedModelOutput with field(s):
  - trained_model_arn(String): The Amazon Resource Name (ARN) of the trained model.
  - version_identifier(Option<String>): The unique version identifier assigned to the newly created trained model. This identifier can be used to reference this specific version of the trained model in subsequent operations such as inference jobs or incremental training. The initial version identifier for the base version of the trained model is “NULL”.
- On failure, responds with SdkError<CreateTrainedModelError>
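A hedged sketch of the map-style setters for this operation. Only setter names documented above are used; all identifiers are placeholders, and the required typed values (resource_config and data_channels) are deliberately left as comments rather than guessed, since their builders are not described on this page:

```rust
// Sketch only: placeholder identifiers; incomplete on purpose.
let config = aws_config::load_from_env().await;
let client = aws_sdk_cleanroomsml::Client::new(&config);

let result = client
    .create_trained_model()
    .membership_identifier("00000000-0000-0000-0000-000000000000")
    .name("example-trained-model")
    .configured_model_algorithm_association_arn(
        "arn:aws:cleanrooms-ml:us-east-1:111122223333:configured-model-algorithm-association/example",
    )
    // Repeat .hyperparameters(key, value) for each entry in the map.
    .hyperparameters("epochs", "10")
    .hyperparameters("learning_rate", "0.01")
    .environment("LOG_LEVEL", "info")
    // resource_config(ResourceConfig) and data_channels(ModelTrainingDataChannel)
    // are also required; their typed builders are omitted from this sketch,
    // so as written the service would reject this request.
    .send()
    .await;

if let Err(err) = result {
    eprintln!("CreateTrainedModel failed: {err}");
}
```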
impl Client
pub fn create_training_dataset(&self) -> CreateTrainingDatasetFluentBuilder
Constructs a fluent builder for the CreateTrainingDataset operation.
- The fluent builder is configurable:
  - name(impl Into<String>) / set_name(Option<String>): required: true. The name of the training dataset. This name must be unique in your account and region.
  - role_arn(impl Into<String>) / set_role_arn(Option<String>): required: true. The ARN of the IAM role that Clean Rooms ML can assume to read the data referred to in the dataSource field of each dataset. Passing a role across AWS accounts is not allowed. If you pass a role that isn’t in your account, you get an AccessDeniedException error.
  - training_data(Dataset) / set_training_data(Option<Vec::<Dataset>>): required: true. An array of information that lists the Dataset objects, which specifies the dataset type and details on its location and schema. You must provide a role that has read access to these tables.
  - tags(impl Into<String>, impl Into<String>) / set_tags(Option<HashMap::<String, String>>): required: false. The optional metadata that you apply to the resource to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. The following basic restrictions apply to tags:
    - Maximum number of tags per resource - 50.
    - For each resource, each tag key must be unique, and each tag key can have only one value.
    - Maximum key length - 128 Unicode characters in UTF-8.
    - Maximum value length - 256 Unicode characters in UTF-8.
    - If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
    - Tag keys and values are case sensitive.
    - Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
  - description(impl Into<String>) / set_description(Option<String>): required: false. The description of the training dataset.
- On success, responds with CreateTrainingDatasetOutput with field(s):
  - training_dataset_arn(String): The Amazon Resource Name (ARN) of the training dataset resource.
- On failure, responds with SdkError<CreateTrainingDatasetError>
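A hedged sketch using the string setters above. The role ARN is a placeholder, and the required training_data(Dataset) value is left as a comment rather than invented, since the Dataset builder is not documented on this page:

```rust
// Sketch only: placeholder role ARN; incomplete on purpose.
let config = aws_config::load_from_env().await;
let client = aws_sdk_cleanroomsml::Client::new(&config);

let result = client
    .create_training_dataset()
    .name("example-training-dataset")
    // Role in the same account with read access to the underlying tables.
    .role_arn("arn:aws:iam::111122223333:role/ExampleCleanRoomsMlReadRole")
    .description("Training dataset for the example audience model")
    // training_data(Dataset) is also required; its typed builder is omitted
    // from this sketch, so as written the service would reject this request.
    .send()
    .await;

if let Ok(output) = result {
    println!("Training dataset ARN: {:?}", output.training_dataset_arn());
}
```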
impl Client
pub fn delete_audience_generation_job(&self) -> DeleteAudienceGenerationJobFluentBuilder
Constructs a fluent builder for the DeleteAudienceGenerationJob operation.
- The fluent builder is configurable:
  - audience_generation_job_arn(impl Into<String>) / set_audience_generation_job_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the audience generation job that you want to delete.
- On success, responds with DeleteAudienceGenerationJobOutput
- On failure, responds with SdkError<DeleteAudienceGenerationJobError>
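The delete operations on this client all follow the same single-ARN shape; a hedged sketch for this one (the ARN is a placeholder, and this assumes an async context with valid credentials):

```rust
// Sketch only: placeholder ARN, not a real resource.
let config = aws_config::load_from_env().await;
let client = aws_sdk_cleanroomsml::Client::new(&config);

let result = client
    .delete_audience_generation_job()
    .audience_generation_job_arn(
        "arn:aws:cleanrooms-ml:us-east-1:111122223333:audience-generation-job/example",
    )
    .send()
    .await;

match result {
    // DeleteAudienceGenerationJobOutput carries no fields of interest here.
    Ok(_) => println!("Audience generation job deleted."),
    Err(err) => eprintln!("DeleteAudienceGenerationJob failed: {err}"),
}
```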
impl Client
pub fn delete_audience_model(&self) -> DeleteAudienceModelFluentBuilder
Constructs a fluent builder for the DeleteAudienceModel operation.
- The fluent builder is configurable:
  - audience_model_arn(impl Into<String>) / set_audience_model_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the audience model that you want to delete.
- On success, responds with DeleteAudienceModelOutput
- On failure, responds with SdkError<DeleteAudienceModelError>
impl Client
pub fn delete_configured_audience_model(&self) -> DeleteConfiguredAudienceModelFluentBuilder
Constructs a fluent builder for the DeleteConfiguredAudienceModel operation.
- The fluent builder is configurable:
  - configured_audience_model_arn(impl Into<String>) / set_configured_audience_model_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the configured audience model that you want to delete.
- On success, responds with DeleteConfiguredAudienceModelOutput
- On failure, responds with SdkError<DeleteConfiguredAudienceModelError>
impl Client
pub fn delete_configured_audience_model_policy(&self) -> DeleteConfiguredAudienceModelPolicyFluentBuilder
Constructs a fluent builder for the DeleteConfiguredAudienceModelPolicy operation.
- The fluent builder is configurable:
  - configured_audience_model_arn(impl Into<String>) / set_configured_audience_model_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the configured audience model policy that you want to delete.
- On success, responds with DeleteConfiguredAudienceModelPolicyOutput
- On failure, responds with SdkError<DeleteConfiguredAudienceModelPolicyError>
impl Client
pub fn delete_configured_model_algorithm(&self) -> DeleteConfiguredModelAlgorithmFluentBuilder
Constructs a fluent builder for the DeleteConfiguredModelAlgorithm operation.
- The fluent builder is configurable:
  - configured_model_algorithm_arn(impl Into<String>) / set_configured_model_algorithm_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the configured model algorithm that you want to delete.
- On success, responds with DeleteConfiguredModelAlgorithmOutput
- On failure, responds with SdkError<DeleteConfiguredModelAlgorithmError>
impl Client
pub fn delete_configured_model_algorithm_association(&self) -> DeleteConfiguredModelAlgorithmAssociationFluentBuilder
Constructs a fluent builder for the DeleteConfiguredModelAlgorithmAssociation
operation.
- The fluent builder is configurable:
configured_model_algorithm_association_arn(impl Into<String>) / set_configured_model_algorithm_association_arn(Option<String>) (required): The Amazon Resource Name (ARN) of the configured model algorithm association that you want to delete.
membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>) (required): The membership ID of the member that is deleting the configured model algorithm association.
- On success, responds with
DeleteConfiguredModelAlgorithmAssociationOutput
- On failure, responds with
SdkError<DeleteConfiguredModelAlgorithmAssociationError>
impl Client
pub fn delete_ml_configuration(&self) -> DeleteMLConfigurationFluentBuilder
Constructs a fluent builder for the DeleteMLConfiguration
operation.
- The fluent builder is configurable:
membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>) (required): The membership ID of the member that is deleting the ML modeling configuration.
- On success, responds with
DeleteMlConfigurationOutput
- On failure, responds with
SdkError<DeleteMLConfigurationError>
impl Client
pub fn delete_ml_input_channel_data(&self) -> DeleteMLInputChannelDataFluentBuilder
Constructs a fluent builder for the DeleteMLInputChannelData
operation.
- The fluent builder is configurable:
ml_input_channel_arn(impl Into<String>) / set_ml_input_channel_arn(Option<String>) (required): The Amazon Resource Name (ARN) of the ML input channel that you want to delete.
membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>) (required): The membership ID of the membership that contains the ML input channel you want to delete.
- On success, responds with
DeleteMlInputChannelDataOutput
- On failure, responds with
SdkError<DeleteMLInputChannelDataError>
impl Client
pub fn delete_trained_model_output(&self) -> DeleteTrainedModelOutputFluentBuilder
Constructs a fluent builder for the DeleteTrainedModelOutput
operation.
- The fluent builder is configurable:
trained_model_arn(impl Into<String>) / set_trained_model_arn(Option<String>) (required): The Amazon Resource Name (ARN) of the trained model whose output you want to delete.
membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>) (required): The membership ID of the member that is deleting the trained model output.
version_identifier(impl Into<String>) / set_version_identifier(Option<String>) (optional): The version identifier of the trained model to delete. If not specified, the operation will delete the base version of the trained model. When specified, only the particular version will be deleted.
- On success, responds with
DeleteTrainedModelOutputOutput
- On failure, responds with
SdkError<DeleteTrainedModelOutputError>
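Because version_identifier is optional, a caller can set it conditionally before sending. The sketch below shows one way to do that; the wrapper function and its arguments are illustrative only.

```rust
use aws_sdk_cleanroomsml::Client;

// Sketch: delete trained model output, optionally targeting a specific
// version. When `version` is None, the base version's output is deleted.
async fn delete_output(
    client: &Client,
    trained_model_arn: &str,
    membership_id: &str,
    version: Option<&str>,
) -> Result<(), Box<dyn std::error::Error>> {
    let mut req = client
        .delete_trained_model_output()
        .trained_model_arn(trained_model_arn)
        .membership_identifier(membership_id);
    if let Some(v) = version {
        req = req.version_identifier(v); // only set when a version is requested
    }
    req.send().await?;
    Ok(())
}
```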
impl Client
pub fn delete_training_dataset(&self) -> DeleteTrainingDatasetFluentBuilder
Constructs a fluent builder for the DeleteTrainingDataset
operation.
- The fluent builder is configurable:
training_dataset_arn(impl Into<String>)
/set_training_dataset_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the training dataset that you want to delete.
- On success, responds with
DeleteTrainingDatasetOutput
- On failure, responds with
SdkError<DeleteTrainingDatasetError>
impl Client
pub fn get_audience_generation_job(&self) -> GetAudienceGenerationJobFluentBuilder
Constructs a fluent builder for the GetAudienceGenerationJob
operation.
- The fluent builder is configurable:
audience_generation_job_arn(impl Into<String>)
/set_audience_generation_job_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the audience generation job that you are interested in.
- On success, responds with
GetAudienceGenerationJobOutput
with field(s):
create_time(DateTime)
:The time at which the audience generation job was created.
update_time(DateTime)
:The most recent time at which the audience generation job was updated.
audience_generation_job_arn(String)
:The Amazon Resource Name (ARN) of the audience generation job.
name(String)
:The name of the audience generation job.
description(Option<String>)
:The description of the audience generation job.
status(AudienceGenerationJobStatus)
:The status of the audience generation job.
status_details(Option<StatusDetails>)
:Details about the status of the audience generation job.
configured_audience_model_arn(String)
:The Amazon Resource Name (ARN) of the configured audience model used for this audience generation job.
seed_audience(Option<AudienceGenerationJobDataSource>)
:The seed audience that was used for this audience generation job. This field will be null if the account calling the API is the account that started this audience generation job.
include_seed_in_output(Option<bool>)
:Configure whether the seed users are included in the output audience. By default, Clean Rooms ML removes seed users from the output audience. If you specify TRUE, the seed users will appear first in the output. Clean Rooms ML does not explicitly reveal whether a user was in the seed, but the recipient of the audience will know that the first minimumSeedSize count of users are from the seed.
collaboration_id(Option<String>)
:The identifier of the collaboration that this audience generation job is associated with.
metrics(Option<AudienceQualityMetrics>)
:The relevance scores for different audience sizes and the recall score of the generated audience.
started_by(Option<String>)
:The AWS account that started this audience generation job.
tags(Option<HashMap::<String, String>>)
:The tags that are associated to this audience generation job.
protected_query_identifier(Option<String>)
:The unique identifier of the protected query for this audience generation job.
- On failure, responds with
SdkError<GetAudienceGenerationJobError>
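Reading the output struct uses the generated accessor methods, which mirror the field names listed above. The sketch below is illustrative; the function name and the exact formatting choices are assumptions.

```rust
use aws_sdk_cleanroomsml::Client;

// Sketch: fetch an audience generation job and inspect a few output fields.
async fn print_job_status(
    client: &Client,
    job_arn: &str,
) -> Result<(), Box<dyn std::error::Error>> {
    let out = client
        .get_audience_generation_job()
        .audience_generation_job_arn(job_arn)
        .send()
        .await?;
    println!("name:   {}", out.name());
    println!("status: {:?}", out.status());
    // metrics is Option<AudienceQualityMetrics>; present once scoring completes
    if let Some(metrics) = out.metrics() {
        println!("metrics: {metrics:?}");
    }
    Ok(())
}
```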
impl Client
pub fn get_audience_model(&self) -> GetAudienceModelFluentBuilder
Constructs a fluent builder for the GetAudienceModel
operation.
- The fluent builder is configurable:
audience_model_arn(impl Into<String>)
/set_audience_model_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the audience model that you are interested in.
- On success, responds with
GetAudienceModelOutput
with field(s):
create_time(DateTime)
:The time at which the audience model was created.
update_time(DateTime)
:The most recent time at which the audience model was updated.
training_data_start_time(Option<DateTime>)
:The start date specified for the training window.
training_data_end_time(Option<DateTime>)
:The end date specified for the training window.
audience_model_arn(String)
:The Amazon Resource Name (ARN) of the audience model.
name(String)
:The name of the audience model.
training_dataset_arn(String)
:The Amazon Resource Name (ARN) of the training dataset that was used for this audience model.
status(AudienceModelStatus)
:The status of the audience model.
status_details(Option<StatusDetails>)
:Details about the status of the audience model.
kms_key_arn(Option<String>)
:The KMS key ARN used for the audience model.
tags(Option<HashMap::<String, String>>)
:The tags that are assigned to the audience model.
description(Option<String>)
:The description of the audience model.
- On failure, responds with
SdkError<GetAudienceModelError>
impl Client
pub fn get_collaboration_configured_model_algorithm_association(&self) -> GetCollaborationConfiguredModelAlgorithmAssociationFluentBuilder
Constructs a fluent builder for the GetCollaborationConfiguredModelAlgorithmAssociation
operation.
- The fluent builder is configurable:
configured_model_algorithm_association_arn(impl Into<String>)
/set_configured_model_algorithm_association_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the configured model algorithm association that you want to return information about.
collaboration_identifier(impl Into<String>)
/set_collaboration_identifier(Option<String>)
:
required: trueThe collaboration ID for the collaboration that contains the configured model algorithm association that you want to return information about.
- On success, responds with
GetCollaborationConfiguredModelAlgorithmAssociationOutput
with field(s):
create_time(DateTime)
:The time at which the configured model algorithm association was created.
update_time(DateTime)
:The most recent time at which the configured model algorithm association was updated.
configured_model_algorithm_association_arn(String)
:The Amazon Resource Name (ARN) of the configured model algorithm association.
membership_identifier(String)
:The membership ID of the member that created the configured model algorithm association.
collaboration_identifier(String)
:The collaboration ID of the collaboration that contains the configured model algorithm association.
configured_model_algorithm_arn(String)
:The Amazon Resource Name (ARN) of the configured model algorithm that was associated to the collaboration.
name(String)
:The name of the configured model algorithm association.
description(Option<String>)
:The description of the configured model algorithm association.
creator_account_id(String)
:The account ID of the member that created the configured model algorithm association.
privacy_configuration(Option<PrivacyConfiguration>)
:Information about the privacy configuration for a configured model algorithm association.
- On failure, responds with
SdkError<GetCollaborationConfiguredModelAlgorithmAssociationError>
impl Client
pub fn get_collaboration_ml_input_channel(&self) -> GetCollaborationMLInputChannelFluentBuilder
Constructs a fluent builder for the GetCollaborationMLInputChannel
operation.
- The fluent builder is configurable:
ml_input_channel_arn(impl Into<String>)
/set_ml_input_channel_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the ML input channel that you want to get.
collaboration_identifier(impl Into<String>)
/set_collaboration_identifier(Option<String>)
:
required: trueThe collaboration ID of the collaboration that contains the ML input channel that you want to get.
- On success, responds with
GetCollaborationMlInputChannelOutput
with field(s):
membership_identifier(String)
:The membership ID of the membership that contains the ML input channel.
collaboration_identifier(String)
:The collaboration ID of the collaboration that contains the ML input channel.
ml_input_channel_arn(String)
:The Amazon Resource Name (ARN) of the ML input channel.
name(String)
:The name of the ML input channel.
configured_model_algorithm_associations(Vec::<String>)
:The configured model algorithm associations that were used to create the ML input channel.
status(MlInputChannelStatus)
:The status of the ML input channel.
status_details(Option<StatusDetails>)
:Details about the status of a resource.
retention_in_days(i32)
:The number of days to retain the data for the ML input channel.
number_of_records(Option<i64>)
:The number of records in the ML input channel.
description(Option<String>)
:The description of the ML input channel.
create_time(DateTime)
:The time at which the ML input channel was created.
update_time(DateTime)
:The most recent time at which the ML input channel was updated.
creator_account_id(String)
:The account ID of the member who created the ML input channel.
- On failure, responds with
SdkError<GetCollaborationMLInputChannelError>
impl Client
pub fn get_collaboration_trained_model(&self) -> GetCollaborationTrainedModelFluentBuilder
Constructs a fluent builder for the GetCollaborationTrainedModel
operation.
- The fluent builder is configurable:
trained_model_arn(impl Into<String>)
/set_trained_model_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the trained model that you want to return information about.
collaboration_identifier(impl Into<String>)
/set_collaboration_identifier(Option<String>)
:
required: trueThe collaboration ID that contains the trained model that you want to return information about.
version_identifier(impl Into<String>)
/set_version_identifier(Option<String>)
:
required: falseThe version identifier of the trained model to retrieve. If not specified, the operation returns information about the latest version of the trained model.
- On success, responds with
GetCollaborationTrainedModelOutput
with field(s):
membership_identifier(String)
:The membership ID of the member that created the trained model.
collaboration_identifier(String)
:The collaboration ID of the collaboration that contains the trained model.
trained_model_arn(String)
:The Amazon Resource Name (ARN) of the trained model.
version_identifier(Option<String>)
:The version identifier of the trained model. This unique identifier distinguishes this version from other versions of the same trained model.
incremental_training_data_channels(Option<Vec::<IncrementalTrainingDataChannelOutput>>)
:Information about the incremental training data channels used to create this version of the trained model. This includes details about the base model that was used for incremental training and the channel configuration.
name(String)
:The name of the trained model.
description(Option<String>)
:The description of the trained model.
status(TrainedModelStatus)
:The status of the trained model.
status_details(Option<StatusDetails>)
:Details about the status of a resource.
configured_model_algorithm_association_arn(String)
:The Amazon Resource Name (ARN) of the configured model algorithm association that was used to create this trained model.
resource_config(Option<ResourceConfig>)
:The EC2 resource configuration that was used to train this model.
training_input_mode(Option<TrainingInputMode>)
:The input mode that was used for accessing the training data when this trained model was created. This indicates how the training data was made available to the training algorithm.
stopping_condition(Option<StoppingCondition>)
:The stopping condition that determined when model training ended.
metrics_status(Option<MetricsStatus>)
:The status of the model metrics.
metrics_status_details(Option<String>)
:Details about the status information for the model metrics.
logs_status(Option<LogsStatus>)
:Status information for the logs.
logs_status_details(Option<String>)
:Details about the status information for the logs.
training_container_image_digest(Option<String>)
:Information about the training container image.
create_time(DateTime)
:The time at which the trained model was created.
update_time(DateTime)
:The most recent time at which the trained model was updated.
creator_account_id(String)
:The account ID of the member that created the trained model.
- On failure, responds with
SdkError<GetCollaborationTrainedModelError>
impl Client
pub fn get_configured_audience_model(&self) -> GetConfiguredAudienceModelFluentBuilder
Constructs a fluent builder for the GetConfiguredAudienceModel
operation.
- The fluent builder is configurable:
configured_audience_model_arn(impl Into<String>)
/set_configured_audience_model_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the configured audience model that you are interested in.
- On success, responds with
GetConfiguredAudienceModelOutput
with field(s):
create_time(DateTime)
:The time at which the configured audience model was created.
update_time(DateTime)
:The most recent time at which the configured audience model was updated.
configured_audience_model_arn(String)
:The Amazon Resource Name (ARN) of the configured audience model.
name(String)
:The name of the configured audience model.
audience_model_arn(String)
:The Amazon Resource Name (ARN) of the audience model used for this configured audience model.
output_config(Option<ConfiguredAudienceModelOutputConfig>)
:The output configuration of the configured audience model
description(Option<String>)
:The description of the configured audience model.
status(ConfiguredAudienceModelStatus)
:The status of the configured audience model.
shared_audience_metrics(Vec::<SharedAudienceMetrics>)
:Whether audience metrics are shared.
min_matching_seed_size(Option<i32>)
:The minimum number of users from the seed audience that must match with users in the training data of the audience model.
audience_size_config(Option<AudienceSizeConfig>)
:The list of output sizes of audiences that can be created using this configured audience model. A request to StartAudienceGenerationJob that uses this configured audience model must have an audienceSize selected from this list. You can use the ABSOLUTE AudienceSize to configure output audience sizes using the count of identifiers in the output. You can use the Percentage AudienceSize to configure sizes in the range 1-100 percent.
tags(Option<HashMap::<String, String>>)
:The tags that are associated to this configured audience model.
child_resource_tag_on_create_policy(Option<TagOnCreatePolicy>)
:Provides the childResourceTagOnCreatePolicy that was used for this configured audience model.
- On failure, responds with
SdkError<GetConfiguredAudienceModelError>
impl Client
pub fn get_configured_audience_model_policy(&self) -> GetConfiguredAudienceModelPolicyFluentBuilder
Constructs a fluent builder for the GetConfiguredAudienceModelPolicy
operation.
- The fluent builder is configurable:
configured_audience_model_arn(impl Into<String>)
/set_configured_audience_model_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the configured audience model that you are interested in.
- On success, responds with
GetConfiguredAudienceModelPolicyOutput
with field(s):
configured_audience_model_arn(String)
:The Amazon Resource Name (ARN) of the configured audience model.
configured_audience_model_policy(String)
:The configured audience model policy. This is a JSON IAM resource policy.
policy_hash(String)
:A cryptographic hash of the contents of the policy used to prevent unexpected concurrent modification of the policy.
- On failure, responds with
SdkError<GetConfiguredAudienceModelPolicyError>
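Retrieving both the policy document and its hash in one call might look like the sketch below; callers typically retain the hash so a later policy update can detect concurrent modification. The helper function is illustrative.

```rust
use aws_sdk_cleanroomsml::Client;

// Sketch: read a configured audience model's resource policy along with the
// policy_hash that guards against concurrent modification.
async fn read_policy(
    client: &Client,
    model_arn: &str,
) -> Result<(String, String), Box<dyn std::error::Error>> {
    let out = client
        .get_configured_audience_model_policy()
        .configured_audience_model_arn(model_arn)
        .send()
        .await?;
    // Both fields are required in the output, so the accessors return &str.
    Ok((
        out.configured_audience_model_policy().to_string(),
        out.policy_hash().to_string(),
    ))
}
```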
impl Client
pub fn get_configured_model_algorithm(&self) -> GetConfiguredModelAlgorithmFluentBuilder
Constructs a fluent builder for the GetConfiguredModelAlgorithm
operation.
- The fluent builder is configurable:
configured_model_algorithm_arn(impl Into<String>)
/set_configured_model_algorithm_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the configured model algorithm that you want to return information about.
- On success, responds with
GetConfiguredModelAlgorithmOutput
with field(s):
create_time(DateTime)
:The time at which the configured model algorithm was created.
update_time(DateTime)
:The most recent time at which the configured model algorithm was updated.
configured_model_algorithm_arn(String)
:The Amazon Resource Name (ARN) of the configured model algorithm.
name(String)
:The name of the configured model algorithm.
training_container_config(Option<ContainerConfig>)
:The configuration information of the training container for the configured model algorithm.
inference_container_config(Option<InferenceContainerConfig>)
:Configuration information for the inference container.
role_arn(String)
:The Amazon Resource Name (ARN) of the service role that was used to create the configured model algorithm.
description(Option<String>)
:The description of the configured model algorithm.
tags(Option<HashMap::<String, String>>)
:The optional metadata that you applied to the resource to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
- Maximum number of tags per resource - 50.
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length - 128 Unicode characters in UTF-8.
- Maximum value length - 256 Unicode characters in UTF-8.
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
kms_key_arn(Option<String>)
:The Amazon Resource Name (ARN) of the KMS key. This key is used to encrypt and decrypt customer-owned data in the configured ML model and associated data.
- On failure, responds with
SdkError<GetConfiguredModelAlgorithmError>
impl Client
pub fn get_configured_model_algorithm_association(&self) -> GetConfiguredModelAlgorithmAssociationFluentBuilder
Constructs a fluent builder for the GetConfiguredModelAlgorithmAssociation
operation.
- The fluent builder is configurable:
configured_model_algorithm_association_arn(impl Into<String>)
/set_configured_model_algorithm_association_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the configured model algorithm association that you want to return information about.
membership_identifier(impl Into<String>)
/set_membership_identifier(Option<String>)
:
required: trueThe membership ID of the member that created the configured model algorithm association.
- On success, responds with
GetConfiguredModelAlgorithmAssociationOutput
with field(s):
create_time(DateTime)
:The time at which the configured model algorithm association was created.
update_time(DateTime)
:The most recent time at which the configured model algorithm association was updated.
configured_model_algorithm_association_arn(String)
:The Amazon Resource Name (ARN) of the configured model algorithm association.
membership_identifier(String)
:The membership ID of the member that created the configured model algorithm association.
collaboration_identifier(String)
:The collaboration ID of the collaboration that contains the configured model algorithm association.
configured_model_algorithm_arn(String)
:The Amazon Resource Name (ARN) of the configured model algorithm that was associated to the collaboration.
name(String)
:The name of the configured model algorithm association.
privacy_configuration(Option<PrivacyConfiguration>)
:The privacy configuration information for the configured model algorithm association.
description(Option<String>)
:The description of the configured model algorithm association.
tags(Option<HashMap::<String, String>>)
:The optional metadata that you applied to the resource to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
- Maximum number of tags per resource - 50.
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length - 128 Unicode characters in UTF-8.
- Maximum value length - 256 Unicode characters in UTF-8.
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
- On failure, responds with
SdkError<GetConfiguredModelAlgorithmAssociationError>
impl Client
pub fn get_ml_configuration(&self) -> GetMLConfigurationFluentBuilder
Constructs a fluent builder for the GetMLConfiguration
operation.
- The fluent builder is configurable:
membership_identifier(impl Into<String>)
/set_membership_identifier(Option<String>)
:
required: trueThe membership ID of the member that owns the ML configuration you want to return information about.
- On success, responds with
GetMlConfigurationOutput
with field(s):
membership_identifier(String)
:The membership ID of the member that owns the ML configuration you requested.
default_output_location(Option<MlOutputConfiguration>)
:The Amazon S3 location where ML model output is stored.
create_time(DateTime)
:The time at which the ML configuration was created.
update_time(DateTime)
:The most recent time at which the ML configuration was updated.
- On failure, responds with
SdkError<GetMLConfigurationError>
impl Client
pub fn get_ml_input_channel(&self) -> GetMLInputChannelFluentBuilder
Constructs a fluent builder for the GetMLInputChannel
operation.
- The fluent builder is configurable:
ml_input_channel_arn(impl Into<String>)
/set_ml_input_channel_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the ML input channel that you want to get.
membership_identifier(impl Into<String>)
/set_membership_identifier(Option<String>)
:
required: trueThe membership ID of the membership that contains the ML input channel that you want to get.
- On success, responds with
GetMlInputChannelOutput
with field(s):
membership_identifier(String)
:The membership ID of the membership that contains the ML input channel.
collaboration_identifier(String)
:The collaboration ID of the collaboration that contains the ML input channel.
ml_input_channel_arn(String)
:The Amazon Resource Name (ARN) of the ML input channel.
name(String)
:The name of the ML input channel.
configured_model_algorithm_associations(Vec::<String>)
:The configured model algorithm associations that were used to create the ML input channel.
status(MlInputChannelStatus)
:The status of the ML input channel.
status_details(Option<StatusDetails>)
:Details about the status of a resource.
retention_in_days(i32)
:The number of days to keep the data in the ML input channel.
number_of_records(Option<i64>)
:The number of records in the ML input channel.
description(Option<String>)
:The description of the ML input channel.
create_time(DateTime)
:The time at which the ML input channel was created.
update_time(DateTime)
:The most recent time at which the ML input channel was updated.
input_channel(Option<InputChannel>)
:The input channel that was used to create the ML input channel.
protected_query_identifier(Option<String>)
:The ID of the protected query that was used to create the ML input channel.
number_of_files(Option<f64>)
:The number of files in the ML input channel.
size_in_gb(Option<f64>)
:The size, in GB, of the ML input channel.
kms_key_arn(Option<String>)
:The Amazon Resource Name (ARN) of the KMS key that was used to create the ML input channel.
tags(Option<HashMap::<String, String>>)
:The optional metadata that you applied to the resource to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
- Maximum number of tags per resource - 50.
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length - 128 Unicode characters in UTF-8.
- Maximum value length - 256 Unicode characters in UTF-8.
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
- On failure, responds with
SdkError<GetMLInputChannelError>
impl Client
pub fn get_trained_model(&self) -> GetTrainedModelFluentBuilder
Constructs a fluent builder for the GetTrainedModel
operation.
- The fluent builder is configurable:
trained_model_arn(impl Into<String>) / set_trained_model_arn(Option<String>) (required): The Amazon Resource Name (ARN) of the trained model that you are interested in.
membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>) (required): The membership ID of the member that created the trained model that you are interested in.
version_identifier(impl Into<String>) / set_version_identifier(Option<String>) (optional): The version identifier of the trained model to retrieve. If not specified, the operation returns information about the latest version of the trained model.
- On success, responds with
GetTrainedModelOutput
with field(s):
membership_identifier(String)
:The membership ID of the member that created the trained model.
collaboration_identifier(String)
:The collaboration ID of the collaboration that contains the trained model.
trained_model_arn(String)
:The Amazon Resource Name (ARN) of the trained model.
version_identifier(Option<String>)
:The version identifier of the trained model. This unique identifier distinguishes this version from other versions of the same trained model.
incremental_training_data_channels(Option<Vec::<IncrementalTrainingDataChannelOutput>>)
:Information about the incremental training data channels used to create this version of the trained model. This includes details about the base model that was used for incremental training and the channel configuration.
name(String)
:The name of the trained model.
description(Option<String>)
:The description of the trained model.
status(TrainedModelStatus)
:The status of the trained model.
status_details(Option<StatusDetails>)
:Details about the status of a resource.
configured_model_algorithm_association_arn(String)
:The Amazon Resource Name (ARN) of the configured model algorithm association that was used to create the trained model.
resource_config(Option<ResourceConfig>)
:The EC2 resource configuration that was used to create the trained model.
training_input_mode(Option<TrainingInputMode>)
:The input mode that was used for accessing the training data when this trained model was created. This indicates how the training data was made available to the training algorithm.
stopping_condition(Option<StoppingCondition>)
:The stopping condition that was used to terminate model training.
metrics_status(Option<MetricsStatus>)
:The status of the model metrics.
metrics_status_details(Option<String>)
:Details about the metrics status for the trained model.
logs_status(Option<LogsStatus>)
:The logs status for the trained model.
logs_status_details(Option<String>)
:Details about the logs status for the trained model.
training_container_image_digest(Option<String>)
:Information about the training container image.
create_time(DateTime)
:The time at which the trained model was created.
update_time(DateTime)
:The most recent time at which the trained model was updated.
hyperparameters(Option<HashMap::<String, String>>)
:The hyperparameters that were used to create the trained model.
environment(Option<HashMap::<String, String>>)
:The EC2 environment that was used to create the trained model.
kms_key_arn(Option<String>)
:The Amazon Resource Name (ARN) of the KMS key. This key is used to encrypt and decrypt customer-owned data in the trained ML model and associated data.
tags(Option<HashMap::<String, String>>)
:The optional metadata that you applied to the resource to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
-
Maximum number of tags per resource - 50.
-
For each resource, each tag key must be unique, and each tag key can have only one value.
-
Maximum key length - 128 Unicode characters in UTF-8.
-
Maximum value length - 256 Unicode characters in UTF-8.
-
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
-
Tag keys and values are case sensitive.
-
Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a prefix for keys; it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag, and it counts against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
-
data_channels(Vec::<ModelTrainingDataChannel>)
:The data channels that were used for the trained model.
- On failure, responds with
SdkError<GetTrainedModelError>
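Following the client-construction pattern shown in the introduction, a GetTrainedModel call can be sketched as below. This is an illustrative sketch only: it requires valid AWS credentials plus the aws-config and aws-sdk-cleanroomsml crates, and the ARN and membership ID values are placeholders.

```rust
// Illustrative sketch: fetch a trained model and inspect its status.
// The ARN and membership ID below are placeholder values.
async fn show_trained_model() -> Result<(), aws_sdk_cleanroomsml::Error> {
    let config = aws_config::load_from_env().await;
    let client = aws_sdk_cleanroomsml::Client::new(&config);

    let output = client
        .get_trained_model()
        .trained_model_arn("arn:aws:cleanrooms-ml:us-east-1:111122223333:trained-model/example")
        .membership_identifier("membership-id-example")
        // version_identifier is optional; omitting it returns the latest version.
        .send()
        .await?;

    println!("name: {}, status: {:?}", output.name(), output.status());
    Ok(())
}
```

The same fluent pattern (set the required inputs, then `.send().await`) applies to every operation on this page.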
impl Client
pub fn get_trained_model_inference_job(&self) -> GetTrainedModelInferenceJobFluentBuilder
Constructs a fluent builder for the GetTrainedModelInferenceJob
operation.
- The fluent builder is configurable:
membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>): required: true. Provides the membership ID of the membership that contains the trained model inference job that you are interested in.
trained_model_inference_job_arn(impl Into<String>) / set_trained_model_inference_job_arn(Option<String>): required: true. Provides the Amazon Resource Name (ARN) of the trained model inference job that you are interested in.
- On success, responds with
GetTrainedModelInferenceJobOutput
with field(s):
create_time(DateTime)
:The time at which the trained model inference job was created.
update_time(DateTime)
:The most recent time at which the trained model inference job was updated.
trained_model_inference_job_arn(String)
:The Amazon Resource Name (ARN) of the trained model inference job.
configured_model_algorithm_association_arn(Option<String>)
:The Amazon Resource Name (ARN) of the configured model algorithm association that was used for the trained model inference job.
name(String)
:The name of the trained model inference job.
status(TrainedModelInferenceJobStatus)
:The status of the trained model inference job.
trained_model_arn(String)
:The Amazon Resource Name (ARN) for the trained model that was used for the trained model inference job.
trained_model_version_identifier(Option<String>)
:The version identifier of the trained model used for this inference job. This identifies the specific version of the trained model that was used to generate the inference results.
resource_config(Option<InferenceResourceConfig>)
:The resource configuration information for the trained model inference job.
output_configuration(Option<InferenceOutputConfiguration>)
:The output configuration information for the trained model inference job.
membership_identifier(String)
:The membership ID of the membership that contains the trained model inference job.
data_source(Option<ModelInferenceDataSource>)
:The data source that was used for the trained model inference job.
container_execution_parameters(Option<InferenceContainerExecutionParameters>)
:The execution parameters for the model inference job container.
status_details(Option<StatusDetails>)
:Details about the status of a resource.
description(Option<String>)
:The description of the trained model inference job.
inference_container_image_digest(Option<String>)
:Information about the inference container image.
environment(Option<HashMap::<String, String>>)
:The environment variables to set in the Docker container.
kms_key_arn(Option<String>)
:The Amazon Resource Name (ARN) of the KMS key. This key is used to encrypt and decrypt customer-owned data in the ML inference job and associated data.
metrics_status(Option<MetricsStatus>)
:The metrics status for the trained model inference job.
metrics_status_details(Option<String>)
:Details about the metrics status for the trained model inference job.
logs_status(Option<LogsStatus>)
:The logs status for the trained model inference job.
logs_status_details(Option<String>)
:Details about the logs status for the trained model inference job.
tags(Option<HashMap::<String, String>>)
:The optional metadata that you applied to the resource to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
-
Maximum number of tags per resource - 50.
-
For each resource, each tag key must be unique, and each tag key can have only one value.
-
Maximum key length - 128 Unicode characters in UTF-8.
-
Maximum value length - 256 Unicode characters in UTF-8.
-
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
-
Tag keys and values are case sensitive.
-
Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a prefix for keys; it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag, and it counts against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
-
- On failure, responds with
SdkError<GetTrainedModelInferenceJobError>
impl Client
pub fn get_training_dataset(&self) -> GetTrainingDatasetFluentBuilder
Constructs a fluent builder for the GetTrainingDataset
operation.
- The fluent builder is configurable:
training_dataset_arn(impl Into<String>) / set_training_dataset_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the training dataset that you are interested in.
- On success, responds with
GetTrainingDatasetOutput
with field(s):
create_time(DateTime)
:The time at which the training dataset was created.
update_time(DateTime)
:The most recent time at which the training dataset was updated.
training_dataset_arn(String)
:The Amazon Resource Name (ARN) of the training dataset.
name(String)
:The name of the training dataset.
training_data(Vec::<Dataset>)
:Metadata about the requested training data.
status(TrainingDatasetStatus)
:The status of the training dataset.
role_arn(String)
:The IAM role used to read the training data.
tags(Option<HashMap::<String, String>>)
:The tags that are assigned to this training dataset.
description(Option<String>)
:The description of the training dataset.
- On failure, responds with
SdkError<GetTrainingDatasetError>
impl Client
pub fn list_audience_export_jobs(&self) -> ListAudienceExportJobsFluentBuilder
Constructs a fluent builder for the ListAudienceExportJobs
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The token value retrieved from a previous call to access the next page of results.
max_results(i32) / set_max_results(Option<i32>): required: false. The maximum size of the results that is returned per call.
audience_generation_job_arn(impl Into<String>) / set_audience_generation_job_arn(Option<String>): required: false. The Amazon Resource Name (ARN) of the audience generation job that you are interested in.
- On success, responds with
ListAudienceExportJobsOutput
with field(s):
next_token(Option<String>)
:The token value used to access the next page of results.
audience_export_jobs(Vec::<AudienceExportJobSummary>)
:The audience export jobs that match the request.
- On failure, responds with
SdkError<ListAudienceExportJobsError>
impl Client
pub fn list_audience_generation_jobs(&self) -> ListAudienceGenerationJobsFluentBuilder
Constructs a fluent builder for the ListAudienceGenerationJobs
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The token value retrieved from a previous call to access the next page of results.
max_results(i32) / set_max_results(Option<i32>): required: false. The maximum size of the results that is returned per call.
configured_audience_model_arn(impl Into<String>) / set_configured_audience_model_arn(Option<String>): required: false. The Amazon Resource Name (ARN) of the configured audience model that was used for the audience generation jobs that you are interested in.
collaboration_id(impl Into<String>) / set_collaboration_id(Option<String>): required: false. The identifier of the collaboration that contains the audience generation jobs that you are interested in.
- On success, responds with
ListAudienceGenerationJobsOutput
with field(s):
next_token(Option<String>)
:The token value used to access the next page of results.
audience_generation_jobs(Vec::<AudienceGenerationJobSummary>)
:The audience generation jobs that match the request.
- On failure, responds with
SdkError<ListAudienceGenerationJobsError>
impl Client
pub fn list_audience_models(&self) -> ListAudienceModelsFluentBuilder
Constructs a fluent builder for the ListAudienceModels
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The token value retrieved from a previous call to access the next page of results.
max_results(i32) / set_max_results(Option<i32>): required: false. The maximum size of the results that is returned per call.
- On success, responds with
ListAudienceModelsOutput
with field(s):
next_token(Option<String>)
:The token value used to access the next page of results.
audience_models(Vec::<AudienceModelSummary>)
:The audience models that match the request.
- On failure, responds with
SdkError<ListAudienceModelsError>
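For operations that support pagination, into_paginator() can be used instead of send() to iterate over every page without handling next_token manually. A minimal sketch, assuming valid AWS credentials and an already-constructed client:

```rust
// Illustrative sketch: list all audience models via the paginator.
// The paginator follows next_token automatically, yielding one
// ListAudienceModelsOutput per page.
async fn list_all_audience_models(
    client: &aws_sdk_cleanroomsml::Client,
) -> Result<(), aws_sdk_cleanroomsml::Error> {
    let mut pages = client
        .list_audience_models()
        .max_results(20)
        .into_paginator()
        .send();

    while let Some(page) = pages.next().await {
        for model in page?.audience_models() {
            println!("audience model: {}", model.audience_model_arn());
        }
    }
    Ok(())
}
```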
impl Client
pub fn list_collaboration_configured_model_algorithm_associations(&self) -> ListCollaborationConfiguredModelAlgorithmAssociationsFluentBuilder
Constructs a fluent builder for the ListCollaborationConfiguredModelAlgorithmAssociations
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The token value retrieved from a previous call to access the next page of results.
max_results(i32) / set_max_results(Option<i32>): required: false. The maximum size of the results that is returned per call.
collaboration_identifier(impl Into<String>) / set_collaboration_identifier(Option<String>): required: true. The collaboration ID of the collaboration that contains the configured model algorithm associations that you are interested in.
- On success, responds with
ListCollaborationConfiguredModelAlgorithmAssociationsOutput
with field(s):
next_token(Option<String>)
:The token value used to access the next page of results.
collaboration_configured_model_algorithm_associations(Vec::<CollaborationConfiguredModelAlgorithmAssociationSummary>)
:The configured model algorithm associations that belong to this collaboration.
- On failure, responds with
SdkError<ListCollaborationConfiguredModelAlgorithmAssociationsError>
impl Client
pub fn list_collaboration_ml_input_channels(&self) -> ListCollaborationMLInputChannelsFluentBuilder
Constructs a fluent builder for the ListCollaborationMLInputChannels
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The token value retrieved from a previous call to access the next page of results.
max_results(i32) / set_max_results(Option<i32>): required: false. The maximum number of results to return.
collaboration_identifier(impl Into<String>) / set_collaboration_identifier(Option<String>): required: true. The collaboration ID of the collaboration that contains the ML input channels that you want to list.
- On success, responds with
ListCollaborationMlInputChannelsOutput
with field(s):
next_token(Option<String>)
:The token value used to access the next page of results.
collaboration_ml_input_channels_list(Vec::<CollaborationMlInputChannelSummary>)
:The list of ML input channels that you requested.
- On failure, responds with
SdkError<ListCollaborationMLInputChannelsError>
impl Client
pub fn list_collaboration_trained_model_export_jobs(&self) -> ListCollaborationTrainedModelExportJobsFluentBuilder
Constructs a fluent builder for the ListCollaborationTrainedModelExportJobs
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The token value retrieved from a previous call to access the next page of results.
max_results(i32) / set_max_results(Option<i32>): required: false. The maximum size of the results that is returned per call.
collaboration_identifier(impl Into<String>) / set_collaboration_identifier(Option<String>): required: true. The collaboration ID of the collaboration that contains the trained model export jobs that you are interested in.
trained_model_arn(impl Into<String>) / set_trained_model_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the trained model that was used to create the export jobs that you are interested in.
trained_model_version_identifier(impl Into<String>) / set_trained_model_version_identifier(Option<String>): required: false. The version identifier of the trained model to filter export jobs by. When specified, only export jobs for this specific version of the trained model are returned.
- On success, responds with
ListCollaborationTrainedModelExportJobsOutput
with field(s):
next_token(Option<String>)
:The token value used to access the next page of results.
collaboration_trained_model_export_jobs(Vec::<CollaborationTrainedModelExportJobSummary>)
:The export jobs that exist for the requested trained model in the requested collaboration.
- On failure, responds with
SdkError<ListCollaborationTrainedModelExportJobsError>
impl Client
pub fn list_collaboration_trained_model_inference_jobs(&self) -> ListCollaborationTrainedModelInferenceJobsFluentBuilder
Constructs a fluent builder for the ListCollaborationTrainedModelInferenceJobs
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The token value retrieved from a previous call to access the next page of results.
max_results(i32) / set_max_results(Option<i32>): required: false. The maximum size of the results that is returned per call.
collaboration_identifier(impl Into<String>) / set_collaboration_identifier(Option<String>): required: true. The collaboration ID of the collaboration that contains the trained model inference jobs that you are interested in.
trained_model_arn(impl Into<String>) / set_trained_model_arn(Option<String>): required: false. The Amazon Resource Name (ARN) of the trained model that was used to create the trained model inference jobs that you are interested in.
trained_model_version_identifier(impl Into<String>) / set_trained_model_version_identifier(Option<String>): required: false. The version identifier of the trained model to filter inference jobs by. When specified, only inference jobs that used this specific version of the trained model are returned.
- On success, responds with
ListCollaborationTrainedModelInferenceJobsOutput
with field(s):
next_token(Option<String>)
:The token value used to access the next page of results.
collaboration_trained_model_inference_jobs(Vec::<CollaborationTrainedModelInferenceJobSummary>)
:The trained model inference jobs that you are interested in.
- On failure, responds with
SdkError<ListCollaborationTrainedModelInferenceJobsError>
impl Client
pub fn list_collaboration_trained_models(&self) -> ListCollaborationTrainedModelsFluentBuilder
Constructs a fluent builder for the ListCollaborationTrainedModels
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The token value retrieved from a previous call to access the next page of results.
max_results(i32) / set_max_results(Option<i32>): required: false. The maximum size of the results that is returned per call.
collaboration_identifier(impl Into<String>) / set_collaboration_identifier(Option<String>): required: true. The collaboration ID of the collaboration that contains the trained models you are interested in.
- On success, responds with
ListCollaborationTrainedModelsOutput
with field(s):
next_token(Option<String>)
:The token value used to access the next page of results.
collaboration_trained_models(Vec::<CollaborationTrainedModelSummary>)
:The trained models in the collaboration that you requested.
- On failure, responds with
SdkError<ListCollaborationTrainedModelsError>
impl Client
pub fn list_configured_audience_models(&self) -> ListConfiguredAudienceModelsFluentBuilder
Constructs a fluent builder for the ListConfiguredAudienceModels
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The token value retrieved from a previous call to access the next page of results.
max_results(i32) / set_max_results(Option<i32>): required: false. The maximum size of the results that is returned per call.
- On success, responds with
ListConfiguredAudienceModelsOutput
with field(s):
next_token(Option<String>)
:The token value used to access the next page of results.
configured_audience_models(Vec::<ConfiguredAudienceModelSummary>)
:The configured audience models.
- On failure, responds with
SdkError<ListConfiguredAudienceModelsError>
impl Client
pub fn list_configured_model_algorithm_associations(&self) -> ListConfiguredModelAlgorithmAssociationsFluentBuilder
Constructs a fluent builder for the ListConfiguredModelAlgorithmAssociations
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The token value retrieved from a previous call to access the next page of results.
max_results(i32) / set_max_results(Option<i32>): required: false. The maximum size of the results that is returned per call.
membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>): required: true. The membership ID of the member that created the configured model algorithm associations you are interested in.
- On success, responds with
ListConfiguredModelAlgorithmAssociationsOutput
with field(s):
next_token(Option<String>)
:The token value used to access the next page of results.
configured_model_algorithm_associations(Vec::<ConfiguredModelAlgorithmAssociationSummary>)
:The list of configured model algorithm associations.
- On failure, responds with
SdkError<ListConfiguredModelAlgorithmAssociationsError>
impl Client
pub fn list_configured_model_algorithms(&self) -> ListConfiguredModelAlgorithmsFluentBuilder
Constructs a fluent builder for the ListConfiguredModelAlgorithms
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The token value retrieved from a previous call to access the next page of results.
max_results(i32) / set_max_results(Option<i32>): required: false. The maximum size of the results that is returned per call.
- On success, responds with
ListConfiguredModelAlgorithmsOutput
with field(s):
next_token(Option<String>)
:The token value used to access the next page of results.
configured_model_algorithms(Vec::<ConfiguredModelAlgorithmSummary>)
:The list of configured model algorithms.
- On failure, responds with
SdkError<ListConfiguredModelAlgorithmsError>
impl Client
pub fn list_ml_input_channels(&self) -> ListMLInputChannelsFluentBuilder
Constructs a fluent builder for the ListMLInputChannels
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The token value retrieved from a previous call to access the next page of results.
max_results(i32) / set_max_results(Option<i32>): required: false. The maximum number of ML input channels to return.
membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>): required: true. The membership ID of the membership that contains the ML input channels that you want to list.
- On success, responds with
ListMlInputChannelsOutput
with field(s):
next_token(Option<String>)
:The token value used to access the next page of results.
ml_input_channels_list(Vec::<MlInputChannelSummary>)
:The list of ML input channels that you requested.
- On failure, responds with
SdkError<ListMLInputChannelsError>
impl Client
pub fn list_tags_for_resource(&self) -> ListTagsForResourceFluentBuilder
Constructs a fluent builder for the ListTagsForResource
operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>) / set_resource_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the resource that you are interested in.
- On success, responds with
ListTagsForResourceOutput
with field(s):
tags(HashMap::<String, String>)
:The tags that are associated with the resource.
- On failure, responds with
SdkError<ListTagsForResourceError>
impl Client
pub fn list_trained_model_inference_jobs(&self) -> ListTrainedModelInferenceJobsFluentBuilder
Constructs a fluent builder for the ListTrainedModelInferenceJobs
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The token value retrieved from a previous call to access the next page of results.
max_results(i32) / set_max_results(Option<i32>): required: false. The maximum size of the results that is returned per call.
membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>): required: true. The membership ID of the membership that contains the trained model inference jobs that you are interested in.
trained_model_arn(impl Into<String>) / set_trained_model_arn(Option<String>): required: false. The Amazon Resource Name (ARN) of a trained model that was used to create the trained model inference jobs that you are interested in.
trained_model_version_identifier(impl Into<String>) / set_trained_model_version_identifier(Option<String>): required: false. The version identifier of the trained model to filter inference jobs by. When specified, only inference jobs that used this specific version of the trained model are returned.
- On success, responds with
ListTrainedModelInferenceJobsOutput
with field(s):
next_token(Option<String>)
:The token value used to access the next page of results.
trained_model_inference_jobs(Vec::<TrainedModelInferenceJobSummary>)
:Returns the requested trained model inference jobs.
- On failure, responds with
SdkError<ListTrainedModelInferenceJobsError>
impl Client
pub fn list_trained_model_versions(&self) -> ListTrainedModelVersionsFluentBuilder
Constructs a fluent builder for the ListTrainedModelVersions
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The pagination token from a previous ListTrainedModelVersions request. Use this token to retrieve the next page of results.
max_results(i32) / set_max_results(Option<i32>): required: false. The maximum number of trained model versions to return in a single page. The default value is 10, and the maximum value is 100.
membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>): required: true. The membership identifier for the collaboration that contains the trained model.
trained_model_arn(impl Into<String>) / set_trained_model_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the trained model for which to list versions.
status(TrainedModelStatus) / set_status(Option<TrainedModelStatus>): required: false. Filter the results to only include trained model versions with the specified status. Valid values include CREATE_PENDING, CREATE_IN_PROGRESS, ACTIVE, CREATE_FAILED, and others.
- On success, responds with
ListTrainedModelVersionsOutput
with field(s):
next_token(Option<String>)
:The pagination token to use in a subsequent ListTrainedModelVersions request to retrieve the next page of results. This value is null when there are no more results to return.
trained_models(Vec::<TrainedModelSummary>)
:A list of trained model versions that match the specified criteria. Each entry contains summary information about a trained model version, including its version identifier, status, and creation details.
- On failure, responds with
SdkError<ListTrainedModelVersionsError>
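The optional status filter can be combined with the required identifiers as in the sketch below. The membership ID and ARN are placeholder values, and TrainedModelStatus::Active is assumed to be the enum variant corresponding to the ACTIVE status.

```rust
// Illustrative sketch: list only the ACTIVE versions of one trained model.
// Placeholder ARN and membership ID; requires AWS credentials.
async fn list_active_versions(
    client: &aws_sdk_cleanroomsml::Client,
) -> Result<(), aws_sdk_cleanroomsml::Error> {
    use aws_sdk_cleanroomsml::types::TrainedModelStatus;

    let output = client
        .list_trained_model_versions()
        .membership_identifier("membership-id-example")
        .trained_model_arn("arn:aws:cleanrooms-ml:us-east-1:111122223333:trained-model/example")
        .status(TrainedModelStatus::Active) // optional filter; omit to list all versions
        .send()
        .await?;

    for version in output.trained_models() {
        println!("version: {:?}", version.version_identifier());
    }
    Ok(())
}
```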
impl Client
pub fn list_trained_models(&self) -> ListTrainedModelsFluentBuilder
Constructs a fluent builder for the ListTrainedModels
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The token value retrieved from a previous call to access the next page of results.
max_results(i32) / set_max_results(Option<i32>): required: false. The maximum size of the results that is returned per call.
membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>): required: true. The membership ID of the member that created the trained models you are interested in.
- On success, responds with
ListTrainedModelsOutput
with field(s):
next_token(Option<String>)
:The token value used to access the next page of results.
trained_models(Vec::<TrainedModelSummary>)
:The list of trained models.
- On failure, responds with
SdkError<ListTrainedModelsError>
impl Client
pub fn list_training_datasets(&self) -> ListTrainingDatasetsFluentBuilder
Constructs a fluent builder for the ListTrainingDatasets
operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>) / set_next_token(Option<String>): required: false. The token value retrieved from a previous call to access the next page of results.
max_results(i32) / set_max_results(Option<i32>): required: false. The maximum size of the results that is returned per call.
- On success, responds with
ListTrainingDatasetsOutput
with field(s):
next_token(Option<String>)
:The token value used to access the next page of results.
training_datasets(Vec::<TrainingDatasetSummary>)
:The training datasets that match the request.
- On failure, responds with
SdkError<ListTrainingDatasetsError>
impl Client
pub fn put_configured_audience_model_policy(&self) -> PutConfiguredAudienceModelPolicyFluentBuilder
Constructs a fluent builder for the PutConfiguredAudienceModelPolicy
operation.
- The fluent builder is configurable:
configured_audience_model_arn(impl Into<String>) / set_configured_audience_model_arn(Option<String>): required: true. The Amazon Resource Name (ARN) of the configured audience model that the resource policy will govern.
configured_audience_model_policy(impl Into<String>) / set_configured_audience_model_policy(Option<String>): required: true. The IAM resource policy.
previous_policy_hash(impl Into<String>) / set_previous_policy_hash(Option<String>): required: false. A cryptographic hash of the contents of the policy used to prevent unexpected concurrent modification of the policy.
policy_existence_condition(PolicyExistenceCondition) / set_policy_existence_condition(Option<PolicyExistenceCondition>): required: false. Use this to prevent unexpected concurrent modification of the policy.
- On success, responds with
PutConfiguredAudienceModelPolicyOutput
with field(s):configured_audience_model_policy(String)
:The IAM resource policy.
policy_hash(String)
:A cryptographic hash of the contents of the policy used to prevent unexpected concurrent modification of the policy.
- On failure, responds with
SdkError<PutConfiguredAudienceModelPolicyError>
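previous_policy_hash implements optimistic concurrency control: you send back the policy_hash you received when you last read the policy, and the write is rejected if someone else has modified the policy in between. A sketch of that check against a hypothetical in-memory store (the hash function and store are illustrative; the service's actual hash algorithm is not specified here):

```rust
use std::hash::{Hash, Hasher};

// Hypothetical policy store with optimistic concurrency on a content hash.
struct PolicyStore {
    policy: String,
    hash: u64,
}

// Illustrative stand-in for the service's cryptographic content hash.
fn content_hash(s: &str) -> u64 {
    let mut h = std::collections::hash_map::DefaultHasher::new();
    s.hash(&mut h);
    h.finish()
}

impl PolicyStore {
    fn new(policy: &str) -> Self {
        Self { policy: policy.to_string(), hash: content_hash(policy) }
    }

    /// Mirrors previous_policy_hash: if a hash is supplied, the write only
    /// succeeds when it matches the currently stored hash; the new hash is
    /// returned on success (like policy_hash in the output).
    fn put(&mut self, new_policy: &str, previous_hash: Option<u64>) -> Result<u64, &'static str> {
        if let Some(h) = previous_hash {
            if h != self.hash {
                return Err("concurrent modification detected");
            }
        }
        self.policy = new_policy.to_string();
        self.hash = content_hash(new_policy);
        Ok(self.hash)
    }
}
```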
impl Client
pub fn put_ml_configuration(&self) -> PutMLConfigurationFluentBuilder
Constructs a fluent builder for the PutMLConfiguration operation.
- The fluent builder is configurable:
membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>):
required: true. The membership ID of the member that is being configured.
default_output_location(MlOutputConfiguration) / set_default_output_location(Option<MlOutputConfiguration>):
required: true. The default Amazon S3 location where ML output is stored for the specified member.
- On success, responds with PutMlConfigurationOutput
- On failure, responds with SdkError<PutMLConfigurationError>
impl Client
pub fn start_audience_export_job(&self) -> StartAudienceExportJobFluentBuilder
Constructs a fluent builder for the StartAudienceExportJob operation.
- The fluent builder is configurable:
name(impl Into<String>) / set_name(Option<String>):
required: true. The name of the audience export job.
audience_generation_job_arn(impl Into<String>) / set_audience_generation_job_arn(Option<String>):
required: true. The Amazon Resource Name (ARN) of the audience generation job that you want to export.
audience_size(AudienceSize) / set_audience_size(Option<AudienceSize>):
required: true. The size of the generated audience. Must match one of the sizes in the configured audience model.
description(impl Into<String>) / set_description(Option<String>):
required: false. The description of the audience export job.
- On success, responds with StartAudienceExportJobOutput
- On failure, responds with SdkError<StartAudienceExportJobError>
impl Client
pub fn start_audience_generation_job(
    &self,
) -> StartAudienceGenerationJobFluentBuilder
Constructs a fluent builder for the StartAudienceGenerationJob operation.
- The fluent builder is configurable:
name(impl Into<String>) / set_name(Option<String>):
required: true. The name of the audience generation job.
configured_audience_model_arn(impl Into<String>) / set_configured_audience_model_arn(Option<String>):
required: true. The Amazon Resource Name (ARN) of the configured audience model that is used for this audience generation job.
seed_audience(AudienceGenerationJobDataSource) / set_seed_audience(Option<AudienceGenerationJobDataSource>):
required: true. The seed audience that is used to generate the audience.
include_seed_in_output(bool) / set_include_seed_in_output(Option<bool>):
required: false. Whether the seed audience is included in the audience generation output.
collaboration_id(impl Into<String>) / set_collaboration_id(Option<String>):
required: false. The identifier of the collaboration that contains the audience generation job.
description(impl Into<String>) / set_description(Option<String>):
required: false. The description of the audience generation job.
tags(impl Into<String>, impl Into<String>) / set_tags(Option<HashMap::<String, String>>):
required: false. The optional metadata that you apply to the resource to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
- Maximum number of tags per resource: 50.
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length: 128 Unicode characters in UTF-8.
- Maximum value length: 256 Unicode characters in UTF-8.
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys, as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag and it will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
- On success, responds with StartAudienceGenerationJobOutput with field(s):
audience_generation_job_arn(String): The Amazon Resource Name (ARN) of the audience generation job.
- On failure, responds with SdkError<StartAudienceGenerationJobError>
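The last tag restriction has a subtle counting rule: tags whose key starts with the reserved aws prefix (in any capitalization) do not count toward the 50-tags-per-resource limit, while an aws prefix on the value alone does not exempt a tag. A sketch of that rule as a predicate (hypothetical helper for illustration, not part of the SDK):

```rust
/// Returns true if the tag counts against the 50-tags-per-resource limit.
/// Keys with an aws prefix (any capitalization) are AWS-reserved and exempt;
/// an aws prefix on the value alone does not exempt the tag.
fn counts_toward_limit(key: &str, _value: &str) -> bool {
    !key.to_ascii_lowercase().starts_with("aws")
}

/// Counts only the user tags in a tag set.
fn user_tag_count<'a>(tags: impl IntoIterator<Item = (&'a str, &'a str)>) -> usize {
    tags.into_iter().filter(|(k, v)| counts_toward_limit(k, v)).count()
}
```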
impl Client
pub fn start_trained_model_export_job(
    &self,
) -> StartTrainedModelExportJobFluentBuilder
Constructs a fluent builder for the StartTrainedModelExportJob operation.
- The fluent builder is configurable:
name(impl Into<String>) / set_name(Option<String>):
required: true. The name of the trained model export job.
trained_model_arn(impl Into<String>) / set_trained_model_arn(Option<String>):
required: true. The Amazon Resource Name (ARN) of the trained model that you want to export.
trained_model_version_identifier(impl Into<String>) / set_trained_model_version_identifier(Option<String>):
required: false. The version identifier of the trained model to export. This specifies which version of the trained model should be exported to the specified destination.
membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>):
required: true. The membership ID of the member that is receiving the exported trained model artifacts.
output_configuration(TrainedModelExportOutputConfiguration) / set_output_configuration(Option<TrainedModelExportOutputConfiguration>):
required: true. The output configuration information for the trained model export job.
description(impl Into<String>) / set_description(Option<String>):
required: false. The description of the trained model export job.
- On success, responds with StartTrainedModelExportJobOutput
- On failure, responds with SdkError<StartTrainedModelExportJobError>
impl Client
pub fn start_trained_model_inference_job(
    &self,
) -> StartTrainedModelInferenceJobFluentBuilder
Constructs a fluent builder for the StartTrainedModelInferenceJob operation.
- The fluent builder is configurable:
membership_identifier(impl Into<String>) / set_membership_identifier(Option<String>):
required: true. The membership ID of the membership that contains the trained model inference job.
name(impl Into<String>) / set_name(Option<String>):
required: true. The name of the trained model inference job.
trained_model_arn(impl Into<String>) / set_trained_model_arn(Option<String>):
required: true. The Amazon Resource Name (ARN) of the trained model that is used for this trained model inference job.
trained_model_version_identifier(impl Into<String>) / set_trained_model_version_identifier(Option<String>):
required: false. The version identifier of the trained model to use for inference. This specifies which version of the trained model should be used to generate predictions on the input data.
configured_model_algorithm_association_arn(impl Into<String>) / set_configured_model_algorithm_association_arn(Option<String>):
required: false. The Amazon Resource Name (ARN) of the configured model algorithm association that is used for this trained model inference job.
resource_config(InferenceResourceConfig) / set_resource_config(Option<InferenceResourceConfig>):
required: true. Defines the resource configuration for the trained model inference job.
output_configuration(InferenceOutputConfiguration) / set_output_configuration(Option<InferenceOutputConfiguration>):
required: true. Defines the output configuration information for the trained model inference job.
data_source(ModelInferenceDataSource) / set_data_source(Option<ModelInferenceDataSource>):
required: true. Defines the data source that is used for the trained model inference job.
description(impl Into<String>) / set_description(Option<String>):
required: false. The description of the trained model inference job.
container_execution_parameters(InferenceContainerExecutionParameters) / set_container_execution_parameters(Option<InferenceContainerExecutionParameters>):
required: false. The execution parameters for the container.
environment(impl Into<String>, impl Into<String>) / set_environment(Option<HashMap::<String, String>>):
required: false. The environment variables to set in the Docker container.
kms_key_arn(impl Into<String>) / set_kms_key_arn(Option<String>):
required: false. The Amazon Resource Name (ARN) of the KMS key. This key is used to encrypt and decrypt customer-owned data in the ML inference job and associated data.
tags(impl Into<String>, impl Into<String>) / set_tags(Option<HashMap::<String, String>>):
required: false. The optional metadata that you apply to the resource to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
- Maximum number of tags per resource: 50.
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length: 128 Unicode characters in UTF-8.
- Maximum value length: 256 Unicode characters in UTF-8.
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys, as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag and it will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
- On success, responds with StartTrainedModelInferenceJobOutput with field(s):
trained_model_inference_job_arn(String): The Amazon Resource Name (ARN) of the trained model inference job.
- On failure, responds with SdkError<StartTrainedModelInferenceJobError>
impl Client
pub fn tag_resource(&self) -> TagResourceFluentBuilder
Constructs a fluent builder for the TagResource operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>) / set_resource_arn(Option<String>):
required: true. The Amazon Resource Name (ARN) of the resource that you want to assign tags to.
tags(impl Into<String>, impl Into<String>) / set_tags(Option<HashMap::<String, String>>):
required: true. The metadata that you apply to the resource to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
- Maximum number of tags per resource: 50.
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length: 128 Unicode characters in UTF-8.
- Maximum value length: 256 Unicode characters in UTF-8.
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys, as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms considers it to be a user tag and it will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
- On success, responds with TagResourceOutput
- On failure, responds with SdkError<TagResourceError>
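The length and prefix restrictions above can be checked client-side before calling TagResource. A sketch of such a pre-flight check (validate_tag is a hypothetical helper for illustration; the service performs its own authoritative validation):

```rust
/// Validates one tag against the basic restrictions listed above.
/// Hypothetical client-side helper; the service still validates on its end.
fn validate_tag(key: &str, value: &str) -> Result<(), String> {
    // Maximum key length: 128 Unicode characters.
    if key.chars().count() > 128 {
        return Err(format!("key too long: {} chars", key.chars().count()));
    }
    // Maximum value length: 256 Unicode characters.
    if value.chars().count() > 256 {
        return Err(format!("value too long: {} chars", value.chars().count()));
    }
    // Keys may not use the reserved aws: prefix in any capitalization;
    // values may carry the prefix.
    if key.to_ascii_lowercase().starts_with("aws:") {
        return Err("key uses reserved aws: prefix".to_string());
    }
    Ok(())
}
```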
impl Client
pub fn untag_resource(&self) -> UntagResourceFluentBuilder
Constructs a fluent builder for the UntagResource operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>) / set_resource_arn(Option<String>):
required: true. The Amazon Resource Name (ARN) of the resource that you want to remove tags from.
tag_keys(impl Into<String>) / set_tag_keys(Option<Vec::<String>>):
required: true. The keys of the tags that you want to remove.
- On success, responds with UntagResourceOutput
- On failure, responds with SdkError<UntagResourceError>
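UntagResource removes tags by key only: no values are supplied, and removal leaves other tags untouched. The effect on a resource's tag set can be sketched locally (a hypothetical in-memory map standing in for the resource's tags):

```rust
use std::collections::HashMap;

/// Removes the listed keys from a tag map, mirroring the shape of
/// UntagResource: removal is by key only, and other tags are untouched.
/// (How the service treats a key that is not present is not specified
/// in this page; this sketch simply ignores it.)
fn untag(tags: &mut HashMap<String, String>, tag_keys: &[&str]) {
    for k in tag_keys {
        tags.remove(*k);
    }
}
```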
impl Client
pub fn update_configured_audience_model(
    &self,
) -> UpdateConfiguredAudienceModelFluentBuilder
Constructs a fluent builder for the UpdateConfiguredAudienceModel operation.
- The fluent builder is configurable:
configured_audience_model_arn(impl Into<String>) / set_configured_audience_model_arn(Option<String>):
required: true. The Amazon Resource Name (ARN) of the configured audience model that you want to update.
output_config(ConfiguredAudienceModelOutputConfig) / set_output_config(Option<ConfiguredAudienceModelOutputConfig>):
required: false. The new output configuration.
audience_model_arn(impl Into<String>) / set_audience_model_arn(Option<String>):
required: false. The Amazon Resource Name (ARN) of the new audience model that you want to use.
shared_audience_metrics(SharedAudienceMetrics) / set_shared_audience_metrics(Option<Vec::<SharedAudienceMetrics>>):
required: false. The new value for whether to share audience metrics.
min_matching_seed_size(i32) / set_min_matching_seed_size(Option<i32>):
required: false. The minimum number of users from the seed audience that must match with users in the training data of the audience model.
audience_size_config(AudienceSizeConfig) / set_audience_size_config(Option<AudienceSizeConfig>):
required: false. The new audience size configuration.
description(impl Into<String>) / set_description(Option<String>):
required: false. The new description of the configured audience model.
- On success, responds with UpdateConfiguredAudienceModelOutput with field(s):
configured_audience_model_arn(String): The Amazon Resource Name (ARN) of the configured audience model that was updated.
- On failure, responds with SdkError<UpdateConfiguredAudienceModelError>
impl Client
pub fn from_conf(conf: Config) -> Self
Creates a new client from the service Config.
§Panics
This method will panic in the following cases:
- Retries or timeouts are enabled without a sleep_impl configured.
- Identity caching is enabled without a sleep_impl and time_source configured.
- No behavior_version is provided.
The panic message for each of these will have instructions on how to resolve them.
impl Client
pub fn new(sdk_config: &SdkConfig) -> Self
Creates a new client from an SDK Config.
§Panics
- This method will panic if the sdk_config is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
- This method will panic if the sdk_config is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.
- This method will panic if no BehaviorVersion is provided. If you experience this panic, set behavior_version on the Config or enable the behavior-version-latest Cargo feature.
§Trait Implementations
§Auto Trait Implementations
impl Freeze for Client
impl !RefUnwindSafe for Client
impl Send for Client
impl Sync for Client
impl Unpin for Client
impl !UnwindSafe for Client
§Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
impl<T> Paint for T
where
    T: ?Sized,
fn fg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the foreground set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like red() and green(), which have the same functionality but are pithier.
§Example
Set foreground color to white using fg():
use yansi::{Paint, Color};
painted.fg(Color::White);
Set foreground color to white using white():
use yansi::Paint;
painted.white();
fn bright_black(&self) -> Painted<&T>
fn bright_red(&self) -> Painted<&T>
fn bright_green(&self) -> Painted<&T>
fn bright_yellow(&self) -> Painted<&T>
fn bright_blue(&self) -> Painted<&T>
fn bright_magenta(&self) -> Painted<&T>
fn bright_cyan(&self) -> Painted<&T>
fn bright_white(&self) -> Painted<&T>
fn bg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the background set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like on_red() and on_green(), which have the same functionality but are pithier.
§Example
Set background color to red using bg():
use yansi::{Paint, Color};
painted.bg(Color::Red);
Set background color to red using on_red():
use yansi::Paint;
painted.on_red();
fn on_primary(&self) -> Painted<&T>
fn on_magenta(&self) -> Painted<&T>
fn on_bright_black(&self) -> Painted<&T>
fn on_bright_red(&self) -> Painted<&T>
fn on_bright_green(&self) -> Painted<&T>
fn on_bright_yellow(&self) -> Painted<&T>
fn on_bright_blue(&self) -> Painted<&T>
fn on_bright_magenta(&self) -> Painted<&T>
fn on_bright_cyan(&self) -> Painted<&T>
fn on_bright_white(&self) -> Painted<&T>
fn attr(&self, value: Attribute) -> Painted<&T>
Enables the styling Attribute value.
This method should be used rarely. Instead, prefer to use attribute-specific builder methods like bold() and underline(), which have the same functionality but are pithier.
§Example
Make text bold using attr():
use yansi::{Paint, Attribute};
painted.attr(Attribute::Bold);
Make text bold using bold():
use yansi::Paint;
painted.bold();
fn rapid_blink(&self) -> Painted<&T>
fn quirk(&self, value: Quirk) -> Painted<&T>
Enables the yansi Quirk value.
This method should be used rarely. Instead, prefer to use quirk-specific builder methods like mask() and wrap(), which have the same functionality but are pithier.
§Example
Enable wrapping using quirk():
use yansi::{Paint, Quirk};
painted.quirk(Quirk::Wrap);
Enable wrapping using wrap():
use yansi::Paint;
painted.wrap();
fn clear(&self) -> Painted<&T>
👎 Deprecated since 1.0.1: renamed to resetting() due to conflicts with Vec::clear(). The clear() method will be removed in a future release.
fn whenever(&self, value: Condition) -> Painted<&T>
Conditionally enable styling based on whether the Condition value applies. Replaces any previous condition.
See the crate-level docs for more details.
§Example
Enable styling painted only when both stdout and stderr are TTYs:
use yansi::{Paint, Condition};
painted.red().on_yellow().whenever(Condition::STDOUTERR_ARE_TTY);