
Data structures used by operation inputs/outputs.

Modules

See Tag

See Usd

Structs

A structure describing the source of an action.

Lists the properties of an action. An action represents a step or activity in a machine learning workflow, such as a workflow step or a model deployment. Generally, an action involves at least one input artifact or output artifact.

A structure describing an additional inference specification. An additional inference specification provides details about inference jobs that can be run with models based on this model package.

Edge Manager agent version.

An Amazon CloudWatch alarm configured to monitor metrics on an endpoint.

Specifies the training algorithm to use in a CreateTrainingJob request.

Specifies the validation and image scan statuses of the algorithm.

Represents the overall status of an algorithm.

Provides summary information about an algorithm.

Defines a training job and a batch transform job that Amazon SageMaker runs to validate your algorithm.

Specifies configurations for one or more training jobs that Amazon SageMaker runs to test the algorithm.

Configures how labels are consolidated across human workers and processes output data.

Details about an Amazon SageMaker app.

The configuration for running a SageMaker image as a KernelGateway app.

Configuration to run a processing job in a specified container image.

A structure describing the source of an artifact.

The ID and ID type of an artifact source.

Lists a summary of the properties of an artifact. An artifact represents a URI addressable object or data. Some examples are a dataset and a model.

Lists a summary of the properties of an association. An association is an entity that links other lineage or experiment entities. An example would be an association between a training job and a model.

Configures the behavior of the client used by Amazon SageMaker to interact with the model container during asynchronous inference.

Specifies configuration for how an endpoint performs asynchronous inference.

Specifies the configuration for notifications of inference results for asynchronous inference.

Specifies the configuration for asynchronous inference invocation outputs.

Configuration for Athena Dataset Definition input.

Information about a candidate produced by an AutoML training job, including its status, steps, and other properties.

Information about the steps for a candidate and what step it is working on.

A channel is a named input source that training algorithms can consume.

A list of container definitions that describe the different containers that make up an AutoML candidate.

The data source for the Autopilot job.

The artifacts that are generated during an AutoML job.

How long a job is allowed to run, or how many candidates a job is allowed to generate.

A collection of settings used for an AutoML job.

Specifies a metric to minimize or maximize as the objective of a job.

Provides a summary about an AutoML job.

The output data configuration.

The reason for a partial failure of an AutoML job.

Security options.

The Amazon S3 data source.

Automatic rollback configuration for handling endpoint deployment failures and recovery.

The error code and error description associated with the resource.

Provides summary information about the model package.

Contains bias metrics for a model.

Update policy for a blue/green deployment. If this update policy is specified, SageMaker creates a new fleet during the deployment while maintaining the old fleet. SageMaker flips traffic to the new fleet according to the specified traffic routing configuration. Only one update policy should be used in the deployment configuration. If no update policy is specified, SageMaker uses a blue/green deployment strategy with all at once traffic shifting by default.
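
As a rough illustration of how this shape fits together, the sketch below assembles a DeploymentConfig with a blue/green update policy using the crate's generated builders. The module path (aws_sdk_sagemaker::model), the infallible build(), and the exact setter names are assumptions based on older (0.x) releases of the SDK; newer releases move these types to a types module and may make build() fallible, so verify against the generated docs for your version.

```rust
use aws_sdk_sagemaker::model::{
    BlueGreenUpdatePolicy, DeploymentConfig, TrafficRoutingConfig, TrafficRoutingConfigType,
};

// Sketch: blue/green deployment with all-at-once traffic shifting and a
// 10-minute wait before the old fleet is terminated. Setter names are
// assumed from the API field names.
fn blue_green_all_at_once() -> DeploymentConfig {
    let routing = TrafficRoutingConfig::builder()
        .r#type(TrafficRoutingConfigType::AllAtOnce) // `Type` is a Rust keyword, hence `r#type`
        .wait_interval_in_seconds(0)
        .build();

    DeploymentConfig::builder()
        .blue_green_update_policy(
            BlueGreenUpdatePolicy::builder()
                .traffic_routing_configuration(routing)
                .termination_wait_in_seconds(600)
                .build(),
        )
        .build()
}
```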

Details on the cache hit of a pipeline execution step.

Metadata about a callback step.

The location of artifacts for an AutoML candidate job.

The properties of an AutoML candidate job.

Specifies the endpoint capacity to activate for production.

Environment parameters you want to benchmark your load test against.

A list of categorical hyperparameters to tune.

Defines the possible values for a categorical hyperparameter.

A channel is a named input source that training algorithms can consume.
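
A minimal sketch of building a training channel over an S3 prefix, under the same assumptions about the generated builder API (module path, setter names, and infallible build() may differ between SDK versions):

```rust
use aws_sdk_sagemaker::model::{Channel, DataSource, S3DataSource, S3DataType};

// Sketch: a "train" channel that reads every object under an S3 prefix as CSV.
fn training_channel(bucket: &str) -> Channel {
    Channel::builder()
        .channel_name("train")
        .content_type("text/csv")
        .data_source(
            DataSource::builder()
                .s3_data_source(
                    S3DataSource::builder()
                        .s3_data_type(S3DataType::S3Prefix)
                        .s3_uri(format!("s3://{bucket}/train/"))
                        .build(),
                )
                .build(),
        )
        .build()
}
```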

Defines a named input source, called a channel, to be used by an algorithm.

Contains information about the output location for managed spot training checkpoint data.

The container for the metadata for the ClarifyCheck step. For more information, see the topic on ClarifyCheck step in the Amazon SageMaker Developer Guide.

Specifies summary information about a Git repository.

Use this parameter to configure your Amazon Cognito workforce. A single Cognito workforce corresponds to a single Amazon Cognito user pool.

Identifies an Amazon Cognito user group. A user group can be used in one or more work teams.

Configuration information for the Debugger output tensor collections.

A summary of a model compilation job.

Metadata for a Condition step.

Describes the container, as part of model definition.

A structure describing the source of a context.

Lists a summary of the properties of a context. A context provides a logical grouping of other entities.

A list of continuous hyperparameters to tune.

Defines the possible values for a continuous hyperparameter.

A custom SageMaker image. For more information, see Bring your own SageMaker image.

The metadata of the Glue table that serves as the data catalog for the OfflineStore.

The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. The input filter provided allows you to exclude input data that is not needed for inference in a batch transform job. The output filter provided allows you to include input data relevant to interpreting the predictions in the output from the job. For more information, see Associate Prediction Results with their Corresponding Input Records.
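
The input and output filters are JSONPath expressions. A hedged sketch (builder and setter names assumed from the API field names):

```rust
use aws_sdk_sagemaker::model::{DataProcessing, JoinSource};

// Sketch: send only the "features" field to the model, join predictions back
// onto the full input record, and keep the record id plus the model output.
fn batch_transform_filters() -> DataProcessing {
    DataProcessing::builder()
        .input_filter("$.features")
        .join_source(JoinSource::Input)
        .output_filter("$['id','SageMakerOutput']")
        .build()
}
```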

Information about the container that a data quality monitoring job runs.

Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.

The input for the data quality monitoring job. Currently endpoints are supported for input.

Describes the location of the channel data.

Configuration for Dataset Definition inputs. The Dataset Definition input must specify exactly one of either AthenaDatasetDefinition or RedshiftDatasetDefinition types.

Configuration information for the Debugger hook parameters, metric and tensor collections, and storage paths. To learn more about how to configure the DebugHookConfig parameter, see Use the SageMaker and Debugger Configuration API Operations to Create, Update, and Debug Your Training Job.

Configuration information for SageMaker Debugger rules for debugging. To learn more about how to configure the DebugRuleConfiguration parameter, see Use the SageMaker and Debugger Configuration API Operations to Create, Update, and Debug Your Training Job.

Information about the status of the rule evaluation.

Gets the Amazon EC2 Container Registry path of the Docker image of the model that is hosted in this ProductionVariant.

The deployment configuration for an endpoint, which contains the desired deployment strategy and rollback configurations.

Specifies weight and capacity values for a production variant.

Information of a particular device.

Summary of the device fleet.

Status of devices.

Summary of the device.

The domain's details.

A collection of settings that apply to the SageMaker Domain. These settings are specified through the CreateDomain API call.

A collection of Domain configuration settings to update.

Represents the drift check baselines that can be used when the model monitor is set using the model package.

Represents the drift check bias baselines that can be used when the model monitor is set using the model package.

Represents the drift check explainability baselines that can be used when the model monitor is set using the model package.

Represents the drift check data quality baselines that can be used when the model monitor is set using the model package.

Represents the drift check model quality baselines that can be used when the model monitor is set using the model package.

A directed edge connecting two lineage entities.

The model on the edge device.

Status of edge devices with this model.

Summary of model on edge device.

The output configuration.

Summary of edge packaging job.

The output of a SageMaker Edge Manager deployable resource.

The configurations and outcomes of an Amazon EMR step execution.

A hosted endpoint for real-time inference.

Provides summary information for an endpoint configuration.

Input object for the endpoint.

The endpoint configuration for the load test.

The endpoint configuration made by Inference Recommender during a recommendation job.

Provides summary information for an endpoint.

A list of environment parameters suggested by the Amazon SageMaker Inference Recommender.

Specifies the range of environment parameters.

The properties of an experiment as returned by the Search API.

Associates a SageMaker job as a trial component with an experiment and trial. Specified when you call the following APIs:

The source of the experiment.

A summary of the properties of an experiment. To get the complete set of properties, call the DescribeExperiment API and provide the ExperimentName.

Contains explainability metrics for a model.

A list of features. You must include FeatureName and FeatureType. Valid FeatureTypes are Integral, Fractional, and String.
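
A short sketch of defining features with the assumed builder API (feature names here are made-up examples):

```rust
use aws_sdk_sagemaker::model::{FeatureDefinition, FeatureType};

// Sketch: a string identifier plus two numeric features.
fn example_features() -> Vec<FeatureDefinition> {
    vec![
        FeatureDefinition::builder()
            .feature_name("customer_id")
            .feature_type(FeatureType::String)
            .build(),
        FeatureDefinition::builder()
            .feature_name("event_time")
            .feature_type(FeatureType::Fractional)
            .build(),
        FeatureDefinition::builder()
            .feature_name("purchase_amount")
            .feature_type(FeatureType::Fractional)
            .build(),
    ]
}
```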

Amazon SageMaker Feature Store stores features in a collection called Feature Group. A Feature Group can be visualized as a table which has rows, with a unique identifier for each row where each column in the table is a feature. In principle, a Feature Group is composed of features and values per feature.

The name, Arn, CreationTime, FeatureGroup values, LastUpdatedTime and EnableOnlineStorage status of a FeatureGroup.

Contains details regarding the file source.

The Amazon Elastic File System (EFS) storage configuration for a SageMaker image.

Specifies a file system data source for a channel.

A conditional statement for a search expression that includes a resource property, a Boolean operator, and a value. Resources that match the statement are returned in the results from the Search API.
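
For example, a filter matching training jobs whose name contains a substring might look like the following sketch (setter and enum names assumed):

```rust
use aws_sdk_sagemaker::model::{Filter, Operator};

// Sketch: match resources whose TrainingJobName contains "xgboost".
fn name_filter() -> Filter {
    Filter::builder()
        .name("TrainingJobName")
        .operator(Operator::Contains)
        .value("xgboost")
        .build()
}
```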

The best candidate result from an AutoML training job.

Shows the final value for the objective metric for a training job that was launched by a hyperparameter tuning job. You define the objective metric in the HyperParameterTuningJobObjective parameter of HyperParameterTuningJobConfig.

Contains information about where human output will be stored.

Contains summary information about the flow definition.

Specifies configuration details for a Git repository in your Amazon Web Services account.

Specifies configuration details for a Git repository when the repository is updated.

Defines under what conditions SageMaker creates a human loop.

Provides information about how and under what conditions SageMaker creates a human loop. If HumanLoopActivationConfig is not given, then all requests go to humans.

Describes the work to be performed by human workers.

Container for configuring the source of human task requests.

Information required for human workers to complete a labeling task.

Container for human task user interface information.

Specifies which training algorithm to use for training jobs that a hyperparameter tuning job launches and the metrics to monitor.

Defines a hyperparameter to be used by an algorithm.

Defines the training jobs launched by a hyperparameter tuning job.

Specifies summary information about a training job.

Configures a hyperparameter tuning job.

Defines the objective metric for a hyperparameter tuning job. Hyperparameter tuning uses the value of this metric to evaluate the training jobs it launches, and returns the training job that results in either the highest or lowest value for this metric, depending on the value you specify for the Type parameter.

Provides summary information about a hyperparameter tuning job.

Specifies the configuration for a hyperparameter tuning job that uses one or more previous hyperparameter tuning jobs as a starting point. The results of previous tuning jobs are used to inform which combinations of hyperparameters to search over in the new tuning job.

A SageMaker image. A SageMaker image represents a set of container images that are derived from a common base container image. Each of these container images is represented by a SageMaker ImageVersion.

Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC).

A version of a SageMaker Image. A version represents an existing container image.

Specifies details about how containers in a multi-container endpoint are run.

A list of recommendations made by Amazon SageMaker Inference Recommender.

A structure that contains a list of recommendation jobs.

Defines how to perform inference generation after a training job is run.

Contains information about the location of input model artifacts, the name and shape of the expected data inputs, and the framework in which the model was trained.

For a hyperparameter of the integer type, specifies the range that a hyperparameter tuning job searches.
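
A hedged sketch of an integer range (note that MinValue and MaxValue are strings in the API; builder names are assumed):

```rust
use aws_sdk_sagemaker::model::{HyperParameterScalingType, IntegerParameterRange};

// Sketch: tune "epochs" between 10 and 100 on a linear scale.
fn epochs_range() -> IntegerParameterRange {
    IntegerParameterRange::builder()
        .name("epochs")
        .min_value("10")
        .max_value("100")
        .scaling_type(HyperParameterScalingType::Linear)
        .build()
}
```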

Defines the possible values for an integer hyperparameter.

The JupyterServer app settings.

The KernelGateway app settings.

The configuration for the file system and kernels in a SageMaker image running as a KernelGateway app.

The specification of a Jupyter kernel.

Provides a breakdown of the number of objects labeled.

Provides counts for human-labeled tasks in the labeling job.

Provides configuration information for auto-labeling of your data objects. A LabelingJobAlgorithmsConfig object must be supplied in order to use auto-labeling.

Attributes of the data specified by the customer. Use these to describe the data to be labeled.

Provides information about the location of input data.

Provides summary information for a work team.

Input configuration information for a labeling job.

Specifies the location of the output produced by the labeling job.

Output configuration information for a labeling job.

Configure encryption on the storage volume attached to the ML compute instance used to run automated data labeling model training and inference.

The Amazon S3 location of the input data objects.

An Amazon SNS data source used for streaming labeling jobs.

A set of conditions for stopping a labeling job. If any of the conditions are met, the job is automatically stopped. You can use these conditions to control the cost of data labeling.

Provides summary information about a labeling job.

Metadata for a Lambda step.

Lists a summary of the properties of a lineage group. A lineage group provides a group of shareable lineage entity resources.

Defines an Amazon Cognito or your own OIDC IdP user group that is part of a work team.

Metadata properties of the tracking entity, trial, or trial component.

The name, value, and date and time of a metric that was emitted to Amazon CloudWatch.

Information about the metric for a candidate produced by an AutoML job.

Specifies a metric that the training algorithm writes to stderr or stdout. Amazon SageMaker hyperparameter tuning captures all defined metrics. You specify one metric that a hyperparameter tuning job uses as its objective metric to choose the best training job.

Provides information about the location that is configured for storing model artifacts.

Docker container image configuration object for the model bias job.

The configuration for a baseline model bias job.

Inputs for the model bias job.

Configures the timeout and maximum number of retries for processing a transform job invocation.

Defines the model configuration. Includes the specification name and environment parameters.

Data quality constraints and statistics for a model.

Specifies how to generate the endpoint name for an automatic one-click Autopilot model deployment.

Provides information about the endpoint of the model deployment.

Provides information to verify the integrity of stored model artifacts.

Docker container image configuration object for the model explainability job.

The configuration for a baseline model explainability job.

Inputs for the model explainability job.

Input object for the model.

The model latency threshold.

Part of the search expression. You can specify the name and value (domain, task, framework, framework version, and model).

One or more filters that search for the specified resource or resources in a search. All resource objects that satisfy the expression's condition are included in the search results.

A summary of the model metadata.

Contains metrics captured from a model.

A versioned model that can be deployed for SageMaker inference.

Describes the Docker container for the model package.

A group of versioned models in the model registry.

Summary information about a model group.

Specifies the validation and image scan statuses of the model package.

Represents the overall status of a model package.

Provides summary information about a model package.

Contains data, such as the inputs and targeted instance types that are used in the process of validating the model package.

Specifies batch transform jobs that Amazon SageMaker runs to validate your model package.

Model quality statistics and constraints.

Container image configuration object for the monitoring job.

Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.

The input for the model quality monitoring job. Currently, endpoints are supported as input for model quality monitoring jobs.

Metadata for Model steps.

Provides summary information about a model.

Container image configuration object for the monitoring job.

Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.

Configuration for the cluster used to run model monitoring jobs.

The constraints resource for a monitoring job.

Summary of information about the last monitoring job to run.

The ground truth labels for the dataset used for the monitoring job.

The inputs for a monitoring job.

Defines the monitoring job.

Summary information about a monitoring job.

The networking configuration for the monitoring job.

The output object for a monitoring job.

The output configuration for monitoring jobs.

Identifies the resources to deploy for a monitoring job.

Information about where and how you want to store the results of a monitoring job.

A schedule for a model monitoring job. For information about model monitor, see Amazon SageMaker Model Monitor.

Configures the monitoring schedule and defines the monitoring job.

Summarizes the monitoring schedule.

The statistics resource for a monitoring job.

A time limit for how long the monitoring job is allowed to run before stopping.

Specifies additional configuration for hosting multi-model endpoints.

The VpcConfig configuration object that specifies the VPC that you want the compilation jobs to connect to. For more information on controlling access to your Amazon S3 buckets used for a compilation job, see Give Amazon SageMaker Compilation Jobs Access to Resources in Your Amazon VPC.

A list of nested Filter objects. A resource must satisfy the conditions of all filters to be included in the results returned from the Search API.

Networking options for a job, such as network traffic encryption between containers, whether to allow inbound and outbound network calls to and from containers, and the VPC subnets and security groups to use for VPC-enabled jobs.

Provides a summary of a notebook instance lifecycle configuration.

Contains the notebook instance lifecycle configuration script.

Provides summary information for an Amazon SageMaker notebook instance.

Configures Amazon SNS notifications of available or expiring work items for work teams.

Specifies the number of training jobs that this hyperparameter tuning job launched, categorized by the status of their objective metric. The objective metric status shows whether the final objective metric for the training job has been evaluated by the tuning job and used in the hyperparameter tuning process.

The configuration of an OfflineStore.

The status of OfflineStore.

Use this parameter to configure your OIDC Identity Provider (IdP).

Your OIDC IdP workforce configuration.

A list of user groups that exist in your OIDC Identity Provider (IdP). One to ten groups can be used to create a single private work team. When you add a user group to the list of Groups, you can add that user group to one or more private work teams. If you add a user group to a private work team, all workers in that user group are added to the work team.

Use this to specify the Amazon Web Services Key Management Service (KMS) Key ID, or KMSKeyId, for at-rest data encryption. You can turn the OnlineStore on or off by specifying the EnableOnlineStore flag; the default value is False.

The security configuration for OnlineStore.

Contains information about the output location for the compiled model and the target device that the model runs on. TargetDevice and TargetPlatform are mutually exclusive, so you need to choose one of the two to specify your target device or platform. If you cannot find the device you want to use in the TargetDevice list, use TargetPlatform to describe the platform of your edge device, and CompilerOptions if there are specific settings that are required or recommended for that particular TargetPlatform.

Provides information about how to store model training results (model artifacts).

An output parameter of a pipeline step.

Configuration that controls the parallelism of the pipeline. By default, the parallelism configuration specified applies to all executions of the pipeline unless overridden.

Assigns a value to a named Pipeline parameter.

Defines the possible values for categorical, continuous, and integer hyperparameters to be used by an algorithm.

Specifies ranges of integer, continuous, and categorical hyperparameters that a hyperparameter tuning job searches. The hyperparameter tuning job launches training jobs with hyperparameter values within these ranges to find the combination of values that result in the training job with the best performance as measured by the objective metric of the hyperparameter tuning job.

The trial that a trial component is associated with and the experiment the trial is part of. A component might not be associated with a trial. A component can be associated with multiple trials.

A previously completed or stopped hyperparameter tuning job to be used as a starting point for a new hyperparameter tuning job.

The summary of an in-progress deployment when an endpoint is creating or updating with a new endpoint configuration.

The production variant summary for a deployment when an endpoint is creating or updating with the CreateEndpoint or UpdateEndpoint operations. Describes the VariantStatus, weight, and capacity for a production variant associated with an endpoint.

Defines the traffic pattern.

A SageMaker Model Building Pipeline instance.

The location of the pipeline definition stored in Amazon S3.

An execution of a pipeline.

An execution of a step in a pipeline.

Metadata for a step execution.

A pipeline execution summary.

Specifies the names of the experiment and trial created by a pipeline.

A summary of a pipeline.

Configuration for the cluster used to run a processing job.

Configuration for processing job outputs in Amazon SageMaker Feature Store.

The inputs for a processing job. The processing input must specify exactly one of either S3Input or DatasetDefinition types.

An Amazon SageMaker processing job that is used to analyze data and evaluate models. For more information, see Process Data and Evaluate Models.

Metadata for a processing job step.

Summary of information about a processing job.

Describes the results of a processing job. The processing output must specify exactly one of either S3Output or FeatureStoreOutput types.

Configuration for uploading output from the processing container.

Identifies the resources, ML compute instances, and ML storage volumes to deploy for a processing job. In distributed training, you specify more than one instance.

Configuration for downloading input data from Amazon S3 into the processing container.

Configuration for uploading output data to Amazon S3 from the processing container.

Configures conditions under which the processing job should be stopped, such as how long the processing job has been running. After the condition is met, the processing job is stopped.

Identifies a model that you want to host and the resources chosen to deploy for hosting it. If you are deploying multiple models, tell Amazon SageMaker how to distribute traffic among the models by specifying variant weights.
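
A sketch of two weighted variants splitting traffic 90/10; the instance-type enum variant and setter names are assumptions and should be checked against the generated docs:

```rust
use aws_sdk_sagemaker::model::{ProductionVariant, ProductionVariantInstanceType};

// Sketch: route 90% of traffic to model-a and 10% to model-b.
fn weighted_variants() -> Vec<ProductionVariant> {
    let variant = |name: &str, model: &str, weight: f32| {
        ProductionVariant::builder()
            .variant_name(name)
            .model_name(model)
            .instance_type(ProductionVariantInstanceType::MlM5Large)
            .initial_instance_count(1)
            .initial_variant_weight(weight)
            .build()
    };
    vec![
        variant("Primary", "model-a", 0.9),
        variant("Shadow", "model-b", 0.1),
    ]
}
```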

Specifies configuration for a core dump from the model container when the process crashes.

Describes the status of the production variant.

Describes weight and capacities for a production variant associated with an endpoint. If you sent a request to the UpdateEndpointWeightsAndCapacities API and the endpoint status is Updating, you get different desired and current values.

Configuration information for Debugger system monitoring, framework profiling, and storage paths.

Configuration information for updating the Debugger profile parameters, system and framework metrics configurations, and storage paths.

Configuration information for profiling rules.

Information about the status of the rule evaluation.

The properties of a project as returned by the Search API.

Information about a project.

Part of the SuggestionQuery type. Specifies a hint for retrieving property names that begin with the specified text.

A property name returned from a GetSearchSuggestions call that specifies a value in the PropertyNameQuery field.

A key value pair used when you provision a project as a service catalog product. For information, see What is Amazon Web Services Service Catalog.

Defines the amount of money paid to an Amazon Mechanical Turk worker for each task performed.

Container for the metadata for a Quality check step. For more information, see the topic on QualityCheck step in the Amazon SageMaker Developer Guide.

A set of filters to narrow the set of lineage entities connected to the StartArn(s) returned by the QueryLineage API action.

A collection of settings that apply to an RSessionGateway app.

A collection of settings that configure user interaction with the RStudioServerPro app. RStudioServerProAppSettings cannot be updated. The RStudioServerPro app must be deleted and a new one created to make any changes.

A collection of settings that configure the RStudioServerPro Domain-level app.

A collection of settings that update the current configuration for the RStudioServerPro Domain-level app.

The input configuration of the recommendation job.

Specifies the maximum number of jobs that can run in parallel and the maximum number of jobs that can run.

Specifies conditions for stopping a job. When a job reaches a stopping condition limit, SageMaker ends the job.

The metrics of recommendations.

Configuration for Redshift Dataset Definition input.

Metadata for a register model job step.

Contains input values for a task.

A description of an error that occurred while rendering the template.

Specifies an authentication configuration for the private Docker registry where your model image is hosted. Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field of the ImageConfig object that you passed to a call to CreateModel, and the private Docker registry where the model image is hosted requires authentication.

The resolved attributes.

Describes the resources, including ML compute instances and ML storage volumes, to use for model training.

Specifies the maximum number of training jobs and parallel training jobs that a hyperparameter tuning job can launch.

Specifies the ARN's of a SageMaker image and SageMaker image version, and the instance type that the version runs on.

The retention policy for data stored on an Amazon Elastic File System (EFS) volume.

The retry strategy to use when a training job fails due to an InternalServerError. RetryStrategy is specified as part of the CreateTrainingJob and CreateHyperParameterTuningJob requests. You can add the StoppingCondition parameter to the request to limit the training time for the complete job.
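
A minimal sketch, assuming the builder exposes the single MaximumRetryAttempts field:

```rust
use aws_sdk_sagemaker::model::RetryStrategy;

// Sketch: retry the training job up to 2 more times on InternalServerError.
fn retry_twice() -> RetryStrategy {
    RetryStrategy::builder().maximum_retry_attempts(2).build()
}
```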

Describes the S3 data source.

The Amazon Simple Storage Service (Amazon S3) location and security configuration for the OfflineStore.

Configuration details about the monitoring schedule.

A multi-expression that searches for the specified resource or resources in a search. All resource objects that satisfy the expression's condition are included in the search results. You must specify at least one subexpression, filter, or nested filter. A SearchExpression can contain up to twenty elements.
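
A hedged sketch combining two filters with AND (builder and enum names assumed):

```rust
use aws_sdk_sagemaker::model::{BooleanOperator, Filter, Operator, SearchExpression};

// Sketch: completed training jobs whose name contains "xgboost".
// The `filters` setter is assumed to append one Filter per call.
fn completed_xgboost_jobs() -> SearchExpression {
    SearchExpression::builder()
        .filters(
            Filter::builder()
                .name("TrainingJobName")
                .operator(Operator::Contains)
                .value("xgboost")
                .build(),
        )
        .filters(
            Filter::builder()
                .name("TrainingJobStatus")
                .operator(Operator::Equals)
                .value("Completed")
                .build(),
        )
        .operator(BooleanOperator::And)
        .build()
}
```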

A single resource returned as part of the Search API response.

An array element of DescribeTrainingJobResponse$SecondaryStatusTransitions. It provides additional details about a status that the training job has transitioned through. A training job can be in one of several states, for example, starting, downloading, training, or uploading. Within each state, there are a number of intermediate states. For example, within the starting state, Amazon SageMaker could be starting the training job or launching the ML instances. These transitional states are referred to as the job's secondary status.

Details of a provisioned service catalog product. For information about service catalog, see What is Amazon Web Services Service Catalog.

Details that you specify to provision a service catalog product. For information about service catalog, see What is Amazon Web Services Service Catalog.

Details that you specify to provision a service catalog product. For information about service catalog, see What is Amazon Web Services Service Catalog.

Specifies options for sharing SageMaker Studio notebooks. These settings are specified as part of DefaultUserSettings when the CreateDomain API is called, and as part of UserSettings when the CreateUserProfile API is called. When SharingSettings is not specified, notebook sharing isn't allowed.

A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, the results of the S3 key prefix matches are shuffled. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value.
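
A one-line sketch with a fixed seed so the shuffle order is reproducible across runs:

```rust
use aws_sdk_sagemaker::model::ShuffleConfig;

// Sketch: deterministic shuffling via a fixed seed.
fn reproducible_shuffle() -> ShuffleConfig {
    ShuffleConfig::builder().seed(42).build()
}
```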

Specifies an algorithm that was used to create the model package. The algorithm must be either an algorithm resource in your Amazon SageMaker account or an algorithm in Amazon Web Services Marketplace that you are subscribed to.

A list of algorithms that were used to create a model package.

A list of IP address ranges (CIDRs). Used to create an allow list of IP addresses for a private workforce. Workers will only be able to log in to their worker portal from an IP address within this range. By default, a workforce isn't restricted to specific IP addresses.

Specifies a limit to how long a model training job or model compilation job can run. It also specifies how long a managed spot training job has to complete. When the job reaches the time limit, Amazon SageMaker ends the training or compilation job. Use this API to cap model training costs.
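
A sketch for managed spot training: cap runtime at one hour and allow up to two hours total, including time spent waiting for spot capacity (setter names assumed):

```rust
use aws_sdk_sagemaker::model::StoppingCondition;

// Sketch: 1 hour of training time, 2 hours including spot waiting time.
fn spot_training_limits() -> StoppingCondition {
    StoppingCondition::builder()
        .max_runtime_in_seconds(3_600)
        .max_wait_time_in_seconds(7_200)
        .build()
}
```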

Details of the Studio Lifecycle Configuration.

Describes a work team of a vendor that does the labeling job.

Specified in the GetSearchSuggestions request. Limits the property names that are included in the response.

A tag object that consists of a key and an optional value, used to manage metadata for SageMaker Amazon Web Services resources.
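
A minimal sketch of constructing a tag:

```rust
use aws_sdk_sagemaker::model::Tag;

// Sketch: a simple key/value tag (values here are made up).
fn project_tag() -> Tag {
    Tag::builder().key("project").value("churn-model").build()
}
```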

Contains information about a target platform that you want your model to run on, such as OS, architecture, and accelerators. It is an alternative to TargetDevice.

The TensorBoard app settings.

Configuration of storage locations for the Debugger TensorBoard output data.

Defines the traffic pattern of the load test.

Defines the traffic routing strategy during an endpoint deployment to shift traffic from the old fleet to the new fleet.

Contains information about a training job.

Defines the input needed to run a training job using the algorithm.

The numbers of training jobs launched by a hyperparameter tuning job, categorized by status.

Metadata for a training job step.

Provides summary information about a training job.

Defines how the algorithm is used for a training job.

Describes the location of the channel data.

Describes the input source of a transform job and the way the transform job consumes it.

A batch transform job. For information about SageMaker batch transform, see Use Batch Transform.

Defines the input needed to run a transform job using the inference specification specified in the algorithm.

Metadata for a transform job step.

Provides a summary of a transform job. Multiple TransformJobSummary objects are returned as a list in response to a ListTransformJobs call.

Describes the results of a transform job.

Describes the resources, including ML instance types and ML instance count, to use for a transform job.

Describes the S3 data source.

The properties of a trial as returned by the Search API.

The properties of a trial component as returned by the Search API.

Represents an input or output artifact of a trial component. You specify TrialComponentArtifact as part of the InputArtifacts and OutputArtifacts parameters in the CreateTrialComponent request.

A summary of the metrics of a trial component.

A short summary of a trial component.

The Amazon Resource Name (ARN) and job type of the source of a trial component.

Detailed information about the source of a trial component. Either ProcessingJob or TrainingJob is returned.

The status of the trial component.

A summary of the properties of a trial component. To get all the properties, call the DescribeTrialComponent API and provide the TrialComponentName.

The source of the trial.

A summary of the properties of a trial. To get the complete set of properties, call the DescribeTrial API and provide the TrialName.

The job completion criteria.

Metadata for a tuning step.

Provides configuration information for the worker UI for a labeling job. Provide either HumanTaskUiArn or UiTemplateS3Uri.

The Liquid template for the worker user interface.

Container for user interface template information.

Represents an amount of money in United States dollars.

Information about the user who created or modified an experiment, trial, trial component, lineage group, or project.

The user profile details.

A collection of settings that apply to users of Amazon SageMaker Studio. These settings are specified when the CreateUserProfile API is called, and as DefaultUserSettings when the CreateDomain API is called.

Specifies a production variant property type for an Endpoint.

A lineage entity connected to the starting entity(ies).

Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud.
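
A sketch with one security group and two subnets; the list setters are assumed to append one item per call, and the IDs are placeholders:

```rust
use aws_sdk_sagemaker::model::VpcConfig;

// Sketch: attach training jobs or models to a VPC (placeholder IDs).
fn training_vpc() -> VpcConfig {
    VpcConfig::builder()
        .security_group_ids("sg-0123456789abcdef0")
        .subnets("subnet-0123456789abcdef0")
        .subnets("subnet-0fedcba9876543210")
        .build()
}
```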

A single private workforce, which is automatically created when you create your first private work team. You can create one private workforce in each Amazon Web Services Region. By default, any workforce-related API operation used in a specific region will apply to the workforce created in that region. To learn how to create a private workforce, see Create a Private Workforce.

Provides details about a labeling work team.

Enums

Note: ActionStatus::Unknown has been renamed to ::UnknownValue.

The compression used for Athena query results.

The data storage format for Athena query results.

The strategy hyperparameter tuning uses to find the best combination of hyperparameters for your model.

The compression used for Redshift query results.

The data storage format for Redshift query results.

The training input mode that the algorithm supports.

The value of a hyperparameter. Only one of NumberValue or StringValue can be specified.