Provides APIs for creating and managing Amazon SageMaker resources.
If you’re using the service, you’re probably looking for SageMakerClient and SageMaker.
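For example, a minimal sketch of constructing a client and listing endpoints might look like the following, assuming an async rusoto release (0.43 or later) with the Tokio runtime and rusoto's usual naming (SageMakerClient::new, list_endpoints, ListEndpointsInput):

    use rusoto_core::Region;
    use rusoto_sagemaker::{ListEndpointsInput, SageMaker, SageMakerClient};

    // Sketch only: method and field names assume rusoto's standard codegen naming.
    #[tokio::main]
    async fn main() {
        // Construct a client for a region; credentials come from the default provider chain.
        let client = SageMakerClient::new(Region::UsEast1);
        // List endpoints with default (empty) request parameters.
        match client.list_endpoints(ListEndpointsInput::default()).await {
            Ok(output) => {
                for endpoint in output.endpoints {
                    println!("{}", endpoint.endpoint_name);
                }
            }
            Err(err) => eprintln!("ListEndpoints failed: {}", err),
        }
    }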
Structs
A structure describing the source of an action.
Lists the properties of an action. An action represents an action or activity. Some examples are a workflow step and a model deployment. Generally, an action involves at least one input artifact or output artifact.
Edge Manager agent version.
This API is not supported.
Specifies the training algorithm to use in a CreateTrainingJob request.
For more information about algorithms provided by Amazon SageMaker, see Algorithms. For information about using your own algorithms, see Using Your Own Algorithms with Amazon SageMaker.
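As a hedged illustration, an AlgorithmSpecification for a CreateTrainingJob request might be built as follows; the ECR image URI is a placeholder, and the snake_case field names (training_image, training_input_mode, metric_definitions) assume rusoto's usual mapping of the API shape:

    use rusoto_sagemaker::{AlgorithmSpecification, MetricDefinition};

    fn main() {
        // Sketch only: the image URI is hypothetical; training_input_mode is typically "File" or "Pipe".
        let algorithm_spec = AlgorithmSpecification {
            training_image: Some("111122223333.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest".to_string()),
            training_input_mode: "File".to_string(),
            metric_definitions: Some(vec![MetricDefinition {
                name: "validation:accuracy".to_string(),
                regex: "accuracy=([0-9\\.]+)".to_string(),
            }]),
            ..Default::default()
        };
        println!("{:?}", algorithm_spec);
    }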
Specifies the validation and image scan statuses of the algorithm.
Represents the overall status of an algorithm.
Provides summary information about an algorithm.
Defines a training job and a batch transform job that Amazon SageMaker runs to validate your algorithm.
The data provided in the validation profile is made available to your buyers on AWS Marketplace.
Specifies configurations for one or more training jobs that Amazon SageMaker runs to test the algorithm.
Configures how labels are consolidated across human workers and processes output data.
Details about an Amazon SageMaker app.
The configuration for running a SageMaker image as a KernelGateway app.
Configuration to run a processing job in a specified container image.
A structure describing the source of an artifact.
The ID and ID type of an artifact source.
Lists a summary of the properties of an artifact. An artifact represents a URI addressable object or data. Some examples are a dataset and a model.
Lists a summary of the properties of an association. An association is an entity that links other lineage or experiment entities. An example would be an association between a training job and a model.
Configuration for Athena Dataset Definition input.
An Autopilot job returns recommendations, or candidates. Each candidate has further details about the steps involved and the status.
Information about the steps for a candidate and what step it is working on.
A channel is a named input source that training algorithms can consume. For more information, see .
A list of container definitions that describe the different containers that make up an AutoML candidate. For more information, see .
The data source for the Autopilot job.
The artifacts that are generated during an AutoML job.
How long a job is allowed to run, or how many candidates a job is allowed to generate.
A collection of settings used for an AutoML job.
Specifies a metric to minimize or maximize as the objective of a job.
Provides a summary about an AutoML job.
The output data configuration.
The reason for a partial failure of an AutoML job.
The Amazon S3 data source.
Security options.
Currently, the AutoRollbackConfig API is not supported.
Contains bias metrics for a model.
Currently, the BlueGreenUpdatePolicy API is not supported.
Details on the cache hit of a pipeline execution step.
Metadata about a callback step.
The location of artifacts for an AutoML candidate job.
The properties of an AutoML candidate job.
Currently, the CapacitySize API is not supported.
A list of categorical hyperparameters to tune.
Defines the possible values for a categorical hyperparameter.
A channel is a named input source that training algorithms can consume.
Defines a named input source, called a channel, to be used by an algorithm.
Contains information about the output location for managed spot training checkpoint data.
Specifies summary information about a Git repository.
Use this parameter to configure your Amazon Cognito workforce. A single Cognito workforce is created using and corresponds to a single Amazon Cognito user pool.
Identifies an Amazon Cognito user group. A user group can be used in one or more work teams.
Configuration information for the Debugger output tensor collections.
A summary of a model compilation job.
Metadata for a Condition step.
Describes the container, as part of model definition.
A structure describing the source of a context.
Lists a summary of the properties of a context. A context provides a logical grouping of other entities.
A list of continuous hyperparameters to tune.
Defines the possible values for a continuous hyperparameter.
A custom SageMaker image. For more information, see Bring your own SageMaker image.
The metadata of the Glue table which serves as data catalog for the OfflineStore.
The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. The input filter provided allows you to exclude input data that is not needed for inference in a batch transform job. The output filter provided allows you to include input data relevant to interpreting the predictions in the output from the job. For more information, see Associate Prediction Results with their Corresponding Input Records.
Information about the container that a data quality monitoring job runs.
Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
The input for the data quality monitoring job. Currently endpoints are supported for input.
Describes the location of the channel data.
Configuration for Dataset Definition inputs. The Dataset Definition input must specify exactly one of either AthenaDatasetDefinition or RedshiftDatasetDefinition types.
Configuration information for the Debugger hook parameters, metric and tensor collections, and storage paths. To learn more about how to configure the DebugHookConfig parameter, see Use the SageMaker and Debugger Configuration API Operations to Create, Update, and Debug Your Training Job.
Configuration information for SageMaker Debugger rules for debugging. To learn more about how to configure the DebugRuleConfiguration parameter, see Use the SageMaker and Debugger Configuration API Operations to Create, Update, and Debug Your Training Job.
Information about the status of the rule evaluation.
Gets the Amazon EC2 Container Registry path of the docker image of the model that is hosted in this ProductionVariant.
If you used the registry/repository[:tag] form to specify the image path of the primary container when you created the model hosted in this ProductionVariant, the path resolves to a path of the form registry/repository[@digest]. A digest is a hash value that identifies a specific version of an image. For information about Amazon ECR paths, see Pulling an Image in the Amazon ECR User Guide.
Currently, the DeploymentConfig API is not supported.
Specifies weight and capacity values for a production variant.
Information of a particular device.
Summary of the device fleet.
Status of devices.
Summary of the device.
The domain's details.
The model on the edge device.
Status of edge devices with this model.
Summary of model on edge device.
The output configuration.
Summary of edge packaging job.
The output of a SageMaker Edge Manager deployable resource.
A hosted endpoint for real-time inference.
Provides summary information for an endpoint configuration.
Input object for the endpoint
Provides summary information for an endpoint.
The properties of an experiment as returned by the Search API.
Associates a SageMaker job as a trial component with an experiment and trial. Specified when you call the following APIs:
The source of the experiment.
A summary of the properties of an experiment. To get the complete set of properties, call the DescribeExperiment API and provide the ExperimentName.
Contains explainability metrics for a model.
A list of features. You must include FeatureName and FeatureType. Valid FeatureTypes are Integral, Fractional, and String.
Amazon SageMaker Feature Store stores features in a collection called Feature Group. A Feature Group can be visualized as a table which has rows, with a unique identifier for each row where each column in the table is a feature. In principle, a Feature Group is composed of features and values per feature.
The name, Arn, CreationTime, FeatureGroup values, LastUpdatedTime, and EnableOnlineStorage status of a FeatureGroup.
The Amazon Elastic File System (EFS) storage configuration for a SageMaker image.
Specifies a file system data source for a channel.
A conditional statement for a search expression that includes a resource property, a Boolean operator, and a value. Resources that match the statement are returned in the results from the Search API.
If you specify a Value, but not an Operator, Amazon SageMaker uses the equals operator.
In search, there are several property types:
- Metrics: To define a metric filter, enter a value using the form "Metrics.<name>", where <name> is a metric name. For example, the following filter searches for training jobs with an "accuracy" metric greater than "0.9":
{"Name": "Metrics.accuracy", "Operator": "GreaterThan", "Value": "0.9"}
- HyperParameters: To define a hyperparameter filter, enter a value with the form "HyperParameters.<name>". Decimal hyperparameter values are treated as a decimal in a comparison if the specified Value is also a decimal value. If the specified Value is an integer, the decimal hyperparameter values are treated as integers. For example, the following filter is satisfied by training jobs with a "learningrate" hyperparameter that is less than "0.5":
{"Name": "HyperParameters.learningrate", "Operator": "LessThan", "Value": "0.5"}
- Tags: To define a tag filter, enter a value with the form Tags.<key>.
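As a hedged sketch, the "Metrics.accuracy" example above could be expressed with this crate's Filter struct, assuming rusoto's usual snake_case field names (name, operator, value):

    use rusoto_sagemaker::Filter;

    fn main() {
        // Sketch only: mirrors the JSON metric filter shown above.
        let accuracy_filter = Filter {
            name: "Metrics.accuracy".to_string(),
            operator: Some("GreaterThan".to_string()),
            value: Some("0.9".to_string()),
        };
        println!("{:?}", accuracy_filter);
    }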
The best candidate result from an AutoML training job.
Shows the final value for the objective metric for a training job that was launched by a hyperparameter tuning job. You define the objective metric in the HyperParameterTuningJobObjective parameter of HyperParameterTuningJobConfig.
Contains information about where human output will be stored.
Contains summary information about the flow definition.
Specifies configuration details for a Git repository in your AWS account.
Specifies configuration details for a Git repository when the repository is updated.
Defines under what conditions SageMaker creates a human loop. Used within . See for the required format of activation conditions.
Provides information about how and under what conditions SageMaker creates a human loop. If HumanLoopActivationConfig is not given, then all requests go to humans.
Describes the work to be performed by human workers.
Container for configuring the source of human task requests.
Information required for human workers to complete a labeling task.
Container for human task user interface information.
Specifies which training algorithm to use for training jobs that a hyperparameter tuning job launches and the metrics to monitor.
Defines a hyperparameter to be used by an algorithm.
Defines the training jobs launched by a hyperparameter tuning job.
Specifies summary information about a training job.
Configures a hyperparameter tuning job.
Defines the objective metric for a hyperparameter tuning job. Hyperparameter tuning uses the value of this metric to evaluate the training jobs it launches, and returns the training job that results in either the highest or lowest value for this metric, depending on the value you specify for the Type parameter.
Provides summary information about a hyperparameter tuning job.
Specifies the configuration for a hyperparameter tuning job that uses one or more previous hyperparameter tuning jobs as a starting point. The results of previous tuning jobs are used to inform which combinations of hyperparameters to search over in the new tuning job.
All training jobs launched by the new hyperparameter tuning job are evaluated by using the objective metric, and the training job that performs the best is compared to the best training jobs from the parent tuning jobs. From these, the training job that performs the best as measured by the objective metric is returned as the overall best training job.
All training jobs launched by parent hyperparameter tuning jobs and the new hyperparameter tuning jobs count against the limit of training jobs for the tuning job.
A SageMaker image. A SageMaker image represents a set of container images that are derived from a common base container image. Each of these container images is represented by a SageMaker ImageVersion.
Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC).
A version of a SageMaker Image. A version represents an existing container image.
Specifies details about how containers in a multi-container endpoint are run.
Defines how to perform inference generation after a training job is run.
Contains information about the location of input model artifacts, the name and shape of the expected data inputs, and the framework in which the model was trained.
For a hyperparameter of the integer type, specifies the range that a hyperparameter tuning job searches.
Defines the possible values for an integer hyperparameter.
The JupyterServer app settings.
The KernelGateway app settings.
The configuration for the file system and kernels in a SageMaker image running as a KernelGateway app.
The specification of a Jupyter kernel.
Provides a breakdown of the number of objects labeled.
Provides counts for human-labeled tasks in the labeling job.
Provides configuration information for auto-labeling of your data objects. A LabelingJobAlgorithmsConfig object must be supplied in order to use auto-labeling.
Attributes of the data specified by the customer. Use these to describe the data to be labeled.
Provides information about the location of input data.
You must specify at least one of the following: S3DataSource or SnsDataSource.
Use SnsDataSource to specify an SNS input topic for a streaming labeling job. If you do not specify an SNS input topic ARN, Ground Truth will create a one-time labeling job.
Use S3DataSource to specify an input manifest file for both streaming and one-time labeling jobs. Adding an S3DataSource is optional if you use SnsDataSource to create a streaming labeling job.
Provides summary information for a work team.
Input configuration information for a labeling job.
Specifies the location of the output produced by the labeling job.
Output configuration information for a labeling job.
Configure encryption on the storage volume attached to the ML compute instance used to run automated data labeling model training and inference.
The Amazon S3 location of the input data objects.
An Amazon SNS data source used for streaming labeling jobs.
A set of conditions for stopping a labeling job. If any of the conditions are met, the job is automatically stopped. You can use these conditions to control the cost of data labeling.
Labeling jobs fail after 30 days with an appropriate client error message.
Provides summary information about a labeling job.
Defines an Amazon Cognito or your own OIDC IdP user group that is part of a work team.
Metadata properties of the tracking entity, trial, or trial component.
The name, value, and date and time of a metric that was emitted to Amazon CloudWatch.
Specifies a metric that the training algorithm writes to stderr or stdout. Amazon SageMaker hyperparameter tuning captures all defined metrics. You specify one metric that a hyperparameter tuning job uses as its objective metric to choose the best training job.
Provides information about the location that is configured for storing model artifacts.
Model artifacts are the output that results from training a model, and typically consist of trained parameters, a model definition that describes how to compute inferences, and other metadata.
Docker container image configuration object for the model bias job.
The configuration for a baseline model bias job.
Inputs for the model bias job.
Configures the timeout and maximum number of retries for processing a transform job invocation.
Data quality constraints and statistics for a model.
Specifies how to generate the endpoint name for an automatic one-click Autopilot model deployment.
Provides information about the endpoint of the model deployment.
Provides information to verify the integrity of stored model artifacts.
Docker container image configuration object for the model explainability job.
The configuration for a baseline model explainability job.
Inputs for the model explainability job.
Contains metrics captured from a model.
A versioned model that can be deployed for SageMaker inference.
Describes the Docker container for the model package.
A group of versioned models in the model registry.
Summary information about a model group.
Specifies the validation and image scan statuses of the model package.
Represents the overall status of a model package.
Provides summary information about a model package.
Contains data, such as the inputs and targeted instance types that are used in the process of validating the model package.
The data provided in the validation profile is made available to your buyers on AWS Marketplace.
Specifies batch transform jobs that Amazon SageMaker runs to validate your model package.
Model quality statistics and constraints.
Container image configuration object for the monitoring job.
Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
The input for the model quality monitoring job. Currently, endpoints are supported for input for model quality monitoring jobs.
Metadata for Model steps.
Provides summary information about a model.
Container image configuration object for the monitoring job.
Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
Configuration for the cluster used to run model monitoring jobs.
The constraints resource for a monitoring job.
Summary of information about the last monitoring job to run.
The ground truth labels for the dataset used for the monitoring job.
The inputs for a monitoring job.
Defines the monitoring job.
Summary information about a monitoring job.
The networking configuration for the monitoring job.
The output object for a monitoring job.
The output configuration for monitoring jobs.
Identifies the resources to deploy for a monitoring job.
Information about where and how you want to store the results of a monitoring job.
A schedule for a model monitoring job. For information about model monitor, see Amazon SageMaker Model Monitor.
Configures the monitoring schedule and defines the monitoring job.
Summarizes the monitoring schedule.
The statistics resource for a monitoring job.
A time limit for how long the monitoring job is allowed to run before stopping.
Specifies additional configuration for hosting multi-model endpoints.
A list of nested Filter objects. A resource must satisfy the conditions of all filters to be included in the results returned from the Search API.
For example, to filter on a training job's InputDataConfig property with a specific channel name and S3Uri prefix, define the following filters:
- '{Name:"InputDataConfig.ChannelName", "Operator":"Equals", "Value":"train"}',
- '{Name:"InputDataConfig.DataSource.S3DataSource.S3Uri", "Operator":"Contains", "Value":"mybucket/catdata"}'
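A hedged sketch of the same two filters grouped under a NestedFilters value, assuming rusoto's usual field names (nested_property_name, filters):

    use rusoto_sagemaker::{Filter, NestedFilters};

    fn main() {
        // Sketch only: mirrors the two example filters listed above.
        let nested = NestedFilters {
            // The shared prefix of the properties being filtered on.
            nested_property_name: "InputDataConfig".to_string(),
            filters: vec![
                Filter {
                    name: "InputDataConfig.ChannelName".to_string(),
                    operator: Some("Equals".to_string()),
                    value: Some("train".to_string()),
                },
                Filter {
                    name: "InputDataConfig.DataSource.S3DataSource.S3Uri".to_string(),
                    operator: Some("Contains".to_string()),
                    value: Some("mybucket/catdata".to_string()),
                },
            ],
        };
        println!("{:?}", nested);
    }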
Networking options for a job, such as network traffic encryption between containers, whether to allow inbound and outbound network calls to and from containers, and the VPC subnets and security groups to use for VPC-enabled jobs.
Provides a summary of a notebook instance lifecycle configuration.
Contains the notebook instance lifecycle configuration script.
Each lifecycle configuration script has a limit of 16384 characters.
The value of the $PATH environment variable that is available to both scripts is /sbin:bin:/usr/sbin:/usr/bin.
View CloudWatch Logs for notebook instance lifecycle configurations in log group /aws/sagemaker/NotebookInstances in log stream [notebook-instance-name]/[LifecycleConfigHook].
Lifecycle configuration scripts cannot run for longer than 5 minutes. If a script runs for longer than 5 minutes, it fails and the notebook instance is not created or started.
For information about notebook instance lifecycle configurations, see Step 2.1: (Optional) Customize a Notebook Instance.
Provides summary information for an Amazon SageMaker notebook instance.
Configures SNS notifications of available or expiring work items for work teams.
Specifies the number of training jobs that this hyperparameter tuning job launched, categorized by the status of their objective metric. The objective metric status shows whether the final objective metric for the training job has been evaluated by the tuning job and used in the hyperparameter tuning process.
The configuration of an OfflineStore.
Provide an OfflineStoreConfig in a request to CreateFeatureGroup to create an OfflineStore.
To encrypt an OfflineStore using at rest data encryption, specify AWS Key Management Service (KMS) key ID, or KMSKeyId, in S3StorageConfig.
The status of OfflineStore.
Use this parameter to configure your OIDC Identity Provider (IdP).
Your OIDC IdP workforce configuration.
A list of user groups that exist in your OIDC Identity Provider (IdP). One to ten groups can be used to create a single private work team. When you add a user group to the list of Groups, you can add that user group to one or more private work teams. If you add a user group to a private work team, all workers in that user group are added to the work team.
Use this to specify the AWS Key Management Service (KMS) Key ID, or KMSKeyId, for at rest data encryption. You can turn OnlineStore on or off by specifying the EnableOnlineStore flag; the default value is False.
The security configuration for OnlineStore.
Contains information about the output location for the compiled model and the target device that the model runs on.
TargetDevice and TargetPlatform are mutually exclusive, so you need to choose one of the two to specify your target device or platform. If you cannot find the device you want to use in the TargetDevice list, use TargetPlatform to describe the platform of your edge device, and CompilerOptions if there are specific settings that are required or recommended for a particular TargetPlatform.
Provides information about how to store model training results (model artifacts).
An output parameter of a pipeline step.
Assigns a value to a named Pipeline parameter.
Defines the possible values for categorical, continuous, and integer hyperparameters to be used by an algorithm.
Specifies ranges of integer, continuous, and categorical hyperparameters that a hyperparameter tuning job searches. The hyperparameter tuning job launches training jobs with hyperparameter values within these ranges to find the combination of values that result in the training job with the best performance as measured by the objective metric of the hyperparameter tuning job.
You can specify a maximum of 20 hyperparameters that a hyperparameter tuning job can search over. Every possible value of a categorical parameter range counts against this limit.
The trial that a trial component is associated with and the experiment the trial is part of. A component might not be associated with a trial. A component can be associated with multiple trials.
A previously completed or stopped hyperparameter tuning job to be used as a starting point for a new hyperparameter tuning job.
A SageMaker Model Building Pipeline instance.
An execution of a pipeline.
An execution of a step in a pipeline.
Metadata for a step execution.
A pipeline execution summary.
Specifies the names of the experiment and trial created by a pipeline.
A summary of a pipeline.
Configuration for the cluster used to run a processing job.
Configuration for processing job outputs in Amazon SageMaker Feature Store.
The inputs for a processing job. The processing input must specify exactly one of either S3Input or DatasetDefinition types.
An Amazon SageMaker processing job that is used to analyze data and evaluate models. For more information, see Process Data and Evaluate Models.
Metadata for a processing job step.
Summary of information about a processing job.
Describes the results of a processing job. The processing output must specify exactly one of either S3Output or FeatureStoreOutput types.
Configuration for uploading output from the processing container.
Identifies the resources, ML compute instances, and ML storage volumes to deploy for a processing job. In distributed training, you specify more than one instance.
Configuration for downloading input data from Amazon S3 into the processing container.
Configuration for uploading output data to Amazon S3 from the processing container.
Configures conditions under which the processing job should be stopped, such as how long the processing job has been running. After the condition is met, the processing job is stopped.
Identifies a model that you want to host and the resources chosen to deploy for hosting it. If you are deploying multiple models, tell Amazon SageMaker how to distribute traffic among the models by specifying variant weights.
Specifies configuration for a core dump from the model container when the process crashes.
Describes weight and capacities for a production variant associated with an endpoint. If you sent a request to the UpdateEndpointWeightsAndCapacities API and the endpoint status is Updating, you get different desired and current values.
Configuration information for Debugger system monitoring, framework profiling, and storage paths.
Configuration information for updating the Debugger profile parameters, system and framework metrics configurations, and storage paths.
Configuration information for profiling rules.
Information about the status of the rule evaluation.
Information about a project.
Part of the SuggestionQuery type. Specifies a hint for retrieving property names that begin with the specified text.
A property name returned from a GetSearchSuggestions call that specifies a value in the PropertyNameQuery field.
A key value pair used when you provision a project as a service catalog product. For information, see What is AWS Service Catalog.
Defines the amount of money paid to an Amazon Mechanical Turk worker for each task performed.
Use one of the following prices for bounding box tasks. Prices are in US dollars and should be based on the complexity of the task; the longer it takes in your initial testing, the more you should offer.
0.036, 0.048, 0.060, 0.072, 0.120, 0.240, 0.360, 0.480, 0.600, 0.720, 0.840, 0.960, 1.080, 1.200
Use one of the following prices for image classification, text classification, and custom tasks. Prices are in US dollars.
0.012, 0.024, 0.036, 0.048, 0.060, 0.072, 0.120, 0.240, 0.360, 0.480, 0.600, 0.720, 0.840, 0.960, 1.080, 1.200
Use one of the following prices for semantic segmentation tasks. Prices are in US dollars.
0.840, 0.960, 1.080, 1.200
Use one of the following prices for Textract AnalyzeDocument Important Form Key Amazon Augmented AI review tasks. Prices are in US dollars.
2.400, 2.280, 2.160, 2.040, 1.920, 1.800, 1.680, 1.560, 1.440, 1.320, 1.200, 1.080, 0.960, 0.840, 0.720, 0.600, 0.480, 0.360, 0.240, 0.120, 0.072, 0.060, 0.048, 0.036, 0.024, 0.012
Use one of the following prices for Rekognition DetectModerationLabels Amazon Augmented AI review tasks. Prices are in US dollars.
1.200, 1.080, 0.960, 0.840, 0.720, 0.600, 0.480, 0.360, 0.240, 0.120, 0.072, 0.060, 0.048, 0.036, 0.024, 0.012
Use one of the following prices for Amazon Augmented AI custom human review tasks. Prices are in US dollars.
1.200, 1.080, 0.960, 0.840, 0.720, 0.600, 0.480, 0.360, 0.240, 0.120, 0.072, 0.060, 0.048, 0.036, 0.024, 0.012
Configuration for Redshift Dataset Definition input.
Metadata for a register model job step.
Contains input values for a task.
A description of an error that occurred while rendering the template.
Specifies an authentication configuration for the private Docker registry where your model image is hosted. Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field of the ImageConfig object that you passed to a call to CreateModel and the private Docker registry where the model image is hosted requires authentication.
The resolved attributes.
Describes the resources, including ML compute instances and ML storage volumes, to use for model training.
Specifies the maximum number of training jobs and parallel training jobs that a hyperparameter tuning job can launch.
Specifies the ARN's of a SageMaker image and SageMaker image version, and the instance type that the version runs on.
The retention policy for data stored on an Amazon Elastic File System (EFS) volume.
The retry strategy to use when a training job fails due to an InternalServerError. RetryStrategy is specified as part of the CreateTrainingJob and CreateHyperParameterTuningJob requests. You can add the StoppingCondition parameter to the request to limit the training time for the complete job.
Describes the S3 data source.
The Amazon Simple Storage Service (Amazon S3) location and security configuration for OfflineStore.
- A client for the SageMaker API.
Configuration details about the monitoring schedule.
A multi-expression that searches for the specified resource or resources in a search. All resource objects that satisfy the expression's condition are included in the search results. You must specify at least one subexpression, filter, or nested filter. A SearchExpression can contain up to twenty elements.
A SearchExpression contains the following components:
- A list of Filter objects. Each filter defines a simple Boolean expression comprised of a resource property name, Boolean operator, and value.
- A list of NestedFilter objects. Each nested filter defines a list of Boolean expressions using a list of resource properties. A nested filter is satisfied if a single object in the list satisfies all Boolean expressions.
- A list of SearchExpression objects. A search expression object can be nested in a list of search expression objects.
- A Boolean operator: And or Or.
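A hedged sketch of building a SearchExpression and running a Search over training jobs follows; the filter values are illustrative, and the field and method names (resource, search_expression, filters, operator, search) assume rusoto's usual mapping of the Search API, with an async rusoto release and the Tokio runtime:

    use rusoto_core::Region;
    use rusoto_sagemaker::{Filter, SageMaker, SageMakerClient, SearchExpression, SearchRequest};

    // Sketch only: names assume rusoto's standard codegen naming.
    #[tokio::main]
    async fn main() {
        let client = SageMakerClient::new(Region::UsEast1);
        // Two filters combined with And: completed training jobs with accuracy above 0.9.
        let expression = SearchExpression {
            filters: Some(vec![
                Filter {
                    name: "TrainingJobStatus".to_string(),
                    operator: Some("Equals".to_string()),
                    value: Some("Completed".to_string()),
                },
                Filter {
                    name: "Metrics.accuracy".to_string(),
                    operator: Some("GreaterThan".to_string()),
                    value: Some("0.9".to_string()),
                },
            ]),
            operator: Some("And".to_string()),
            ..Default::default()
        };
        let request = SearchRequest {
            resource: "TrainingJob".to_string(),
            search_expression: Some(expression),
            ..Default::default()
        };
        match client.search(request).await {
            Ok(response) => {
                let records = response.results.unwrap_or_default();
                println!("matched {} records", records.len());
            }
            Err(err) => eprintln!("Search failed: {}", err),
        }
    }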
A single resource returned as part of the Search API response.
An array element of DescribeTrainingJobResponse$SecondaryStatusTransitions. It provides additional details about a status that the training job has transitioned through. A training job can be in one of several states, for example, starting, downloading, training, or uploading. Within each state, there are a number of intermediate states. For example, within the starting state, Amazon SageMaker could be starting the training job or launching the ML instances. These transitional states are referred to as the job's secondary status.
Details of a provisioned service catalog product. For information about service catalog, see What is AWS Service Catalog.
Details that you specify to provision a service catalog product. For information about service catalog, see What is AWS Service Catalog.
Specifies options for sharing SageMaker Studio notebooks. These settings are specified as part of DefaultUserSettings when the CreateDomain API is called, and as part of UserSettings when the CreateUserProfile API is called. When SharingSettings is not specified, notebook sharing isn't allowed.
A configuration for a shuffle option for input data in a channel. If you use S3Prefix for S3DataType, the results of the S3 key prefix matches are shuffled. If you use ManifestFile, the order of the S3 object references in the ManifestFile is shuffled. If you use AugmentedManifestFile, the order of the JSON lines in the AugmentedManifestFile is shuffled. The shuffling order is determined using the Seed value.
For Pipe input mode, when ShuffleConfig is specified, shuffling is done at the start of every epoch. With large datasets, this ensures that the order of the training data is different for each epoch, and it helps reduce bias and possible overfitting. In a multi-node training job, when ShuffleConfig is combined with S3DataDistributionType of ShardedByS3Key, the data is shuffled across nodes so that the content sent to a particular node on the first epoch might be sent to a different node on the second epoch.
Specifies an algorithm that was used to create the model package. The algorithm must be either an algorithm resource in your Amazon SageMaker account or an algorithm in AWS Marketplace that you are subscribed to.
A list of algorithms that were used to create a model package.
A list of IP address ranges (CIDRs). Used to create an allow list of IP addresses for a private workforce. Workers will only be able to log in to their worker portal from an IP address within this range. By default, a workforce isn't restricted to specific IP addresses.
Specifies a limit to how long a model training job, model compilation job, or hyperparameter tuning job can run. It also specifies how long a managed Spot training job has to complete. When the job reaches the time limit, Amazon SageMaker ends the training or compilation job. Use this API to cap model training costs.
To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.
The training algorithms provided by Amazon SageMaker automatically save the intermediate results of a model training job when possible. This attempt to save artifacts is only a best-effort case, as the model might not be in a state from which it can be saved. For example, if training has just started, the model might not be ready to save. When saved, this intermediate data is a valid model artifact. You can use it to create a model with CreateModel.
The Neural Topic Model (NTM) currently does not support saving intermediate model artifacts. When training NTMs, make sure that the maximum runtime is sufficient for the training job to complete.
Describes a work team of a vendor that does the labeling job.
Specified in the GetSearchSuggestions request. Limits the property names that are included in the response.
A tag object that consists of a key and an optional value, used to manage metadata for Amazon SageMaker AWS resources.
You can add tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform jobs, models, labeling jobs, work teams, endpoint configurations, and endpoints. For more information on adding tags to Amazon SageMaker resources, see AddTags.
For more information on adding metadata to your AWS resources with tagging, see Tagging AWS resources. For advice on best practices for managing AWS resources with tagging, see Tagging Best Practices: Implement an Effective AWS Resource Tagging Strategy.
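A hedged sketch of adding a tag through AddTags; the tag values are placeholders, and the field and method names (resource_arn, tags, key, value, add_tags) assume rusoto's usual mapping:

    use rusoto_sagemaker::{AddTagsInput, SageMaker, Tag};

    // Sketch only: attach a single tag to a SageMaker resource identified by its ARN.
    async fn tag_resource<S: SageMaker>(client: &S, resource_arn: &str) {
        let input = AddTagsInput {
            resource_arn: resource_arn.to_string(),
            tags: vec![Tag {
                // Hypothetical key/value used for illustration.
                key: "team".to_string(),
                value: "ml-platform".to_string(),
            }],
        };
        if let Err(err) = client.add_tags(input).await {
            eprintln!("AddTags failed: {}", err);
        }
    }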
Contains information about a target platform that you want your model to run on, such as OS, architecture, and accelerators. It is an alternative to TargetDevice.
The TensorBoard app settings.
Configuration of storage locations for the Debugger TensorBoard output data.
Currently, the TrafficRoutingConfig API is not supported.
Contains information about a training job.
Defines the input needed to run a training job using the algorithm.
The numbers of training jobs launched by a hyperparameter tuning job, categorized by status.
Metadata for a training job step.
Provides summary information about a training job.
Defines how the algorithm is used for a training job.
Describes the location of the channel data.
Describes the input source of a transform job and the way the transform job consumes it.
A batch transform job. For information about SageMaker batch transform, see Use Batch Transform.
Defines the input needed to run a transform job using the inference specification specified in the algorithm.
Metadata for a transform job step.
Provides a summary of a transform job. Multiple TransformJobSummary objects are returned as a list in response to a ListTransformJobs call.
Describes the results of a transform job.
Describes the resources, including ML instance types and ML instance count, to use for a transform job.
Describes the S3 data source.
The properties of a trial as returned by the Search API.
The properties of a trial component as returned by the Search API.
Represents an input or output artifact of a trial component. You specify TrialComponentArtifact as part of the InputArtifacts and OutputArtifacts parameters in the CreateTrialComponent request.
Examples of input artifacts are datasets, algorithms, hyperparameters, source code, and instance types. Examples of output artifacts are metrics, snapshots, logs, and images.
A summary of the metrics of a trial component.
The value of a hyperparameter. Only one of NumberValue or StringValue can be specified.
This object is specified in the CreateTrialComponent request.
A short summary of a trial component.
The Amazon Resource Name (ARN) and job type of the source of a trial component.
Detailed information about the source of a trial component. Either ProcessingJob or TrainingJob is returned.
The status of the trial component.
A summary of the properties of a trial component. To get all the properties, call the DescribeTrialComponent API and provide the TrialComponentName.
The source of the trial.
A summary of the properties of a trial. To get the complete set of properties, call the DescribeTrial API and provide the TrialName.
The job completion criteria.
Represents an amount of money in United States dollars.
Provides configuration information for the worker UI for a labeling job.
The Liquid template for the worker user interface.
Container for user interface template information.
Information about the user who created or modified an experiment, trial, or trial component.
The user profile details.
A collection of settings that apply to users of Amazon SageMaker Studio. These settings are specified when the CreateUserProfile API is called, and as DefaultUserSettings when the CreateDomain API is called.
SecurityGroups is aggregated when specified in both calls. For all other settings in UserSettings, the values specified in CreateUserProfile take precedence over those specified in CreateDomain.
Specifies a production variant property type for an Endpoint.
If you are updating an endpoint with the UpdateEndpointInput$RetainAllVariantProperties option set to true, the VariantProperty objects listed in UpdateEndpointInput$ExcludeRetainedVariantProperties override the existing variant properties of the endpoint.
Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud.
A single private workforce, which is automatically created when you create your first private work team. You can create one private workforce in each AWS Region. By default, any workforce-related API operation used in a specific region will apply to the workforce created in that region. To learn how to create a private workforce, see Create a Private Workforce.
Provides details about a labeling work team.
Enums
- Errors returned by AddAssociation
- Errors returned by AddTags
- Errors returned by AssociateTrialComponent
- Errors returned by CreateAction
- Errors returned by CreateAlgorithm
- Errors returned by CreateApp
- Errors returned by CreateAppImageConfig
- Errors returned by CreateArtifact
- Errors returned by CreateAutoMLJob
- Errors returned by CreateCodeRepository
- Errors returned by CreateCompilationJob
- Errors returned by CreateContext
- Errors returned by CreateDataQualityJobDefinition
- Errors returned by CreateDeviceFleet
- Errors returned by CreateDomain
- Errors returned by CreateEdgePackagingJob
- Errors returned by CreateEndpointConfig
- Errors returned by CreateEndpoint
- Errors returned by CreateExperiment
- Errors returned by CreateFeatureGroup
- Errors returned by CreateFlowDefinition
- Errors returned by CreateHumanTaskUi
- Errors returned by CreateHyperParameterTuningJob
- Errors returned by CreateImage
- Errors returned by CreateImageVersion
- Errors returned by CreateLabelingJob
- Errors returned by CreateModelBiasJobDefinition
- Errors returned by CreateModel
- Errors returned by CreateModelExplainabilityJobDefinition
- Errors returned by CreateModelPackage
- Errors returned by CreateModelPackageGroup
- Errors returned by CreateModelQualityJobDefinition
- Errors returned by CreateMonitoringSchedule
- Errors returned by CreateNotebookInstance
- Errors returned by CreateNotebookInstanceLifecycleConfig
- Errors returned by CreatePipeline
- Errors returned by CreatePresignedDomainUrl
- Errors returned by CreatePresignedNotebookInstanceUrl
- Errors returned by CreateProcessingJob
- Errors returned by CreateProject
- Errors returned by CreateTrainingJob
- Errors returned by CreateTransformJob
- Errors returned by CreateTrialComponent
- Errors returned by CreateTrial
- Errors returned by CreateUserProfile
- Errors returned by CreateWorkforce
- Errors returned by CreateWorkteam
- Errors returned by DeleteAction
- Errors returned by DeleteAlgorithm
- Errors returned by DeleteApp
- Errors returned by DeleteAppImageConfig
- Errors returned by DeleteArtifact
- Errors returned by DeleteAssociation
- Errors returned by DeleteCodeRepository
- Errors returned by DeleteContext
- Errors returned by DeleteDataQualityJobDefinition
- Errors returned by DeleteDeviceFleet
- Errors returned by DeleteDomain
- Errors returned by DeleteEndpointConfig
- Errors returned by DeleteEndpoint
- Errors returned by DeleteExperiment
- Errors returned by DeleteFeatureGroup
- Errors returned by DeleteFlowDefinition
- Errors returned by DeleteHumanTaskUi
- Errors returned by DeleteImage
- Errors returned by DeleteImageVersion
- Errors returned by DeleteModelBiasJobDefinition
- Errors returned by DeleteModel
- Errors returned by DeleteModelExplainabilityJobDefinition
- Errors returned by DeleteModelPackage
- Errors returned by DeleteModelPackageGroup
- Errors returned by DeleteModelPackageGroupPolicy
- Errors returned by DeleteModelQualityJobDefinition
- Errors returned by DeleteMonitoringSchedule
- Errors returned by DeleteNotebookInstance
- Errors returned by DeleteNotebookInstanceLifecycleConfig
- Errors returned by DeletePipeline
- Errors returned by DeleteProject
- Errors returned by DeleteTags
- Errors returned by DeleteTrialComponent
- Errors returned by DeleteTrial
- Errors returned by DeleteUserProfile
- Errors returned by DeleteWorkforce
- Errors returned by DeleteWorkteam
- Errors returned by DeregisterDevices
- Errors returned by DescribeAction
- Errors returned by DescribeAlgorithm
- Errors returned by DescribeApp
- Errors returned by DescribeAppImageConfig
- Errors returned by DescribeArtifact
- Errors returned by DescribeAutoMLJob
- Errors returned by DescribeCodeRepository
- Errors returned by DescribeCompilationJob
- Errors returned by DescribeContext
- Errors returned by DescribeDataQualityJobDefinition
- Errors returned by DescribeDevice
- Errors returned by DescribeDeviceFleet
- Errors returned by DescribeDomain
- Errors returned by DescribeEdgePackagingJob
- Errors returned by DescribeEndpointConfig
- Errors returned by DescribeEndpoint
- Errors returned by DescribeExperiment
- Errors returned by DescribeFeatureGroup
- Errors returned by DescribeFlowDefinition
- Errors returned by DescribeHumanTaskUi
- Errors returned by DescribeHyperParameterTuningJob
- Errors returned by DescribeImage
- Errors returned by DescribeImageVersion
- Errors returned by DescribeLabelingJob
- Errors returned by DescribeModelBiasJobDefinition
- Errors returned by DescribeModel
- Errors returned by DescribeModelExplainabilityJobDefinition
- Errors returned by DescribeModelPackage
- Errors returned by DescribeModelPackageGroup
- Errors returned by DescribeModelQualityJobDefinition
- Errors returned by DescribeMonitoringSchedule
- Errors returned by DescribeNotebookInstance
- Errors returned by DescribeNotebookInstanceLifecycleConfig
- Errors returned by DescribePipelineDefinitionForExecution
- Errors returned by DescribePipeline
- Errors returned by DescribePipelineExecution
- Errors returned by DescribeProcessingJob
- Errors returned by DescribeProject
- Errors returned by DescribeSubscribedWorkteam
- Errors returned by DescribeTrainingJob
- Errors returned by DescribeTransformJob
- Errors returned by DescribeTrialComponent
- Errors returned by DescribeTrial
- Errors returned by DescribeUserProfile
- Errors returned by DescribeWorkforce
- Errors returned by DescribeWorkteam
- Errors returned by DisableSagemakerServicecatalogPortfolio
- Errors returned by DisassociateTrialComponent
- Errors returned by EnableSagemakerServicecatalogPortfolio
- Errors returned by GetDeviceFleetReport
- Errors returned by GetModelPackageGroupPolicy
- Errors returned by GetSagemakerServicecatalogPortfolioStatus
- Errors returned by GetSearchSuggestions
- Errors returned by ListActions
- Errors returned by ListAlgorithms
- Errors returned by ListAppImageConfigs
- Errors returned by ListApps
- Errors returned by ListArtifacts
- Errors returned by ListAssociations
- Errors returned by ListAutoMLJobs
- Errors returned by ListCandidatesForAutoMLJob
- Errors returned by ListCodeRepositories
- Errors returned by ListCompilationJobs
- Errors returned by ListContexts
- Errors returned by ListDataQualityJobDefinitions
- Errors returned by ListDeviceFleets
- Errors returned by ListDevices
- Errors returned by ListDomains
- Errors returned by ListEdgePackagingJobs
- Errors returned by ListEndpointConfigs
- Errors returned by ListEndpoints
- Errors returned by ListExperiments
- Errors returned by ListFeatureGroups
- Errors returned by ListFlowDefinitions
- Errors returned by ListHumanTaskUis
- Errors returned by ListHyperParameterTuningJobs
- Errors returned by ListImageVersions
- Errors returned by ListImages
- Errors returned by ListLabelingJobs
- Errors returned by ListLabelingJobsForWorkteam
- Errors returned by ListModelBiasJobDefinitions
- Errors returned by ListModelExplainabilityJobDefinitions
- Errors returned by ListModelPackageGroups
- Errors returned by ListModelPackages
- Errors returned by ListModelQualityJobDefinitions
- Errors returned by ListModels
- Errors returned by ListMonitoringExecutions
- Errors returned by ListMonitoringSchedules
- Errors returned by ListNotebookInstanceLifecycleConfigs
- Errors returned by ListNotebookInstances
- Errors returned by ListPipelineExecutionSteps
- Errors returned by ListPipelineExecutions
- Errors returned by ListPipelineParametersForExecution
- Errors returned by ListPipelines
- Errors returned by ListProcessingJobs
- Errors returned by ListProjects
- Errors returned by ListSubscribedWorkteams
- Errors returned by ListTags
- Errors returned by ListTrainingJobs
- Errors returned by ListTrainingJobsForHyperParameterTuningJob
- Errors returned by ListTransformJobs
- Errors returned by ListTrialComponents
- Errors returned by ListTrials
- Errors returned by ListUserProfiles
- Errors returned by ListWorkforces
- Errors returned by ListWorkteams
- Errors returned by PutModelPackageGroupPolicy
- Errors returned by RegisterDevices
- Errors returned by RenderUiTemplate
- Errors returned by Search
- Errors returned by SendPipelineExecutionStepFailure
- Errors returned by SendPipelineExecutionStepSuccess
- Errors returned by StartMonitoringSchedule
- Errors returned by StartNotebookInstance
- Errors returned by StartPipelineExecution
- Errors returned by StopAutoMLJob
- Errors returned by StopCompilationJob
- Errors returned by StopEdgePackagingJob
- Errors returned by StopHyperParameterTuningJob
- Errors returned by StopLabelingJob
- Errors returned by StopMonitoringSchedule
- Errors returned by StopNotebookInstance
- Errors returned by StopPipelineExecution
- Errors returned by StopProcessingJob
- Errors returned by StopTrainingJob
- Errors returned by StopTransformJob
- Errors returned by UpdateAction
- Errors returned by UpdateAppImageConfig
- Errors returned by UpdateArtifact
- Errors returned by UpdateCodeRepository
- Errors returned by UpdateContext
- Errors returned by UpdateDeviceFleet
- Errors returned by UpdateDevices
- Errors returned by UpdateDomain
- Errors returned by UpdateEndpoint
- Errors returned by UpdateEndpointWeightsAndCapacities
- Errors returned by UpdateExperiment
- Errors returned by UpdateImage
- Errors returned by UpdateModelPackage
- Errors returned by UpdateMonitoringSchedule
- Errors returned by UpdateNotebookInstance
- Errors returned by UpdateNotebookInstanceLifecycleConfig
- Errors returned by UpdatePipeline
- Errors returned by UpdatePipelineExecution
- Errors returned by UpdateTrainingJob
- Errors returned by UpdateTrialComponent
- Errors returned by UpdateTrial
- Errors returned by UpdateUserProfile
- Errors returned by UpdateWorkforce
- Errors returned by UpdateWorkteam
Traits
- Trait representing the capabilities of the SageMaker API. SageMaker clients implement this trait.
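A hedged sketch of writing against the trait rather than the concrete client, which keeps the code testable with a mock SageMaker implementation; the shape and field names (ListTrainingJobsRequest, status_equals, training_job_summaries, training_job_name) assume rusoto's usual mapping of ListTrainingJobs:

    use rusoto_sagemaker::{ListTrainingJobsRequest, SageMaker};

    // Sketch only: collect the names of completed training jobs from the first page of results.
    async fn completed_training_job_names<S: SageMaker>(client: &S) -> Vec<String> {
        let request = ListTrainingJobsRequest {
            status_equals: Some("Completed".to_string()),
            ..Default::default()
        };
        match client.list_training_jobs(request).await {
            Ok(response) => response
                .training_job_summaries
                .into_iter()
                .map(|summary| summary.training_job_name)
                .collect(),
            Err(_) => Vec::new(),
        }
    }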