pub struct DescribeTrainingJobResponse {
    pub algorithm_specification: AlgorithmSpecification,
    pub auto_ml_job_arn: Option<String>,
    pub billable_time_in_seconds: Option<i64>,
    pub checkpoint_config: Option<CheckpointConfig>,
    pub creation_time: f64,
    pub debug_hook_config: Option<DebugHookConfig>,
    pub debug_rule_configurations: Option<Vec<DebugRuleConfiguration>>,
    pub debug_rule_evaluation_statuses: Option<Vec<DebugRuleEvaluationStatus>>,
    pub enable_inter_container_traffic_encryption: Option<bool>,
    pub enable_managed_spot_training: Option<bool>,
    pub enable_network_isolation: Option<bool>,
    pub environment: Option<HashMap<String, String>>,
    pub experiment_config: Option<ExperimentConfig>,
    pub failure_reason: Option<String>,
    pub final_metric_data_list: Option<Vec<MetricData>>,
    pub hyper_parameters: Option<HashMap<String, String>>,
    pub input_data_config: Option<Vec<Channel>>,
    pub labeling_job_arn: Option<String>,
    pub last_modified_time: Option<f64>,
    pub model_artifacts: ModelArtifacts,
    pub output_data_config: Option<OutputDataConfig>,
    pub profiler_config: Option<ProfilerConfig>,
    pub profiler_rule_configurations: Option<Vec<ProfilerRuleConfiguration>>,
    pub profiler_rule_evaluation_statuses: Option<Vec<ProfilerRuleEvaluationStatus>>,
    pub profiling_status: Option<String>,
    pub resource_config: ResourceConfig,
    pub retry_strategy: Option<RetryStrategy>,
    pub role_arn: Option<String>,
    pub secondary_status: String,
    pub secondary_status_transitions: Option<Vec<SecondaryStatusTransition>>,
    pub stopping_condition: StoppingCondition,
    pub tensor_board_output_config: Option<TensorBoardOutputConfig>,
    pub training_end_time: Option<f64>,
    pub training_job_arn: String,
    pub training_job_name: String,
    pub training_job_status: String,
    pub training_start_time: Option<f64>,
    pub training_time_in_seconds: Option<i64>,
    pub tuning_job_arn: Option<String>,
    pub vpc_config: Option<VpcConfig>,
}
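
This struct is normally obtained from a DescribeTrainingJob call. A minimal sketch, assuming the crate's SageMakerClient, the SageMaker trait, and DescribeTrainingJobRequest (whose only field is the job name); the region and job name below are illustrative:

use rusoto_core::Region;
use rusoto_sagemaker::{DescribeTrainingJobRequest, SageMaker, SageMakerClient};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = SageMakerClient::new(Region::UsEast1);

    // Hypothetical training job name.
    let response = client
        .describe_training_job(DescribeTrainingJobRequest {
            training_job_name: "my-training-job".to_string(),
        })
        .await?;

    // Read a few of the fields documented below.
    println!("status: {}", response.training_job_status);
    println!("secondary status: {}", response.secondary_status);
    if let Some(reason) = &response.failure_reason {
        println!("failure reason: {}", reason);
    }
    Ok(())
}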

Fields

algorithm_specification: AlgorithmSpecification

Information about the algorithm used for training, and algorithm metadata.

auto_ml_job_arn: Option<String>

The Amazon Resource Name (ARN) of an AutoML job.

billable_time_in_seconds: Option<i64>

The billable time in seconds. Billable time refers to the absolute wall-clock time.

Multiply BillableTimeInSeconds by the number of instances (InstanceCount) in your training cluster to get the total compute time Amazon SageMaker will bill you for if you run distributed training. The formula is as follows: BillableTimeInSeconds * InstanceCount.

You can calculate the savings from using managed spot training using the formula (1 - BillableTimeInSeconds / TrainingTimeInSeconds) * 100. For example, if BillableTimeInSeconds is 100 and TrainingTimeInSeconds is 500, the savings is 80%.
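
A worked version of the two formulas above; the numbers come from the example in the text, except the instance count, which is illustrative:

fn main() {
    // Example values: BillableTimeInSeconds = 100, TrainingTimeInSeconds = 500.
    let billable_time_in_seconds: i64 = 100;
    let training_time_in_seconds: i64 = 500;
    let instance_count: i64 = 2; // illustrative

    // Total compute time billed for distributed training:
    // BillableTimeInSeconds * InstanceCount
    let billed_compute_seconds = billable_time_in_seconds * instance_count;

    // Savings from managed spot training:
    // (1 - BillableTimeInSeconds / TrainingTimeInSeconds) * 100
    let savings_percent =
        (1.0 - billable_time_in_seconds as f64 / training_time_in_seconds as f64) * 100.0;

    println!("billed compute seconds: {}", billed_compute_seconds); // 200
    println!("managed spot savings: {}%", savings_percent);         // 80%
}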

checkpoint_config: Option<CheckpointConfig>

creation_time: f64

A timestamp that indicates when the training job was created.

debug_hook_config: Option<DebugHookConfig>

debug_rule_configurations: Option<Vec<DebugRuleConfiguration>>

Configuration information for Debugger rules for debugging output tensors.

debug_rule_evaluation_statuses: Option<Vec<DebugRuleEvaluationStatus>>

Evaluation status of Debugger rules for debugging on a training job.

enable_inter_container_traffic_encryption: Option<bool>

To encrypt all communications between ML compute instances in distributed training, choose True. Encryption provides greater security for distributed training, but training might take longer. How long it takes depends on the amount of communication between compute instances, especially if you use deep learning algorithms in distributed training.

enable_managed_spot_training: Option<bool>

A Boolean indicating whether managed spot training is enabled (True) or not (False).

enable_network_isolation: Option<bool>

If network isolation is enabled (True), the training container is isolated: no inbound or outbound network calls can be made, except for calls between peers within a training cluster for distributed training. If you enable network isolation for training jobs that are configured to use a VPC, Amazon SageMaker downloads and uploads customer data and model artifacts through the specified VPC, but the training container does not have network access.

environment: Option<HashMap<String, String>>

The environment variables to set in the Docker container.

experiment_config: Option<ExperimentConfig>

failure_reason: Option<String>

If the training job failed, the reason it failed.

final_metric_data_list: Option<Vec<MetricData>>

A collection of MetricData objects that specify the names, values, and timestamps of the metrics that the training algorithm emitted to Amazon CloudWatch.

hyper_parameters: Option<HashMap<String, String>>

Algorithm-specific parameters.

input_data_config: Option<Vec<Channel>>

An array of Channel objects that describes each data input channel.

labeling_job_arn: Option<String>

The Amazon Resource Name (ARN) of the Amazon SageMaker Ground Truth labeling job that created the transform or training job.

last_modified_time: Option<f64>

A timestamp that indicates when the status of the training job was last modified.

model_artifacts: ModelArtifacts

Information about the Amazon S3 location that is configured for storing model artifacts.

output_data_config: Option<OutputDataConfig>

The S3 path where model artifacts that you configured when creating the job are stored. Amazon SageMaker creates subfolders for model artifacts.

profiler_config: Option<ProfilerConfig>

profiler_rule_configurations: Option<Vec<ProfilerRuleConfiguration>>

Configuration information for Debugger rules for profiling system and framework metrics.

profiler_rule_evaluation_statuses: Option<Vec<ProfilerRuleEvaluationStatus>>

Evaluation status of Debugger rules for profiling on a training job.

profiling_status: Option<String>

Profiling status of a training job.

resource_config: ResourceConfig

Resources, including ML compute instances and ML storage volumes, that are configured for model training.

retry_strategy: Option<RetryStrategy>

The number of times to retry the job when the job fails due to an InternalServerError.

role_arn: Option<String>

The AWS Identity and Access Management (IAM) role configured for the training job.

secondary_status: String

Provides detailed information about the state of the training job. For detailed information on the secondary status of the training job, see StatusMessage under SecondaryStatusTransition.

Amazon SageMaker provides primary statuses and secondary statuses that apply to each of them:

InProgress
  • Starting - Starting the training job.

  • Downloading - An optional stage for algorithms that support File training input mode. It indicates that data is being downloaded to the ML storage volumes.

  • Training - Training is in progress.

  • Interrupted - The job stopped because the managed spot training instances were interrupted.

  • Uploading - Training is complete and the model artifacts are being uploaded to the S3 location.

Completed
  • Completed - The training job has completed.

Failed
  • Failed - The training job has failed. The reason for the failure is returned in the FailureReason field of DescribeTrainingJobResponse.

Stopped
  • MaxRuntimeExceeded - The job stopped because it exceeded the maximum allowed runtime.

  • MaxWaitTimeExceeded - The job stopped because it exceeded the maximum allowed wait time.

  • Stopped - The training job has stopped.

Stopping
  • Stopping - Stopping the training job.

Valid values for SecondaryStatus are subject to change.

We no longer support the following secondary statuses:

  • LaunchingMLInstances

  • PreparingTraining

  • DownloadingTrainingImage
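
As an illustration only (the statuses are plain strings and, as noted above, the valid values can change), a caller might branch on a few of the secondary statuses like this; report_secondary_status is a hypothetical helper:

use rusoto_sagemaker::DescribeTrainingJobResponse; // crate path assumed

fn report_secondary_status(response: &DescribeTrainingJobResponse) {
    match response.secondary_status.as_str() {
        "Downloading" => println!("input data is being downloaded to the ML storage volumes"),
        "Training" => println!("training is in progress"),
        "Interrupted" => println!("managed spot training instances were interrupted"),
        "Uploading" => println!("model artifacts are being uploaded to S3"),
        "Failed" => println!(
            "training failed: {}",
            response.failure_reason.as_deref().unwrap_or("no reason reported")
        ),
        // Unknown values must be tolerated because the valid set is subject to change.
        other => println!("secondary status: {}", other),
    }
}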

secondary_status_transitions: Option<Vec<SecondaryStatusTransition>>

A history of all of the secondary statuses that the training job has transitioned through.

stopping_condition: StoppingCondition

Specifies a limit to how long a model training job can run. It also specifies how long a managed Spot training job has to complete. When the job reaches the time limit, Amazon SageMaker ends the training job. Use this limit to cap model training costs.

To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.

tensor_board_output_config: Option<TensorBoardOutputConfig>

training_end_time: Option<f64>

Indicates the time when the training job ends on training instances. You are billed for the time interval between the value of TrainingStartTime and this time. For successful jobs and stopped jobs, this is the time after model artifacts are uploaded. For failed jobs, this is the time when Amazon SageMaker detects a job failure.

training_job_arn: String

The Amazon Resource Name (ARN) of the training job.

training_job_name: String

Name of the model training job.

training_job_status: String

The status of the training job.

Amazon SageMaker provides the following training job statuses:

  • InProgress - The training is in progress.

  • Completed - The training job has completed.

  • Failed - The training job has failed. To see the reason for the failure, see the FailureReason field in the response to a DescribeTrainingJob call.

  • Stopping - The training job is stopping.

  • Stopped - The training job has stopped.

For more detailed information, see SecondaryStatus.
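
A small helper that groups the statuses listed above into terminal and non-terminal states (the function name is illustrative):

// "Completed", "Failed", and "Stopped" are the terminal values of TrainingJobStatus;
// "InProgress" and "Stopping" mean the job is still running or shutting down.
fn is_terminal(training_job_status: &str) -> bool {
    matches!(training_job_status, "Completed" | "Failed" | "Stopped")
}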

training_start_time: Option<f64>

Indicates the time when the training job starts on training instances. You are billed for the time interval between this time and the value of TrainingEndTime. The start time in CloudWatch Logs might be later than this time. The difference is due to the time it takes to download the training data and to the size of the training container.
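
A sketch of computing the billed wall-clock interval from these two timestamps, assuming the f64 values represent seconds since the Unix epoch:

use rusoto_sagemaker::DescribeTrainingJobResponse; // crate path assumed

// Returns the interval between TrainingStartTime and TrainingEndTime, if both are present.
fn billed_interval_seconds(response: &DescribeTrainingJobResponse) -> Option<f64> {
    match (response.training_start_time, response.training_end_time) {
        (Some(start), Some(end)) => Some(end - start),
        _ => None,
    }
}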

training_time_in_seconds: Option<i64>

The training time in seconds.

tuning_job_arn: Option<String>

The Amazon Resource Name (ARN) of the associated hyperparameter tuning job if the training job was launched by a hyperparameter tuning job.

vpc_config: Option<VpcConfig>

A VpcConfig object that specifies the VPC that this training job has access to. For more information, see Protect Training Jobs by Using an Amazon Virtual Private Cloud.

Trait Implementations

Clone: returns a copy of the value and performs copy-assignment from source.

Debug: formats the value using the given formatter.

Default: returns the “default value” for the type.

Deserialize: deserializes this value from the given Serde deserializer.

PartialEq: tests for equality with == and inequality with !=.
