#[non_exhaustive]
pub struct TrainingJob {
pub training_job_name: Option<String>,
pub training_job_arn: Option<String>,
pub tuning_job_arn: Option<String>,
pub labeling_job_arn: Option<String>,
pub auto_ml_job_arn: Option<String>,
pub model_artifacts: Option<ModelArtifacts>,
pub training_job_status: Option<TrainingJobStatus>,
pub secondary_status: Option<SecondaryStatus>,
pub failure_reason: Option<String>,
pub hyper_parameters: Option<HashMap<String, String>>,
pub algorithm_specification: Option<AlgorithmSpecification>,
pub role_arn: Option<String>,
pub input_data_config: Option<Vec<Channel>>,
pub output_data_config: Option<OutputDataConfig>,
pub resource_config: Option<ResourceConfig>,
pub vpc_config: Option<VpcConfig>,
pub stopping_condition: Option<StoppingCondition>,
pub creation_time: Option<DateTime>,
pub training_start_time: Option<DateTime>,
pub training_end_time: Option<DateTime>,
pub last_modified_time: Option<DateTime>,
pub secondary_status_transitions: Option<Vec<SecondaryStatusTransition>>,
pub final_metric_data_list: Option<Vec<MetricData>>,
pub enable_network_isolation: Option<bool>,
pub enable_inter_container_traffic_encryption: Option<bool>,
pub enable_managed_spot_training: Option<bool>,
pub checkpoint_config: Option<CheckpointConfig>,
pub training_time_in_seconds: Option<i32>,
pub billable_time_in_seconds: Option<i32>,
pub debug_hook_config: Option<DebugHookConfig>,
pub experiment_config: Option<ExperimentConfig>,
pub debug_rule_configurations: Option<Vec<DebugRuleConfiguration>>,
pub tensor_board_output_config: Option<TensorBoardOutputConfig>,
pub debug_rule_evaluation_statuses: Option<Vec<DebugRuleEvaluationStatus>>,
pub profiler_config: Option<ProfilerConfig>,
pub environment: Option<HashMap<String, String>>,
pub retry_strategy: Option<RetryStrategy>,
pub tags: Option<Vec<Tag>>,
}
Contains information about a training job.
Fields (Non-exhaustive)

This struct is marked as non-exhaustive. Non-exhaustive structs could have additional fields added in future versions. Therefore, non-exhaustive structs cannot be constructed in external crates using the traditional Struct { .. } syntax; cannot be matched against without a wildcard ..; and struct update syntax will not work.

training_job_name: Option<String>
The name of the training job.
training_job_arn: Option<String>
The Amazon Resource Name (ARN) of the training job.
tuning_job_arn: Option<String>
The Amazon Resource Name (ARN) of the associated hyperparameter tuning job if the training job was launched by a hyperparameter tuning job.
labeling_job_arn: Option<String>
The Amazon Resource Name (ARN) of the labeling job.
auto_ml_job_arn: Option<String>
The Amazon Resource Name (ARN) of the job.
model_artifacts: Option<ModelArtifacts>
Information about the Amazon S3 location that is configured for storing model artifacts.
training_job_status: Option<TrainingJobStatus>
The status of the training job.
Training job statuses are:
- InProgress - The training is in progress.
- Completed - The training job has completed.
- Failed - The training job has failed. To see the reason for the failure, see the FailureReason field in the response to a DescribeTrainingJobResponse call.
- Stopping - The training job is stopping.
- Stopped - The training job has stopped.

For more detailed information, see SecondaryStatus.
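In application code these statuses arrive as a non-exhaustive enum, so a match always needs a wildcard arm. A minimal sketch using a local stand-in enum (the real `TrainingJobStatus` is generated in aws-sdk-sagemaker; the `is_terminal` helper is hypothetical):

```rust
// Stand-in for the SDK's TrainingJobStatus. The generated enum is
// #[non_exhaustive], so external matches always need a `_` arm.
#[derive(Debug, PartialEq)]
enum TrainingJobStatus {
    InProgress,
    Completed,
    Failed,
    Stopping,
    Stopped,
}

/// Returns true once the job can no longer make progress.
fn is_terminal(status: &TrainingJobStatus) -> bool {
    match status {
        TrainingJobStatus::Completed
        | TrainingJobStatus::Failed
        | TrainingJobStatus::Stopped => true,
        // InProgress, Stopping, and any variant added in a future
        // SDK release are treated as non-terminal.
        _ => false,
    }
}

fn main() {
    assert!(is_terminal(&TrainingJobStatus::Failed));
    assert!(!is_terminal(&TrainingJobStatus::Stopping));
}
```

The wildcard arm is what lets the SDK add status variants without breaking downstream matches.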
secondary_status: Option<SecondaryStatus>
Provides detailed information about the state of the training job. For detailed information about the secondary status of the training job, see StatusMessage under SecondaryStatusTransition.
SageMaker provides primary statuses and secondary statuses that apply to each of them:
InProgress:
- Starting - Starting the training job.
- Downloading - An optional stage for algorithms that support File training input mode. It indicates that data is being downloaded to the ML storage volumes.
- Training - Training is in progress.
- Uploading - Training is complete and the model artifacts are being uploaded to the S3 location.

Completed:
- Completed - The training job has completed.

Failed:
- Failed - The training job has failed. The reason for the failure is returned in the FailureReason field of DescribeTrainingJobResponse.

Stopped:
- MaxRuntimeExceeded - The job stopped because it exceeded the maximum allowed runtime.
- Stopped - The training job has stopped.

Stopping:
- Stopping - Stopping the training job.

Valid values for SecondaryStatus are subject to change.

We no longer support the following secondary statuses:
- LaunchingMLInstances
- PreparingTrainingStack
- DownloadingTrainingImage
failure_reason: Option<String>
If the training job failed, the reason it failed.
hyper_parameters: Option<HashMap<String, String>>
Algorithm-specific parameters.
algorithm_specification: Option<AlgorithmSpecification>
Information about the algorithm used for training, and algorithm metadata.
role_arn: Option<String>
The Amazon Web Services Identity and Access Management (IAM) role configured for the training job.
input_data_config: Option<Vec<Channel>>
An array of Channel objects that describes each data input channel.
Your input must be in the same Amazon Web Services region as your training job.
output_data_config: Option<OutputDataConfig>
The S3 path where model artifacts that you configured when creating the job are stored. SageMaker creates subfolders for model artifacts.
resource_config: Option<ResourceConfig>
Resources, including ML compute instances and ML storage volumes, that are configured for model training.
vpc_config: Option<VpcConfig>
A VpcConfig object that specifies the VPC that this training job has access to. For more information, see Protect Training Jobs by Using an Amazon Virtual Private Cloud.
stopping_condition: Option<StoppingCondition>
Specifies a limit to how long a model training job can run. It also specifies how long a managed Spot training job has to complete. When the job reaches the time limit, SageMaker ends the training job. Use this API to cap model training costs.
To stop a job, SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.
creation_time: Option<DateTime>
A timestamp that indicates when the training job was created.
training_start_time: Option<DateTime>
Indicates the time when the training job starts on training instances. You are billed for the time interval between this time and the value of TrainingEndTime. The start time in CloudWatch Logs might be later than this time. The difference is due to the time it takes to download the training data and to the size of the training container.
training_end_time: Option<DateTime>
Indicates the time when the training job ends on training instances. You are billed for the time interval between the value of TrainingStartTime and this time. For successful jobs and stopped jobs, this is the time after model artifacts are uploaded. For failed jobs, this is the time when SageMaker detects a job failure.
last_modified_time: Option<DateTime>
A timestamp that indicates when the status of the training job was last modified.
secondary_status_transitions: Option<Vec<SecondaryStatusTransition>>
A history of all of the secondary statuses that the training job has transitioned through.
final_metric_data_list: Option<Vec<MetricData>>
A list of final metric values that are set when the training job completes. Used only if the training job was configured to use metrics.
enable_network_isolation: Option<bool>
If the TrainingJob was created with network isolation, the value is set to true. If network isolation is enabled, nodes can't communicate beyond the VPC they run in.
enable_inter_container_traffic_encryption: Option<bool>
To encrypt all communications between ML compute instances in distributed training, choose True. Encryption provides greater security for distributed training, but training might take longer. How long it takes depends on the amount of communication between compute instances, especially if you use a deep learning algorithm in distributed training.
enable_managed_spot_training: Option<bool>
When true, enables managed spot training using Amazon EC2 Spot instances to run training jobs instead of on-demand instances. For more information, see Managed Spot Training.
checkpoint_config: Option<CheckpointConfig>
Contains information about the output location for managed spot training checkpoint data.
training_time_in_seconds: Option<i32>
The training time in seconds.
billable_time_in_seconds: Option<i32>
The billable time in seconds.
debug_hook_config: Option<DebugHookConfig>
Configuration information for the Amazon SageMaker Debugger hook parameters, metric and tensor collections, and storage paths. To learn more about how to configure the DebugHookConfig parameter, see Use the SageMaker and Debugger Configuration API Operations to Create, Update, and Debug Your Training Job.
experiment_config: Option<ExperimentConfig>
Associates a SageMaker job as a trial component with an experiment and trial. Specified when you call the following APIs:
- CreateProcessingJob
- CreateTrainingJob
- CreateTransformJob
debug_rule_configurations: Option<Vec<DebugRuleConfiguration>>
Information about the debug rule configuration.
tensor_board_output_config: Option<TensorBoardOutputConfig>
Configuration of storage locations for the Amazon SageMaker Debugger TensorBoard output data.
debug_rule_evaluation_statuses: Option<Vec<DebugRuleEvaluationStatus>>
Information about the evaluation status of the rules for the training job.
profiler_config: Option<ProfilerConfig>
Configuration information for Amazon SageMaker Debugger system monitoring, framework profiling, and storage paths.
environment: Option<HashMap<String, String>>
The environment variables to set in the Docker container.
retry_strategy: Option<RetryStrategy>
The number of times to retry the job when the job fails due to an InternalServerError.

tags: Option<Vec<Tag>>
An array of key-value pairs. You can use tags to categorize your Amazon Web Services resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging Amazon Web Services Resources.
Implementations

impl TrainingJob

pub fn training_job_name(&self) -> Option<&str>
The name of the training job.

pub fn training_job_arn(&self) -> Option<&str>
The Amazon Resource Name (ARN) of the training job.

pub fn tuning_job_arn(&self) -> Option<&str>
The Amazon Resource Name (ARN) of the associated hyperparameter tuning job if the training job was launched by a hyperparameter tuning job.

pub fn labeling_job_arn(&self) -> Option<&str>
The Amazon Resource Name (ARN) of the labeling job.

pub fn auto_ml_job_arn(&self) -> Option<&str>
The Amazon Resource Name (ARN) of the job.

pub fn model_artifacts(&self) -> Option<&ModelArtifacts>
Information about the Amazon S3 location that is configured for storing model artifacts.

pub fn training_job_status(&self) -> Option<&TrainingJobStatus>
The status of the training job.
Training job statuses are:
- InProgress - The training is in progress.
- Completed - The training job has completed.
- Failed - The training job has failed. To see the reason for the failure, see the FailureReason field in the response to a DescribeTrainingJobResponse call.
- Stopping - The training job is stopping.
- Stopped - The training job has stopped.
For more detailed information, see SecondaryStatus.
pub fn secondary_status(&self) -> Option<&SecondaryStatus>
Provides detailed information about the state of the training job. For detailed information about the secondary status of the training job, see StatusMessage under SecondaryStatusTransition.
SageMaker provides primary statuses and secondary statuses that apply to each of them:

InProgress:
- Starting - Starting the training job.
- Downloading - An optional stage for algorithms that support File training input mode. It indicates that data is being downloaded to the ML storage volumes.
- Training - Training is in progress.
- Uploading - Training is complete and the model artifacts are being uploaded to the S3 location.

Completed:
- Completed - The training job has completed.

Failed:
- Failed - The training job has failed. The reason for the failure is returned in the FailureReason field of DescribeTrainingJobResponse.

Stopped:
- MaxRuntimeExceeded - The job stopped because it exceeded the maximum allowed runtime.
- Stopped - The training job has stopped.

Stopping:
- Stopping - Stopping the training job.

Valid values for SecondaryStatus are subject to change.

We no longer support the following secondary statuses:
- LaunchingMLInstances
- PreparingTrainingStack
- DownloadingTrainingImage
pub fn failure_reason(&self) -> Option<&str>
If the training job failed, the reason it failed.

pub fn hyper_parameters(&self) -> Option<&HashMap<String, String>>
Algorithm-specific parameters.

pub fn algorithm_specification(&self) -> Option<&AlgorithmSpecification>
Information about the algorithm used for training, and algorithm metadata.

pub fn role_arn(&self) -> Option<&str>
The Amazon Web Services Identity and Access Management (IAM) role configured for the training job.

pub fn input_data_config(&self) -> &[Channel]
An array of Channel objects that describes each data input channel. Your input must be in the same Amazon Web Services region as your training job.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .input_data_config.is_none().
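The distinction between "field was never sent" and "field is an empty list" can be sketched with a stand-in type (the field and accessor names mirror the ones above, but `JobSketch` and its hand-written accessor are hypothetical; the real accessor is generated by the SDK):

```rust
// Stand-in mirroring the SDK pattern: a Vec-valued field is stored as
// Option<Vec<T>>, while its accessor returns a (possibly empty) slice.
struct JobSketch {
    input_data_config: Option<Vec<String>>,
}

impl JobSketch {
    fn input_data_config(&self) -> &[String] {
        // Same behavior the docs describe: None becomes an empty default.
        self.input_data_config.as_deref().unwrap_or_default()
    }
}

fn main() {
    let unset = JobSketch { input_data_config: None };
    // The accessor hides the difference between "absent" and "empty"...
    assert!(unset.input_data_config().is_empty());
    // ...so inspect the field itself to learn whether a value was sent.
    assert!(unset.input_data_config.is_none());

    let set = JobSketch {
        input_data_config: Some(vec!["train".to_string()]),
    };
    assert_eq!(set.input_data_config().len(), 1);
}
```

The same pattern applies to the other slice-returning accessors below (secondary_status_transitions, final_metric_data_list, debug_rule_configurations, debug_rule_evaluation_statuses, tags).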
pub fn output_data_config(&self) -> Option<&OutputDataConfig>
The S3 path where model artifacts that you configured when creating the job are stored. SageMaker creates subfolders for model artifacts.

pub fn resource_config(&self) -> Option<&ResourceConfig>
Resources, including ML compute instances and ML storage volumes, that are configured for model training.

pub fn vpc_config(&self) -> Option<&VpcConfig>
A VpcConfig object that specifies the VPC that this training job has access to. For more information, see Protect Training Jobs by Using an Amazon Virtual Private Cloud.

pub fn stopping_condition(&self) -> Option<&StoppingCondition>
Specifies a limit to how long a model training job can run. It also specifies how long a managed Spot training job has to complete. When the job reaches the time limit, SageMaker ends the training job. Use this API to cap model training costs.
To stop a job, SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.

pub fn creation_time(&self) -> Option<&DateTime>
A timestamp that indicates when the training job was created.

pub fn training_start_time(&self) -> Option<&DateTime>
Indicates the time when the training job starts on training instances. You are billed for the time interval between this time and the value of TrainingEndTime. The start time in CloudWatch Logs might be later than this time. The difference is due to the time it takes to download the training data and to the size of the training container.

pub fn training_end_time(&self) -> Option<&DateTime>
Indicates the time when the training job ends on training instances. You are billed for the time interval between the value of TrainingStartTime and this time. For successful jobs and stopped jobs, this is the time after model artifacts are uploaded. For failed jobs, this is the time when SageMaker detects a job failure.

pub fn last_modified_time(&self) -> Option<&DateTime>
A timestamp that indicates when the status of the training job was last modified.

pub fn secondary_status_transitions(&self) -> &[SecondaryStatusTransition]
A history of all of the secondary statuses that the training job has transitioned through.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .secondary_status_transitions.is_none().
pub fn final_metric_data_list(&self) -> &[MetricData]
A list of final metric values that are set when the training job completes. Used only if the training job was configured to use metrics.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .final_metric_data_list.is_none().

pub fn enable_network_isolation(&self) -> Option<bool>
If the TrainingJob was created with network isolation, the value is set to true. If network isolation is enabled, nodes can't communicate beyond the VPC they run in.

pub fn enable_inter_container_traffic_encryption(&self) -> Option<bool>
To encrypt all communications between ML compute instances in distributed training, choose True. Encryption provides greater security for distributed training, but training might take longer. How long it takes depends on the amount of communication between compute instances, especially if you use a deep learning algorithm in distributed training.

pub fn enable_managed_spot_training(&self) -> Option<bool>
When true, enables managed spot training using Amazon EC2 Spot instances to run training jobs instead of on-demand instances. For more information, see Managed Spot Training.

pub fn checkpoint_config(&self) -> Option<&CheckpointConfig>
Contains information about the output location for managed spot training checkpoint data.

pub fn training_time_in_seconds(&self) -> Option<i32>
The training time in seconds.

pub fn billable_time_in_seconds(&self) -> Option<i32>
The billable time in seconds.

pub fn debug_hook_config(&self) -> Option<&DebugHookConfig>
Configuration information for the Amazon SageMaker Debugger hook parameters, metric and tensor collections, and storage paths. To learn more about how to configure the DebugHookConfig parameter, see Use the SageMaker and Debugger Configuration API Operations to Create, Update, and Debug Your Training Job.

pub fn experiment_config(&self) -> Option<&ExperimentConfig>
Associates a SageMaker job as a trial component with an experiment and trial. Specified when you call the following APIs:
- CreateProcessingJob
- CreateTrainingJob
- CreateTransformJob

pub fn debug_rule_configurations(&self) -> &[DebugRuleConfiguration]
Information about the debug rule configuration.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .debug_rule_configurations.is_none().

pub fn tensor_board_output_config(&self) -> Option<&TensorBoardOutputConfig>
Configuration of storage locations for the Amazon SageMaker Debugger TensorBoard output data.

pub fn debug_rule_evaluation_statuses(&self) -> &[DebugRuleEvaluationStatus]
Information about the evaluation status of the rules for the training job.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .debug_rule_evaluation_statuses.is_none().

pub fn profiler_config(&self) -> Option<&ProfilerConfig>
Configuration information for Amazon SageMaker Debugger system monitoring, framework profiling, and storage paths.

pub fn environment(&self) -> Option<&HashMap<String, String>>
The environment variables to set in the Docker container.

pub fn retry_strategy(&self) -> Option<&RetryStrategy>
The number of times to retry the job when the job fails due to an InternalServerError.

pub fn tags(&self) -> &[Tag]
An array of key-value pairs. You can use tags to categorize your Amazon Web Services resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging Amazon Web Services Resources.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .tags.is_none().
impl TrainingJob

pub fn builder() -> TrainingJobBuilder
Creates a new builder-style object to manufacture TrainingJob.
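Because the struct is non-exhaustive, external code constructs it through the builder rather than struct literal syntax. A rough sketch of that shape using stand-in types (the real generated builder in aws-sdk-sagemaker has one fluent setter per field; only two fields are shown here, and the setter bodies are illustrative, not the generated code):

```rust
// Minimal stand-in for the generated TrainingJob/TrainingJobBuilder pair.
#[derive(Debug, Default)]
#[non_exhaustive]
pub struct TrainingJob {
    pub training_job_name: Option<String>,
    pub training_job_arn: Option<String>,
}

#[derive(Default)]
pub struct TrainingJobBuilder {
    training_job_name: Option<String>,
    training_job_arn: Option<String>,
}

impl TrainingJob {
    pub fn builder() -> TrainingJobBuilder {
        TrainingJobBuilder::default()
    }
}

impl TrainingJobBuilder {
    // Fluent setters in the generated code take `impl Into<String>`
    // and return Self, so calls chain.
    pub fn training_job_name(mut self, input: impl Into<String>) -> Self {
        self.training_job_name = Some(input.into());
        self
    }

    pub fn training_job_arn(mut self, input: impl Into<String>) -> Self {
        self.training_job_arn = Some(input.into());
        self
    }

    pub fn build(self) -> TrainingJob {
        TrainingJob {
            training_job_name: self.training_job_name,
            training_job_arn: self.training_job_arn,
        }
    }
}

fn main() {
    let job = TrainingJob::builder()
        .training_job_name("my-training-job")
        .build();
    assert_eq!(job.training_job_name.as_deref(), Some("my-training-job"));
    assert!(job.training_job_arn.is_none());
}
```

Unset builder fields simply stay None in the built value, which matches the Option-typed fields documented above.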
Trait Implementations

impl Clone for TrainingJob
fn clone(&self) -> TrainingJob
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.

impl Debug for TrainingJob
impl PartialEq for TrainingJob
impl StructuralPartialEq for TrainingJob

Auto Trait Implementations

impl Freeze for TrainingJob
impl RefUnwindSafe for TrainingJob
impl Send for TrainingJob
impl Sync for TrainingJob
impl Unpin for TrainingJob
impl UnwindSafe for TrainingJob
Blanket Implementations

impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

impl<T> CloneToUninit for T where T: Clone

impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>

impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true; otherwise converts self into a Right variant.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; otherwise converts self into a Right variant.

impl<T> Paint for T where T: ?Sized
Styling methods from the yansi crate: fg(value: Color), bg(value: Color), attr(value: Attribute), quirk(value: Quirk), and whenever(value: Condition), plus the pithier color-, attribute-, and quirk-specific builders such as red(), on_red(), bold(), and wrap(). clear() is deprecated since 1.0.1 (renamed to resetting() due to conflicts with Vec::clear()) and will be removed in a future release.