Struct HumanTaskConfig

#[non_exhaustive]
pub struct HumanTaskConfig {
    pub workteam_arn: Option<String>,
    pub ui_config: Option<UiConfig>,
    pub pre_human_task_lambda_arn: Option<String>,
    pub task_keywords: Option<Vec<String>>,
    pub task_title: Option<String>,
    pub task_description: Option<String>,
    pub number_of_human_workers_per_data_object: Option<i32>,
    pub task_time_limit_in_seconds: Option<i32>,
    pub task_availability_lifetime_in_seconds: Option<i32>,
    pub max_concurrent_task_count: Option<i32>,
    pub annotation_consolidation_config: Option<AnnotationConsolidationConfig>,
    pub public_workforce_task_price: Option<PublicWorkforceTaskPrice>,
}

Information required for human workers to complete a labeling task.

Fields (Non-exhaustive)

This struct is marked as non-exhaustive
Non-exhaustive structs could have additional fields added in the future. Therefore, non-exhaustive structs cannot be constructed in external crates using the traditional Struct { .. } syntax; cannot be matched against without a wildcard ..; and struct update syntax will not work.
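
Because the struct is non-exhaustive, code outside this crate normally constructs it through the builder that the SDK generates. The snippet below is a minimal sketch, not taken from this page: it assumes the usual generated API (a HumanTaskConfig::builder() constructor, setters named after the fields, and an infallible build() since every field shown here is optional), the aws_sdk_sagemaker::types module path of recent SDK releases, and placeholder ARN and S3 values.

    use aws_sdk_sagemaker::types::{HumanTaskConfig, UiConfig};

    fn example_human_task_config() -> HumanTaskConfig {
        // Struct literal syntax is unavailable outside the defining crate, so use the builder.
        HumanTaskConfig::builder()
            // Placeholder work team ARN and UI template URI, for illustration only.
            .workteam_arn("arn:aws:sagemaker:us-east-1:111122223333:workteam/private-crowd/my-team")
            .ui_config(
                UiConfig::builder()
                    .ui_template_s3_uri("s3://amzn-s3-demo-bucket/templates/bounding-box.liquid.html")
                    .build(),
            )
            .task_title("Draw bounding boxes")
            .task_description("Draw a box around every vehicle in the image")
            .number_of_human_workers_per_data_object(3)
            .task_time_limit_in_seconds(3600)
            .build()
    }

    fn main() {
        let config = example_human_task_config();
        // Matching against a non-exhaustive struct requires the `..` wildcard.
        let HumanTaskConfig { task_title, .. } = &config;
        assert_eq!(task_title.as_deref(), Some("Draw bounding boxes"));
    }

Any field left unset by the builder stays None, matching the Option types listed below.
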
workteam_arn: Option<String>

The Amazon Resource Name (ARN) of the work team assigned to complete the tasks.

ui_config: Option<UiConfig>

Information about the user interface that workers use to complete the labeling task.

pre_human_task_lambda_arn: Option<String>

The Amazon Resource Name (ARN) of a Lambda function that is run before a data object is sent to a human worker. Use this function to provide input to a custom labeling job.

For built-in task types, use one of the following Amazon SageMaker Ground Truth Lambda function ARNs for PreHumanTaskLambdaArn. For custom labeling workflows, see Pre-annotation Lambda.

Bounding box - Finds the most similar boxes from different workers based on the Jaccard index of the boxes.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-BoundingBox

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-BoundingBox

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-BoundingBox

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-BoundingBox

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-BoundingBox

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-BoundingBox

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-BoundingBox

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-BoundingBox

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-BoundingBox

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-BoundingBox

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-BoundingBox

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-BoundingBox

Image classification - Uses a variant of the Expectation Maximization approach to estimate the true class of an image based on annotations from individual workers.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-ImageMultiClass

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-ImageMultiClass

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-ImageMultiClass

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-ImageMultiClass

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-ImageMultiClass

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-ImageMultiClass

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-ImageMultiClass

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-ImageMultiClass

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-ImageMultiClass

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-ImageMultiClass

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-ImageMultiClass

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-ImageMultiClass

Multi-label image classification - Uses a variant of the Expectation Maximization approach to estimate the true classes of an image based on annotations from individual workers.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-ImageMultiClassMultiLabel

Semantic segmentation - Treats each pixel in an image as a multi-class classification and treats pixel annotations from workers as "votes" for the correct label.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-SemanticSegmentation

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-SemanticSegmentation

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-SemanticSegmentation

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-SemanticSegmentation

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-SemanticSegmentation

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-SemanticSegmentation

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-SemanticSegmentation

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-SemanticSegmentation

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-SemanticSegmentation

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-SemanticSegmentation

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-SemanticSegmentation

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-SemanticSegmentation

Text classification - Uses a variant of the Expectation Maximization approach to estimate the true class of text based on annotations from individual workers.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-TextMultiClass

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-TextMultiClass

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-TextMultiClass

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-TextMultiClass

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-TextMultiClass

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-TextMultiClass

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-TextMultiClass

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-TextMultiClass

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-TextMultiClass

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-TextMultiClass

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-TextMultiClass

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-TextMultiClass

Multi-label text classification - Uses a variant of the Expectation Maximization approach to estimate the true classes of text based on annotations from individual workers.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-TextMultiClassMultiLabel

Named entity recognition - Groups similar selections and calculates aggregate boundaries, resolving to the most-assigned label.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-NamedEntityRecognition

Video Classification - Use this task type when you need workers to classify videos using predefined labels that you specify. Workers are shown videos and are asked to choose one label for each video.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-VideoMultiClass

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-VideoMultiClass

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-VideoMultiClass

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-VideoMultiClass

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-VideoMultiClass

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-VideoMultiClass

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-VideoMultiClass

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-VideoMultiClass

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-VideoMultiClass

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-VideoMultiClass

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-VideoMultiClass

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-VideoMultiClass

Video Frame Object Detection - Use this task type to have workers identify and locate objects in a sequence of video frames (images extracted from a video) using bounding boxes. For example, you can use this task to ask workers to identify and localize various objects in a series of video frames, such as cars, bikes, and pedestrians.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-VideoObjectDetection

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-VideoObjectDetection

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-VideoObjectDetection

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-VideoObjectDetection

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-VideoObjectDetection

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-VideoObjectDetection

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-VideoObjectDetection

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-VideoObjectDetection

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-VideoObjectDetection

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-VideoObjectDetection

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-VideoObjectDetection

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-VideoObjectDetection

Video Frame Object Tracking - Use this task type to have workers track the movement of objects in a sequence of video frames (images extracted from a video) using bounding boxes. For example, you can use this task to ask workers to track the movement of objects, such as cars, bikes, and pedestrians.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-VideoObjectTracking

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-VideoObjectTracking

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-VideoObjectTracking

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-VideoObjectTracking

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-VideoObjectTracking

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-VideoObjectTracking

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-VideoObjectTracking

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-VideoObjectTracking

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-VideoObjectTracking

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-VideoObjectTracking

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-VideoObjectTracking

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-VideoObjectTracking

3D Point Cloud Modalities

Use the following pre-annotation lambdas for 3D point cloud labeling modality tasks. See 3D Point Cloud Task types to learn more.

3D Point Cloud Object Detection - Use this task type when you want workers to classify objects in a 3D point cloud by drawing 3D cuboids around objects. For example, you can use this task type to ask workers to identify different types of objects in a point cloud, such as cars, bikes, and pedestrians.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-3DPointCloudObjectDetection

3D Point Cloud Object Tracking - Use this task type when you want workers to draw 3D cuboids around objects that appear in a sequence of 3D point cloud frames. For example, you can use this task type to ask workers to track the movement of vehicles across multiple point cloud frames.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-3DPointCloudObjectTracking

3D Point Cloud Semantic Segmentation - Use this task type when you want workers to create point-level semantic segmentation masks by painting objects in a 3D point cloud using different colors, where each color is assigned to one of the classes you specify.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-3DPointCloudSemanticSegmentation

Use the following ARNs for Label Verification and Adjustment Jobs

Use label verification and adjustment jobs to review and adjust labels. To learn more, see Verify and Adjust Labels.

Bounding box verification - Uses a variant of the Expectation Maximization approach to estimate the true class of verification judgment for bounding box labels based on annotations from individual workers.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-VerificationBoundingBox

Bounding box adjustment - Finds the most similar boxes from different workers based on the Jaccard index of the adjusted annotations.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-AdjustmentBoundingBox

Semantic segmentation verification - Uses a variant of the Expectation Maximization approach to estimate the true class of verification judgment for semantic segmentation labels based on annotations from individual workers.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-VerificationSemanticSegmentation

Semantic segmentation adjustment - Treats each pixel in an image as a multi-class classification and treats adjusted pixel annotations from workers as "votes" for the correct label.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-AdjustmentSemanticSegmentation

Video Frame Object Detection Adjustment - Use this task type when you want workers to adjust bounding boxes that workers have added to video frames to classify and localize objects in a sequence of video frames.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-AdjustmentVideoObjectDetection

Video Frame Object Tracking Adjustment - Use this task type when you want workers to adjust bounding boxes that workers have added to video frames to track object movement across a sequence of video frames.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-AdjustmentVideoObjectTracking

3D point cloud object detection adjustment - Adjust 3D cuboids in a point cloud frame.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-Adjustment3DPointCloudObjectDetection

3D point cloud object tracking adjustment - Adjust 3D cuboids across a sequence of point cloud frames.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-Adjustment3DPointCloudObjectTracking

3D point cloud semantic segmentation adjustment - Adjust semantic segmentation masks in a 3D point cloud.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-Adjustment3DPointCloudSemanticSegmentation
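
Every list above follows the same pattern: the function name is fixed per task type, and only the Region and its Region-specific account ID change. The helper below is a sketch that resolves the built-in bounding box pre-annotation ARN from the Region/account pairs listed for PRE-BoundingBox above; extend the match for other task types as needed.

    /// Resolves the built-in PRE-BoundingBox pre-annotation Lambda ARN for a Region,
    /// using the Region/account pairs from the PRE-BoundingBox list above.
    fn pre_bounding_box_arn(region: &str) -> Option<String> {
        let account = match region {
            "us-east-1" => "432418664414",
            "us-east-2" => "266458841044",
            "us-west-2" => "081040173940",
            "ca-central-1" => "918755190332",
            "eu-west-1" => "568282634449",
            "eu-west-2" => "487402164563",
            "eu-central-1" => "203001061592",
            "ap-northeast-1" => "477331159723",
            "ap-northeast-2" => "845288260483",
            "ap-south-1" => "565803892007",
            "ap-southeast-1" => "377565633583",
            "ap-southeast-2" => "454466003867",
            _ => return None,
        };
        Some(format!("arn:aws:lambda:{region}:{account}:function:PRE-BoundingBox"))
    }

The resulting string can then be passed to the pre_human_task_lambda_arn setter on the builder.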

task_keywords: Option<Vec<String>>

Keywords used to describe the task so that workers on Amazon Mechanical Turk can discover the task.

task_title: Option<String>

A title for the task for your human workers.

task_description: Option<String>

A description of the task for your human workers.

number_of_human_workers_per_data_object: Option<i32>

The number of human workers that will label an object.

task_time_limit_in_seconds: Option<i32>

The amount of time that a worker has to complete a task.

If you create a custom labeling job, the maximum value for this parameter is 8 hours (28,800 seconds).

If you create a labeling job using a built-in task type, the maximum for this parameter depends on the task type you use:

  • For image and text labeling jobs, the maximum is 8 hours (28,800 seconds).

  • For 3D point cloud and video frame labeling jobs, the maximum is 30 days (2,592,000 seconds) for non-AL mode. For most users, the maximum is also 30 days.

task_availability_lifetime_in_seconds: Option<i32>

The length of time that a task remains available for labeling by human workers. The default and maximum values for this parameter depend on the type of workforce you use.

  • If you choose the Amazon Mechanical Turk workforce, the maximum is 12 hours (43,200 seconds). The default is 6 hours (21,600 seconds).

  • If you choose a private or vendor workforce, the default value is 30 days (2,592,000 seconds) for non-AL mode. For most users, the maximum is also 30 days.
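
As a quick sanity check on the second counts used by this field and by task_time_limit_in_seconds above (plain arithmetic, not an SDK call):

    const SECONDS_PER_HOUR: i32 = 60 * 60;              // 3,600
    const SECONDS_PER_DAY: i32 = 24 * SECONDS_PER_HOUR; // 86,400

    const EIGHT_HOURS: i32 = 8 * SECONDS_PER_HOUR;   // 28,800 (image/text task time limit)
    const SIX_HOURS: i32 = 6 * SECONDS_PER_HOUR;     // 21,600 (Mechanical Turk default availability)
    const TWELVE_HOURS: i32 = 12 * SECONDS_PER_HOUR; // 43,200 (Mechanical Turk maximum availability)
    const THIRTY_DAYS: i32 = 30 * SECONDS_PER_DAY;   // 2,592,000 (private/vendor default availability)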

max_concurrent_task_count: Option<i32>

Defines the maximum number of data objects that can be labeled by human workers at the same time. Also referred to as batch size. Each object may have more than one worker at one time. The default value is 1000 objects. To increase the maximum value to 5000 objects, contact Amazon Web Services Support.

annotation_consolidation_config: Option<AnnotationConsolidationConfig>

Configures how labels are consolidated across human workers.

public_workforce_task_price: Option<PublicWorkforceTaskPrice>

The price that you pay for each task performed by an Amazon Mechanical Turk worker.

Implementations

impl HumanTaskConfig

pub fn workteam_arn(&self) -> Option<&str>

The Amazon Resource Name (ARN) of the work team assigned to complete the tasks.

pub fn ui_config(&self) -> Option<&UiConfig>

Information about the user interface that workers use to complete the labeling task.

pub fn pre_human_task_lambda_arn(&self) -> Option<&str>

The Amazon Resource Name (ARN) of a Lambda function that is run before a data object is sent to a human worker. Use this function to provide input to a custom labeling job.

For built-in task types, use one of the following Amazon SageMaker Ground Truth Lambda function ARNs for PreHumanTaskLambdaArn. For custom labeling workflows, see Pre-annotation Lambda.

Bounding box - Finds the most similar boxes from different workers based on the Jaccard index of the boxes.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-BoundingBox

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-BoundingBox

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-BoundingBox

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-BoundingBox

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-BoundingBox

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-BoundingBox

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-BoundingBox

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-BoundingBox

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-BoundingBox

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-BoundingBox

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-BoundingBox

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-BoundingBox

Image classification - Uses a variant of the Expectation Maximization approach to estimate the true class of an image based on annotations from individual workers.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-ImageMultiClass

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-ImageMultiClass

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-ImageMultiClass

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-ImageMultiClass

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-ImageMultiClass

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-ImageMultiClass

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-ImageMultiClass

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-ImageMultiClass

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-ImageMultiClass

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-ImageMultiClass

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-ImageMultiClass

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-ImageMultiClass

Multi-label image classification - Uses a variant of the Expectation Maximization approach to estimate the true classes of an image based on annotations from individual workers.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-ImageMultiClassMultiLabel

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-ImageMultiClassMultiLabel

Semantic segmentation - Treats each pixel in an image as a multi-class classification and treats pixel annotations from workers as "votes" for the correct label.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-SemanticSegmentation

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-SemanticSegmentation

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-SemanticSegmentation

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-SemanticSegmentation

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-SemanticSegmentation

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-SemanticSegmentation

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-SemanticSegmentation

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-SemanticSegmentation

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-SemanticSegmentation

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-SemanticSegmentation

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-SemanticSegmentation

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-SemanticSegmentation

Text classification - Uses a variant of the Expectation Maximization approach to estimate the true class of text based on annotations from individual workers.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-TextMultiClass

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-TextMultiClass

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-TextMultiClass

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-TextMultiClass

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-TextMultiClass

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-TextMultiClass

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-TextMultiClass

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-TextMultiClass

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-TextMultiClass

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-TextMultiClass

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-TextMultiClass

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-TextMultiClass

Multi-label text classification - Uses a variant of the Expectation Maximization approach to estimate the true classes of text based on annotations from individual workers.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-TextMultiClassMultiLabel

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-TextMultiClassMultiLabel

Named entity recognition - Groups similar selections and calculates aggregate boundaries, resolving to the most-assigned label.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-NamedEntityRecognition

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-NamedEntityRecognition

Video Classification - Use this task type when you need workers to classify videos using predefined labels that you specify. Workers are shown videos and are asked to choose one label for each video.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-VideoMultiClass

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-VideoMultiClass

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-VideoMultiClass

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-VideoMultiClass

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-VideoMultiClass

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-VideoMultiClass

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-VideoMultiClass

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-VideoMultiClass

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-VideoMultiClass

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-VideoMultiClass

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-VideoMultiClass

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-VideoMultiClass

Video Frame Object Detection - Use this task type to have workers identify and locate objects in a sequence of video frames (images extracted from a video) using bounding boxes. For example, you can use this task to ask workers to identify and localize various objects in a series of video frames, such as cars, bikes, and pedestrians.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-VideoObjectDetection

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-VideoObjectDetection

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-VideoObjectDetection

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-VideoObjectDetection

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-VideoObjectDetection

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-VideoObjectDetection

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-VideoObjectDetection

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-VideoObjectDetection

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-VideoObjectDetection

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-VideoObjectDetection

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-VideoObjectDetection

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-VideoObjectDetection

Video Frame Object Tracking - Use this task type to have workers track the movement of objects in a sequence of video frames (images extracted from a video) using bounding boxes. For example, you can use this task to ask workers to track the movement of objects, such as cars, bikes, and pedestrians.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-VideoObjectTracking

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-VideoObjectTracking

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-VideoObjectTracking

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-VideoObjectTracking

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-VideoObjectTracking

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-VideoObjectTracking

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-VideoObjectTracking

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-VideoObjectTracking

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-VideoObjectTracking

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-VideoObjectTracking

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-VideoObjectTracking

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-VideoObjectTracking

3D Point Cloud Modalities

Use the following pre-annotation lambdas for 3D point cloud labeling modality tasks. See 3D Point Cloud Task types to learn more.

3D Point Cloud Object Detection - Use this task type when you want workers to classify objects in a 3D point cloud by drawing 3D cuboids around objects. For example, you can use this task type to ask workers to identify different types of objects in a point cloud, such as cars, bikes, and pedestrians.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-3DPointCloudObjectDetection

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-3DPointCloudObjectDetection

3D Point Cloud Object Tracking - Use this task type when you want workers to draw 3D cuboids around objects that appear in a sequence of 3D point cloud frames. For example, you can use this task type to ask workers to track the movement of vehicles across multiple point cloud frames.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-3DPointCloudObjectTracking

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-3DPointCloudObjectTracking

3D Point Cloud Semantic Segmentation - Use this task type when you want workers to create point-level semantic segmentation masks by painting objects in a 3D point cloud using different colors, where each color is assigned to one of the classes you specify.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-3DPointCloudSemanticSegmentation

Use the following ARNs for Label Verification and Adjustment Jobs

Use label verification and adjustment jobs to review and adjust labels. To learn more, see Verify and Adjust Labels.

Bounding box verification - Uses a variant of the Expectation Maximization approach to estimate the true class of verification judgment for bounding box labels based on annotations from individual workers.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-VerificationBoundingBox

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-VerificationBoundingBox

Bounding box adjustment - Finds the most similar boxes from different workers based on the Jaccard index of the adjusted annotations.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-AdjustmentBoundingBox

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-AdjustmentBoundingBox

Semantic segmentation verification - Uses a variant of the Expectation Maximization approach to estimate the true class of verification judgment for semantic segmentation labels based on annotations from individual workers.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-VerificationSemanticSegmentation

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-VerificationSemanticSegmentation

Semantic segmentation adjustment - Treats each pixel in an image as a multi-class classification and treats adjusted pixel annotations from workers as "votes" for the correct label.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-AdjustmentSemanticSegmentation

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-AdjustmentSemanticSegmentation

Video Frame Object Detection Adjustment - Use this task type when you want workers to adjust bounding boxes that workers have added to video frames to classify and localize objects in a sequence of video frames.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-AdjustmentVideoObjectDetection

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-AdjustmentVideoObjectDetection

Video Frame Object Tracking Adjustment - Use this task type when you want workers to adjust bounding boxes that workers have added to video frames to track object movement across a sequence of video frames.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-AdjustmentVideoObjectTracking

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-AdjustmentVideoObjectTracking

3D point cloud object detection adjustment - Adjust 3D cuboids in a point cloud frame.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-Adjustment3DPointCloudObjectDetection

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-Adjustment3DPointCloudObjectDetection

3D point cloud object tracking adjustment - Adjust 3D cuboids across a sequence of point cloud frames.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-Adjustment3DPointCloudObjectTracking

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-Adjustment3DPointCloudObjectTracking

3D point cloud semantic segmentation adjustment - Adjust semantic segmentation masks in a 3D point cloud.

  • arn:aws:lambda:us-east-1:432418664414:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:us-east-2:266458841044:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:us-west-2:081040173940:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:eu-west-1:568282634449:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-south-1:565803892007:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:eu-central-1:203001061592:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:eu-west-2:487402164563:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-Adjustment3DPointCloudSemanticSegmentation

  • arn:aws:lambda:ca-central-1:918755190332:function:PRE-Adjustment3DPointCloudSemanticSegmentation

Source

pub fn task_keywords(&self) -> &[String]

Keywords used to describe the task so that workers on Amazon Mechanical Turk can discover the task.

If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .task_keywords.is_none().
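
For example, a caller can tell an empty keyword list that was actually returned apart from the empty default slice by combining the accessor with the raw field, as suggested above. A minimal sketch, assuming the type is exported at aws_sdk_sagemaker::types as in current SDK releases:

use aws_sdk_sagemaker::types::HumanTaskConfig;

// Sketch: telling "no keywords were sent" apart from the empty default slice.
fn print_keywords(config: &HumanTaskConfig) {
    let keywords: &[String] = config.task_keywords();
    if config.task_keywords.is_none() {
        println!("no keywords were returned; the accessor yields an empty default slice");
    } else {
        println!("task keywords: {:?}", keywords);
    }
}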

Source

pub fn task_title(&self) -> Option<&str>

A title for the task for your human workers.

Source

pub fn task_description(&self) -> Option<&str>

A description of the task for your human workers.

Source

pub fn number_of_human_workers_per_data_object(&self) -> Option<i32>

The number of human workers that will label an object.

Source

pub fn task_time_limit_in_seconds(&self) -> Option<i32>

The amount of time that a worker has to complete a task.

If you create a custom labeling job, the maximum value for this parameter is 8 hours (28,800 seconds).

If you create a labeling job using a built-in task type, the maximum for this parameter depends on the task type you use:

  • For image and text labeling jobs, the maximum is 8 hours (28,800 seconds).

  • For 3D point cloud and video frame labeling jobs, the maximum is 30 days (2,592,000 seconds) for non-AL mode. For most users, the maximum is also 30 days (see the unit-conversion check after this list).
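
The parenthetical second values above are plain unit conversions; a quick sketch to verify them (nothing here comes from the API itself):

// Unit-conversion check for the limits quoted above.
let eight_hours_secs = 8 * 60 * 60;        // 28_800
let thirty_days_secs = 30 * 24 * 60 * 60;  // 2_592_000
assert_eq!(eight_hours_secs, 28_800);
assert_eq!(thirty_days_secs, 2_592_000);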

Source

pub fn task_availability_lifetime_in_seconds(&self) -> Option<i32>

The length of time that a task remains available for labeling by human workers. The default and maximum values for this parameter depend on the type of workforce you use.

  • If you choose the Amazon Mechanical Turk workforce, the maximum is 12 hours (43,200 seconds). The default is 6 hours (21,600 seconds).

  • If you choose a private or vendor workforce, the default value is 30 days (2,592,000 seconds) for non-AL mode. For most users, the maximum is also 30 days.

Source

pub fn max_concurrent_task_count(&self) -> Option<i32>

Defines the maximum number of data objects that can be labeled by human workers at the same time. Also referred to as batch size. Each object may have more than one worker at one time. The default value is 1000 objects. To increase the maximum value to 5000 objects, contact Amazon Web Services Support.

Source

pub fn annotation_consolidation_config(&self) -> Option<&AnnotationConsolidationConfig>

Configures how labels are consolidated across human workers.

Source

pub fn public_workforce_task_price(&self) -> Option<&PublicWorkforceTaskPrice>

The price that you pay for each task performed by an Amazon Mechanical Turk worker.

Source§

impl HumanTaskConfig

Source

pub fn builder() -> HumanTaskConfigBuilder

Creates a new builder-style object to manufacture HumanTaskConfig.
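
A minimal builder sketch follows. It assumes a current SDK release (type exported at aws_sdk_sagemaker::types, and a fallible build() because several members are required); the ARN and strings are placeholders only, and a real configuration would also set ui_config, annotation_consolidation_config, and any other required members described above before calling build().

use aws_sdk_sagemaker::types::HumanTaskConfig;

// Sketch only: assembling a HumanTaskConfig with the builder.
// Assumption: in current SDK releases build() returns Result<_, BuildError>;
// this sketch omits ui_config and annotation_consolidation_config, so build()
// will report any required member that is still unset.
fn example_config() -> Result<HumanTaskConfig, Box<dyn std::error::Error>> {
    let config = HumanTaskConfig::builder()
        // Placeholder ARN for illustration only.
        .workteam_arn("arn:aws:sagemaker:us-east-1:111122223333:workteam/private-crowd/example-team")
        .task_title("Draw bounding boxes around vehicles")
        .task_description("Draw a tight box around every vehicle in the image.")
        .number_of_human_workers_per_data_object(3)
        .task_time_limit_in_seconds(3_600)
        .task_keywords("images")          // each call appends one keyword
        .task_keywords("bounding boxes")
        .build()?;
    Ok(config)
}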

Trait Implementations§

Source§

impl Clone for HumanTaskConfig

Source§

fn clone(&self) -> HumanTaskConfig

Returns a duplicate of the value. Read more
1.0.0 · Source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
Source§

impl Debug for HumanTaskConfig

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl PartialEq for HumanTaskConfig

Source§

fn eq(&self, other: &HumanTaskConfig) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
Source§

impl StructuralPartialEq for HumanTaskConfig

Auto Trait Implementations§

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> CloneToUninit for T
where T: Clone,

Source§

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T> Instrument for T

Source§

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more
Source§

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper. Read more
Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> IntoEither for T

Source§

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

impl<Unshared, Shared> IntoShared<Shared> for Unshared
where Shared: FromUnshared<Unshared>,

Source§

fn into_shared(self) -> Shared

Creates a shared type from an unshared type.
Source§

impl<T> Paint for T
where T: ?Sized,

Source§

fn fg(&self, value: Color) -> Painted<&T>

Returns a styled value derived from self with the foreground set to value.

This method should be used rarely. Instead, prefer to use color-specific builder methods like red() and green(), which have the same functionality but are pithier.

§Example

Set foreground color to white using fg():

use yansi::{Paint, Color};

painted.fg(Color::White);

Set foreground color to white using white().

use yansi::Paint;

painted.white();
Source§

fn primary(&self) -> Painted<&T>

Returns self with the fg() set to [Color :: Primary].

§Example
println!("{}", value.primary());
Source§

fn fixed(&self, color: u8) -> Painted<&T>

Returns self with the fg() set to [Color :: Fixed].

§Example
println!("{}", value.fixed(color));
Source§

fn rgb(&self, r: u8, g: u8, b: u8) -> Painted<&T>

Returns self with the fg() set to [Color :: Rgb].

§Example
println!("{}", value.rgb(r, g, b));
Source§

fn black(&self) -> Painted<&T>

Returns self with the fg() set to [Color :: Black].

§Example
println!("{}", value.black());
Source§

fn red(&self) -> Painted<&T>

Returns self with the fg() set to [Color :: Red].

§Example
println!("{}", value.red());
Source§

fn green(&self) -> Painted<&T>

Returns self with the fg() set to [Color :: Green].

§Example
println!("{}", value.green());
Source§

fn yellow(&self) -> Painted<&T>

Returns self with the fg() set to [Color :: Yellow].

§Example
println!("{}", value.yellow());
Source§

fn blue(&self) -> Painted<&T>

Returns self with the fg() set to [Color :: Blue].

§Example
println!("{}", value.blue());
Source§

fn magenta(&self) -> Painted<&T>

Returns self with the fg() set to [Color :: Magenta].

§Example
println!("{}", value.magenta());
Source§

fn cyan(&self) -> Painted<&T>

Returns self with the fg() set to [Color :: Cyan].

§Example
println!("{}", value.cyan());
Source§

fn white(&self) -> Painted<&T>

Returns self with the fg() set to [Color :: White].

§Example
println!("{}", value.white());
Source§

fn bright_black(&self) -> Painted<&T>

Returns self with the fg() set to [Color :: BrightBlack].

§Example
println!("{}", value.bright_black());
Source§

fn bright_red(&self) -> Painted<&T>

Returns self with the fg() set to [Color :: BrightRed].

§Example
println!("{}", value.bright_red());
Source§

fn bright_green(&self) -> Painted<&T>

Returns self with the fg() set to [Color :: BrightGreen].

§Example
println!("{}", value.bright_green());
Source§

fn bright_yellow(&self) -> Painted<&T>

Returns self with the fg() set to [Color :: BrightYellow].

§Example
println!("{}", value.bright_yellow());
Source§

fn bright_blue(&self) -> Painted<&T>

Returns self with the fg() set to [Color :: BrightBlue].

§Example
println!("{}", value.bright_blue());
Source§

fn bright_magenta(&self) -> Painted<&T>

Returns self with the fg() set to [Color :: BrightMagenta].

§Example
println!("{}", value.bright_magenta());
Source§

fn bright_cyan(&self) -> Painted<&T>

Returns self with the fg() set to [Color :: BrightCyan].

§Example
println!("{}", value.bright_cyan());
Source§

fn bright_white(&self) -> Painted<&T>

Returns self with the fg() set to [Color :: BrightWhite].

§Example
println!("{}", value.bright_white());
Source§

fn bg(&self, value: Color) -> Painted<&T>

Returns a styled value derived from self with the background set to value.

This method should be used rarely. Instead, prefer to use color-specific builder methods like on_red() and on_green(), which have the same functionality but are pithier.

§Example

Set background color to red using bg():

use yansi::{Paint, Color};

painted.bg(Color::Red);

Set background color to red using on_red().

use yansi::Paint;

painted.on_red();
Source§

fn on_primary(&self) -> Painted<&T>

Returns self with the bg() set to [Color :: Primary].

§Example
println!("{}", value.on_primary());
Source§

fn on_fixed(&self, color: u8) -> Painted<&T>

Returns self with the bg() set to [Color :: Fixed].

§Example
println!("{}", value.on_fixed(color));
Source§

fn on_rgb(&self, r: u8, g: u8, b: u8) -> Painted<&T>

Returns self with the bg() set to [Color :: Rgb].

§Example
println!("{}", value.on_rgb(r, g, b));
Source§

fn on_black(&self) -> Painted<&T>

Returns self with the bg() set to [Color :: Black].

§Example
println!("{}", value.on_black());
Source§

fn on_red(&self) -> Painted<&T>

Returns self with the bg() set to [Color :: Red].

§Example
println!("{}", value.on_red());
Source§

fn on_green(&self) -> Painted<&T>

Returns self with the bg() set to [Color :: Green].

§Example
println!("{}", value.on_green());
Source§

fn on_yellow(&self) -> Painted<&T>

Returns self with the bg() set to [Color :: Yellow].

§Example
println!("{}", value.on_yellow());
Source§

fn on_blue(&self) -> Painted<&T>

Returns self with the bg() set to [Color :: Blue].

§Example
println!("{}", value.on_blue());
Source§

fn on_magenta(&self) -> Painted<&T>

Returns self with the bg() set to [Color :: Magenta].

§Example
println!("{}", value.on_magenta());
Source§

fn on_cyan(&self) -> Painted<&T>

Returns self with the bg() set to [Color :: Cyan].

§Example
println!("{}", value.on_cyan());
Source§

fn on_white(&self) -> Painted<&T>

Returns self with the bg() set to [Color :: White].

§Example
println!("{}", value.on_white());
Source§

fn on_bright_black(&self) -> Painted<&T>

Returns self with the bg() set to [Color :: BrightBlack].

§Example
println!("{}", value.on_bright_black());
Source§

fn on_bright_red(&self) -> Painted<&T>

Returns self with the bg() set to [Color :: BrightRed].

§Example
println!("{}", value.on_bright_red());
Source§

fn on_bright_green(&self) -> Painted<&T>

Returns self with the bg() set to [Color :: BrightGreen].

§Example
println!("{}", value.on_bright_green());
Source§

fn on_bright_yellow(&self) -> Painted<&T>

Returns self with the bg() set to [Color :: BrightYellow].

§Example
println!("{}", value.on_bright_yellow());
Source§

fn on_bright_blue(&self) -> Painted<&T>

Returns self with the bg() set to [Color :: BrightBlue].

§Example
println!("{}", value.on_bright_blue());
Source§

fn on_bright_magenta(&self) -> Painted<&T>

Returns self with the bg() set to [Color :: BrightMagenta].

§Example
println!("{}", value.on_bright_magenta());
Source§

fn on_bright_cyan(&self) -> Painted<&T>

Returns self with the bg() set to [Color :: BrightCyan].

§Example
println!("{}", value.on_bright_cyan());
Source§

fn on_bright_white(&self) -> Painted<&T>

Returns self with the bg() set to [Color :: BrightWhite].

§Example
println!("{}", value.on_bright_white());
Source§

fn attr(&self, value: Attribute) -> Painted<&T>

Enables the styling Attribute value.

This method should be used rarely. Instead, prefer to use attribute-specific builder methods like bold() and underline(), which have the same functionality but are pithier.

§Example

Make text bold using attr():

use yansi::{Paint, Attribute};

painted.attr(Attribute::Bold);

Make text bold using bold().

use yansi::Paint;

painted.bold();
Source§

fn bold(&self) -> Painted<&T>

Returns self with the attr() set to [Attribute :: Bold].

§Example
println!("{}", value.bold());
Source§

fn dim(&self) -> Painted<&T>

Returns self with the attr() set to [Attribute :: Dim].

§Example
println!("{}", value.dim());
Source§

fn italic(&self) -> Painted<&T>

Returns self with the attr() set to [Attribute :: Italic].

§Example
println!("{}", value.italic());
Source§

fn underline(&self) -> Painted<&T>

Returns self with the attr() set to [Attribute :: Underline].

§Example
println!("{}", value.underline());

Returns self with the attr() set to [Attribute :: Blink].

§Example
println!("{}", value.blink());

Returns self with the attr() set to [Attribute :: RapidBlink].

§Example
println!("{}", value.rapid_blink());
Source§

fn invert(&self) -> Painted<&T>

Returns self with the attr() set to [Attribute :: Invert].

§Example
println!("{}", value.invert());
Source§

fn conceal(&self) -> Painted<&T>

Returns self with the attr() set to [Attribute :: Conceal].

§Example
println!("{}", value.conceal());
Source§

fn strike(&self) -> Painted<&T>

Returns self with the attr() set to [Attribute :: Strike].

§Example
println!("{}", value.strike());
Source§

fn quirk(&self, value: Quirk) -> Painted<&T>

Enables the yansi Quirk value.

This method should be used rarely. Instead, prefer to use quirk-specific builder methods like mask() and wrap(), which have the same functionality but are pithier.

§Example

Enable wrapping using .quirk():

use yansi::{Paint, Quirk};

painted.quirk(Quirk::Wrap);

Enable wrapping using wrap().

use yansi::Paint;

painted.wrap();
Source§

fn mask(&self) -> Painted<&T>

Returns self with the quirk() set to [Quirk :: Mask].

§Example
println!("{}", value.mask());
Source§

fn wrap(&self) -> Painted<&T>

Returns self with the quirk() set to [Quirk :: Wrap].

§Example
println!("{}", value.wrap());
Source§

fn linger(&self) -> Painted<&T>

Returns self with the quirk() set to [Quirk :: Linger].

§Example
println!("{}", value.linger());
Source§

fn clear(&self) -> Painted<&T>

👎Deprecated since 1.0.1: renamed to resetting() due to conflicts with Vec::clear(). The clear() method will be removed in a future release.

Returns self with the quirk() set to [Quirk :: Clear].

§Example
println!("{}", value.clear());
Source§

fn resetting(&self) -> Painted<&T>

Returns self with the quirk() set to [Quirk :: Resetting].

§Example
println!("{}", value.resetting());
Source§

fn bright(&self) -> Painted<&T>

Returns self with the quirk() set to [Quirk :: Bright].

§Example
println!("{}", value.bright());
Source§

fn on_bright(&self) -> Painted<&T>

Returns self with the quirk() set to [Quirk :: OnBright].

§Example
println!("{}", value.on_bright());
Source§

fn whenever(&self, value: Condition) -> Painted<&T>

Conditionally enable styling based on whether the Condition value applies. Replaces any previous condition.

See the crate level docs for more details.

§Example

Enable styling painted only when both stdout and stderr are TTYs:

use yansi::{Paint, Condition};

painted.red().on_yellow().whenever(Condition::STDOUTERR_ARE_TTY);
Source§

fn new(self) -> Painted<Self>
where Self: Sized,

Create a new Painted with a default Style. Read more
Source§

fn paint<S>(&self, style: S) -> Painted<&Self>
where S: Into<Style>,

Apply a style wholesale to self. Any previous style is replaced. Read more
Source§

impl<T> Same for T

Source§

type Output = T

Should always be Self
Source§

impl<T> ToOwned for T
where T: Clone,

Source§

type Owned = T

The resulting type after obtaining ownership.
Source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
Source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
Source§

impl<T> WithSubscriber for T

Source§

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
Source§

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more
Source§

impl<T> ErasedDestructor for T
where T: 'static,