Struct aws_sdk_rekognition::model::ModerationLabel
#[non_exhaustive]
pub struct ModerationLabel { /* private fields */ }
Provides information about a single type of inappropriate, unwanted, or offensive content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Content moderation in the Amazon Rekognition Developer Guide.
Implementations
impl ModerationLabel
pub fn confidence(&self) -> Option<f32>
Specifies the confidence that Amazon Rekognition has that the label has been correctly identified.
If you don't specify the MinConfidence parameter in the call to DetectModerationLabels, the operation returns labels with a confidence value greater than or equal to 50 percent.
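As a sketch of how this accessor might be used on the client side, the hypothetical helper below (labels_above is not part of the SDK) keeps only the names of labels whose confidence meets a threshold, mirroring the MinConfidence filter:

use aws_sdk_rekognition::model::ModerationLabel;

// Hypothetical helper: return the names of labels whose confidence
// meets the given threshold; labels without a confidence are skipped.
fn labels_above(labels: &[ModerationLabel], threshold: f32) -> Vec<&str> {
    labels
        .iter()
        .filter(|l| l.confidence().map_or(false, |c| c >= threshold))
        .filter_map(|l| l.name())
        .collect()
}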
pub fn name(&self) -> Option<&str>
The label name for the type of unsafe content detected in the image.
pub fn parent_name(&self) -> Option<&str>
The name for the parent label. Labels at the top level of the hierarchy have the parent label "".
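A minimal sketch of walking the label taxonomy with these accessors; print_hierarchy is an illustrative helper, not part of the SDK:

use aws_sdk_rekognition::model::ModerationLabel;

// Illustrative helper: print each label with its parent, treating an
// empty parent name as a top-level category.
fn print_hierarchy(labels: &[ModerationLabel]) {
    for label in labels {
        let name = label.name().unwrap_or("<unknown>");
        match label.parent_name() {
            Some("") | None => println!("{name} (top-level)"),
            Some(parent) => println!("{name} (child of {parent})"),
        }
    }
}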
impl ModerationLabel
pub fn builder() -> Builder
Creates a new builder-style object to manufacture ModerationLabel.
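A minimal sketch of constructing a ModerationLabel by hand, for example as a test fixture; it assumes the generated setters name, confidence, and parent_name, and that build() returns the struct directly on this SDK version. Field values are illustrative:

use aws_sdk_rekognition::model::ModerationLabel;

fn example_label() -> ModerationLabel {
    // Illustrative values, not taken from a real API response.
    ModerationLabel::builder()
        .name("Explicit Nudity")
        .confidence(92.5)
        .parent_name("") // top-level labels have an empty parent name
        .build()
}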
Trait Implementations
impl Clone for ModerationLabel
fn clone(&self) -> ModerationLabel
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.