pub struct ModerationCategories {
    pub hate: bool,
    pub hate_threatening: bool,
    pub self_harm: bool,
    pub sexual: bool,
    pub sexual_minors: bool,
    pub violence: bool,
    pub violence_graphic: bool,
}
A breakdown of the moderation categories.
Each field corresponds to a distinct policy category recognized by OpenAI's moderation model. If true, the text has been flagged under that category.
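A minimal deserialization sketch (assuming ModerationCategories is in scope via its crate path, and that the JSON keys match the field names shown below; the struct's actual serde rename attributes are not shown on this page):

fn example() -> Result<(), serde_json::Error> {
    // Hypothetical payload; the real API may spell keys differently
    // (e.g. with slashes), depending on this struct's serde attributes.
    let json = r#"{
        "hate": false,
        "hate_threatening": false,
        "self_harm": false,
        "sexual": false,
        "sexual_minors": false,
        "violence": true,
        "violence_graphic": false
    }"#;
    let categories: ModerationCategories = serde_json::from_str(json)?;
    assert!(categories.violence);
    Ok(())
}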
Fields
hate: bool
Hateful content directed towards a protected group or individual.
hate_threatening: bool
Hateful content with threats.
self_harm: bool
Content about self-harm or suicide.
sexual: bool
If true, the text includes sexual content or references.
sexual_minors: bool
If true, the text includes sexual content involving minors.
violence: bool
If true, the text includes violent content or context.
violence_graphic: bool
If true, the text includes particularly graphic or gory violence.
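A hypothetical helper (not part of this crate) that maps the fields above to a list of flagged category names:

fn flagged_categories(c: &ModerationCategories) -> Vec<&'static str> {
    // Labels reuse the Rust field names; OpenAI's own category labels
    // may be spelled differently (e.g. with slashes).
    let mut flagged = Vec::new();
    if c.hate { flagged.push("hate"); }
    if c.hate_threatening { flagged.push("hate_threatening"); }
    if c.self_harm { flagged.push("self_harm"); }
    if c.sexual { flagged.push("sexual"); }
    if c.sexual_minors { flagged.push("sexual_minors"); }
    if c.violence { flagged.push("violence"); }
    if c.violence_graphic { flagged.push("violence_graphic"); }
    flagged
}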
Trait Implementations
impl Debug for ModerationCategories
impl<'de> Deserialize<'de> for ModerationCategories
fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>
where
    __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
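Because Deserialize is implemented, the struct can also be nested inside a caller-defined response type; a sketch under the assumption that the surrounding response carries a top-level flagged field (the crate's own response types are not shown here):

use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct ModerationResultSketch {
    flagged: bool,
    categories: ModerationCategories,
}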
Auto Trait Implementations
impl Freeze for ModerationCategories
impl RefUnwindSafe for ModerationCategories
impl Send for ModerationCategories
impl Sync for ModerationCategories
impl Unpin for ModerationCategories
impl UnwindSafe for ModerationCategories
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.