Struct async_openai::Moderations
pub struct Moderations<'c, C: Config> { /* private fields */ }
Given some input text, outputs whether the model classifies it as potentially harmful across several categories.
Related guide: Moderations
Implementations
impl<'c, C: Config> Moderations<'c, C>
pub fn new(client: &'c Client<C>) -> Self
pub async fn create(
    &self,
    request: CreateModerationRequest,
) -> Result<CreateModerationResponse, OpenAIError>
Classifies if text is potentially harmful.
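A minimal usage sketch of the `create` method shown above. It assumes a `tokio` async runtime and that `OPENAI_API_KEY` is set in the environment (the default credential source for `OpenAIConfig`); the input string is a placeholder.

```rust
use async_openai::{types::CreateModerationRequestArgs, Client};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Client::new() reads OPENAI_API_KEY from the environment by default.
    let client = Client::new();

    // Build a CreateModerationRequest with the crate's builder type.
    let request = CreateModerationRequestArgs::default()
        .input("sample text to classify") // placeholder input
        .build()?;

    // client.moderations() returns the `Moderations` group documented here.
    let response = client.moderations().create(request).await?;

    // Each result carries a `flagged` bool plus per-category breakdowns.
    for result in response.results {
        println!("flagged: {}", result.flagged);
    }
    Ok(())
}
```

The method consumes a `CreateModerationRequest` by value and returns `Result<CreateModerationResponse, OpenAIError>`, so transport and deserialization failures surface through the `?` operator.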
Auto Trait Implementations
impl<'c, C> Freeze for Moderations<'c, C>
impl<'c, C> !RefUnwindSafe for Moderations<'c, C>
impl<'c, C> Send for Moderations<'c, C> where C: Sync
impl<'c, C> Sync for Moderations<'c, C> where C: Sync
impl<'c, C> Unpin for Moderations<'c, C>
impl<'c, C> !UnwindSafe for Moderations<'c, C>
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.