pub struct MultiHeadAttention;
Multi-head attention is an evolution of the scaled dot-product attention mechanism. It allows the model to jointly attend to information from different representation subspaces at different positions.
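As a rough illustration of the underlying mechanism, the sketch below implements single-head scaled dot-product attention, `softmax(Q·Kᵀ/√d_k)·V`, in plain Rust; multi-head attention runs several of these in parallel over learned linear projections of the inputs and concatenates the results. This is a hypothetical minimal example, not this crate's API.

```rust
// Hypothetical sketch: scaled dot-product attention over row-major
// [seq_len][d_k] matrices. Multi-head attention applies this per head
// on projected subspaces, then concatenates the head outputs.
fn softmax(row: &mut [f32]) {
    // Subtract the max for numerical stability before exponentiating.
    let max = row.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let sum: f32 = row.iter().map(|x| (x - max).exp()).sum();
    for x in row.iter_mut() {
        *x = (*x - max).exp() / sum;
    }
}

/// attention(Q, K, V) = softmax(Q Kᵀ / √d_k) V
fn scaled_dot_product_attention(
    q: &[Vec<f32>],
    k: &[Vec<f32>],
    v: &[Vec<f32>],
) -> Vec<Vec<f32>> {
    let d_k = q[0].len() as f32;
    let mut out = Vec::with_capacity(q.len());
    for qi in q {
        // scores[j] = (qi · k[j]) / √d_k
        let mut scores: Vec<f32> = k
            .iter()
            .map(|kj| qi.iter().zip(kj).map(|(a, b)| a * b).sum::<f32>() / d_k.sqrt())
            .collect();
        softmax(&mut scores);
        // Output row is the attention-weighted sum of the value rows.
        let mut row = vec![0.0; v[0].len()];
        for (w, vj) in scores.iter().zip(v) {
            for (r, x) in row.iter_mut().zip(vj) {
                *r += w * x;
            }
        }
        out.push(row);
    }
    out
}

fn main() {
    // Self-attention over two orthogonal token embeddings.
    let x = vec![vec![1.0f32, 0.0], vec![0.0, 1.0]];
    let out = scaled_dot_product_attention(&x, &x, &x);
    println!("{:?}", out);
}
```

Each output row is a convex combination of the value rows, so its entries sum to 1 here, and each query attends most strongly to its matching key.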
Auto Trait Implementations
impl Freeze for MultiHeadAttention
impl RefUnwindSafe for MultiHeadAttention
impl Send for MultiHeadAttention
impl Sync for MultiHeadAttention
impl Unpin for MultiHeadAttention
impl UnwindSafe for MultiHeadAttention
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.