pub struct LinearAttention { /* private fields */ }
Linear attention with random feature maps
Uses the kernel trick with random feature maps to achieve O(n · k · d) complexity instead of the O(n² · d) of standard softmax attention, where n is the number of key/value pairs, k is the number of random features, and d is the dimension.
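The reordering behind that complexity bound can be sketched directly: once keys and values are pushed through a feature map φ, their summary statistics are built once and shared by every query. The sketch below is illustrative only; the caller-supplied `phi` stands in for this crate's random feature maps and the function is not the crate's implementation.

```rust
// Illustrative sketch of the kernel-trick reordering, not this crate's code.
// With a feature map phi: R^d -> R^k, the key/value summaries
//     S = sum_i phi(k_i) v_i^T   (k x d)    z = sum_i phi(k_i)   (k)
// are built once in O(n * k * d) and reused by every query, so attending with
// all n queries costs O(n * k * d) instead of the O(n^2 * d) of softmax attention.
fn linear_attention_sketch(
    phi: impl Fn(&[f32]) -> Vec<f32>, // stand-in for the crate's random feature map
    queries: &[&[f32]],
    keys: &[&[f32]],
    values: &[&[f32]],
) -> Vec<Vec<f32>> {
    let k = phi(keys[0]).len();
    let d = values[0].len();
    let mut s = vec![vec![0.0f32; d]; k]; // S = sum_i phi(k_i) v_i^T
    let mut z = vec![0.0f32; k];          // z = sum_i phi(k_i)
    for (&key, &value) in keys.iter().zip(values.iter()) {
        let fk = phi(key);
        for (j, &fj) in fk.iter().enumerate() {
            z[j] += fj;
            for (c, &v) in value.iter().enumerate() {
                s[j][c] += fj * v;
            }
        }
    }
    queries
        .iter()
        .map(|&q| {
            let fq = phi(q);
            // The normalizer phi(q) . z plays the role of the softmax denominator.
            let denom = fq.iter().zip(&z).map(|(a, b)| a * b).sum::<f32>().max(1e-9);
            let mut out = vec![0.0f32; d];
            for (j, &fj) in fq.iter().enumerate() {
                for (c, o) in out.iter_mut().enumerate() {
                    *o += fj * s[j][c] / denom;
                }
            }
            out
        })
        .collect()
}
```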
Implementations
impl LinearAttention
pub fn with_kernel(dim: usize, num_features: usize, kernel: KernelType) -> Self
Creates a LinearAttention with the given dimension, number of random features, and kernel type.
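A hedged construction example; the KernelType variant named here is an assumption for illustration, since the crate's actual variants are not listed on this page.

```rust
// Hypothetical usage: dim = 64, 128 random features.
// `KernelType::Exponential` is an assumed variant, not confirmed by this page.
let attn = LinearAttention::with_kernel(64, 128, KernelType::Exponential);
```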
Trait Implementations
impl Attention for LinearAttention
fn compute(
    &self,
    query: &[f32],
    keys: &[&[f32]],
    values: &[&[f32]],
) -> AttentionResult<Vec<f32>>
Computes attention over the given query, keys, and values.
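A minimal end-to-end sketch of calling compute through the Attention trait. It assumes AttentionResult is a Result alias, picks an arbitrary KernelType variant, and uses illustrative dimensions; none of these are prescribed by the crate.

```rust
// Sketch only: the kernel variant and sizes are assumptions, and the import
// below stands in for whatever paths this crate actually exports.
// use the_crate::{Attention, LinearAttention, KernelType};

fn demo() {
    // dim = 4, 16 random features; `Exponential` is an assumed variant.
    let attn = LinearAttention::with_kernel(4, 16, KernelType::Exponential);

    let query = [0.1_f32, 0.2, 0.3, 0.4];
    let key_rows = [[1.0_f32, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]];
    let val_rows = [[0.5_f32, 0.5, 0.0, 0.0], [0.0, 0.0, 0.5, 0.5]];

    // The trait takes slices of slices, so borrow each row.
    let keys: Vec<&[f32]> = key_rows.iter().map(|r| r.as_slice()).collect();
    let values: Vec<&[f32]> = val_rows.iter().map(|r| r.as_slice()).collect();

    match attn.compute(&query, &keys, &values) {
        Ok(output) => println!("attended vector: {output:?}"),
        Err(_) => eprintln!("attention computation failed"),
    }
}
```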
Auto Trait Implementations
impl Freeze for LinearAttention
impl RefUnwindSafe for LinearAttention
impl Send for LinearAttention
impl Sync for LinearAttention
impl Unpin for LinearAttention
impl UnwindSafe for LinearAttention
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.