Enum TensorOp
pub enum TensorOp {
    Atom {
        pipeline: Arc<CachedPipeline>,
        bindings: Vec<Arc<BindGroup>>,
        dispatch: [u32; 3],
    },
    List(Vec<TensorOp>),
    Sep,
}

Variants

Atom

Fields

pipeline: Arc<CachedPipeline>
bindings: Vec<Arc<BindGroup>>
dispatch: [u32; 3]

List(Vec<TensorOp>)

Sep

Implementations

impl TensorOp

pub const NF4_BLOCK_SIZE: u32 = 64u32

pub const INT8_BLOCK_SIZE: u32 = 128u32

pub fn empty() -> Self

pub fn softmax( x: &TensorGpu<impl Float, ReadWrite>, ) -> Result<Self, TensorError>

Softmax operator applied on x.
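For reference, the kernel's per-column semantics can be sketched on the CPU in plain Rust. This is an illustration of the math only, not part of this crate's API:

```rust
// CPU reference of a numerically stable softmax over one channel column,
// mirroring what the GPU kernel computes independently per [T, B] column.
fn softmax_ref(x: &[f32]) -> Vec<f32> {
    // Subtract the max before exponentiating for numerical stability.
    let m = x.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let e: Vec<f32> = x.iter().map(|&v| (v - m).exp()).collect();
    let s: f32 = e.iter().sum();
    e.into_iter().map(|v| v / s).collect()
}

fn main() {
    let y = softmax_ref(&[1.0, 2.0, 3.0]);
    // The result is a probability distribution: non-negative, sums to 1.
    assert!((y.iter().sum::<f32>() - 1.0).abs() < 1e-5);
    println!("{y:?}");
}
```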

pub fn embed( tokens: &TensorGpu<u32, ReadWrite>, input: &TensorGpu<f16, ReadWrite>, output: &TensorGpu<impl Float, ReadWrite>, ) -> Result<Self, TensorError>

Embedding on GPU.

  • tokens shape: [T, B].
  • input shape: [C, V].
  • output shape: [C, T, B].
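The shape relationship can be sketched as a plain gather on the CPU. The column-major layout (channel index fastest) assumed here is an illustration, not a statement about the crate's actual buffer layout:

```rust
// CPU sketch of the embedding gather: for each token index in `tokens`
// ([T, B] flattened, values in 0..V), copy column `tok` of the embedding
// table `input` ([C, V], channel-fastest) into `output` ([C, T, B] flattened).
fn embed_ref(tokens: &[u32], input: &[f32], c: usize, output: &mut [f32]) {
    for (i, &tok) in tokens.iter().enumerate() {
        let src = &input[tok as usize * c..(tok as usize + 1) * c];
        output[i * c..(i + 1) * c].copy_from_slice(src);
    }
}

fn main() {
    // V = 3 vocabulary entries, C = 2 channels each.
    let table = [1.0, 2.0, 10.0, 20.0, 100.0, 200.0];
    let tokens = [2u32, 0u32]; // T * B = 2 lookups
    let mut out = [0.0f32; 4];
    embed_ref(&tokens, &table, 2, &mut out);
    assert_eq!(out, [100.0, 200.0, 1.0, 2.0]);
}
```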
pub fn layer_norm( w: &TensorGpu<f16, ReadWrite>, b: &TensorGpu<f16, ReadWrite>, x: &TensorGpu<impl Float, ReadWrite>, eps: f32, ) -> Result<Self, TensorError>

Layer normalization applied on x, with weight w and bias b.

  • x shape: [C, T, B].
  • w shape: [C, 1, 1].
  • b shape: [C, 1, 1].
  • s shape: [4, T, B], mean and inverse std of x.
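A CPU sketch of the normalization applied to one channel column of length C (the GPU kernel additionally stores the mean and inverse std per column; this sketch only produces the normalized output):

```rust
// CPU reference of layer normalization over one channel column:
// y = (x - mean) / sqrt(var + eps) * w + b.
fn layer_norm_ref(x: &[f32], w: &[f32], b: &[f32], eps: f32) -> Vec<f32> {
    let n = x.len() as f32;
    let mean = x.iter().sum::<f32>() / n;
    let var = x.iter().map(|&v| (v - mean) * (v - mean)).sum::<f32>() / n;
    let inv_std = 1.0 / (var + eps).sqrt();
    x.iter()
        .zip(w.iter().zip(b.iter()))
        .map(|(&v, (&wi, &bi))| (v - mean) * inv_std * wi + bi)
        .collect()
}

fn main() {
    let y = layer_norm_ref(&[1.0, 2.0, 3.0, 4.0], &[1.0; 4], &[0.0; 4], 1e-5);
    // With unit weight and zero bias the result is zero-mean.
    assert!(y.iter().sum::<f32>().abs() < 1e-4);
}
```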
pub fn group_norm( w: &TensorGpu<f16, ReadWrite>, b: &TensorGpu<f16, ReadWrite>, x: &TensorGpu<impl Float, ReadWrite>, eps: f32, ) -> Result<Self, TensorError>

Group normalization applied on x, with weight w and bias b.

  • x shape: [S, H, A].
  • w shape: [S, H, 1].
  • b shape: [S, H, 1].
pub fn recenter( x: &TensorGpu<impl Float, ReadWrite>, ) -> Result<Self, TensorError>

Recenter x to be zero-mean.

pub fn rms_norm( w: &TensorGpu<f16, ReadWrite>, b: &TensorGpu<f16, ReadWrite>, x: &TensorGpu<impl Float, ReadWrite>, eps: f32, ) -> Result<Self, TensorError>

Root-mean-square normalization applied on x, with weight w and bias b.

  • x shape: [C, T, B].
  • w shape: [C, 1, 1].
  • b shape: [C, 1, 1].
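A CPU sketch of the per-column math, assuming the usual RMS-norm formulation (normalize by the root mean square instead of subtracting the mean):

```rust
// CPU reference of RMS normalization over one channel column:
// y = x / sqrt(mean(x^2) + eps) * w + b.
fn rms_norm_ref(x: &[f32], w: &[f32], b: &[f32], eps: f32) -> Vec<f32> {
    let ms = x.iter().map(|&v| v * v).sum::<f32>() / x.len() as f32;
    let inv = 1.0 / (ms + eps).sqrt();
    x.iter()
        .zip(w.iter().zip(b.iter()))
        .map(|(&v, (&wi, &bi))| v * inv * wi + bi)
        .collect()
}

fn main() {
    let y = rms_norm_ref(&[3.0, 4.0], &[1.0, 1.0], &[0.0, 0.0], 0.0);
    // rms([3, 4]) = sqrt(25/2); multiplying back by it reproduces x.
    assert!((y[0] * 12.5f32.sqrt() - 3.0).abs() < 1e-4);
}
```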
pub fn l2_norm( x: &TensorGpu<impl Float, ReadWrite>, eps: f32, ) -> Result<Self, TensorError>

L2 normalization applied on x.

  • x shape: [C, T, B].
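A CPU sketch of the per-column math; where exactly eps enters (added to the norm, or to the squared sum) is an assumption of this sketch:

```rust
// CPU reference of L2 normalization over one channel column:
// y = x / (||x||_2 + eps).
fn l2_norm_ref(x: &[f32], eps: f32) -> Vec<f32> {
    let norm = x.iter().map(|&v| v * v).sum::<f32>().sqrt();
    x.iter().map(|&v| v / (norm + eps)).collect()
}

fn main() {
    let y = l2_norm_ref(&[3.0, 4.0], 0.0);
    // ||[3, 4]|| = 5, so the result is [0.6, 0.8].
    assert!((y[0] - 0.6).abs() < 1e-6 && (y[1] - 0.8).abs() < 1e-6);
}
```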
pub fn matmul_vec_fp16<'a, 'b, F0: Float, F1: Float>( matrix: &TensorGpu<f16, ReadWrite>, input: impl Into<TensorGpuView<'a, F0>>, output: impl Into<TensorGpuView<'b, F1>>, act: Activation, sparse: bool, ) -> Result<Self, TensorError>

Fp16 matrix-vector multiplication.

  • matrix shape: [C, R, B].
  • input shape: [C, T, B].
  • output shape: [R, T, B].
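The shape contract for one batch can be sketched on the CPU. The memory layout assumed below (channel index fastest) and the omission of the activation and sparsity options are simplifications of this sketch:

```rust
// CPU reference of one batch of the matrix-vector product:
// for each of the T input columns, output[r] = sum_c matrix[c, r] * input[c].
// `matrix` is [C, R] (c fastest), `input` is [C, T], result is [R, T].
fn matmul_vec_ref(matrix: &[f32], input: &[f32], c: usize, r: usize) -> Vec<f32> {
    let t = input.len() / c;
    let mut out = vec![0.0f32; r * t];
    for ti in 0..t {
        for ri in 0..r {
            let mut acc = 0.0;
            for ci in 0..c {
                acc += matrix[ri * c + ci] * input[ti * c + ci];
            }
            out[ti * r + ri] = acc;
        }
    }
    out
}

fn main() {
    // 2x2 identity matrix times the column [5, 7] gives [5, 7].
    let m = [1.0, 0.0, 0.0, 1.0];
    assert_eq!(matmul_vec_ref(&m, &[5.0, 7.0], 2, 2), vec![5.0, 7.0]);
}
```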
pub fn matmul_vec_int8<'a, 'b, F0: Float, F1: Float>( matrix: &TensorGpu<u8, ReadWrite>, minmax: &TensorGpu<f16, ReadWrite>, input: impl Into<TensorGpuView<'a, F0>>, output: impl Into<TensorGpuView<'b, F1>>, act: Activation, sparse: bool, ) -> Result<Self, TensorError>

Int8 matrix-vector multiplication.

  • matrix shape: [C, R, B].
  • input shape: [C, T, B].
  • output shape: [R, T, B].
pub fn matmul_vec_nf4<'a, 'b, F0: Float, F1: Float>( matrix: &TensorGpu<u8, ReadWrite>, quant: &TensorGpu<f32, Uniform>, absmax: &TensorGpu<f16, ReadWrite>, input: impl Into<TensorGpuView<'a, F0>>, output: impl Into<TensorGpuView<'b, F1>>, act: Activation, sparse: bool, ) -> Result<Self, TensorError>

NFloat4 matrix-vector multiplication.

  • matrix shape: [C, R, B].
  • input shape: [C, T, B].
  • output shape: [R, T, B].
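The NF4 decode step can be sketched on the CPU: each byte of the matrix packs two 4-bit indices into the 16-entry quant lookup table, and every NF4_BLOCK_SIZE (64) weights share one absmax scale. The nibble order below is an assumption of this sketch:

```rust
// Sketch of NF4 dequantization for one block: unpack two 4-bit indices
// per byte, look them up in the 16-entry `quant` table, and rescale by
// the block's stored absmax.
fn dequant_nf4_block(packed: &[u8], quant: &[f32; 16], absmax: f32) -> Vec<f32> {
    let mut out = Vec::with_capacity(packed.len() * 2);
    for &b in packed {
        out.push(quant[(b & 0x0f) as usize] * absmax); // low nibble first
        out.push(quant[(b >> 4) as usize] * absmax); // then high nibble
    }
    out
}

fn main() {
    let mut quant = [0.0f32; 16];
    quant[0] = -1.0;
    quant[15] = 1.0;
    // One byte 0x0f: low nibble 15 -> 1.0, high nibble 0 -> -1.0.
    assert_eq!(dequant_nf4_block(&[0x0f], &quant, 0.5), vec![0.5, -0.5]);
}
```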
pub fn matmul_mat_fp16<'a, 'b, 'c, F0: Float, F1: Float>( matrix: impl Into<TensorGpuView<'c, f16>>, input: impl Into<TensorGpuView<'a, F0>>, output: impl Into<TensorGpuView<'b, F1>>, act: Activation, ) -> Result<Self, TensorError>

Fp16 matrix-matrix multiplication.

  • matrix shape: [K, M, B].
  • input shape: [K, N, B].
  • output shape: [M, N, B].

Note: K, M, and N must each be a multiple of 4.

pub fn matmul_mat_int8<'a, 'b, 'c, F0: Float, F1: Float>( matrix: impl Into<TensorGpuView<'c, u8>>, minmax: &TensorGpu<f16, ReadWrite>, input: impl Into<TensorGpuView<'a, F0>>, output: impl Into<TensorGpuView<'b, F1>>, act: Activation, ) -> Result<Self, TensorError>

Int8 matrix-matrix multiplication.

  • matrix shape: [K, M, B].
  • input shape: [K, N, B].
  • output shape: [M, N, B].

Notes:

  1. K, M, and N must each be a multiple of 4.
  2. The total size of matrix must be a multiple of 128.
pub fn matmul_mat_nf4<'a, 'b, 'c, F0: Float, F1: Float>( matrix: impl Into<TensorGpuView<'c, u8>>, quant: &TensorGpu<f32, Uniform>, absmax: &TensorGpu<f16, ReadWrite>, input: impl Into<TensorGpuView<'a, F0>>, output: impl Into<TensorGpuView<'b, F1>>, act: Activation, ) -> Result<Self, TensorError>

NFloat4 matrix-matrix multiplication.

  • matrix shape: [K, M, B].
  • input shape: [K, N, B].
  • output shape: [M, N, B].

Notes:

  1. K, M, and N must each be a multiple of 8.
  2. The total size of matrix must be a multiple of 256.
pub fn add_activate<'a, 'b, F0: Float, F1: Float>( input: impl Into<TensorGpuView<'a, F0>>, output: impl Into<TensorGpuView<'b, F1>>, act_x: Activation, act_y: Activation, act_out: Activation, ) -> Result<Self, TensorError>

Add input to output.

  • input shape: [C, 1, B] or [C, T, B].
  • output shape: [C, T, B].
  • Activations may be applied to input, output and the final result.
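The [C, 1, B] case broadcasts one input column across all T columns of the output. A CPU sketch of that broadcast (activations omitted for brevity):

```rust
// CPU sketch of the broadcast add for one batch: an input of shape [C, 1]
// is added in place to every one of the T columns of an output [C, T].
fn add_broadcast_ref(input: &[f32], output: &mut [f32]) {
    let c = input.len();
    for col in output.chunks_mut(c) {
        for (o, &i) in col.iter_mut().zip(input) {
            *o += i;
        }
    }
}

fn main() {
    let bias = [1.0f32, 2.0];
    let mut out = [10.0f32, 20.0, 30.0, 40.0]; // C = 2, T = 2
    add_broadcast_ref(&bias, &mut out);
    // Each column gained [1, 2].
    assert_eq!(out, [11.0, 22.0, 31.0, 42.0]);
}
```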
pub fn add<'a, 'b, F0: Float, F1: Float>( input: impl Into<TensorGpuView<'a, F0>>, output: impl Into<TensorGpuView<'b, F1>>, ) -> Result<Self, TensorError>

Add input to output.

  • input shape: [C, 1, B] or [C, T, B].
  • output shape: [C, T, B].
pub fn mul_activate<'a, 'b, F0: Float, F1: Float>( input: impl Into<TensorGpuView<'a, F0>>, output: impl Into<TensorGpuView<'b, F1>>, act_x: Activation, act_y: Activation, act_out: Activation, ) -> Result<Self, TensorError>

Multiply output by input element-wise.

  • input shape: [C, 1, B] or [C, T, B].
  • output shape: [C, T, B].
  • Activations may be applied to input, output and the final result.
pub fn mul<'a, 'b, F0: Float, F1: Float>( input: impl Into<TensorGpuView<'a, F0>>, output: impl Into<TensorGpuView<'b, F1>>, ) -> Result<Self, TensorError>

Multiply output by input element-wise.

  • input shape: [C, 1, B] or [C, T, B].
  • output shape: [C, T, B].
pub fn token_shift<'a, 'b, F: Float>( cursors: &TensorGpu<u32, ReadWrite>, time_mix: impl Into<TensorGpuView<'a, F>>, state: impl Into<TensorGpuView<'b, f32>>, input: &TensorGpu<impl Float, ReadWrite>, output: &TensorGpu<impl Float, ReadWrite>, reversed: bool, ) -> Result<Self, TensorError>

pub fn time_mix_v4<'a, T: Float>( cursors: &TensorGpu<u32, ReadWrite>, time_decay: &TensorGpu<f32, ReadWrite>, time_first: &TensorGpu<f32, ReadWrite>, state: impl Into<TensorGpuView<'a, f32>>, k: &TensorGpu<T, ReadWrite>, v: &TensorGpu<T, ReadWrite>, r: &TensorGpu<T, ReadWrite>, x: &TensorGpu<T, ReadWrite>, ) -> Result<Self, TensorError>

pub fn time_mix_v5<'a, T: Float>( cursors: &TensorGpu<u32, ReadWrite>, time_decay: &TensorGpu<f32, ReadWrite>, time_first: &TensorGpu<f32, ReadWrite>, state: impl Into<TensorGpuView<'a, f32>>, k: &TensorGpu<T, ReadWrite>, v: &TensorGpu<T, ReadWrite>, r: &TensorGpu<T, ReadWrite>, x: &TensorGpu<T, ReadWrite>, ) -> Result<Self, TensorError>

pub fn time_mix_v6<'a, T: Float>( cursors: &TensorGpu<u32, ReadWrite>, time_decay: &TensorGpu<f32, ReadWrite>, time_first: &TensorGpu<f32, ReadWrite>, state: impl Into<TensorGpuView<'a, f32>>, k: &TensorGpu<T, ReadWrite>, v: &TensorGpu<T, ReadWrite>, r: &TensorGpu<T, ReadWrite>, x: &TensorGpu<T, ReadWrite>, ) -> Result<Self, TensorError>

pub fn time_mix_v7<'a, T: Float>( cursors: &TensorGpu<u32, ReadWrite>, state: impl Into<TensorGpuView<'a, f32>>, r: &TensorGpu<T, ReadWrite>, w: &TensorGpu<T, ReadWrite>, n: &TensorGpu<T, ReadWrite>, x: &TensorGpu<T, ReadWrite>, ) -> Result<Self, TensorError>

The V7 WKV kernel.

  • n: Stack of k, v, a, kk.

Note that the state layout is different from the official implementation. Here is an illustration of each head’s layout:

[figure: time-mix-v7 head state layout]

pub fn time_first_v7<T: Float>( u: &TensorGpu<f16, ReadWrite>, r: &TensorGpu<T, ReadWrite>, n: &TensorGpu<T, ReadWrite>, x: &TensorGpu<T, ReadWrite>, ) -> Result<Self, TensorError>

pub fn control_k_v7<'a, 'b, F0: Float, F1: Float>( p: &TensorGpu<f16, ReadWrite>, a: impl Into<TensorGpuView<'a, F0>>, k: impl Into<TensorGpuView<'b, F1>>, ) -> Result<Self, TensorError>

pub fn channel_mix<'a, T: Float>( cursors: &TensorGpu<u32, ReadWrite>, state: impl Into<TensorGpuView<'a, f32>>, r: &TensorGpu<T, ReadWrite>, v: &TensorGpu<T, ReadWrite>, x: &TensorGpu<T, ReadWrite>, ) -> Result<Self, TensorError>

pub fn channel_mix_v7<'a, T: Float>( cursors: &TensorGpu<u32, ReadWrite>, state: impl Into<TensorGpuView<'a, f32>>, v: &TensorGpu<T, ReadWrite>, x: &TensorGpu<T, ReadWrite>, ) -> Result<Self, TensorError>

pub fn activate<'a, F: Float>( x: impl Into<TensorGpuView<'a, F>>, act: Activation, ) -> Result<Self, TensorError>

pub fn blit<'a, 'b, F0: Float, F1: Float>( input: impl Into<TensorGpuView<'a, F0>>, output: impl Into<TensorGpuView<'b, F1>>, ) -> Result<Self, TensorError>

Copy the content of input into output of the same shape.

pub fn broadcast<'a, 'b, F0: Float, F1: Float>( input: impl Into<TensorGpuView<'a, F0>>, output: impl Into<TensorGpuView<'b, F1>>, ) -> Result<Self, TensorError>

Repeat the content of input into output along the token and batch axes.

pub fn transpose<'a, 'b, F0: Float, F1: Float>( input: impl Into<TensorGpuView<'a, F0>>, output: impl Into<TensorGpuView<'b, F1>>, ) -> Result<Self, TensorError>

Swap the token and batch axes.
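For a [C, T, B] tensor this produces [C, B, T]. A CPU sketch of the re-indexing, assuming a flat layout with the channel index fastest (an assumption of this sketch):

```rust
// CPU sketch of the token/batch transpose: [C, T, B] -> [C, B, T] by
// re-indexing flat offsets (c fastest, then the middle axis, then the last).
fn transpose_ref(x: &[f32], c: usize, t: usize, b: usize) -> Vec<f32> {
    let mut y = vec![0.0f32; x.len()];
    for bi in 0..b {
        for ti in 0..t {
            for ci in 0..c {
                y[ti * b * c + bi * c + ci] = x[bi * t * c + ti * c + ci];
            }
        }
    }
    y
}

fn main() {
    // C = 1, T = 2, B = 2: input columns (t0,b0), (t1,b0), (t0,b1), (t1,b1).
    let x = [1.0f32, 2.0, 3.0, 4.0];
    // Output columns become (b0,t0), (b1,t0), (b0,t1), (b1,t1).
    assert_eq!(transpose_ref(&x, 1, 2, 2), vec![1.0, 3.0, 2.0, 4.0]);
}
```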

pub fn blend( factor: &TensorGpu<f32, Uniform>, input: &TensorGpu<impl Float, ReadWrite>, output: &TensorGpu<impl Float, ReadWrite>, ) -> Result<Self, TensorError>

pub fn blend_lora<'a, 'b, 'c>( factor: &TensorGpu<f32, Uniform>, xa: impl Into<TensorGpuView<'a, f16>>, xb: impl Into<TensorGpuView<'b, f16>>, output: impl Into<TensorGpuView<'c, f16>>, ) -> Result<Self, TensorError>

pub fn lerp<'a, 'b, 'c, F0: Float, F1: Float, F2: Float>( input: impl Into<TensorGpuView<'a, F0>>, output: impl Into<TensorGpuView<'b, F1>>, factor: impl Into<TensorGpuView<'c, F2>>, reversed: bool, ) -> Result<Self, TensorError>

pub fn affine( x: &TensorGpu<impl Float, ReadWrite>, scale: f32, bias: f32, ) -> Result<Self, TensorError>

pub fn quantize_mat_int8( input: &TensorGpu<f16, ReadWrite>, minmax: &TensorGpu<f16, ReadWrite>, output: &TensorGpu<u8, ReadWrite>, ) -> Result<Self, TensorError>

pub fn quantize_mat_nf4( input: &TensorGpu<f16, ReadWrite>, quant: &TensorGpu<f32, Uniform>, absmax: &TensorGpu<f16, ReadWrite>, output: &TensorGpu<u8, ReadWrite>, ) -> Result<Self, TensorError>

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more

impl<From, Into> CoHom<From> for Into
where From: Hom<Into>,

fn co_hom(value: From) -> Into

impl<T> Downcast<T> for T

fn downcast(&self) -> &T

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more

impl<T> Same for T

type Output = T

Should always be Self

impl<SS, SP> SupersetOf<SS> for SP
where SS: SubsetOf<SP>,

fn to_subset(&self) -> Option<SS>

The inverse inclusion map: attempts to construct self from the equivalent element of its superset. Read more

fn is_in_subset(&self) -> bool

Checks if self is actually part of its subset T (and can be converted to it).

fn to_subset_unchecked(&self) -> SS

Use with care! Same as self.to_subset but without any property checks. Always succeeds.

fn from_subset(element: &SS) -> SP

The inclusion map: converts self to the equivalent element of its superset.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> Upcast<T> for T

fn upcast(&self) -> Option<&T>

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

fn vzip(self) -> V

impl<T> WasmNotSend for T
where T: Send,

impl<T> WasmNotSendSync for T

impl<T> WasmNotSync for T
where T: Sync,