pub struct KernelOpsDispatch<'a> { /* private fields */ }
Dispatch wrapper that tries KernelOps first, then falls back to
TensorOps for operations that have a TensorOps equivalent.
This enables gradual migration: callers use the dispatch without caring which path actually runs.
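The kernel-first dispatch can be sketched as follows. The real KernelOps and TensorOps traits operate on TensorRef; the stand-in traits and f32-slice signatures below are illustrative assumptions only, showing how the "try kernel, fall back to tensor ops" decision works.

```rust
// Minimal sketch of the kernel-first dispatch pattern. The stand-in traits
// and f32-slice signatures are assumptions; the crate's real traits take
// &TensorRef and return Result<TensorRef>.
trait KernelOps {
    /// Returns None when this backend has no fused kernel for the op.
    fn silu(&self, input: &[f32]) -> Option<Vec<f32>>;
}

trait TensorOps {
    fn silu(&self, input: &[f32]) -> Vec<f32>;
}

struct Dispatch<'a> {
    kernel_ops: Option<&'a dyn KernelOps>,
    tensor_ops: &'a dyn TensorOps,
}

impl<'a> Dispatch<'a> {
    fn silu(&self, input: &[f32]) -> Vec<f32> {
        // Kernel path first, TensorOps fallback second.
        if let Some(kernel) = self.kernel_ops {
            if let Some(out) = kernel.silu(input) {
                return out;
            }
        }
        self.tensor_ops.silu(input)
    }
}

/// Plain CPU fallback: silu(x) = x * sigmoid(x).
struct CpuTensorOps;
impl TensorOps for CpuTensorOps {
    fn silu(&self, input: &[f32]) -> Vec<f32> {
        input.iter().map(|&x| x / (1.0 + (-x).exp())).collect()
    }
}
```

Because the fallback lives inside the dispatch, call sites stay identical whether or not a kernel backend is wired in.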
Implementations§
impl<'a> KernelOpsDispatch<'a>
pub fn new(kernel_ops: Option<&'a dyn KernelOps>, tensor_ops: &'a dyn TensorOps) -> Self
pub fn rms_norm(&self, input: &TensorRef, weight: &TensorRef, eps: f32) -> Result<TensorRef>
RMS norm: prefer KernelOps::NormOps, fall back to TensorOps::rms_norm.
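A reference version of the computation the fallback is assumed to perform, on a plain f32 slice rather than a TensorRef: y[i] = x[i] / sqrt(mean(x²) + eps) * weight[i].

```rust
// Reference RMS norm in plain Rust. The f32-slice signature is an
// illustrative assumption; the real method takes &TensorRef.
fn rms_norm_ref(input: &[f32], weight: &[f32], eps: f32) -> Vec<f32> {
    // Mean of squares across the normalized dimension.
    let mean_sq = input.iter().map(|x| x * x).sum::<f32>() / input.len() as f32;
    let inv_rms = 1.0 / (mean_sq + eps).sqrt();
    // Scale each element by 1/rms, then by its learned weight.
    input.iter().zip(weight).map(|(x, w)| x * inv_rms * w).collect()
}
```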
pub fn gelu(&self, input: &TensorRef) -> Result<TensorRef>
GELU: prefer KernelOps::ActivationOps, fall back to TensorOps::gelu.
pub fn silu(&self, input: &TensorRef) -> Result<TensorRef>
SiLU: the kernel-side KernelOps::ActivationOps::silu_mul is fused, so plain
SiLU has no direct kernel equivalent. This helper exposes the non-fused
TensorOps::silu for callers that only need plain SiLU.
pub fn silu_mul(&self, gate: &TensorRef, up: &TensorRef) -> Result<TensorRef>
Fused SiLU-multiply (SwiGLU building block).
Falls back to silu(gate) * up via TensorOps when kernel is unavailable.
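The non-fused fallback computes the same elementwise result as the fused kernel, silu(gate) * up with silu(x) = x * sigmoid(x). A reference sketch (the f32-slice signature is an assumption; the real method takes &TensorRef):

```rust
// Non-fused reference for the silu_mul fallback: silu(gate) * up,
// elementwise, where silu(x) = x * sigmoid(x).
fn silu_mul_ref(gate: &[f32], up: &[f32]) -> Vec<f32> {
    gate.iter()
        .zip(up)
        .map(|(&g, &u)| (g / (1.0 + (-g).exp())) * u)
        .collect()
}
```

A fused kernel avoids materializing the intermediate silu(gate) tensor, which is why the fused path is preferred when available.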
pub fn linear(&self, input: &TensorRef, weight: &TensorRef) -> Result<TensorRef>
Dense linear (no bias).
Falls back to TensorOps::matmul.
pub fn softmax(&self, input: &TensorRef, dim: i32) -> Result<TensorRef>
Softmax: always via TensorOps (no kernel sub-trait for plain softmax).
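For reference, the numerically stable form of the computation a softmax implementation typically performs along the chosen dim, sketched over a 1-D f32 slice (an assumption; the real method takes &TensorRef and a dim index):

```rust
// Numerically stable softmax over a 1-D slice: subtract the max before
// exponentiating so large inputs cannot overflow to infinity.
fn softmax_ref(input: &[f32]) -> Vec<f32> {
    let max = input.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = input.iter().map(|&x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}
```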
pub fn kernel_ops(&self) -> Option<&'a dyn KernelOps>
Access the underlying KernelOps (if any) for ops that have no
TensorOps fallback (e.g. rotary_embedding, attention, sampling).
pub fn tensor_ops(&self) -> &'a dyn TensorOps
Access the underlying TensorOps.