Crate kizzasi_model

§kizzasi-model

Model architectures for Kizzasi AGSP (Autoregressive General-Purpose Signal Predictor).

This crate implements State Space Model (SSM) and related sequence architectures optimized for continuous signal prediction with O(1) per-step inference complexity (a minimal sketch follows the list below):

  • Mamba/Mamba2: Selective State Space Models with input-dependent dynamics
  • RWKV: Linear attention with time-mixing and channel-mixing
  • S4/S4D: Structured State Space Models with diagonal state matrices
  • Transformer: Standard attention for comparison (O(N) per step)
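
To make the O(1) claim concrete, here is a minimal self-contained sketch of one recurrence step for a diagonal SSM in the S4D style. This is plain illustrative Rust, not this crate's API: the names `DiagonalSsm` and `step` are hypothetical, and the real implementations operate on scirs2-core arrays. The point is that one step touches each of the `d_state` entries exactly once, so its cost does not grow with sequence length.

```rust
/// Minimal diagonal-SSM step, illustrative only (not this crate's API).
struct DiagonalSsm {
    a: Vec<f32>, // diagonal of the state matrix (per-channel decay)
    b: Vec<f32>, // input projection
    c: Vec<f32>, // output projection
    d: f32,      // direct feed-through (skip) term
}

impl DiagonalSsm {
    /// One autoregressive step: h[i] <- a[i]*h[i] + b[i]*x, y = c.h + d*x.
    /// Cost is O(d_state) no matter how many tokens came before,
    /// i.e. O(1) in sequence length.
    fn step(&self, h: &mut [f32], x: f32) -> f32 {
        let mut y = self.d * x;
        for i in 0..h.len() {
            h[i] = self.a[i] * h[i] + self.b[i] * x;
            y += self.c[i] * h[i];
        }
        y
    }
}

fn main() {
    let ssm = DiagonalSsm {
        a: vec![0.9, 0.5, 0.99],
        b: vec![1.0; 3],
        c: vec![0.3; 3],
        d: 1.0,
    };
    let mut h = vec![0.0_f32; 3]; // cf. the crate's HiddenState struct
    for x in [0.1_f32, 0.2, 0.3] {
        println!("y = {}", ssm.step(&mut h, x));
    }
}
```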

§COOLJAPAN Ecosystem

This crate follows KIZZASI_POLICY.md and uses scirs2-core for all array and numerical operations.

§Architecture Philosophy

As described in the AGSP concept, these models treat all signals (audio, video, sensors, actions) as equivalent tokenized sequences, enabling cross-modal prediction and world model construction.
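
A toy illustration of that equivalence, assuming a hypothetical `tokenize` helper (the crate's actual tokenization pipeline is not shown on this page): once each modality is flattened into the same f32 token stream, a single predictor interface can consume any of them.

```rust
// Hypothetical sketch; this only illustrates "everything is a token
// sequence", not the crate's real tokenization pipeline.
fn tokenize(signal: &[f32]) -> Vec<f32> {
    signal.to_vec() // a real tokenizer would normalize/quantize here
}

fn main() {
    let audio = tokenize(&[0.01, -0.02, 0.03]); // waveform samples
    let sensor = tokenize(&[21.5, 21.6, 21.4]); // temperature readings
    let action = tokenize(&[1.0, 0.0, 1.0]);    // control actions
    // One model, one interface, any modality:
    for seq in [&audio, &sensor, &action] {
        println!("stream of {} tokens", seq.len());
    }
}
```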

Re-exports§

pub use loader::ModelLoader;
pub use loader::TensorInfo;
pub use loader::WeightLoader;
pub use blas_ops::axpy;
pub use blas_ops::batch_matmul_vec;
pub use blas_ops::dot;
pub use blas_ops::matmul_mat;
pub use blas_ops::matmul_vec;
pub use blas_ops::norm_frobenius;
pub use blas_ops::norm_l2;
pub use blas_ops::transpose;
pub use blas_ops::BlasConfig;
pub use profiling::BottleneckInfo;
pub use profiling::BottleneckSeverity;
pub use profiling::ComprehensiveComparison;
pub use profiling::ComprehensiveProfiler;
pub use profiling::ModelBottleneckAnalysis;
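
As a rough guide to what the blas_ops re-exports above compute, here is a plain-Rust sketch of dot, axpy, and matmul_vec following conventional BLAS semantics. The crate's actual functions operate on scirs2-core arrays and their exact signatures may differ; these standalone versions are for orientation only.

```rust
/// Level-1: dot product, sum_i a[i] * b[i].
fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

/// Level-1: axpy, y <- alpha * x + y.
fn axpy(alpha: f32, x: &[f32], y: &mut [f32]) {
    for (yi, xi) in y.iter_mut().zip(x) {
        *yi += alpha * xi;
    }
}

/// Level-2: matrix-vector product for a row-major (rows x cols) matrix.
fn matmul_vec(mat: &[f32], rows: usize, cols: usize, x: &[f32]) -> Vec<f32> {
    (0..rows)
        .map(|r| dot(&mat[r * cols..(r + 1) * cols], x))
        .collect()
}

fn main() {
    let m = [1.0, 2.0, 3.0, 4.0]; // 2x2 row-major
    let x = [1.0, 1.0];
    println!("{:?}", matmul_vec(&m, 2, 2, &x)); // [3.0, 7.0]
}
```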

Modules§

batch
Batched Inference Support
blas_ops
BLAS-Accelerated Operations
cache_friendly
Cache-Friendly Memory Layouts
checkpoint
Checkpointing and Training Utilities
compression
Model Compression Utilities
dynamic_quantization
Dynamic Quantization for On-the-Fly Model Compression
factory
Model Factory for Instantiating Models from Loaded Weights
h3
H3: Hungry Hungry Hippos
huggingface
HuggingFace Hub Integration
huggingface_loader
HuggingFace Model Loading and Weight Conversion
hybrid
Hybrid Mamba+Attention Model
loader
Weight loading from safetensors format
mamba
Mamba: Selective State Space Model
mamba2
Mamba2: Enhanced Selective State Space Model with State Space Duality (SSD)
mixed_precision
Mixed Precision Support (FP16/BF16)
moe
Mixture of Experts (MoE)
parallel_multihead
Parallel Multi-Head Computation
profiling
Model Profiling and Benchmarking Utilities
pytorch_compat
PyTorch Checkpoint Compatibility
quantization
Weight Quantization for Efficient Inference (see the sketch after this module list)
rwkv
RWKV v6: Receptance Weighted Key Value
rwkv7
RWKV v7: Next Generation Receptance Weighted Key Value (Forward-Compatible Scaffolding)
s4
S4 and S4D: Structured State Space Models
s5
S5: Simplified State Space Model
simd_ops
SIMD-Optimized Operations for Model Inference
training
Training Infrastructure for kizzasi-model
transformer
Transformer: Standard Multi-Head Attention Baseline
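
As a taste of what the compression-oriented modules above deal with, here is a minimal symmetric int8 weight-quantization round trip. This shows the generic technique only, not the quantization module's actual API.

```rust
/// Symmetric per-tensor int8 quantization, illustrative only.
fn quantize(weights: &[f32]) -> (Vec<i8>, f32) {
    let max_abs = weights.iter().fold(0.0_f32, |m, w| m.max(w.abs()));
    let scale = if max_abs == 0.0 { 1.0 } else { max_abs / 127.0 };
    let q = weights
        .iter()
        .map(|w| (w / scale).round().clamp(-127.0, 127.0) as i8)
        .collect();
    (q, scale)
}

/// Recover approximate f32 weights from int8 values plus a scale.
fn dequantize(q: &[i8], scale: f32) -> Vec<f32> {
    q.iter().map(|&v| v as f32 * scale).collect()
}

fn main() {
    let w = [0.5_f32, -1.25, 0.0, 2.0];
    let (q, scale) = quantize(&w);
    println!("{:?} (scale {scale})", dequantize(&q, scale));
}
```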

Structs§

HiddenState
Represents the hidden state of the SSM

Enums§

ModelError
Errors that can occur in model operations
ModelType
Enumeration of supported model architectures

Traits§

AutoregressiveModel
Trait for model architectures that support autoregressive prediction
SignalPredictor
Core trait for autoregressive signal prediction
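
The real method signatures live on the traits themselves; as a hedged sketch of how such a pair typically composes (a core prediction interface plus an autoregressive extension that feeds the model its own outputs), with made-up method names:

```rust
// Hypothetical method names for illustration; consult the crate docs for
// the real SignalPredictor / AutoregressiveModel signatures.
type ModelResult<T> = Result<T, String>; // stand-in for the crate's alias

/// Core interface: map the latest input token to a predicted next token.
trait SignalPredictor {
    fn predict(&mut self, x: f32) -> ModelResult<f32>;
}

/// Autoregressive extension: roll the model forward on its own outputs.
trait AutoregressiveModel: SignalPredictor {
    fn generate(&mut self, seed: f32, steps: usize) -> ModelResult<Vec<f32>> {
        let mut out = Vec::with_capacity(steps);
        let mut x = seed;
        for _ in 0..steps {
            x = self.predict(x)?;
            out.push(x);
        }
        Ok(out)
    }
}

/// Trivial exponential-decay "model" to exercise the traits.
struct Decay(f32);
impl SignalPredictor for Decay {
    fn predict(&mut self, x: f32) -> ModelResult<f32> {
        Ok(self.0 * x)
    }
}
impl AutoregressiveModel for Decay {}

fn main() {
    let mut m = Decay(0.5);
    println!("{:?}", m.generate(1.0, 4)); // Ok([0.5, 0.25, 0.125, 0.0625])
}
```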

Type Aliases§

Array1
One-dimensional array
Array2
Two-dimensional array
CoreResult
Result type alias for core operations
ModelResult
Result type alias for model operations
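
The descriptions above suggest the standard alias pattern. A hedged sketch, with the scirs2-core right-hand sides elided because their exact paths are not shown on this page:

```rust
// Sketch of the alias pattern implied above; the Array and CoreResult
// right-hand sides belong to scirs2-core and are deliberately elided.
pub enum ModelError {
    // variants documented on the ModelError enum itself
}
pub type ModelResult<T> = Result<T, ModelError>;
// pub type Array1 = /* scirs2-core one-dimensional array type */;
// pub type Array2 = /* scirs2-core two-dimensional array type */;
// pub type CoreResult<T> = Result<T, /* scirs2-core error type */>;
```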