Kimi-FANN Core: Neural Inference Engine
A real neural network inference engine built on ruv-FANN for micro-expert processing. This crate provides WebAssembly-compatible neural network inference with actual AI processing for the Kimi-K2 micro-expert architecture, enhanced with Synaptic Market integration for distributed compute.
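For orientation, a minimal end-to-end sketch follows. It assumes the runtime is constructed from a processing configuration and exposes a text-processing method; the exact names and signatures (`ProcessingConfig::new`, `KimiRuntime::new`, `process`) are assumptions drawn from the item list below, not a verified API.

```rust
// Minimal sketch, assuming the API shape suggested by the items below.
use kimi_fann_core::{KimiRuntime, ProcessingConfig};

fn main() {
    // Assumption: a default processing configuration constructor exists.
    let config = ProcessingConfig::new();

    // Assumption: the runtime owns the micro-expert pool and its router.
    let mut runtime = KimiRuntime::new(config);

    // Assumption: `process` routes the prompt to the best-matching
    // micro-expert and returns the inference result as a String.
    let reply = runtime.process("Explain activation functions in FANN");
    println!("{reply}");
}
```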
Modules
- enhanced_router - Enhanced routing system with market integration
- optimized_features - Optimized Feature Extraction for Neural Inference
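The enhanced_router module couples request routing with the Synaptic Market. The sketch below is purely hypothetical: the type and method names (`EnhancedRouter`, `route_with_market`) are illustrative assumptions, not items confirmed by this page.

```rust
// Hypothetical sketch only: `EnhancedRouter` and `route_with_market` are
// illustrative names; consult the enhanced_router module docs for real items.
use kimi_fann_core::enhanced_router::EnhancedRouter;

fn route_sketch() {
    // Assumed behavior: the router can fall back to market-purchased
    // compute when no local micro-expert matches the request.
    let router = EnhancedRouter::new();
    let reply = router.route_with_market("translate this Rust error message");
    println!("{reply}");
}
```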
Structs
- ExpertConfig - Configuration for creating a micro-expert
- ExpertRouter - Expert router with intelligent request distribution and consensus
- KimiRuntime - Main runtime for Kimi-FANN with neural processing
- MicroExpert - A micro-expert neural network with real AI processing
- MockNeuralNetwork - Mock neural network for demonstration and optimization testing
- NetworkStats - Network statistics for distributed processing
- NeuralConfig - Neural network configuration for micro-experts
- NeuralWeights - Neural network weights and biases
- ProcessingConfig - Processing configuration with neural parameters
- TokenEmbedding - Token embedding for neural processing
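Read together, these structs suggest a layered design: an ExpertConfig describes how to create a MicroExpert, and an ExpertRouter distributes requests across experts. A hedged sketch of that wiring follows; the constructors, the `add_expert` and `route` methods, and the `ExpertDomain::Coding` variant are assumptions, not the crate's verified API.

```rust
// Hedged sketch of how the structs above may fit together; all signatures
// below are assumptions inferred from the item names, not verified API.
use kimi_fann_core::{ExpertConfig, ExpertDomain, ExpertRouter, MicroExpert};

fn wiring_sketch() {
    // Assumption: an expert is created from a config naming its domain.
    let config = ExpertConfig::new(ExpertDomain::Coding); // variant assumed
    let expert = MicroExpert::new(config);

    // Assumption: the router owns experts and picks one per request.
    let mut router = ExpertRouter::new();
    router.add_expert(expert);
    let answer = router.route("refactor this function into an iterator chain");
    println!("{answer}");
}
```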
Enums
- ExpertDomain - Expert domain enumeration with neural specialization
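ExpertDomain is what routing decisions key on. This page does not enumerate its variants, so the names in the sketch below (`Coding`, `Reasoning`) are illustrative assumptions only.

```rust
use kimi_fann_core::ExpertDomain;

// Illustrative only: these variant names are assumptions, not taken
// from the crate's actual ExpertDomain definition.
fn describe(domain: &ExpertDomain) -> &'static str {
    match domain {
        ExpertDomain::Coding => "code generation and analysis",
        ExpertDomain::Reasoning => "multi-step logical reasoning",
        // Assumed catch-all; the real enum may differ.
        _ => "other specialization",
    }
}
```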
Constants
- VERSION - Version of the Kimi-FANN Core library
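VERSION is handy for diagnostics; the sketch assumes it is a `&'static str`, which this page does not state.

```rust
// Assumption: VERSION is a &'static str; the page does not state its type.
fn print_version() {
    println!("kimi-fann-core v{}", kimi_fann_core::VERSION);
}
```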
Functions
- init - Initialize the WASM module with neural network setup
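In a WASM context, `init` would typically run once before any inference. The wasm-bindgen start hook below is an assumed usage pattern, and the zero-argument signature of `init` is likewise an assumption.

```rust
// Assumed usage pattern for a wasm32 target: call `init` once at module
// start-up so the neural backend is ready before any expert runs.
// The zero-argument signature of `init` is an assumption.
use wasm_bindgen::prelude::*;

#[wasm_bindgen(start)]
pub fn start() {
    kimi_fann_core::init();
}
```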