# XNeuron 🧠

A freestanding, zero-dependency AI/ML library written in Rust with maximum portability.
## Overview

XNeuron is a `#![no_std]`-compliant machine learning library designed for environments where the Rust standard library is unavailable or undesirable. It's perfect for:
- Embedded systems
- Operating system kernels
- Bare metal applications
- Resource-constrained environments
## Features

- 🚫 Zero external dependencies
- 💻 `#![no_std]` compliant
- 🔢 Fixed-point arithmetic (no floating-point required)
- 🧮 Custom memory management
- 🔄 Full training capabilities
- 🎯 Inference support
- 📦 Lightweight and portable
## Supported Models
- Perceptron
- Feedforward Neural Networks
- Support Vector Machines (SVM)
- More coming soon!
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
xneuron = "0.1.0"
```
## Usage Examples

### Basic Perceptron

```rust
use xneuron::{FixedPoint, Perceptron};

// Create a perceptron with 2 inputs
let mut perceptron = Perceptron::new(2, 8); // 8-bit scale for fixed-point arithmetic

// Training data
let input = vec![FixedPoint::new(256, 8), FixedPoint::new(256, 8)]; // [1.0, 1.0] in fixed-point
let target = true;

// Train the perceptron
perceptron.train(&input, target);

// Make predictions
let prediction = perceptron.predict(&input);
```
### Neural Network

```rust
use xneuron::{FixedPoint, Layer, NeuralNetwork};
use alloc::boxed::Box;

// Create a neural network
let mut nn = NeuralNetwork::new(FixedPoint::new(256, 8)); // Learning rate = 1.0

// Add layers (layer sizes shown for illustration)
nn.add_layer(Box::new(Layer::new(2, 4))); // Hidden layer
nn.add_layer(Box::new(Layer::new(4, 1))); // Output layer

// Training data
let input = vec![FixedPoint::new(256, 8), FixedPoint::new(256, 8)];
let target = vec![FixedPoint::new(256, 8)];

// Train the network
nn.train(&input, &target);

// Make predictions
let output = nn.forward(&input);
```
### Support Vector Machine

```rust
use xneuron::{FixedPoint, SVM};

// Create an SVM with 2 input features
let mut svm = SVM::new(2, 8);

// Training data
let input = vec![FixedPoint::new(256, 8), FixedPoint::new(0, 8)]; // [1.0, 0.0] in fixed-point
let target = true;

// Train the SVM
svm.train(&input, target);

// Make predictions
let prediction = svm.predict(&input);
```
## Fixed-Point Arithmetic

XNeuron uses fixed-point arithmetic instead of floating-point numbers, which makes it suitable for platforms without FPU support. The scale factor determines the precision:

```rust
use xneuron::FixedPoint;

// Create fixed-point numbers with an 8-bit scale
let x = FixedPoint::new(256, 8); // Represents 1.0 (256 >> 8 = 1)
let y = FixedPoint::new(128, 8); // Represents 0.5 (128 >> 8 = 0.5)

// Arithmetic operations maintain the scale
let sum = x + y;     // 1.5
let product = x * y; // 0.5
```
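The rescaling behind these operations can be sketched with a minimal standalone type. This is an illustration of the fixed-point technique only, not XNeuron's actual `FixedPoint` implementation; the `Fp` name and fields are hypothetical:

```rust
// Minimal fixed-point sketch: value = raw / 2^scale.
// Illustrative only — not XNeuron's internal representation.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Fp {
    raw: i32,
    scale: u32,
}

impl Fp {
    fn new(raw: i32, scale: u32) -> Self {
        Fp { raw, scale }
    }

    // Addition: raw values add directly when the scales match.
    fn add(self, other: Fp) -> Fp {
        Fp::new(self.raw + other.raw, self.scale)
    }

    // Multiplication: widen to i64, multiply, then shift right by the
    // scale so the result stays at the same scale (2^8 * 2^8 = 2^16,
    // so one factor of 2^8 must be divided back out).
    fn mul(self, other: Fp) -> Fp {
        let wide = self.raw as i64 * other.raw as i64;
        Fp::new((wide >> self.scale) as i32, self.scale)
    }
}

fn main() {
    let x = Fp::new(256, 8); // 1.0
    let y = Fp::new(128, 8); // 0.5
    assert_eq!(x.add(y).raw, 384); // 1.5 == 384/256
    assert_eq!(x.mul(y).raw, 128); // 0.5 == 128/256
}
```

The key point is the `>> scale` in `mul`: without it, multiplying two scaled integers would double the scale factor.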
## Memory Management

XNeuron includes a basic bump allocator for `no_std` environments. You can also provide your own allocator implementation:

```rust
use xneuron::CustomAllocator;

#[global_allocator]
static ALLOCATOR: CustomAllocator = CustomAllocator::new();
```
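The idea behind a bump allocator is simple: hand out consecutive, aligned offsets from a fixed arena and never free. The sketch below shows that core mechanism under stated assumptions (power-of-two alignment, a compare-and-swap loop for thread safety); the `BumpArena` type is hypothetical and is not XNeuron's actual allocator:

```rust
use core::sync::atomic::{AtomicUsize, Ordering};

// Bump-allocation sketch (illustrative, not XNeuron's allocator):
// a single cursor advances through an N-byte arena.
struct BumpArena<const N: usize> {
    next: AtomicUsize,
}

impl<const N: usize> BumpArena<N> {
    const fn new() -> Self {
        BumpArena { next: AtomicUsize::new(0) }
    }

    // Reserve `size` bytes at `align` (assumed to be a power of two).
    // Returns the offset into the arena, or None when exhausted.
    fn alloc(&self, size: usize, align: usize) -> Option<usize> {
        loop {
            let cur = self.next.load(Ordering::Relaxed);
            let start = (cur + align - 1) & !(align - 1); // round up to alignment
            let end = start.checked_add(size)?;
            if end > N {
                return None; // a bump allocator never frees, so this is permanent
            }
            // Advance the cursor atomically; retry if another thread raced us.
            if self
                .next
                .compare_exchange(cur, end, Ordering::Relaxed, Ordering::Relaxed)
                .is_ok()
            {
                return Some(start);
            }
        }
    }
}

fn main() {
    let arena: BumpArena<64> = BumpArena::new();
    assert_eq!(arena.alloc(8, 8), Some(0));
    assert_eq!(arena.alloc(4, 8), Some(8)); // rounded up to 8-byte alignment
    assert_eq!(arena.alloc(100, 8), None);  // arena exhausted
}
```

Because the cursor only ever moves forward, allocation is a few instructions, but memory is reclaimed only by resetting the whole arena, which is why the performance notes below recommend a proper allocator for long-running applications.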
## Performance Considerations
- Fixed-point arithmetic may have lower precision than floating-point
- The bump allocator never frees memory (consider implementing a proper allocator for long-running applications)
- Matrix operations are not currently optimized for SIMD
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Roadmap
- SIMD optimizations
- More model implementations (CNNs, Decision Trees)
- Better memory management
- Serialization support
- More activation functions
- Advanced optimizers (Adam, RMSprop)