# Neurocore
### *A lightweight neural-state engine for Rust, created by **Seleste Scorpion (God Ace)**.*
## Overview
**Neurocore** is a fast, lightweight, and extendable neural-state processing engine written in Rust.
It provides simple building blocks for creating neural layers, activation functions, and forward-propagation logic, all without heavy machine-learning dependencies.
This crate focuses on **speed**, **minimalism**, and **clean architecture**, giving developers a flexible foundation for experimenting with neural computation in pure Rust.
## Features
- Fully modular neural layers (Dense layers)
- Built-in activation functions (ReLU, Sigmoid, Tanh, Linear)
- Forward propagation engine
- Serde serialization and deserialization
- Safe, fast, 100% Rust implementation
- Beginner-friendly, type-safe API
- Zero heavy ML dependencies
## Installation
Add Neurocore to your project:
```bash
cargo add neurocore
```
Or manually include it in `Cargo.toml`:
```toml
[dependencies]
neurocore = "0.1.0"
```
## Example Usage
```rust
use neurocore::{Dense, Activation};

fn main() {
    // Create a Dense layer with 3 inputs and 2 outputs
    let layer = Dense::new(3, 2, Activation::Relu);

    // Example input vector
    let input = vec![1.0, 2.0, 3.0];

    // Perform forward propagation
    let output = layer.forward(&input);

    println!("Layer output: {:?}", output);
}
```
## Architecture
Neurocore is built around three core components:
### 1. Activation Enum
Implements:
- ReLU
- Sigmoid
- Tanh
- Linear
Each with its own mathematical transformation.
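As a minimal sketch of the idea, an activation enum can map each variant to its transformation in a single `match`. The enum shape and an `apply` method are illustrative assumptions here, not necessarily the crate's exact API:

```rust
// Illustrative sketch of an activation enum; the method name `apply`
// and the use of f64 are assumptions, not Neurocore's exact definition.
#[derive(Clone, Copy, Debug)]
enum Activation {
    Relu,
    Sigmoid,
    Tanh,
    Linear,
}

impl Activation {
    fn apply(self, x: f64) -> f64 {
        match self {
            Activation::Relu => x.max(0.0),               // max(0, x)
            Activation::Sigmoid => 1.0 / (1.0 + (-x).exp()), // 1 / (1 + e^-x)
            Activation::Tanh => x.tanh(),
            Activation::Linear => x,                       // identity
        }
    }
}

fn main() {
    assert_eq!(Activation::Relu.apply(-2.0), 0.0);
    assert_eq!(Activation::Linear.apply(3.5), 3.5);
    println!("sigmoid(0) = {}", Activation::Sigmoid.apply(0.0)); // 0.5
}
```

Dispatching through a small `Copy` enum keeps the forward pass branch-cheap and avoids trait objects or allocations.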
### 2. Dense Layer
A fully connected layer with:
- Weight matrix
- Bias vector
- Activation function
- Forward propagation logic
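The forward pass of a fully connected layer computes `output[j] = act(Σᵢ w[j][i] · x[i] + b[j])`. The following is a self-contained sketch of that computation (field names and a hard-coded ReLU are assumptions for illustration, not the crate's internals):

```rust
// Minimal sketch of a dense forward pass: output_j = act(sum_i w[j][i] * x[i] + b[j]).
// Field names `weights`/`biases` are illustrative, not Neurocore's actual layout.
struct Dense {
    weights: Vec<Vec<f64>>, // shape: [outputs][inputs]
    biases: Vec<f64>,       // one bias per output neuron
}

impl Dense {
    fn forward(&self, input: &[f64]) -> Vec<f64> {
        self.weights
            .iter()
            .zip(&self.biases)
            .map(|(row, b)| {
                // Dot product of one weight row with the input, plus bias.
                let sum: f64 = row.iter().zip(input).map(|(w, x)| w * x).sum::<f64>() + b;
                sum.max(0.0) // ReLU chosen as the example activation
            })
            .collect()
    }
}

fn main() {
    let layer = Dense {
        weights: vec![vec![1.0, 0.0, -1.0], vec![0.5, 0.5, 0.5]],
        biases: vec![0.0, 1.0],
    };
    let out = layer.forward(&[1.0, 2.0, 3.0]);
    println!("{:?}", out); // [0.0, 4.0]
}
```

Storing weights row-per-output-neuron keeps the forward pass a straight iterator chain with no intermediate allocations beyond the output vector.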
### 3. Serialization
Neurocore includes automatic support for:
```rust
#[derive(Serialize, Deserialize)]
```
Allowing trained layers to be saved and loaded easily.
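As a hedged sketch of what saving and loading might look like, assuming the layer type derives serde's traits and `serde` (with the `derive` feature) plus `serde_json` are added as dependencies (the struct fields below are illustrative, not the crate's actual definition):

```rust
use serde::{Deserialize, Serialize};

// Hypothetical serializable layer for illustration only.
#[derive(Serialize, Deserialize, Debug)]
struct Dense {
    weights: Vec<Vec<f64>>,
    biases: Vec<f64>,
}

fn main() -> Result<(), serde_json::Error> {
    let layer = Dense {
        weights: vec![vec![0.1, 0.2], vec![0.3, 0.4]],
        biases: vec![0.0, 0.0],
    };

    // Serialize the layer to a JSON string...
    let json = serde_json::to_string(&layer)?;

    // ...and load it back.
    let restored: Dense = serde_json::from_str(&json)?;
    println!("restored: {:?}", restored);
    Ok(())
}
```

Any serde-compatible format (JSON, bincode, TOML, …) works the same way once the derives are in place.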
## Performance
Neurocore is optimized for:
- minimal allocations
- fast forward-pass execution
- deterministic and stable results
- tiny binary footprint
The crate avoids any heavy machine-learning frameworks, making it ideal for embedded devices and performance-focused systems.
## Roadmap
Planned major updates:
- Multi-layer neural network (`Sequential`)
- Convolutional layer support
- GPU acceleration (WGPU)
- Training engine (SGD, Adam)
- Dataset loader module
- Dropout and regularization tools
## Author
**Created by:**
### **Waithaka Njoroge (Seleste)**
Rust developer • Machine learning enthusiast • Systems engineer
## License
This project is licensed under the **MIT License**.
## Support the Project
If you find Neurocore helpful, consider:
- Starring the GitHub repository
- Contributing code
- Suggesting new features
Your support helps grow this project into a full ML engine written in Rust.