FEAGI
Framework for Evolutionary Artificial General Intelligence - High-performance Rust libraries for bio-inspired neural computation.
What is FEAGI?
FEAGI (Framework for Evolutionary Artificial General Intelligence) is a bio-inspired neural architecture that models brain structures and dynamics. FEAGI Core provides the foundational Rust libraries for building neural networks that learn and adapt like biological brains.
Unlike traditional neural networks, FEAGI:
- Models individual neurons with realistic dynamics (membrane potential, leak, refractory periods); a minimal sketch follows this list
- Supports heterogeneous brain regions with distinct properties
- Enables structural plasticity (neurogenesis, synaptogenesis)
- Runs in real-time with spike-based computation
- Scales from microcontrollers to servers
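To make the first point above concrete, here is a minimal leaky integrate-and-fire update loop. It is an illustrative sketch only, not FEAGI's actual neuron model; the struct, fields, and constants are assumptions.

```rust
/// Minimal leaky integrate-and-fire sketch; illustrative only,
/// not FEAGI's actual neuron model.
struct Neuron {
    membrane_potential: f32, // current membrane voltage
    leak_rate: f32,          // fraction of potential lost per step
    threshold: f32,          // firing threshold
    refractory_steps: u32,   // remaining steps during which the neuron cannot fire
}

impl Neuron {
    /// Apply one burst-cycle update; returns true if the neuron spikes.
    fn step(&mut self, input_current: f32) -> bool {
        if self.refractory_steps > 0 {
            self.refractory_steps -= 1; // still recovering; ignore input
            return false;
        }
        // Leak toward resting potential, then integrate input.
        self.membrane_potential *= 1.0 - self.leak_rate;
        self.membrane_potential += input_current;
        if self.membrane_potential >= self.threshold {
            self.membrane_potential = 0.0; // reset after spike
            self.refractory_steps = 3;     // enter refractory period
            return true;
        }
        false
    }
}
```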
Key Features
- Bio-Inspired Architecture: Cortical areas, synaptic plasticity, and realistic neuron models
- High Performance: 50-100x faster than Python implementations through optimized Rust
- Cross-Platform: Runs on desktop, server, embedded (ESP32, Arduino, STM32), and cloud
- GPU Acceleration: Optional WGPU (cross-platform) and CUDA (NVIDIA) backends
- No-Std Compatible: Core algorithms work in resource-constrained environments
- Modular Design: Use individual crates or the complete framework
- Python Bindings: Integrate with existing Python workflows via PyO3
Installation
Add to your Cargo.toml:
```toml
[dependencies]
feagi = "0.0.1-beta.1" # Umbrella crate (includes everything)
```
Or use individual building blocks:
```toml
[dependencies]
feagi-burst-engine = "0.0.1-beta.1" # Just the NPU
feagi-types = "0.0.1-beta.1"        # Just core types
```
Or umbrella with specific features:
```toml
[dependencies]
feagi = { version = "0.0.1-beta.1", features = ["gpu"] }
```
Quick Start
Create and Run a Neural Network
```rust
use feagi::prelude::*; // import path is an assumption; see docs.rs/feagi for the exact prelude
```
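A minimal create-and-run loop might look like the sketch below. Every type and method name in it (`NeuralNetwork`, `add_cortical_area`, `stimulate`, `run_burst`) is an assumption for illustration, not FEAGI's confirmed API; consult docs.rs/feagi for the real entry points.

```rust
// HYPOTHETICAL sketch: all names below are assumptions, not FEAGI's confirmed API.
use feagi::prelude::*;

fn main() {
    // Create a network with a single cortical area of 1,000 neurons.
    let mut network = NeuralNetwork::new();
    let area = network.add_cortical_area("v1", 1_000);

    // Drive ten burst cycles, injecting constant stimulation each time.
    for _ in 0..10 {
        network.stimulate(area, &[1.0; 16]);
        network.run_burst();
    }
}
```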
Embedded Target (no_std)
```rust
// Crate paths are assumptions based on the workspace layout; verify against docs.rs.
use feagi_neural::NeuronDynamics;
use feagi_runtime_embedded::EmbeddedRuntime;

// Configure for resource-constrained systems
let runtime = EmbeddedRuntime::new();
let mut dynamics = NeuronDynamics::new();
```
Python Integration
```python
# HYPOTHETICAL sketch: module and method names below are assumptions,
# not FEAGI's confirmed PyO3 API.
import feagi

# Create high-performance engine
engine = feagi.BurstEngine()

# Build synaptic connectivity
engine.connect(source="v1", target="motor")

# Process neural activity
output = engine.run_burst()
```
Architecture
FEAGI Core is organized as a workspace of focused crates:
Core Types and Algorithms
- feagi-types: Fundamental data structures (neurons, synapses, cortical areas)
- feagi-neural: Platform-agnostic neuron dynamics (no_std compatible)
- feagi-synapse: Synaptic computation algorithms (no_std compatible; a minimal dependency sketch follows)
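For the no_std crates above, a plausible minimal dependency block looks like this; disabling default features is a common Rust convention for no_std builds and is an assumption here, not a documented requirement:

```toml
# Assumed no_std setup; verify feature names against each crate's documentation.
[dependencies]
feagi-types  = { version = "0.0.1-beta.1", default-features = false }
feagi-neural = { version = "0.0.1-beta.1", default-features = false }
```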
Neural Processing
- feagi-burst-engine: High-performance burst cycle execution
- feagi-brain-development: Brain development (neurogenesis, synaptogenesis)
- feagi-plasticity: Synaptic learning (STDP, memory consolidation)
- feagi-evolutionary: Genome I/O and evolutionary algorithms
Infrastructure
- feagi-state-manager: Runtime state and lifecycle management
- feagi-config: Configuration loading and validation
- feagi-observability: Logging, telemetry, and profiling
I/O and Integration
- feagi-io: I/O system (sensory input, motor output)
- feagi-agent: Client library for agent integration
- feagi-api: REST API server
- feagi-transports: Network transport abstractions (ZMQ, UDP, HTTP)
Platform Adapters
- feagi-runtime-std: Desktop and server deployment (Vec, Rayon, async)
- feagi-runtime-embedded: Embedded systems (fixed arrays, no_std)
- feagi-hal: Platform abstraction layer for ESP32, Arduino, STM32
Utilities
- feagi-connectome-serialization: Brain persistence and loading
- feagi-services: High-level service compositions
Performance
FEAGI Core delivers significant performance improvements over interpreted implementations:
- Synaptic Propagation: 50-100x faster than Python/NumPy
- Burst Frequency: Supports 30Hz+ with millions of neurons
- Memory Efficiency: Minimal allocations, cache-friendly data structures
- Parallel Processing: Multi-threaded execution with Rayon (see the sketch after this list)
- GPU Acceleration: Optional WGPU or CUDA backends for massive parallelism
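To illustrate the Rayon-based parallelism referenced above, the sketch below applies a leak update across a neuron population in parallel. It is a generic data-parallel pattern, not FEAGI's actual propagation kernel:

```rust
use rayon::prelude::*;

/// Generic illustration of data-parallel neuron updates with Rayon;
/// not FEAGI's actual kernel.
fn leak_all(potentials: &mut [f32], leak: f32) {
    // par_iter_mut() splits the slice across the thread pool.
    potentials.par_iter_mut().for_each(|v| *v *= 1.0 - leak);
}
```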
Design Principles
Biologically Plausible
- Individual neuron modeling with realistic parameters
- Spike-based computation (not rate-coded)
- Synaptic delays and conductances
- Structural and functional plasticity (an STDP sketch follows this list)
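As a concrete instance of functional plasticity, the classic spike-timing-dependent plasticity (STDP) rule adjusts a synaptic weight based on the interval between pre- and postsynaptic spikes. The sketch below implements the textbook exponential window; it is illustrative, not the feagi-plasticity implementation, and the constants are assumptions:

```rust
/// Textbook exponential STDP window (illustrative; constants are assumptions).
/// `dt` = t_post - t_pre in ms; positive means the presynaptic spike came first.
fn stdp_delta_w(dt: f32) -> f32 {
    const A_PLUS: f32 = 0.01;   // potentiation amplitude
    const A_MINUS: f32 = 0.012; // depression amplitude
    const TAU_MS: f32 = 20.0;   // decay time constant

    if dt > 0.0 {
        A_PLUS * (-dt / TAU_MS).exp() // pre before post: strengthen
    } else {
        -A_MINUS * (dt / TAU_MS).exp() // post before pre: weaken
    }
}
```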
Cross-Platform from Day One
- Core algorithms are platform-agnostic (no_std compatible)
- Runtime adapters for different deployment targets
- Conditional compilation for embedded, desktop, and server (see the gating sketch below)
- No reliance on OS-specific features in core logic
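The conditional-compilation point above typically comes down to the standard `cfg_attr`/`cfg` pattern shown below; the `std` feature name matches the umbrella crate's feature table, while the function bodies are illustrative:

```rust
// Standard no_std gating pattern; FEAGI's exact module layout may differ.
#![cfg_attr(not(feature = "std"), no_std)]

#[cfg(feature = "std")]
pub fn run_burst_pass() {
    // std build: heap allocation, threads, and Rayon are available here.
}

#[cfg(not(feature = "std"))]
pub fn run_burst_pass() {
    // no_std build: single-threaded loop over fixed-size buffers.
}
```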
Performance Critical
- No allocations in hot paths (pre-allocated buffers)
- Cache-friendly data layouts (`#[repr(C)]`, AoS patterns; see the layout sketch below)
- SIMD-friendly operations where applicable
- Optional GPU acceleration without compromising portability
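For example, a hot-path neuron record might be a flat `#[repr(C)]` struct as sketched below; the field names are assumptions for illustration, not FEAGI's actual types:

```rust
/// Illustrative hot-path layout; field names are assumptions, not FEAGI's types.
/// #[repr(C)] fixes the field order, keeping the layout predictable for
/// FFI, GPU buffers, and cache-line reasoning.
#[repr(C)]
pub struct NeuronState {
    pub membrane_potential: f32,
    pub leak_coefficient: f32,
    pub refractory_remaining: u16,
    pub flags: u16,
}
```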
Type Safety
- Strong typing with newtypes (`NeuronId`, `SynapseId`); illustrative definitions follow this list
- Compile-time guarantees over runtime checks
- Zero-cost abstractions throughout
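The newtype pattern wraps a raw index in a distinct type so a `NeuronId` can never be passed where a `SynapseId` is expected. The definitions below are illustrative; the real ones live in feagi-types and may differ:

```rust
/// Illustrative newtypes; the real definitions live in feagi-types and may differ.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct NeuronId(pub u32);

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct SynapseId(pub u32);

// The compiler now rejects accidental mix-ups:
// fn weight(s: SynapseId) -> f32 { /* ... */ }
// weight(NeuronId(7)); // <- compile error
```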
Feature Flags
Umbrella Crate (feagi)
```toml
[features]
default = ["std", "full"]
std = [...]      # Standard library support
embedded = [...] # Embedded/bare-metal
wasm = [...]     # WebAssembly target
full = ["compute", "io"]
compute = [...]  # Neural computation only
io = [...]       # I/O and networking
```
Burst Engine
```toml
[features]
gpu = [...]     # Cross-platform GPU (WGPU)
cuda = [...]    # NVIDIA CUDA acceleration
gpu-all = [...] # All GPU backends
```
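Features are selected at build time in the usual Cargo way; for example, with the `gpu` feature shown in the installation section:

```bash
# Build the burst engine with the cross-platform WGPU backend
cargo build -p feagi-burst-engine --features gpu

# Or enable GPU support through the umbrella crate
cargo build --features gpu
```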
Building from Source
```bash
# Clone repository
git clone https://github.com/feagi/feagi-core.git
cd feagi-core

# Build the crate
cargo build

# Run tests
cargo test

# Build with GPU support
cargo build --features gpu

# Generate documentation
cargo doc --open
```
Contributing
We welcome contributions! Whether you're fixing bugs, adding features, improving documentation, or optimizing performance, your help is appreciated.
Getting Started
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes following our guidelines
- Run tests and linting (`cargo test && cargo clippy`)
- Submit a pull request
Code Standards
All contributions must:
- Pass `cargo clippy` with zero warnings
- Pass `cargo test` (all tests)
- Include documentation for public APIs
- Follow Rust API guidelines
- Support cross-platform compilation where applicable
Development Workflow
```bash
# Check compilation
cargo check

# Run tests
cargo test

# Lint code
cargo clippy

# Format code
cargo fmt

# Build release
cargo build --release
```
Areas for Contribution
- Performance optimization: SIMD, GPU kernels, cache optimization
- Platform support: Additional embedded targets (Teensy, Nordic nRF, RISC-V)
- Neural algorithms: New plasticity rules, neuron models
- Documentation: Examples, tutorials, API documentation
- Testing: Edge cases, integration tests, benchmarks
- Tools: Visualization, debugging, profiling utilities
Documentation
- API Reference: docs.rs/feagi
- Architecture Guide: docs/ARCHITECTURE.md
Generate local documentation:
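```bash
cargo doc --open
```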
Testing
```bash
# All tests
cargo test

# Specific crate
cargo test -p feagi-burst-engine

# With features
cargo test --features gpu

# Benchmarks
cargo bench
```
Platform Support
Tested Platforms
- Desktop: Linux, macOS, Windows
- Embedded: ESP32 (WROOM, S3, C3)
- Cloud: Docker, Kubernetes
Planned Support
- Arduino (Due, MKR, Nano 33 IoT)
- STM32 (F4, F7, H7 series)
- Teensy (4.0, 4.1)
- Nordic nRF (nRF52, nRF53)
- Raspberry Pi Pico (RP2040)
Use Cases
- Robotics: Real-time control with adaptive learning
- Edge AI: On-device intelligence for IoT
- Research: Neuroscience modeling and experimentation
- AGI Development: Evolutionary and developmental AI systems
- Embedded Intelligence: Neural processing on microcontrollers
Project Status
Version: 0.0.1-beta.1
Status: Active development
Minimum Rust Version: 1.75+
FEAGI Core is under active development. The core APIs are stabilizing, but breaking changes may occur in minor releases.
Community and Support
- Discord: discord.gg/neuraville
- Website: neuraville.com/feagi
- Repository: github.com/feagi/feagi-core
- Issues: GitHub Issues
- Email: feagi@neuraville.com
License
Licensed under the Apache License, Version 2.0. See LICENSE for details.
Copyright 2025 Neuraville Inc.
Citation
If you use FEAGI in your research, please cite:
Built with Rust for performance, safety, and portability.