# omega-attention
Brain-like selective attention system with 40 attention mechanisms, working memory gating, and top-down/bottom-up processing.
Part of the ExoGenesis-Omega cognitive architecture.
## Overview
omega-attention implements a biologically inspired attention system that draws on neuroscience models of selective attention and on transformer architectures. It provides 40 attention mechanisms ranging from standard scaled dot-product attention to advanced hyperbolic, graph, and memory-augmented variants.
The system combines:
- Top-Down Attention: Goal-driven, task-relevant selection
- Bottom-Up Attention: Stimulus-driven, salience-based capture
- Working Memory: Capacity-limited storage with gated access (7±2 items)
- Attention Spotlight: Winner-take-all competition for resource allocation (see the sketch after this list)
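The crate's own types are not needed to see how these pieces interact. The sketch below uses plain vectors and an assumed 60/40 goal/salience weighting to show top-down and bottom-up scores being fused into a priority map that a winner-take-all spotlight then resolves; it illustrates the idea, not omega-attention's implementation.

```rust
/// Illustrative sketch only: combine a goal-driven (top-down) score map with a
/// salience-driven (bottom-up) score map into a single priority map. The names
/// and the weighting are assumptions, not the omega-attention API.
fn priority_map(top_down: &[f64], bottom_up: &[f64], goal_bias: f64) -> Vec<f64> {
    top_down
        .iter()
        .zip(bottom_up)
        .map(|(td, bu)| goal_bias * td + (1.0 - goal_bias) * bu)
        .collect()
}

/// Winner-take-all: the location with the highest priority receives the spotlight.
fn spotlight(priority: &[f64]) -> Option<usize> {
    priority
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i)
}

fn main() {
    let top_down = [0.9, 0.1, 0.3, 0.2]; // task relevance per location
    let bottom_up = [0.2, 0.8, 0.1, 0.4]; // stimulus salience per location
    let priority = priority_map(&top_down, &bottom_up, 0.6); // 60% goal-driven (assumed bias)
    println!("priority map: {:?}", priority);
    println!("spotlight on location {:?}", spotlight(&priority));
}
```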
## Features
- 40 Attention Mechanisms: Comprehensive library including Flash, Linear, Sparse, Hyperbolic, Graph, Memory-augmented, Multi-head, Cross-attention, and more
- Salience Computation: Bottom-up attention based on novelty, contrast, motion, and change detection
- Priority Maps: Combined top-down/bottom-up priority for attention allocation
- Working Memory Gating: Input/output/forget gates mimicking biological WM
- Configurable Architecture: Customize attention dimensions, heads, dropout, and more
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
omega-attention = "1.0.0"
```
## Quick Start
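A minimal usage sketch. `AttentionConfig`, `attend`, and `max_attention` are referenced later in this README; `AttentionSystem`, the constructors, and the exact `attend` signature are illustrative assumptions rather than verified API.

```rust
// Illustrative sketch: `AttentionSystem`, the constructors, and the `attend`
// signature are assumptions, not the crate's verified API.
use omega_attention::{AttentionConfig, AttentionSystem};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build an attention system with default settings.
    let config = AttentionConfig::default();
    let mut system = AttentionSystem::new(config);

    // Top-down goals and bottom-up input as feature vectors.
    let goals = vec![0.9, 0.1, 0.3];
    let input = vec![0.2, 0.8, 0.4];

    // Combine goal-driven and salience-driven signals, gate the result into working memory.
    let output = system.attend(&goals, &input)?;
    println!("max attention weight: {}", output.max_attention);
    Ok(())
}
```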
## Architecture

```
┌────────────────────────────────────────────────────────────┐
│                      ATTENTION SYSTEM                      │
├────────────────────────────────────────────────────────────┤
│                                                            │
│    ┌────────────────────┐        ┌────────────────────┐    │
│    │      TOP-DOWN      │        │     BOTTOM-UP      │    │
│    │   (Goal-driven)    │        │     (Salience)     │    │
│    │                    │        │                    │    │
│    │ • Task relevance   │        │ • Novelty          │    │
│    │ • Expected value   │        │ • Contrast         │    │
│    │ • Memory match     │        │ • Motion           │    │
│    └─────────┬──────────┘        └─────────┬──────────┘    │
│              │                             │               │
│              └──────────────┬──────────────┘               │
│                             ▼                              │
│                 ┌───────────────────────┐                  │
│                 │   ATTENTION CONTROL   │                  │
│                 │    (Priority Map)     │                  │
│                 └───────────┬───────────┘                  │
│                             ▼                              │
│                 ┌───────────────────────┐                  │
│                 │ ATTENTION MECHANISMS  │                  │
│                 │      (40 types)       │                  │
│                 └───────────┬───────────┘                  │
│                             ▼                              │
│                 ┌───────────────────────┐                  │
│                 │    WORKING MEMORY     │                  │
│                 │    (Gated Access)     │                  │
│                 └───────────────────────┘                  │
│                                                            │
└────────────────────────────────────────────────────────────┘
```
## Attention Mechanisms

### Core Mechanisms

| Type | Description | Use Case |
|---|---|---|
| `ScaledDotProduct` | Standard transformer attention | General purpose |
| `FlashAttention` | Memory-efficient exact attention (O(N) memory) | Long sequences |
| `LinearAttention` | Kernel-based linear complexity | Very long sequences |
| `MultiHeadAttention` | Parallel attention heads | Rich representations |
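For orientation, `ScaledDotProduct` follows the textbook formulation softmax(QKᵀ/√d)·V from Vaswani et al. (2017). The dependency-free sketch below computes exactly that and is an illustration, not the crate's implementation.

```rust
/// Dependency-free sketch of scaled dot-product attention: softmax(Q·Kᵀ/√d)·V.
/// Textbook formulation (Vaswani et al., 2017), not omega-attention's code.
fn scaled_dot_product_attention(
    q: &[Vec<f64>], // queries, shape [n_q][d]
    k: &[Vec<f64>], // keys,    shape [n_k][d]
    v: &[Vec<f64>], // values,  shape [n_k][d_v]
) -> Vec<Vec<f64>> {
    let d = q[0].len() as f64;
    q.iter()
        .map(|qi| {
            // Attention logits for this query against every key, scaled by sqrt(d).
            let logits: Vec<f64> = k
                .iter()
                .map(|kj| qi.iter().zip(kj).map(|(a, b)| a * b).sum::<f64>() / d.sqrt())
                .collect();
            // Numerically stable softmax over the logits.
            let max = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
            let exps: Vec<f64> = logits.iter().map(|l| (l - max).exp()).collect();
            let sum: f64 = exps.iter().sum();
            // Attention-weighted sum of the value vectors.
            let d_v = v[0].len();
            (0..d_v)
                .map(|c| exps.iter().zip(v).map(|(w, vj)| w / sum * vj[c]).sum())
                .collect()
        })
        .collect()
}

fn main() {
    let q = vec![vec![1.0, 0.0]];
    let k = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    let v = vec![vec![10.0], vec![20.0]];
    println!("{:?}", scaled_dot_product_attention(&q, &k, &v));
}
```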
### Advanced Mechanisms

| Type | Description | Use Case |
|---|---|---|
| `SparseAttention` | Top-k sparsity patterns | Efficiency |
| `HyperbolicAttention` | Hyperbolic space embeddings | Hierarchical data |
| `GraphAttention` | Graph neural network attention | Relational data |
| `MemoryAugmented` | External memory access | Long-term context |
| `CrossAttention` | Query/key from different sources | Multi-modal fusion |
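As an illustration of the top-k idea behind `SparseAttention` (a generic sketch with assumed constants, not the crate's code): only the k largest logits for a query keep attention mass, and the rest are masked out before the softmax.

```rust
/// Generic top-k sparse attention weights for one query: keep the k largest
/// logits, mask the rest to -inf, then softmax. Illustrative sketch only;
/// assumes a non-empty logit slice and 1 <= k.
fn sparse_attention_weights(logits: &[f64], k: usize) -> Vec<f64> {
    // The k-th largest logit becomes the sparsity threshold.
    let k = k.clamp(1, logits.len());
    let mut sorted = logits.to_vec();
    sorted.sort_by(|a, b| b.partial_cmp(a).unwrap());
    let threshold = sorted[k - 1];

    // Mask everything below the threshold, then apply a numerically stable softmax.
    let masked: Vec<f64> = logits
        .iter()
        .map(|&l| if l >= threshold { l } else { f64::NEG_INFINITY })
        .collect();
    let max = masked.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = masked.iter().map(|l| (l - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    // With k = 2, only the two strongest keys receive non-zero attention.
    println!("{:?}", sparse_attention_weights(&[2.0, 0.1, 1.5, -0.3], 2));
}
```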
### Biological Mechanisms

| Type | Description | Use Case |
|---|---|---|
| `SalienceAttention` | Bottom-up salience maps | Novelty detection |
| `InhibitionOfReturn` | Temporal attention suppression | Visual search |
| `FeatureIntegration` | Binding features to locations | Object recognition |
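Inhibition of return biases attention away from recently visited locations so that search keeps moving. The sketch below is a generic illustration with assumed type layout and constants, not the crate's `InhibitionOfReturn` mechanism.

```rust
/// Generic inhibition-of-return sketch: recently attended locations are
/// temporarily suppressed so the spotlight moves on during visual search.
/// Not the omega-attention API; names and constants are illustrative.
struct InhibitionOfReturn {
    suppression: Vec<f64>,
    decay: f64,   // fraction of suppression that survives each step
    penalty: f64, // suppression added to the location just attended
}

impl InhibitionOfReturn {
    fn new(n_locations: usize) -> Self {
        Self { suppression: vec![0.0; n_locations], decay: 0.8, penalty: 1.0 }
    }

    /// Pick the most salient non-suppressed location, then suppress it.
    fn attend(&mut self, salience: &[f64]) -> usize {
        let winner = salience
            .iter()
            .zip(&self.suppression)
            .map(|(s, sup)| s - sup)
            .enumerate()
            .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
            .map(|(i, _)| i)
            .unwrap();
        for s in &mut self.suppression {
            *s *= self.decay; // old suppression fades over time
        }
        self.suppression[winner] += self.penalty; // freshly attended location is inhibited
        winner
    }
}

fn main() {
    let mut ior = InhibitionOfReturn::new(3);
    let salience = [0.9, 0.8, 0.1];
    // The spotlight moves around instead of getting stuck on the single best location.
    println!("{:?}", (0..4).map(|_| ior.attend(&salience)).collect::<Vec<_>>());
}
```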
## Working Memory

The working memory system implements Miller's "magical number 7±2" with biological gating (type names and argument values in this sketch are illustrative, not verified API):

```rust
use omega_attention::{MemoryItem, WorkingMemory}; // module path and type names assumed

// Create working memory with capacity 7
let mut wm = WorkingMemory::new(7);

// Configure input gate (controls what enters WM)
wm.input_gate.threshold = 0.5; // Minimum importance to enter
wm.input_gate.openness = 0.8;  // Gate openness

// Store a high-importance item (content vector + importance; constructor assumed)
let item = MemoryItem::new(vec![0.1, 0.9, 0.4], 0.9);
assert!(wm.store(item)); // Passes gate

// Items decay over time
wm.decay(0.1); // Reduce all activations (rate assumed)

// Rehearse to maintain items
wm.rehearse(0); // Boost activation (argument assumed)

// Find similar items
let query = vec![0.1, 0.9, 0.4];
let similar = wm.find_similar(&query, 3); // top-3 matches; arguments assumed
```
## Salience Computation

Bottom-up attention is driven by stimulus salience (type names and argument values in this sketch are illustrative, not verified API):

```rust
use omega_attention::SalienceComputer; // module path and type name assumed

let mut computer = SalienceComputer::new();

// Process input to extract salience features
let input = vec![0.2, 0.9, 0.1, 0.7];
let salience_map = computer.compute(&input);

// Individual feature contributions (novelty, contrast, motion, change)
let features = computer.extract_features(&input);
for feature in features {
    println!("{:?}", feature); // Debug output; field layout not shown here
}
```
## Configuration

Field names in this sketch follow the configurable options listed above (dimensions, heads, dropout) and are illustrative:

```rust
use omega_attention::AttentionConfig;

let config = AttentionConfig {
    dim: 512,     // attention dimension (field names assumed)
    num_heads: 8, // parallel attention heads
    dropout: 0.1, // attention dropout
    ..Default::default()
};
```
## Use Cases

### 1. Selective Processing

```rust
// Focus attention on task-relevant features.
// (`encode_task`/`encode_document` stand in for your own encoders; the `attend` arguments are assumed.)
let goals = encode_task(&task);
let document = encode_document(&text);
let attended = system.attend(&goals, &document)?;
// attended.attended_values contains task-relevant information
```
### 2. Novelty Detection

```rust
// Automatically detect novel/unexpected inputs
let salience = salience_computer.compute(&input);
if salience.max > 0.8 {
    // High salience: the stimulus captures attention bottom-up (threshold and handling are illustrative)
}
```
### 3. Memory Consolidation

```rust
// Important items enter working memory
let output = system.attend(&goals, &input)?;
if output.max_attention > 0.7 {
    // Strongly attended content is gated into working memory (threshold is illustrative)
}
```
## Integration with Omega

omega-attention is a core component of the Omega cognitive architecture:

```
omega-brain (Unified Cognitive System)
 └── omega-attention (Selective Processing)
      └── omega-consciousness (Awareness)
           └── omega-hippocampus (Memory)
                └── omega-snn (Neural Substrate)
```
## Performance

- 40 attention mechanisms for diverse use cases
- O(N²) standard attention; O(N) linear attention (see the sketch below this list); memory-efficient flash attention (O(N) memory)
- Configurable sparsity for efficiency
- Parallel processing with multi-head attention
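The O(N) figure for linear attention comes from the kernel trick: with a positive feature map φ, softmax attention is approximated by φ(Q)(φ(K)ᵀV), normalized by φ(Q)(φ(K)ᵀ1), and the bracketed sums can be accumulated in a single pass instead of forming the N×N score matrix. The sketch below uses the common elu(x)+1 feature map and is an illustration, not the crate's implementation.

```rust
/// elu(x) + 1 keeps features positive (a common choice in linear attention).
fn phi(x: &[f64]) -> Vec<f64> {
    x.iter()
        .map(|&v| if v > 0.0 { v + 1.0 } else { v.exp() })
        .collect()
}

/// Generic linear-attention sketch: the key/value statistics are accumulated once,
/// so cost grows linearly in sequence length. Illustrative only, not the crate's code.
fn linear_attention(q: &[Vec<f64>], k: &[Vec<f64>], v: &[Vec<f64>]) -> Vec<Vec<f64>> {
    let d = k[0].len();
    let d_v = v[0].len();

    // Running sums over keys/values: kv[a][b] = Σ_j phi(k_j)[a] * v_j[b]; ksum[a] = Σ_j phi(k_j)[a].
    let mut kv = vec![vec![0.0; d_v]; d];
    let mut ksum = vec![0.0; d];
    for (kj, vj) in k.iter().zip(v) {
        let pk = phi(kj);
        for a in 0..d {
            ksum[a] += pk[a];
            for b in 0..d_v {
                kv[a][b] += pk[a] * vj[b];
            }
        }
    }

    // Each query is answered from the running sums: no N×N score matrix is built.
    q.iter()
        .map(|qi| {
            let pq = phi(qi);
            let norm: f64 = pq.iter().zip(&ksum).map(|(a, b)| a * b).sum();
            (0..d_v)
                .map(|b| pq.iter().enumerate().map(|(a, pa)| pa * kv[a][b]).sum::<f64>() / norm)
                .collect()
        })
        .collect()
}

fn main() {
    let q = vec![vec![1.0, 0.0]];
    let k = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    let v = vec![vec![10.0], vec![20.0]];
    println!("{:?}", linear_attention(&q, &k, &v));
}
```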
## Related Crates
- omega-brain - Unified cognitive architecture
- omega-consciousness - Global workspace and IIT
- omega-snn - Spiking neural network substrate
- omega-hippocampus - Memory encoding and retrieval
## References
- Vaswani et al. (2017) "Attention Is All You Need"
- Desimone & Duncan (1995) "Neural Mechanisms of Selective Visual Attention"
- Corbetta & Shulman (2002) "Control of Goal-Directed and Stimulus-Driven Attention"
- Cowan (2001) "The Magical Number 4 in Short-Term Memory"
## License
Licensed under the MIT License. See LICENSE for details.