§ruvector-attention
Attention mechanisms for ruvector, including geometric, graph, and sparse attention.
This crate provides efficient implementations of various attention mechanisms:
- Scaled dot-product attention (see the sketch after this list)
- Multi-head attention with parallel processing
- Graph attention for GNN applications
- Geometric attention in hyperbolic spaces
- Sparse attention patterns
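For intuition, the core operation behind scaled dot-product attention, a softmax over scaled query-key similarities used to weight the value vectors, can be written as a short standalone sketch. This is illustrative only and does not use the crate's API; the crate's own ScaledDotProductAttention is shown in the example below.

// Illustrative sketch of softmax(q·k_i / sqrt(d)) applied to value vectors.
// Not the crate's implementation; shown only to explain the operation.
fn scaled_dot_product(query: &[f32], keys: &[&[f32]], values: &[&[f32]]) -> Vec<f32> {
    let scale = (query.len() as f32).sqrt();
    // One similarity score per key, scaled by sqrt(d).
    let scores: Vec<f32> = keys
        .iter()
        .map(|k| query.iter().zip(k.iter()).map(|(qi, ki)| qi * ki).sum::<f32>() / scale)
        .collect();
    // Numerically stable softmax over the scores.
    let max = scores.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = scores.iter().map(|s| (s - max).exp()).collect();
    let total: f32 = exps.iter().sum();
    // Attention output: softmax-weighted sum of the value vectors.
    let mut output = vec![0.0f32; values[0].len()];
    for (w, v) in exps.iter().zip(values.iter()) {
        for (o, x) in output.iter_mut().zip(v.iter()) {
            *o += (w / total) * x;
        }
    }
    output
}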
§Features
- SIMD Acceleration: Optional SIMD optimizations for performance
- Parallel Processing: Rayon-based parallel head computation
- WASM Support: Compiles to WebAssembly targets
- NAPI Bindings: Node.js bindings for JavaScript integration
§Example
use ruvector_attention::{
    attention::ScaledDotProductAttention,
    traits::Attention,
};
// Create scaled dot-product attention
let attention = ScaledDotProductAttention::new(512);
// Prepare inputs
let query = vec![1.0; 512];
let keys = vec![vec![0.5; 512], vec![0.3; 512]];
let values = vec![vec![1.0; 512], vec![2.0; 512]];
let keys_refs: Vec<&[f32]> = keys.iter().map(|k| k.as_slice()).collect();
let values_refs: Vec<&[f32]> = values.iter().map(|v| v.as_slice()).collect();
// Compute attention
let output = attention.compute(&query, &keys_refs, &values_refs).unwrap();
assert_eq!(output.len(), 512);
Re-exports§
pub use attention::{MultiHeadAttention, ScaledDotProductAttention};
pub use config::{AttentionConfig, GraphAttentionConfig, SparseAttentionConfig};
pub use error::{AttentionError, AttentionResult};
pub use traits::{Attention, EdgeInfo, GeometricAttention, Gradients, GraphAttention, SparseAttention, SparseMask, TrainableAttention};
pub use hyperbolic::{poincare_distance, mobius_add, exp_map, log_map, project_to_ball, HyperbolicAttention, HyperbolicAttentionConfig, MixedCurvatureAttention, MixedCurvatureConfig};
pub use sparse::{SparseMaskBuilder, AttentionMask, LocalGlobalAttention, LinearAttention, FlashAttention};
pub use moe::{MoEAttention, MoEConfig, Expert, ExpertType, StandardExpert, HyperbolicExpert, LinearExpert, Router, LearnedRouter, TopKRouting};
pub use graph::{EdgeFeaturedAttention, EdgeFeaturedConfig, GraphRoPE, RoPEConfig, DualSpaceAttention, DualSpaceConfig};
pub use training::{Loss, InfoNCELoss, LocalContrastiveLoss, SpectralRegularization, Reduction, Optimizer, SGD, Adam, AdamW, CurriculumScheduler, CurriculumStage, TemperatureAnnealing, DecayType, NegativeMiner, HardNegativeMiner, MiningStrategy};
pub use sdk::{AttentionBuilder, AttentionPipeline, presets};
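Because these items are re-exported at the crate root, the example above can also import them directly without naming the submodules:

use ruvector_attention::{Attention, ScaledDotProductAttention};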
Modules§
- attention: Attention mechanism implementations.
- config: Configuration types for attention mechanisms.
- error: Error types for the ruvector-attention crate.
- graph: Graph attention mechanisms for GNN applications.
- hyperbolic: Hyperbolic attention module (the underlying Poincaré distance is sketched after this list).
- moe: Mixture of Experts (MoE) attention mechanisms.
- sdk: ruvector-attention SDK.
- sparse: Sparse attention mechanisms for efficient computation on long sequences.
- training: Training utilities for attention-based graph neural networks.
- traits: Trait definitions for attention mechanisms.
- utils: Utility functions for attention mechanisms.
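As background for the hyperbolic module: attention in hyperbolic space replaces Euclidean dot products with distances in the Poincaré ball. A self-contained sketch of the standard Poincaré-ball distance follows; it is illustrative only, and the crate's poincare_distance may take additional parameters such as curvature.

// Standard Poincaré-ball distance between two points with norm < 1:
// d(x, y) = arcosh(1 + 2 * ||x - y||^2 / ((1 - ||x||^2) * (1 - ||y||^2)))
// Illustrative sketch; not the crate's implementation.
fn poincare_distance_sketch(x: &[f32], y: &[f32]) -> f32 {
    let norm_sq = |v: &[f32]| v.iter().map(|a| a * a).sum::<f32>();
    let diff_sq: f32 = x.iter().zip(y.iter()).map(|(a, b)| (a - b) * (a - b)).sum();
    let arg = 1.0 + 2.0 * diff_sq / ((1.0 - norm_sq(x)) * (1.0 - norm_sq(y)));
    arg.acosh()
}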
Constants§
- VERSION: Library version.