Crate amari_gpu

GPU acceleration for geometric algebra operations using WebGPU/wgpu

This crate provides GPU-accelerated implementations of core Amari operations using WebGPU/wgpu for cross-platform compatibility (native + WASM).

§Overview

The crate offers GPU acceleration for:

  • Clifford Algebra: Batch geometric products with Cayley table upload
  • Information Geometry: Batch Amari-Chentsov tensor computation
  • Holographic Memory: Batch bind, unbind, bundle, similarity (with holographic feature)
  • Measure Theory: GPU-accelerated Monte Carlo integration (with measure feature)
  • Relativistic Physics: Batch Lorentz transformations

§Quick Start

use amari_gpu::{GpuCliffordAlgebra, AdaptiveCompute};

// Create GPU context for Cl(3,0,0)
let gpu = GpuCliffordAlgebra::new::<3, 0, 0>().await?;

// Batch geometric product
let results = gpu.batch_geometric_product(&a_batch, &b_batch).await?;

// Or use adaptive dispatch (auto CPU/GPU)
let adaptive = AdaptiveCompute::new::<3, 0, 0>().await;
let results = adaptive.batch_geometric_product(&a_batch, &b_batch).await?;

§Holographic Memory (with holographic feature)

GPU-accelerated batch operations for vector symbolic architectures:

use amari_gpu::GpuHolographic;

// Create GPU holographic processor
let gpu = GpuHolographic::new(256).await?;  // 256-dimensional vectors

// Batch bind thousands of key-value pairs
let bound_flat = gpu.batch_bind(&keys_flat, &values_flat).await?;

// Batch similarity computation
let similarities = gpu.batch_similarity(&a_flat, &b_flat).await?;

// Batch bundle operation
let bundled_flat = gpu.batch_bundle(&a_flat, &b_flat).await?;

§Information Geometry

use amari_gpu::GpuInfoGeometry;

let gpu = GpuInfoGeometry::new().await?;

// Batch Amari-Chentsov tensor computation
let tensors = gpu.amari_chentsov_tensor_batch(&x_batch, &y_batch, &z_batch).await?;

// Fisher information matrix
let fisher = gpu.fisher_information_matrix(&params).await?;

§Adaptive CPU/GPU Dispatch

All GPU operations automatically fall back to CPU when:

  • GPU is unavailable
  • Batch size is too small to benefit from GPU parallelism
  • Operating in CI/test environments

The crossover point is typically around 100 operations; below that, dispatch and transfer overhead outweigh the GPU's parallelism gains. A minimal sketch of this behaviour follows below.
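The sketch reuses the AdaptiveCompute API shown in the Quick Start; the batch variables are assumed to be prepared in the same way as there.

use amari_gpu::AdaptiveCompute;

// Assumes an async context; batches prepared as in the Quick Start.
let adaptive = AdaptiveCompute::new::<3, 0, 0>().await;

// A small batch (below the ~100-operation threshold) runs on the CPU path;
// a large batch is dispatched to the GPU when a device is available.
// The call site is identical in both cases.
let small_results = adaptive.batch_geometric_product(&a_small, &b_small).await?;
let large_results = adaptive.batch_geometric_product(&a_large, &b_large).await?;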

§Feature Flags

  • std: Standard library support
  • holographic: GPU-accelerated holographic memory
  • measure: GPU-accelerated Monte Carlo integration
  • calculus: GPU-accelerated differential geometry
  • probabilistic: GPU-accelerated probability sampling
  • automata: GPU-accelerated cellular automata
  • enumerative: GPU-accelerated combinatorics
  • functional: GPU-accelerated functional analysis (Hilbert spaces, spectral theory)
  • webgpu: Enable WebGPU backend
  • high-precision: Enable 128-bit float support

§Performance

GPU acceleration provides significant speedups for batch operations:

  • Geometric Product: ~10-50x at a batch size of 1,000
  • Holographic Bind: ~20-100x at a batch size of 10,000
  • Similarity Batch: ~50-200x at a batch size of 10,000
  • Monte Carlo: ~100-500x at a batch size of 100,000

Actual speedups depend on GPU hardware and driver support.
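Because throughput varies widely across devices, a quick way to gauge the benefit on your own hardware is to time a batch call directly. A minimal sketch, assuming an async context and batches prepared as in the Quick Start:

use std::time::Instant;
use amari_gpu::GpuCliffordAlgebra;

let gpu = GpuCliffordAlgebra::new::<3, 0, 0>().await?;

// Instant works on native targets; WASM builds need a different timer.
let start = Instant::now();
let _results = gpu.batch_geometric_product(&a_batch, &b_batch).await?;
println!("GPU batch took {:?}", start.elapsed());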

Re-exports§

pub use adaptive::AdaptiveVerificationError;
pub use adaptive::AdaptiveVerificationLevel;
pub use adaptive::AdaptiveVerifier;
pub use adaptive::CpuFeatures;
pub use adaptive::GpuBackend;
pub use adaptive::PlatformCapabilities;
pub use adaptive::PlatformPerformanceProfile;
pub use adaptive::VerificationPlatform;
pub use adaptive::WasmEnvironment;
pub use benchmarks::AmariMultiGpuBenchmarks;
pub use benchmarks::BenchmarkConfig;
pub use benchmarks::BenchmarkResult;
pub use benchmarks::BenchmarkRunner;
pub use benchmarks::BenchmarkSuiteResults;
pub use benchmarks::BenchmarkSummary;
pub use benchmarks::ScalingAnalysis;
pub use multi_gpu::ComputeIntensity;
pub use multi_gpu::DeviceCapabilities;
pub use multi_gpu::DeviceId;
pub use multi_gpu::DeviceWorkload;
pub use multi_gpu::GpuArchitecture;
pub use multi_gpu::GpuDevice;
pub use multi_gpu::IntelligentLoadBalancer;
pub use multi_gpu::LoadBalancingStrategy;
pub use multi_gpu::MultiGpuBarrier;
pub use multi_gpu::PerformanceRecord;
pub use multi_gpu::PerformanceStats;
pub use multi_gpu::SynchronizationManager;
pub use multi_gpu::Workload;
pub use multi_gpu::WorkloadCoordinator;
pub use network::AdaptiveNetworkCompute;
pub use network::GpuGeometricNetwork;
pub use network::GpuNetworkError;
pub use network::GpuNetworkResult;
pub use performance::AdaptiveDispatchPolicy;
pub use performance::CalibrationResult;
pub use performance::GpuProfile;
pub use performance::GpuProfiler;
pub use performance::WorkgroupConfig;
pub use performance::WorkgroupOptimizer;
pub use relativistic::GpuRelativisticParticle;
pub use relativistic::GpuRelativisticPhysics;
pub use relativistic::GpuSpacetimeVector;
pub use relativistic::GpuTrajectoryParams;
pub use shaders::ShaderLibrary;
pub use shaders::DUAL_SHADERS;
pub use shaders::FUSION_SHADERS;
pub use shaders::TROPICAL_SHADERS;
pub use timeline::BottleneckAnalysis;
pub use timeline::DeviceUtilizationStats;
pub use timeline::GpuTimelineAnalyzer;
pub use timeline::MultiGpuPerformanceMonitor;
pub use timeline::OptimizationRecommendation;
pub use timeline::PerformanceAnalysisReport;
pub use timeline::PerformanceBottleneck;
pub use timeline::PerformanceSummary;
pub use timeline::RecommendationPriority;
pub use timeline::SynchronizationAnalysis;
pub use timeline::TimelineEvent;
pub use timeline::UtilizationAnalysis;
pub use unified::BufferPoolStats;
pub use unified::EnhancedGpuBufferPool;
pub use unified::GpuAccelerated;
pub use unified::GpuContext;
pub use unified::GpuDispatcher;
pub use unified::GpuOperationParams;
pub use unified::GpuParam;
pub use unified::MultiGpuStats;
pub use unified::PoolEntryStats;
pub use unified::SharedGpuContext;
pub use unified::UnifiedGpuError;
pub use unified::UnifiedGpuResult;
pub use verification::GpuBoundaryVerifier;
pub use verification::GpuVerificationError;
pub use verification::RelativisticVerifier;
pub use verification::StatisticalGpuVerifier;
pub use verification::VerificationConfig;
pub use verification::VerificationStrategy;
pub use verification::VerifiedMultivector;

Modules§

adaptive
Adaptive Verification Framework for Cross-Platform GPU Operations
benchmarks
Comprehensive Benchmarking Suite for Multi-GPU Performance Validation
multi_gpu
Multi-GPU workload distribution and coordination infrastructure
network
GPU-accelerated geometric network analysis
performance
GPU performance optimization and profiling infrastructure
relativistic
GPU acceleration for relativistic physics computations
shaders
WebGPU compute shader library for mathematical operations
timeline
GPU Timeline Analysis and Performance Profiling Infrastructure
unified
Unified GPU acceleration infrastructure for all mathematical domains
verification
GPU Verification Framework for Phase 4B

Structs§

AdaptiveCompute
Adaptive GPU/CPU dispatcher
GpuCliffordAlgebra
GPU-accelerated Clifford algebra operations
GpuDeviceInfo
GPU device information for edge computing
GpuFisherMatrix
GPU Fisher Information Matrix
GpuInfoGeometry
GPU-accelerated Information Geometry operations

Enums§

GpuError
GPU operation errors