SIMD-optimized operations for sklears
This crate provides SIMD-accelerated implementations of common machine learning operations using Rust’s portable SIMD API and platform-specific intrinsics.
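For orientation, here is a minimal sketch of the portable-SIMD style this description refers to; it is not this crate's own API. It assumes nightly Rust with the `portable_simd` feature enabled and computes a dot product eight lanes at a time with a scalar tail.

```rust
#![feature(portable_simd)]
use std::simd::{f32x8, prelude::*};

/// Dot product: eight f32 lanes per iteration, scalar handling of the tail.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len());
    let chunks = a.len() / 8;
    let mut acc = f32x8::splat(0.0);
    for i in 0..chunks {
        let va = f32x8::from_slice(&a[i * 8..i * 8 + 8]);
        let vb = f32x8::from_slice(&b[i * 8..i * 8 + 8]);
        acc += va * vb;
    }
    // Horizontal sum of the accumulator, then the leftover elements.
    let mut sum = acc.reduce_sum();
    for i in chunks * 8..a.len() {
        sum += a[i] * b[i];
    }
    sum
}
```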
Re-exports
pub use clustering::LinkageType;
Modules
- activation - SIMD-optimized activation functions for machine learning (see the sketch after this list)
- adaptive_optimization - Adaptive optimization and runtime algorithm selection
- advanced_optimizations - Advanced SIMD optimization techniques
- allocator - Custom allocators optimized for SIMD operations
- approximate - Approximate computing algorithms for high-performance scenarios
- audio_processing - SIMD-optimized audio processing operations
- batch_operations - Batch operations for tensor processing
- benchmark_framework - Advanced benchmarking framework for SIMD operations
- bit_operations - SIMD-optimized bit-level operations for efficient data manipulation
- clustering - SIMD-optimized clustering operations
- comprehensive_benchmarks - Comprehensive benchmarking suite for sklears-simd
- compression - SIMD-optimized compression algorithms
- custom_accelerator - Custom accelerator support framework
- distance - SIMD-optimized distance calculations using platform intrinsics
- distributions - SIMD-optimized probability distributions and sampling algorithms
- energy_benchmarks - Energy efficiency benchmarking for SIMD operations
- error_correction - SIMD-optimized error correction codes
- external_integration - External SIMD library integration framework
- fluent - Fluent API for SIMD operations
- fpga - FPGA (Field-Programmable Gate Array) acceleration support for SIMD operations
- gpu - GPU acceleration support for SIMD operations
- gpu_memory - GPU memory management utilities
- half_precision - Half-precision floating point operations (FP16/BF16)
- image_processing - SIMD-optimized image processing operations
- intrinsics - Intrinsic function wrappers and compiler optimization hints
- kernels - SIMD-optimized kernel functions for machine learning
- loss - SIMD-optimized loss functions for machine learning
- matrix - Enhanced SIMD-optimized matrix operations
- memory - Memory optimization utilities for SIMD operations
- middleware - Middleware system for operation pipelines
- multi_gpu - Multi-GPU support for parallel processing
- neuromorphic - Neuromorphic computing acceleration support for SIMD operations
- optimization - SIMD-optimized optimization algorithms
- optimization_hints - Compile-time optimization hints for SIMD operations
- performance_hooks - Performance monitoring hooks for SIMD operations
- performance_monitor - Performance monitoring and tracking utilities
- plugin_architecture - Plugin architecture for custom SIMD operations
- profiling - Performance analysis and profiling tools
- quantum - Quantum computing acceleration support for SIMD operations
- reduction - SIMD-optimized reduction and scan operations
- regression - SIMD-optimized regression operations
- safe_simd - Type-safe SIMD abstractions with compile-time guarantees
- safety - Safety and correctness enhancements for SIMD operations
- search - SIMD-optimized search algorithms
- signal_processing - SIMD-optimized signal processing operations
- sorting - SIMD-optimized sorting algorithms
- target - Target-specific SIMD optimizations and compilation support
- tpu - TPU (Tensor Processing Unit) acceleration support for SIMD operations
- traits - Trait-based SIMD framework for modular and composable operations
- validation - Validation framework for SIMD operations
- vector - SIMD Vector Operations Framework
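As referenced in the activation entry above, the following is a hedged sketch of what a SIMD-optimized activation looks like in the portable-SIMD style: an in-place ReLU that processes eight lanes per iteration and handles the remainder with scalar code. It is illustrative only and does not use this crate's `activation` API.

```rust
#![feature(portable_simd)]
use std::simd::{f32x8, prelude::*};

/// In-place ReLU: max(x, 0) applied eight lanes at a time, scalar tail.
fn relu_inplace(x: &mut [f32]) {
    let zero = f32x8::splat(0.0);
    let mut chunks = x.chunks_exact_mut(8);
    for chunk in &mut chunks {
        let v = f32x8::from_slice(chunk);
        v.simd_max(zero).copy_to_slice(chunk);
    }
    // Elements that did not fill a full SIMD register.
    for v in chunks.into_remainder() {
        *v = v.max(0.0);
    }
}
```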
Macros
- impl_simd_operation - Macro for implementing basic SIMD operation traits
- optimize_for_simd - Macro for compile-time optimization hints
- perf_scope - Macro for creating performance monitoring scopes
- select_target_impl - Select implementation at compile time based on target features (see the sketch after this list)
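The `select_target_impl` entry above refers to compile-time dispatch on target features. Below is a hedged sketch of the underlying pattern using plain `cfg!` rather than the macro itself (whose actual syntax is not documented here); the function names are placeholders.

```rust
/// Dispatch resolved at compile time: `cfg!(target_feature = "avx2")`
/// is a constant, so the untaken branch is optimized away.
fn sum(xs: &[f32]) -> f32 {
    if cfg!(target_feature = "avx2") {
        // Taken only when built with e.g. `-C target-cpu=native`
        // or `-C target-feature=+avx2`.
        sum_avx2(xs)
    } else {
        sum_scalar(xs)
    }
}

fn sum_scalar(xs: &[f32]) -> f32 {
    xs.iter().sum()
}

// Placeholder standing in for an AVX2-specialized implementation.
fn sum_avx2(xs: &[f32]) -> f32 {
    xs.iter().sum()
}
```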
Structs
- SimdCapabilities - Platform-specific SIMD capabilities
Statics
- SIMD_CAPS - Global SIMD capabilities detection (see the sketch below)
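`SimdCapabilities` and `SIMD_CAPS` describe runtime feature detection cached globally. A minimal sketch of that pattern follows, with hypothetical field names (not the crate's actual layout), restricted to x86_64 for brevity.

```rust
use std::sync::OnceLock;

/// Hypothetical capability record; the real `SimdCapabilities` fields may differ.
#[derive(Debug, Clone, Copy)]
struct Capabilities {
    sse4_1: bool,
    avx2: bool,
    fma: bool,
}

static CAPS: OnceLock<Capabilities> = OnceLock::new();

/// Detect once, then reuse the cached result on every later call.
#[cfg(target_arch = "x86_64")]
fn caps() -> Capabilities {
    *CAPS.get_or_init(|| Capabilities {
        sse4_1: is_x86_feature_detected!("sse4.1"),
        avx2: is_x86_feature_detected!("avx2"),
        fma: is_x86_feature_detected!("fma"),
    })
}
```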