Automatic Relevance Determination (ARD) kernels.
ARD kernels learn a separate length scale for each input dimension, automatically determining the relevance of each feature. This is particularly useful for high-dimensional data where some features may be more important than others.
§Key Features
- Per-dimension length scales: Each feature has its own length scale
- Feature selection: Irrelevant features get large length scales (effectively ignored)
- Gradient support: For hyperparameter optimization via gradient descent
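To make the per-dimension idea concrete, here is a standalone sketch of the ARD RBF formula, k(x, y) = exp(-0.5 · Σ_d (x_d − y_d)² / ℓ_d²), where ℓ_d is the length scale for dimension d. The free function `ard_rbf` below is illustrative only, not the crate's API; it demonstrates how a very large length scale makes its dimension effectively irrelevant.

```rust
// Sketch of the ARD RBF formula (illustrative, not the crate's internals):
// k(x, y) = exp(-0.5 * sum_d (x_d - y_d)^2 / l_d^2)
fn ard_rbf(x: &[f64], y: &[f64], length_scales: &[f64]) -> f64 {
    let sq_dist: f64 = x
        .iter()
        .zip(y)
        .zip(length_scales)
        .map(|((xi, yi), l)| ((xi - yi) / l).powi(2))
        .sum();
    (-0.5 * sq_dist).exp()
}

fn main() {
    // A huge length scale on dimension 1 makes that feature nearly irrelevant:
    // moving only that coordinate barely changes the kernel value.
    let ls = [1.0, 1000.0, 0.5];
    let base = ard_rbf(&[0.0, 0.0, 0.0], &[0.0, 0.0, 0.0], &ls);
    let moved = ard_rbf(&[0.0, 0.0, 0.0], &[0.0, 5.0, 0.0], &ls);
    assert!((base - 1.0).abs() < 1e-12); // identical inputs give k = 1
    assert!((base - moved).abs() < 1e-4); // dimension 1 is effectively ignored
    println!("base = {base}, moved = {moved}");
}
```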
§Example
use tensorlogic_sklears_kernels::ard_kernel::{ArdRbfKernel, ArdMaternKernel};
use tensorlogic_sklears_kernels::Kernel;
// Create ARD RBF kernel with 3 features, each with its own length scale
let length_scales = vec![1.0, 2.0, 0.5]; // Different relevance per dimension
let kernel = ArdRbfKernel::new(length_scales.clone()).unwrap();
let x = vec![1.0, 2.0, 3.0];
let y = vec![1.5, 2.5, 3.5];
let sim = kernel.compute(&x, &y).unwrap();

Structs§
- ArdMaternKernel - ARD Matérn kernel with per-dimension length scales.
- ArdRationalQuadraticKernel - ARD Rational Quadratic kernel.
- ArdRbfKernel - ARD (Automatic Relevance Determination) RBF kernel.
- ConstantKernel - Constant kernel: K(x, y) = σ²
- DotProductKernel - Dot Product kernel (Linear kernel with variance and shift).
- KernelGradient - Gradient information for kernel hyperparameter optimization.
- ScaledKernel - Scaled kernel wrapper that multiplies a kernel by a variance parameter.
- WhiteNoiseKernel - Utility kernel: White Noise kernel for observation noise modeling.
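The gradient information that KernelGradient carries has a closed form for the ARD RBF kernel: ∂k/∂ℓ_d = k(x, y) · (x_d − y_d)² / ℓ_d³. The sketch below uses free functions for illustration (the crate exposes gradients through KernelGradient instead) and validates one component against a central finite difference.

```rust
// Illustrative closed-form ARD RBF gradient for length-scale optimization:
// d k / d l_d = k(x, y) * (x_d - y_d)^2 / l_d^3
// (free functions for illustration; not the crate's API)
fn ard_rbf(x: &[f64], y: &[f64], ls: &[f64]) -> f64 {
    let s: f64 = x
        .iter()
        .zip(y)
        .zip(ls)
        .map(|((a, b), l)| ((a - b) / l).powi(2))
        .sum();
    (-0.5 * s).exp()
}

fn ard_rbf_grad(x: &[f64], y: &[f64], ls: &[f64]) -> Vec<f64> {
    let k = ard_rbf(x, y, ls);
    x.iter()
        .zip(y)
        .zip(ls)
        .map(|((a, b), l)| k * (a - b).powi(2) / l.powi(3))
        .collect()
}

fn main() {
    let (x, y, ls) = ([1.0, 2.0, 3.0], [1.5, 2.5, 3.5], [1.0, 2.0, 0.5]);
    let grad = ard_rbf_grad(&x, &y, &ls);
    // Check the first component against a central finite difference.
    let h = 1e-6;
    let mut lp = ls;
    lp[0] += h;
    let mut lm = ls;
    lm[0] -= h;
    let fd = (ard_rbf(&x, &y, &lp) - ard_rbf(&x, &y, &lm)) / (2.0 * h);
    assert!((grad[0] - fd).abs() < 1e-6);
    println!("analytic grad[0] = {grad0}, finite diff = {fd}", grad0 = grad[0]);
}
```

A gradient per length scale is exactly what a gradient-descent loop over hyperparameters consumes, which is why the crate pairs kernel evaluation with gradient support.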