pub struct GpuInfoGeometry { /* private fields */ }
GPU-accelerated Information Geometry operations
This struct provides GPU acceleration for information geometry computations using WebGPU and WGSL compute shaders. It implements progressive enhancement:
- Automatically detects GPU capabilities during initialization
- Falls back to CPU computation when GPU is unavailable or for small workloads
- Scales to GPU acceleration for large batch operations in production
The struct maintains WebGPU resources (device, queue, pipelines) but gracefully handles environments where GPU access is restricted (e.g., CI/test environments).
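A minimal construction sketch is shown below. The crate path amari_gpu is an assumption inferred from the type names on this page; adjust it to the actual workspace layout.

use amari_gpu::{GpuError, GpuInfoGeometry};

// Sketch: build the context once and reuse it for subsequent tensor calls.
async fn build_context() -> Result<GpuInfoGeometry, GpuError> {
    // new() detects GPU capabilities; when no adapter is available (e.g. in
    // CI), the returned context still services requests via the CPU path,
    // so callers do not need to write a separate fallback branch.
    GpuInfoGeometry::new().await
}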
Implementations§
impl GpuInfoGeometry
pub async fn new() -> Result<Self, GpuError>
Initialize GPU context for information geometry operations
pub async fn new_with_device_preference(device_type: &str) -> Result<Self, GpuError>
Create with specific device preference for edge computing
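A sketch of device selection for an edge deployment. The accepted device_type strings are not listed on this page, so "integrated" below is only a placeholder assumption; the crate path is assumed as above.

use amari_gpu::{GpuError, GpuInfoGeometry};

async fn edge_context() -> Result<GpuInfoGeometry, GpuError> {
    // "integrated" is a placeholder; check the crate source for the set of
    // recognized device_type values.
    GpuInfoGeometry::new_with_device_preference("integrated").await
}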
pub async fn amari_chentsov_tensor(
    &self,
    x: &Multivector<3, 0, 0>,
    y: &Multivector<3, 0, 0>,
    z: &Multivector<3, 0, 0>,
) -> Result<f64, GpuError>
Compute a single Amari-Chentsov tensor (CPU fallback for small operations)
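A sketch of a single evaluation. Multivector construction is left to the caller because the amari_core constructor API is not shown on this page; the crate paths are assumptions.

use amari_core::Multivector;
use amari_gpu::{GpuError, GpuInfoGeometry};

async fn tensor_value(
    geometry: &GpuInfoGeometry,
    x: &Multivector<3, 0, 0>,
    y: &Multivector<3, 0, 0>,
    z: &Multivector<3, 0, 0>,
) -> Result<f64, GpuError> {
    // A single evaluation stays on the CPU fallback path, as noted above.
    geometry.amari_chentsov_tensor(x, y, z).await
}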
pub async fn amari_chentsov_tensor_batch(
    &self,
    x_batch: &[Multivector<3, 0, 0>],
    y_batch: &[Multivector<3, 0, 0>],
    z_batch: &[Multivector<3, 0, 0>],
) -> Result<Vec<f64>, GpuError>
Batch compute Amari-Chentsov tensors with intelligent CPU/GPU dispatch
This method implements progressive enhancement:
- Small batches (fewer than 100 elements): CPU computation for efficiency
- Large batches: GPU acceleration when available, with CPU fallback
Note: The current implementation uses CPU computation to ensure correctness in test environments where GPU access may be restricted. In production deployments with proper GPU access, it will automatically use GPU acceleration for large batches.
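A batch sketch under the assumption that the three input slices have equal length and that one tensor value is returned per index; crate paths are assumed as above.

use amari_core::Multivector;
use amari_gpu::{GpuError, GpuInfoGeometry};

async fn tensor_batch(
    geometry: &GpuInfoGeometry,
    xs: &[Multivector<3, 0, 0>],
    ys: &[Multivector<3, 0, 0>],
    zs: &[Multivector<3, 0, 0>],
) -> Result<Vec<f64>, GpuError> {
    // Dispatch is internal: small batches run on the CPU, large ones use the
    // GPU when the environment allows it (see the note above).
    geometry.amari_chentsov_tensor_batch(xs, ys, zs).await
}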
pub async fn amari_chentsov_tensor_from_typed_arrays(
    &self,
    flat_data: &[f64],
    batch_size: usize,
) -> Result<Vec<f64>, GpuError>
Compute a tensor batch from TypedArray-style flat data
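A sketch of the flat-data entry point. The exact packing of flat_data (components per multivector and the x/y/z interleaving) is not specified on this page, so the divisibility check below is only illustrative; crate path assumed as above.

use amari_gpu::{GpuError, GpuInfoGeometry};

async fn tensor_batch_from_flat(
    geometry: &GpuInfoGeometry,
    flat_data: &[f64],
    batch_size: usize,
) -> Result<Vec<f64>, GpuError> {
    // Assumed invariant: flat_data holds batch_size fixed-size records; the
    // record layout must match whatever the crate expects.
    debug_assert!(batch_size > 0 && flat_data.len() % batch_size == 0);
    geometry
        .amari_chentsov_tensor_from_typed_arrays(flat_data, batch_size)
        .await
}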
pub async fn device_info(&self) -> Result<GpuDeviceInfo, GpuError>
Get device information for edge computing
pub async fn memory_usage(&self) -> Result<u64, GpuError>
Get current memory usage
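A sketch combining both telemetry calls for edge-aware scheduling. The fields of GpuDeviceInfo and the unit of the memory figure (assumed bytes) are not documented on this page, so the values are simply returned; crate path assumed as above.

use amari_gpu::{GpuDeviceInfo, GpuError, GpuInfoGeometry};

async fn telemetry(
    geometry: &GpuInfoGeometry,
) -> Result<(GpuDeviceInfo, u64), GpuError> {
    let info = geometry.device_info().await?;  // GpuDeviceInfo fields not shown on this page
    let used = geometry.memory_usage().await?; // u64; unit assumed to be bytes
    Ok((info, used))
}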
pub async fn fisher_information_matrix(
    &self,
    _parameters: &[f64],
) -> Result<GpuFisherMatrix, GpuError>
Compute the Fisher Information Matrix
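A sketch of the Fisher information call. The leading underscore on _parameters suggests the argument may currently be unused, and GpuFisherMatrix accessors are not shown here, so the matrix is returned unchanged; crate path assumed as above.

use amari_gpu::{GpuError, GpuFisherMatrix, GpuInfoGeometry};

async fn fisher_matrix(
    geometry: &GpuInfoGeometry,
    parameters: &[f64],
) -> Result<GpuFisherMatrix, GpuError> {
    // The expected length and semantics of the parameter vector are not
    // documented on this page.
    geometry.fisher_information_matrix(parameters).await
}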
Auto Trait Implementations§
impl Freeze for GpuInfoGeometry
impl !RefUnwindSafe for GpuInfoGeometry
impl Send for GpuInfoGeometry
impl Sync for GpuInfoGeometry
impl Unpin for GpuInfoGeometry
impl !UnwindSafe for GpuInfoGeometry
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CheckedAs for T
fn checked_as<Dst>(self) -> Option<Dst>
where
    T: CheckedCast<Dst>,
impl<Src, Dst> CheckedCastFrom<Src> for Dst
where
    Src: CheckedCast<Dst>,
fn checked_cast_from(src: Src) -> Option<Dst>
impl<T> OverflowingAs for T
fn overflowing_as<Dst>(self) -> (Dst, bool)
where
    T: OverflowingCast<Dst>,
impl<Src, Dst> OverflowingCastFrom<Src> for Dst
where
    Src: OverflowingCast<Dst>,
fn overflowing_cast_from(src: Src) -> (Dst, bool)
impl<T> SaturatingAs for T
fn saturating_as<Dst>(self) -> Dst
where
    T: SaturatingCast<Dst>,
impl<Src, Dst> SaturatingCastFrom<Src> for Dst
where
    Src: SaturatingCast<Dst>,
fn saturating_cast_from(src: Src) -> Dst
impl<SS, SP> SupersetOf<SS> for SP
where
    SS: SubsetOf<SP>,
fn to_subset(&self) -> Option<SS>
The inverse inclusion map: attempts to construct self from the equivalent element of its superset.
fn is_in_subset(&self) -> bool
Checks if self is actually part of its subset T (and can be converted to it).
fn to_subset_unchecked(&self) -> SS
Same as self.to_subset but without any property checks. Always succeeds.
fn from_subset(element: &SS) -> SP
The inclusion map: converts self to the equivalent element of its superset.