Distributed Machine Learning Algorithms
This module provides concrete implementations of distributed ML algorithms that scale across multiple nodes with fault tolerance and load balancing.
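As a concrete illustration of the parameter-server pattern this module is built around, here is a minimal standalone sketch of weighted parameter averaging (FedAvg-style), where each worker's contribution is weighted by its local sample count. This is an independent sketch of the technique, not the crate's actual API; the function name and signature are assumptions for illustration.

```rust
/// Weighted average of per-worker model parameters (FedAvg-style sketch).
/// Each entry pairs a parameter vector with that worker's sample count.
fn federated_average(clients: &[(Vec<f64>, usize)]) -> Vec<f64> {
    let total: usize = clients.iter().map(|(_, n)| n).sum();
    let dim = clients[0].0.len();
    let mut avg = vec![0.0; dim];
    for (params, n) in clients {
        // Weight each worker by its share of the total samples.
        let w = *n as f64 / total as f64;
        for (a, p) in avg.iter_mut().zip(params) {
            *a += w * p;
        }
    }
    avg
}

fn main() {
    // Two workers: 10 samples at [1.0, 2.0], 30 samples at [3.0, 4.0].
    let clients = vec![(vec![1.0, 2.0], 10), (vec![3.0, 4.0], 30)];
    // Weighted average: 0.25*[1, 2] + 0.75*[3, 4] = [2.5, 3.5]
    println!("{:?}", federated_average(&clients));
}
```

Weighting by sample count keeps the global model unbiased when workers hold partitions of different sizes, which is the usual default in parameter-server training.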
Structs§
- BFTConfig - Configuration for Byzantine-Fault Tolerant training
- ByzantineFaultTolerant - Byzantine-Fault Tolerant aggregation for robust distributed learning
- ClientStats - Statistics for a federated client
- DataPartition - Data partition assigned to a worker
- DistributedConfig - Configuration for distributed training
- DistributedLinearRegression - Distributed linear regression using a parameter-server architecture
- DistributedTrainingStats - Statistics from distributed training
- FederatedClient - Federated learning client
- FederatedConfig - Configuration for federated learning
- FederatedLearning - Federated learning framework with privacy-preserving techniques
- LoadBalancer - Advanced load balancing for distributed systems
- ModelParameters - Model parameters for distributed learning
- ParameterMetadata - Metadata about model parameters
- ParameterServer - Parameter server for coordinating distributed training
- PrivacyMechanism - Privacy mechanism for federated learning
- WorkerLoad - Worker load information
- WorkerNode - Worker node for distributed computation
- WorkerStats - Statistics tracked by each worker
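To illustrate the kind of policy a load balancer such as `LoadBalancer` typically implements, here is a self-contained sketch of least-loaded worker selection over per-worker load counts. The function name and the `(worker id, in-flight tasks)` representation are assumptions for this sketch, not the crate's types.

```rust
/// Least-loaded strategy sketch: pick the worker with the fewest
/// in-flight tasks. `loads` pairs a worker id with its current load.
fn least_loaded(loads: &[(u32, usize)]) -> Option<u32> {
    loads
        .iter()
        .min_by_key(|(_, load)| *load) // first minimum wins on ties
        .map(|(id, _)| *id)
}

fn main() {
    let loads = vec![(0, 5), (1, 2), (2, 7)];
    // Worker 1 has the lowest load, so it receives the next task.
    println!("{:?}", least_loaded(&loads)); // Some(1)
}
```

Real load balancers usually refine this with weights, locality, or response-time estimates, but least-loaded selection is the common baseline.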
Enums§
- AggregationMethod - Robust aggregation methods for Byzantine-Fault Tolerance
- LoadBalancingStrategy - Load balancing strategy
- SyncStrategy - Synchronization strategy for distributed training
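One classic robust aggregation rule of the kind `AggregationMethod` covers is the coordinate-wise median, which tolerates a minority of Byzantine (arbitrarily corrupted) updates. The following is a standalone sketch of that rule under assumed inputs, not the crate's implementation.

```rust
/// Coordinate-wise median sketch: for each parameter coordinate, take
/// the median across all worker updates. A single extreme outlier
/// cannot drag the aggregate, unlike a plain mean.
fn coordinate_median(updates: &[Vec<f64>]) -> Vec<f64> {
    let dim = updates[0].len();
    (0..dim)
        .map(|i| {
            let mut col: Vec<f64> = updates.iter().map(|u| u[i]).collect();
            col.sort_by(|a, b| a.partial_cmp(b).unwrap());
            col[col.len() / 2] // upper median for even counts
        })
        .collect()
}

fn main() {
    // Three honest updates near 1.0 and one Byzantine outlier at 100.0.
    let updates = vec![vec![1.0], vec![1.1], vec![0.9], vec![100.0]];
    // The median ignores the outlier; a mean would be pulled to ~25.75.
    println!("{:?}", coordinate_median(&updates));
}
```

Median-based rules trade a small amount of statistical efficiency for robustness, which is why enums like `AggregationMethod` typically offer several aggregation choices.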