pub struct DeepDistributedQP { /* private fields */ }
DeepDistributedQP: Deep Learning-Aided Distributed Optimization.
DeepDistributedQP combines operator splitting methods with learned policies to efficiently solve large-scale quadratic programming problems in a distributed manner.
Implementations
impl DeepDistributedQP
pub fn new(
    learning_rate: f32,
    num_consensus_nodes: usize,
    max_iterations: usize,
    tolerance: f32,
) -> Self
Creates a new DeepDistributedQP optimizer with the given configuration.
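A minimal usage sketch. The module path and all argument values are illustrative assumptions, not documented defaults:

```rust
// Assumed import path; adjust to the crate's actual module layout.
use trustformers::optim::DeepDistributedQP;

// Construct an optimizer with 8 consensus nodes (illustrative values).
let opt = DeepDistributedQP::new(
    1e-3, // learning_rate
    8,    // num_consensus_nodes
    500,  // max_iterations
    1e-6, // tolerance
);
```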
pub fn for_large_scale() -> Self
Creates DeepDistributedQP with configuration optimized for large-scale problems.
pub fn for_portfolio_optimization() -> Self
Creates DeepDistributedQP with configuration optimized for portfolio optimization.
pub fn with_config(config: DeepDistributedQPConfig) -> Self
Creates DeepDistributedQP with custom configuration.
pub fn qp_solver_stats(&self) -> HashMap<String, (usize, f32, f32, bool)>
Returns statistics about the distributed QP solver.
pub fn cumulative_speedup(&self) -> f32
Returns the cumulative speedup achieved.
pub fn distributed_memory_usage(&self) -> usize
Returns memory usage of consensus nodes and policy networks.
impl DeepDistributedQP
pub fn num_workers(&self) -> usize
Returns the number of consensus workers/nodes.
pub fn learning_rate(&self) -> f32
Returns the current learning rate.
pub fn communication_rounds(&self) -> usize
Returns the estimated number of communication rounds.
pub fn synchronization_overhead(&self) -> f32
Returns an estimate of the synchronization overhead.
pub fn solve_qp(
    &mut self,
    problem_id: &str,
    p: &Tensor,
    q: &Tensor,
    a: Option<&Tensor>,
    b: Option<&Tensor>,
    g: Option<&Tensor>,
    h: Option<&Tensor>,
) -> Result<Tensor>
Solves a quadratic programming problem with explicit matrices.
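The parameter names match the conventional QP layout used by solvers such as CVXOPT and OSQP. Assuming that convention holds here (the docs do not state it explicitly), solve_qp targets:

```latex
\min_{x} \; \tfrac{1}{2} x^\top P x + q^\top x
\quad \text{subject to} \quad A x = b, \qquad G x \le h
```

where a/b (equality constraints) and g/h (inequality constraints) may each be omitted by passing None for an unconstrained or partially constrained problem.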
Trait Implementations
impl Clone for DeepDistributedQP
fn clone(&self) -> DeepDistributedQP
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for DeepDistributedQP
impl Optimizer for DeepDistributedQP
fn update(&mut self, parameter: &mut Tensor, gradient: &Tensor) -> Result<()>
fn accumulate_grad(
    &mut self,
    parameter: &mut Tensor,
    grad: &Tensor,
) -> Result<(), TrustformersError>
fn apply_accumulated_grads(
    &mut self,
    accumulation_steps: usize,
) -> Result<(), TrustformersError>
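Because DeepDistributedQP implements Optimizer, it can drive a standard accumulate-then-apply gradient step. A sketch, assuming the optimizer, a parameter tensor, and a collection of gradient tensors already exist (opt, param, and grads are hypothetical names):

```rust
// Illustrative only: accumulate several micro-batch gradients,
// then apply them as one averaged update.
let accumulation_steps = grads.len();
for grad in &grads {
    opt.accumulate_grad(&mut param, grad)?;
}
opt.apply_accumulated_grads(accumulation_steps)?;
```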
impl StatefulOptimizer for DeepDistributedQP
type Config = DeepDistributedQPConfig
type State = StateMemoryStats
fn state_mut(&mut self) -> &mut Self::State
fn state_dict(&self) -> Result<HashMap<String, Tensor>>
fn load_state_dict(&mut self, state: HashMap<String, Tensor>) -> Result<()>
fn memory_usage(&self) -> StateMemoryStats
fn reset_state(&mut self)
fn num_parameters(&self) -> usize
Auto Trait Implementations
impl Freeze for DeepDistributedQP
impl RefUnwindSafe for DeepDistributedQP
impl Send for DeepDistributedQP
impl Sync for DeepDistributedQP
impl Unpin for DeepDistributedQP
impl UnwindSafe for DeepDistributedQP
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.