Crate ddo

DDO

DDO is a truly generic framework to develop MDD-based combinatorial optimization solvers in Rust. Its goal is to let you describe your optimization problem as a dynamic program (see Problem) along with a Relaxation. When the dynamic program of the problem is considered as a transition system, the relaxation serves the purpose of merging different nodes of the transition system into another node standing for them all. In that setup, the sole condition to ensure the correctness of the optimization algorithm is that the replacement node must be an over-approximation of everything that is feasible from the merged nodes.

Side benefit

As a side benefit from using ddo, you will be able to exploit all of your hardware to solve your optimization problem in parallel.

Quick Example

The following presents a minimalistic use of ddo. It implements a solver for the knapsack problem which uses all the available computing resources to complete its task. This example is shown for illustration purposes because it is pretty simple and chances are high that anybody is already comfortable with the problem definition.

Note:

The example folder of our repository contains many other examples in addition to this one. So please consider checking them out for further details.

Describe the problem as dynamic program

The first thing to do in this example is to describe the binary knapsack problem in terms of a dynamic program. Here, the state of a node is nothing more than an unsigned integer (usize). That unsigned integer represents the remaining capacity of our sack. To do so, you define your own structure and make sure it implements the Problem<usize> trait.

use ddo::{Decision, Domain, Problem, Variable, VarSet};

#[derive(Debug, Clone)]
struct Knapsack {
    capacity: usize,
    profit  : Vec<usize>,
    weight  : Vec<usize>
}
impl Problem<usize> for Knapsack {
    // one variable per item: deciding whether or not it goes into the sack
    fn nb_vars(&self) -> usize {
        self.profit.len()
    }
    // an item may only be taken (value 1) if it still fits in the sack
    fn domain_of<'a>(&self, state: &'a usize, var: Variable) -> Domain<'a> {
        if *state >= self.weight[var.id()] {
            vec![0, 1].into()
        } else {
            vec![0].into()
        }
    }
    // initially, the full capacity of the sack is available
    fn initial_state(&self) -> usize {
        self.capacity
    }
    fn initial_value(&self) -> isize {
        0
    }
    // taking an item consumes its weight from the remaining capacity
    fn transition(&self, state: &usize, _vars: &VarSet, dec: Decision) -> usize {
        state - (self.weight[dec.variable.id()] * dec.value as usize)
    }
    // taking an item earns its profit
    fn transition_cost(&self, _state: &usize, _vars: &VarSet, dec: Decision) -> isize {
        self.profit[dec.variable.id()] as isize * dec.value
    }
}

Define a Relaxation

The relaxation we will define is probably the simplest you can think of. When one needs to define a new state to replace those exceeding the maximum width of the MDD, we simply keep the state with the maximum capacity, as it enables at least all the behaviors that are feasible with smaller capacities.

Optionally, we could also implement a rough upper bound estimator for our problem in the relaxation. However, we won't do it in this minimalistic example since the framework provides you with a default implementation. If you were to override the default implementation, you would need to implement the estimate() method of the Relaxation trait (a sketch of what that could look like is shown after the code block below).

use ddo::{Decision, Relaxation};

#[derive(Debug, Clone)]
struct KPRelax;
impl Relaxation<usize> for KPRelax {
    /// To merge a given selection of states (capacities) we will keep the
    /// maximum capacity. This is an obvious relaxation as it allows us to
    /// put more items in the sack.
    fn merge_states(&self, states: &mut dyn Iterator<Item=&usize>) -> usize {
        // the selection is guaranteed to have at least one state, so using
        // unwrap after max to get rid of the wrapping 'Option' is perfectly safe.
        *states.max().unwrap()
    }
    /// When relaxing (merging) the states, we did not run the risk of
    /// possibly decreasing the maximum objective value reachable from the
    /// components of the merged node. Hence, we don't need to do anything
    /// when relaxing the edge. Still, if we wanted to, we could choose to
    /// return a higher value.
    fn relax_edge(&self, _src: &usize, _dst: &usize, _relaxed: &usize, _d: Decision, cost: isize) -> isize {
        cost
    }
}
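
For the sake of illustration, here is a minimal sketch of what such an override could look like. It is not part of the original example: the KPRelaxWithBound struct and the idea of using the sum of all profits as a crude bound are assumptions made for illustration only, and the assumed estimate() signature (taking the state by reference and returning an isize, like the default implementation) should be checked against the Relaxation trait documentation.

use ddo::{Decision, Relaxation};

// Hypothetical variant of KPRelax which additionally stores the sum of all
// profits so that it can report a (very) rough upper bound.
#[derive(Debug, Clone)]
struct KPRelaxWithBound {
    total_profit: isize
}
impl Relaxation<usize> for KPRelaxWithBound {
    fn merge_states(&self, states: &mut dyn Iterator<Item=&usize>) -> usize {
        // same merge operator as KPRelax: keep the largest remaining capacity
        *states.max().unwrap()
    }
    fn relax_edge(&self, _src: &usize, _dst: &usize, _relaxed: &usize, _d: Decision, cost: isize) -> isize {
        // same as KPRelax: leave the edge cost untouched
        cost
    }
    // Override of the default rough upper bound: any value guaranteed to be
    // greater than or equal to the best objective reachable from `state` is
    // admissible, and the sum of all profits trivially satisfies that.
    fn estimate(&self, _state: &usize) -> isize {
        self.total_profit
    }
}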

Instantiate your Solver

As soon as you have defined a problem and a relaxation, you are good to go. The only thing you still need to do is to write your main method and spin up your solver to solve actual problems. Here is how you would do it.

use ddo::{mdd_builder, ParallelSolver, Solver};

// 1. Create an instance of our knapsack problem
let problem = Knapsack {
    capacity: 50,
    profit  : vec![60, 100, 120],
    weight  : vec![10,  20,  30]
};
// 2. Build an MDD for the given problem and relaxation
let mdd = mdd_builder(&problem, KPRelax).into_deep();
// 3. Create a parallel solver on the basis of this MDD (this is how
//    you can specify the MDD implementation you wish to use to develop
//    the relaxed and restricted MDDs).
let mut solver = ParallelSolver::new(mdd);
// 4. Maximize your objective function
// The outcome provides the value of the best solution that was found for
// the problem (if one was found) along with a flag indicating whether or
// not the solution was proven optimal. Hence an unsatisfiable problem
// would have `outcome.best_value == None` and `outcome.is_exact` true.
// The `is_exact` flag will only be false if you explicitly decide to stop
// searching with an arbitrary cutoff.
let outcome    = solver.maximize();
// The best solution (if one exists) is retrieved with
let solution   = solver.best_solution();

// 5. Do whatever you like with the optimal solution.
assert_eq!(Some(220), outcome.best_value);
println!("Solution");
for decision in solution.unwrap().iter() {
    if decision.value == 1 {
        println!("{}", decision.variable.id());
    }
}

Going further / Getting a grasp on the codebase

The easiest way to find your way around DDO is probably to start exploring the available APIs and then to move on to the examples (or the other way around, that's really up to you!). For the exploration of the APIs, you are encouraged to start with the types ddo::Problem and ddo::Relaxation, which define the core abstractions you will need to implement. After that, it is also interesting to have a look at the various heuristics available and the configuration options you can use when customizing the behavior of your solver and MDD. That should get you covered and you should be able to get a deep understanding of how to use our library.
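
As a purely illustrative sketch of that kind of customization, the snippet below configures the knapsack MDD from the quick example with a fixed maximum layer width and a time budget before handing it to a solver. The builder methods with_max_width and with_cutoff, the tuple-struct construction FixedWidth(100), and the TimeBudget::new(Duration) constructor are assumptions about the API; check the ConfigurationBuilder, FixedWidth, and TimeBudget documentation for the exact names and signatures.

use std::time::Duration;
use ddo::{mdd_builder, FixedWidth, ParallelSolver, Solver, TimeBudget};

// Assumed builder calls: cap every layer at 100 nodes and stop the search
// after 30 seconds, returning the best solution found so far.
let mdd = mdd_builder(&problem, KPRelax)
    .with_max_width(FixedWidth(100))
    .with_cutoff(TimeBudget::new(Duration::from_secs(30)))
    .into_deep();
let mut solver = ParallelSolver::new(mdd);
let outcome = solver.maximize();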

Citing DDO

If you use DDO, or find it useful for your purpose (research, teaching, business, ...) please cite:

@misc{gillard:20:ddo,
    author       = {Xavier Gillard, Pierre Schaus, Vianney Coppé},
    title        = {Ddo, a generic and efficient framework for MDD-based optimization},
    howpublished = {IJCAI-20},
    year         = {2020},
    note         = {Available from \url{https://github.com/xgillard/ddo}},
}

Structs

AggressivelyBoundedMDD

This structure implements an MDD which is aggressively bounded when developing a restricted MDD, and which behaves like a DeepMDD when developing a relaxed MDD.

BitSetIter

This structure defines an iterator capable of iterating over the 1-bits of a fixed bitset. It uses word representation of the items in the set, so it should be more efficient to use than a crude iteration over the elements of the set.

Completion

The outcome of an mdd development

CompositeMDD

This is the composite mdd which provides the core of all hybrid implementations.

ConfigurationBuilder

This is the structure used to build an MDD configuration. There is very little logic to it: it only uses the type system to adapt to its configurations and return a config which may be used efficiently (and stack allocated). Concretely, a configuration builder lets you specify all the parameters of a candidate configuration to build. Namely:

+ a problem (mandatory)
+ a relaxation (mandatory)
+ a load variable heuristic (defaults to LoadVarFromPartialAssignment)
+ a variable selection heuristic (selects the next variable to branch on; defaults to NaturalOrder)
+ a maximum width heuristic to limit the memory usage of each layer (defaults to NbUnassignedWidth)
+ a node selection heuristic (an ordering to choose the nodes to drop/merge; defaults to MinLP)

Decision

This denotes a decision that was made during the search. It assigns a given value to the specified variable. Any given Decision should be understood as [[ variable = value ]].

Decreasing

This strategy selects the variables in decreasing order. That is, it provides an order which is the opposite of the NaturalOrder.

DeepMDD

This structure provides an implementation of a deep mdd (one that materializes the complete mdd graph). It implements rough upper bound and local bounds to strengthen the pruning achieved by the branch and bound algorithm. It uses the given configuration to develop the (approximate) MDD.

DivBy

This strategy acts as a decorator for another max width heuristic. It divides the maximum width of the strategy it delegates to by a constant (configured) factor. It is typically used in conjunction with NbUnassigned to provide a maximum width that allows a certain number of nodes. Using a constant factor of 1 means that this decorator will have absolutely no impact.

FindFirst

This cutoff allows one to specify that the solver must stop searching as soon as it has found a first valid solution.

FixedWidth

This strategy specifies a fixed maximum width for all the layers of an approximate MDD. This is a static heuristic as the width will remain fixed regardless of the approximate MDD to generate.

FlatMDD

This is the structure implementing the flat MDD. This is a kind of bounded-width MDD which offers a real guarantee with respect to the maximum amount of memory used.

FrontierNode

Frontier nodes describe the subproblems that have been enumerated and must be explored in order to prove optimality. Hence, each single frontier node denotes one subproblem characterized by the same transition and transition cost functions as the original problem, but having a different initial state and value (lp_len). Because a frontier node is a subproblem of the original optimization problem, it also remembers the partial assignment (the path) which transforms the global problem into this region of the state space.

GapCutoff

This cutoff allows one to specify that the solver must stop searching for new solutions as soon as the gap between the best lower and upper bounds falls within a given percentage of the optimum.

HybridFlatDeep

This structure is a hybrid mdd that behaves like a FlatMDD when unrolling an exact or restricted mdd, and like a DeepMDD when it comes to deriving a relaxed dd.

HybridPooledDeep

This structure is a hybrid mdd that behaves like a PooledMDD when unrolling an exact or restricted mdd, and like a DeepMDD when it comes to deriving a relaxed dd.

LexBitSet

A totally ordered Bitset wrapper. Useful to implement tie break mechanisms. This wrapper orders the bitsets according to the lexical order of their underlying bits.

LoadVarFromPartialAssignment

This strategy retrieves the set of free variables starting from the full set of variables (known at construction time) and iteratively removing the variables that have been assigned in the partial assignment leading to some given frontier node.

Matrix

This structure implements a 2D matrix of size [ n X m ].

MaxUB

This is the default heuristic to set the order in which nodes are popped from the frontier. So far, this is not only the default heuristic, but also your only choice (this was configurable in the past and could possibly change back in the future).

MinLP

This structure provides an implementation of the MinLP node selection heuristic. In other words, it provides a heuristic that removes the nodes having the shortest longest path from the root when it needs to squash the size of an overly large layer.

NaturalOrder

This strategy branches on the variables in their ''natural'' order. That is, it will first pick Variable(0) then Variable(1), Variable(2), etc... until there are no variables left to branch on.

NbUnassignedWitdh

This strategy specifies a variable maximum width for the layers of an approximate MDD. When using this heuristic, each layer of an approximate MDD is allowed to have as many nodes as there are free variables to decide upon.

NoCutoff

This is the default cutoff heuristic. It imposes that the search proves optimality before it stops.

NoDupFrontier

A frontier that enforces the requirement that a given state will never be present twice in the frontier.

NoForgetFrontier

A frontier that enforces the requirement that a given state will never be enqueued again, unless it improves the longest path length.

ParallelSolver

This is the structure implementing a multi-threaded MDD solver.

PartialAssignmentIter

This structure backs the iteration over the elements of a PartialAssignment. This is what enables the use of partial assignments in for loops.

PassThroughConfig

This structure provides a simple 'umbrella' over the configuration of an MDD. All it does is forward calls to the appropriate heuristic.

PooledMDD

This structure implements a pooled MDD. This is a kind of bounded-width MDD which cannot offer strong guarantees with respect to the maximum amount of memory used. However, this structure is perfectly suited for problems like MISP, where one decision can affect more than one variable, and expanding a node is expensive but checking if a node is going to be impacted by a decision is cheap.

SequentialSolver

This is the structure implementing a single-threaded MDD solver.

SimpleFrontier

The simplest frontier implementation you can think of: it basically consists of a binary heap that pushes and pops frontier nodes.

Solution

A solution is a partial assignment that assigns a value to each of the problem variables. From the MDD-perspective, it is an optimal r-t path in the exact MDD.

TimeBudget

This cutoff allows one to specify a maximum time budget to solve the problem. Once the time budget is elapsed, the optimization stops and the best solution that has been found (so far) is returned.

Times

This strategy acts as a decorator for another max width heuristic. It multiplies the maximum width of the strategy it delegates to by a constant (configured) factor. It is typically used in conjunction with NbUnassigned to provide a maximum width that allows a certain number of nodes. Using a constant factor of 1 means that this decorator will have absolutely no impact.

VarSet

This type denotes a set of variables. It encodes them compactly as a fixed-size bitset. A VarSet can be efficiently iterated upon.

VarSetIter

This type denotes the iterator used to iterate over the Variables of a given VarSet. It should never be manually instantiated, but always obtained via the iter() method of the VarSet.

Variable

This type denotes a variable from the optimization problem at hand. In this case, each variable is assumed to be identified with an integer ranging from 0 (inclusive) to problem.nb_vars() (exclusive).

Enums

Domain

A domain is a set of values (isize) that may be assigned to some variable by a decision.

DomainIter

DomainIter is the type of the iterator used to go over the possible values of a domain. Therefore, it is isomorphic to the Domain type itself.

PartialAssignment

A partial assignment is an incomplete solution that can be extended into a complete one. In other words, it encapsulates a collection of decisions.

Reason

A reason explaining why the mdd stopped developing

Traits

Config

The config trait describes the configuration of an MDD. In other words, it encapsulates the configurable behavior (problem, relaxation, heuristics,..) of an MDD. Such a configuration is typically obtained from a builder.

Cutoff

This trait encapsulates a criterion (external to the solver) which imposes to stop searching for a better solution. Typically, this is done to grant a given time budget to the search.

Frontier

The Frontier is the abstraction of the solver's frontier (a.k.a. fringe or queue): the set of subproblems that must be processed before the problem is considered solved (by exhaustion).

LoadVars

This trait defines a strategy/heuristic to retrieve the smallest set of free variables from a given node

MDD

This trait describes an MDD

NodeSelectionHeuristic

This trait defines a heuristic to rank the nodes in order to remove (or merge) the least relevant ones from an MDD that is growing too large.

Problem

This is the main abstraction that should be provided by any user of our library. Indeed, it defines the problem to be solved in the form of a dynamic program. Therefore, this trait closely sticks to the formal definition of a dynamic program.

Relaxation

This is the second most important abstraction that a client should provide when using this library. It defines the relaxation that may be applied to the given problem. In particular, the merge_states method from this trait defines how the nodes of a layer may be combined to provide an upper bound approximation standing for an arbitrarily selected set of nodes.

SelectableNode

This trait defines a minimal abstraction over the MDD nodes so that they can easily (and transparently) be ordered by the NodeSelectionHeuristic.

Solver

The solver trait lets you maximize an objective function.

VariableHeuristic

This trait defines a heuristic to determine the best variable to branch on while developing an MDD.

WidthHeuristic

This trait defines a heuristic to determine the maximum allowed width of a layer in a relaxed or restricted MDD.

Functions

config_builder

This is the function you should use to instantiate a new configuration builder with all default heuristics. It should be used as shown in its documentation example, where one creates an MDD having a fixed-width strategy.

mdd_builder

This is the function you should use to instantiate a new MDD builder with all default heuristics. It should be used as shown in its documentation example, where one creates an MDD having a fixed-width strategy.