# pare
Pareto frontier and skyline query primitives for multi-objective optimization. Filters sets of items to find non-dominated candidates across multiple metrics.
```toml
[dependencies]
pare = "0.2.0"
```
## Quick start

```rust
use pare::ParetoFrontier;

// [Relevance, Recency] -- higher is better
let candidates = vec![
    vec![0.9, 0.2], // A
    vec![0.5, 0.8], // B
    vec![0.3, 0.9], // C
    vec![0.4, 0.1], // D: dominated by A
];
let frontier = ParetoFrontier::try_new(candidates).unwrap();
assert_eq!(frontier.len(), 3); // D excluded
```
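Under the hood, the frontier keeps exactly the points that no other point dominates. A minimal dominance check for the maximize-all case (a sketch of the concept, not the crate's internals):

```rust
/// `a` dominates `b` (maximize all) when it is >= everywhere and > somewhere.
fn dominates(a: &[f32], b: &[f32]) -> bool {
    a.iter().zip(b).all(|(x, y)| x >= y) && a.iter().zip(b).any(|(x, y)| x > y)
}

fn main() {
    assert!(dominates(&[0.9, 0.2], &[0.4, 0.1]));  // A dominates D
    assert!(!dominates(&[0.9, 0.2], &[0.5, 0.8])); // trade-off: neither dominates
}
```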
## Mixed objectives

When some objectives should be minimized, construct with explicit directions:

```rust
use pare::{Direction, ParetoFrontier};

// accuracy (maximize) vs latency_ms (minimize)
let mut f = ParetoFrontier::new(vec![Direction::Maximize, Direction::Minimize]);
f.push(vec![0.92, 120.0]); // model_a
f.push(vec![0.90, 80.0]);  // model_b
f.push(vec![0.88, 95.0]);  // model_c: dominated by model_b
assert_eq!(f.len(), 2); // model_c filtered out
```
## Selecting from the frontier

Pick a point with weighted scoring, ASF (Achievement Scalarizing Function), or knee detection, or analyze spread with crowding distance:

```rust
use pare::{Direction, ParetoFrontier};

let mut f = ParetoFrontier::new(vec![Direction::Maximize, Direction::Maximize]);
f.push(vec![1.0, 0.1]);
f.push(vec![0.7, 0.7]);
f.push(vec![0.1, 1.0]);
let weights = [0.5, 0.5];

// ASF -- works on non-convex fronts, finds the best compromise point
let ideal = f.ideal_point().unwrap();
let best_asf = f.best_asf(&ideal, &weights).unwrap();

// Knee point -- highest tradeoff point on the frontier
let knee = f.knee_index().unwrap();

// Weighted linear score -- simple but misses non-convex regions
let best = f.best_index(&weights).unwrap();

// Full ranking by score (descending)
let ranked = f.ranked_indices(&weights);

// Crowding distance -- points in sparse regions score higher
let distances = f.crowding_distances();

// Hypervolume -- area dominated by the frontier (quality indicator)
let ref_pt = f.suggest_ref_point().unwrap();
let hv = f.hypervolume(&ref_pt);

// Per-point hypervolume contribution (for indicator-based selection)
let contribs = f.hypervolume_contributions(&ref_pt);
```
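For intuition, the 2-D hypervolume is just the area enclosed between the front and the reference point. A standalone sketch, assuming maximization and a reference point worse than every front point (not the crate's implementation):

```rust
/// Area dominated by a 2-D non-dominated set, maximizing both objectives.
fn hypervolume_2d(points: &[[f32; 2]], r: [f32; 2]) -> f32 {
    let mut pts = points.to_vec();
    // For a non-dominated set, sorting by x descending makes y ascending,
    // so the dominated region decomposes into disjoint rectangles.
    pts.sort_by(|a, b| b[0].partial_cmp(&a[0]).unwrap());
    let mut prev_y = r[1];
    let mut hv = 0.0;
    for p in pts {
        hv += (p[0] - r[0]) * (p[1] - prev_y);
        prev_y = p[1];
    }
    hv
}

fn main() {
    let front = [[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]];
    let hv = hypervolume_2d(&front, [0.0, 0.0]);
    assert!((hv - 0.33).abs() < 1e-4); // 0.09 + 0.20 + 0.04
}
```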
## Normalization and scaling

Objectives often have different units (accuracy 0-1, latency 10-500ms, cost $0-$10000). pare handles this:

```rust
use pare::{Direction, ParetoFrontier};

// accuracy (maximize) vs latency_ms (minimize)
let mut f = ParetoFrontier::new(vec![Direction::Maximize, Direction::Minimize]);
f.push(vec![0.92, 120.0]);
f.push(vec![0.88, 80.0]);

// Unit-free normalized values: 0 = worst on front, 1 = best on front
let norm = f.normalized_values().unwrap();
// norm[0] = [1.0, 0.0] (best accuracy, worst latency)

// Ideal (best per-objective) and nadir (worst per-objective)
let ideal = f.ideal_point().unwrap(); // [0.92, 80.0]
let nadir = f.nadir_point().unwrap(); // [0.88, 120.0]

// For stable scoring across frontier changes, use static bounds:
let bounds = vec![(0.0, 1.0), (0.0, 500.0)];
let score = f.scalar_score_static(0, &[0.5, 0.5], &bounds);
```
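The normalization rule is plain min-max scaling with the objective direction folded in. A standalone sketch of the assumed semantics (1 = best on the front, 0 = worst; the crate's edge-case handling may differ):

```rust
/// Min-max normalize `v` given bounds [lo, hi] and an objective direction.
fn normalize(v: f32, lo: f32, hi: f32, maximize: bool) -> f32 {
    let t = (v - lo) / (hi - lo);
    if maximize { t } else { 1.0 - t }
}

fn main() {
    assert_eq!(normalize(0.92, 0.88, 0.92, true), 1.0);   // best accuracy
    assert_eq!(normalize(120.0, 80.0, 120.0, false), 0.0); // worst latency
}
```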
## Post-hoc filtering

Apply constraints after frontier construction:

```rust
use pare::{Direction, ParetoFrontier};

// quality (maximize) vs cost (minimize)
let mut f = ParetoFrontier::new(vec![Direction::Maximize, Direction::Minimize]);
f.push(vec![0.9, 150.0]);
f.push(vec![0.8, 90.0]);
f.push(vec![0.7, 60.0]);

// Keep only points within budget
f.retain(|point| point[1] <= 100.0);
assert_eq!(f.len(), 2);
```
## Convenience functions

For simple cases where you just need non-dominated indices from `Vec<f32>` points:

```rust
use pare::{pareto_indices, pareto_indices_2d, pareto_indices_k_dominance, pareto_layers};

let points = vec![
    vec![1.0, 0.0],
    vec![0.5, 0.5],
    vec![0.0, 1.0],
    vec![0.2, 0.2], // dominated
];
let idx = pareto_indices(&points).unwrap(); // general N-d
let idx2 = pareto_indices_2d(&points).unwrap(); // optimized 2-d path

// k-dominance: a point is dominated only if worse in >= k objectives
let relaxed = pareto_indices_k_dominance(&points, 2).unwrap();

// Non-dominated sorting: partition into successive Pareto layers
let layers = pareto_layers(&points).unwrap();
// layers[0] = Pareto front, layers[1] = front of remainder, ...
```
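Non-dominated sorting can be pictured as repeatedly peeling the current Pareto front off the remaining points. A quadratic-time sketch of that idea for the maximize-all case (illustrative, not the crate's algorithm):

```rust
fn dominates(a: &[f32], b: &[f32]) -> bool {
    a.iter().zip(b).all(|(x, y)| x >= y) && a.iter().zip(b).any(|(x, y)| x > y)
}

/// Partition point indices into successive Pareto layers.
fn peel_layers(points: &[Vec<f32>]) -> Vec<Vec<usize>> {
    let mut remaining: Vec<usize> = (0..points.len()).collect();
    let mut layers = Vec::new();
    while !remaining.is_empty() {
        // Current front: points no other remaining point dominates.
        let front: Vec<usize> = remaining.iter().copied()
            .filter(|&i| !remaining.iter().any(|&j| dominates(&points[j], &points[i])))
            .collect();
        remaining.retain(|i| !front.contains(i));
        layers.push(front);
    }
    layers
}

fn main() {
    let pts = vec![vec![1.0, 0.0], vec![0.5, 0.5], vec![0.0, 1.0], vec![0.2, 0.2]];
    let layers = peel_layers(&pts);
    assert_eq!(layers[0], vec![0, 1, 2]); // Pareto front
    assert_eq!(layers[1], vec![3]);       // front of the remainder
}
```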
## Epsilon-dominance archiving

For streaming / online optimization where memory must be bounded:

```rust
use pare::EpsilonArchive;

let mut archive = EpsilonArchive::new_uniform(2, 0.1);

// Insert many points; archive stays bounded
for i in 0..1000 {
    let t = i as f32 / 1000.0;
    archive.push(vec![t, 1.0 - t]);
}
assert!(archive.len() <= 100); // at most ceil(1/0.1)^2 cells

// Convert to ParetoFrontier for analysis
let frontier = archive.into_frontier();
```
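The bounded-memory guarantee comes from snapping each point to an epsilon-sized grid cell and keeping at most one representative per cell. A sketch of the cell-key idea (illustrative only, not the crate's internals):

```rust
/// Map a point to its epsilon-grid cell; points sharing a cell compete,
/// so an archive keyed by cell holds at most one point per cell.
fn cell(point: &[f32], eps: f32) -> Vec<i64> {
    point.iter().map(|v| (v / eps).floor() as i64).collect()
}

fn main() {
    // 0.23 and 0.27 land in the same cell at eps = 0.1, so they compete
    assert_eq!(cell(&[0.23, 0.91], 0.1), vec![2, 9]);
    assert_eq!(cell(&[0.27, 0.91], 0.1), cell(&[0.23, 0.91], 0.1));
}
```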
## Quality indicators

Compare fronts with Generational Distance (GD) and Inverted Generational Distance (IGD):

```rust
use pare::{generational_distance, inverted_generational_distance};

let front = vec![vec![0.9, 0.1], vec![0.5, 0.5], vec![0.1, 0.9]];
let reference = vec![vec![1.0, 0.0], vec![0.6, 0.6], vec![0.0, 1.0]];
let gd = generational_distance(&front, &reference).unwrap(); // convergence
let igd = inverted_generational_distance(&front, &reference).unwrap(); // convergence + spread
```
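GD is the mean distance from each front point to its nearest reference point; IGD swaps the roles of the two sets, which is why it also penalizes poor spread. A sketch of one common variant (the crate may use a different power or normalization):

```rust
/// Mean nearest-neighbor Euclidean distance from `front` to `reference`.
/// IGD is the same computation with the two sets swapped.
fn gd(front: &[[f32; 2]], reference: &[[f32; 2]]) -> f32 {
    let sum: f32 = front.iter().map(|p| {
        reference.iter()
            .map(|q| ((p[0] - q[0]).powi(2) + (p[1] - q[1]).powi(2)).sqrt())
            .fold(f32::INFINITY, f32::min)
    }).sum();
    sum / front.len() as f32
}

fn main() {
    // A front sitting exactly on the reference set has GD = 0
    let reference = [[1.0, 0.0], [0.0, 1.0]];
    assert_eq!(gd(&reference, &reference), 0.0);
    // Single point 5 units from its nearest reference point: GD = 5
    assert_eq!(gd(&[[4.0, 3.0]], &[[1.0, 7.0]]), 5.0);
}
```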
## Sensitivity analysis

The `sensitivity` module computes finite-difference Jacobians and objective redundancy analysis:

```rust
use pare::sensitivity::{analyze_redundancy, finite_difference_jacobian};

let objectives: Vec<Box<dyn Fn(&[f64]) -> f64>> = vec![
    Box::new(|x| x[0] + x[1]),
    Box::new(|x| 2.0 * (x[0] + x[1])), // moves together with the first
    Box::new(|x| x[0] - x[1]),
];
let jac = finite_difference_jacobian(&objectives, &[0.5, 0.5], 1e-4);
let analysis = analyze_redundancy(&jac).unwrap();
// analysis.redundant_pairs -- objectives that move together
```
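The quantity being computed is easy to write down directly: `J[i][j]` approximates the partial derivative of objective `i` with respect to parameter `j`. A standalone central-difference sketch (its signature is not the crate's):

```rust
/// Central-difference Jacobian: J[i][j] ~= d f_i / d x_j with step h.
fn fd_jacobian(fs: &[fn(&[f64]) -> f64], x: &[f64], h: f64) -> Vec<Vec<f64>> {
    fs.iter().map(|f| {
        (0..x.len()).map(|j| {
            let mut xp = x.to_vec();
            let mut xm = x.to_vec();
            xp[j] += h;
            xm[j] -= h;
            (f(&xp) - f(&xm)) / (2.0 * h)
        }).collect()
    }).collect()
}

fn main() {
    let fs: Vec<fn(&[f64]) -> f64> = vec![|x| x[0] * x[1], |x| x[0] + 2.0 * x[1]];
    let j = fd_jacobian(&fs, &[3.0, 4.0], 1e-4);
    assert!((j[0][0] - 4.0).abs() < 1e-6); // d(x0*x1)/dx0 = x1 = 4
    assert!((j[1][1] - 2.0).abs() < 1e-6); // d(x0+2*x1)/dx1 = 2
}
```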
## Examples

- `sensitivity_analysis.rs` -- Multi-objective sensitivity analysis for a simulated 3-arm experiment with 9 covariate cells and 6 objectives. Builds a finite-difference Jacobian across the full parameter space, computes the eigenvalue spectrum to find how many objectives actually matter (effective dimensionality), identifies redundant pairs via cosine similarity, and reports which objectives can be dropped without losing decision power. Useful when you suspect your objective set is over-specified.
For background on dominance, crowding distance, and hypervolume, see [TECHNICAL_BACKGROUND.md](TECHNICAL_BACKGROUND.md).
## License

MIT OR Apache-2.0