pub struct UniConstraintStream<S, A, E, F, Sc>
where
    Sc: Score,
{ /* private fields */ }

Zero-erasure constraint stream over a single entity type.
UniConstraintStream accumulates filters and can be finalized into
an IncrementalUniConstraint via penalize() or reward().
All type parameters are concrete: no trait objects, no Arc allocations in the hot path.
§Type Parameters

- S: Solution type
- A: Entity type
- E: Extractor function type
- F: Combined filter type
- Sc: Score type
§Implementations

impl<S, A, E, Sc> UniConstraintStream<S, A, E, TrueFilter, Sc>

impl<S, A, E, F, Sc> UniConstraintStream<S, A, E, F, Sc>
pub fn filter<P>(
    self,
    predicate: P,
) -> UniConstraintStream<S, A, E, AndUniFilter<F, FnUniFilter<P>>, Sc>
Adds a filter predicate to the stream.
Multiple filters are combined with AND semantics at compile time. Each filter adds a new type layer, preserving zero-erasure.
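A minimal sketch of chained filters, assembled only from calls shown in the examples further down this page; the Shift and Solution types here are illustrative, and the expected score follows the per-match penalize semantics demonstrated in those examples:

```rust
use solverforge_scoring::stream::ConstraintFactory;
use solverforge_scoring::api::constraint_set::IncrementalConstraint;
use solverforge_core::score::SimpleScore;

#[derive(Clone)]
struct Shift { is_night: bool, is_overtime: bool }
#[derive(Clone)]
struct Solution { shifts: Vec<Shift> }

// Both predicates must hold (AND semantics); each filter call nests one
// more AndUniFilter<_, FnUniFilter<_>> layer into the stream's type.
let constraint = ConstraintFactory::<Solution, SimpleScore>::new()
    .for_each(|s: &Solution| s.shifts.as_slice())
    .filter(|shift: &Shift| shift.is_night)
    .filter(|shift: &Shift| shift.is_overtime)
    .penalize(SimpleScore::of(1))
    .as_constraint("Overtime night shift");

let solution = Solution {
    shifts: vec![
        Shift { is_night: true, is_overtime: true },  // matches both filters
        Shift { is_night: true, is_overtime: false }, // rejected by second filter
    ],
};
assert_eq!(constraint.evaluate(&solution), SimpleScore::of(-1));
```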
pub fn join_self<K, KA, KB>(
    self,
    joiner: EqualJoiner<KA, KB, K>,
) -> BiConstraintStream<S, A, K, E, KA, UniLeftBiFilter<F, A>, Sc>
Joins this stream with itself to create pairs (zero-erasure).
Requires an EqualJoiner to enable key-based indexing for O(k) lookups.
For self-joins, pairs are ordered (i < j) to avoid duplicates.
Any filters accumulated on this stream are applied to both entities individually before the join.
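A hedged sketch of building the joined stream. Constructing the EqualJoiner with equal_bi on both sides of the same entity type is an assumption inferred from equal_bi's use in the if_exists_filtered example below; finalizing the resulting BiConstraintStream is omitted because its builder methods are not shown on this page:

```rust
use solverforge_scoring::stream::ConstraintFactory;
use solverforge_scoring::stream::joiner::equal_bi;
use solverforge_core::score::SimpleScore;

#[derive(Clone)]
struct Shift { employee_id: Option<usize> }
#[derive(Clone)]
struct Solution { shifts: Vec<Shift> }

// Pair shifts sharing an employee; the (i < j) ordering means each
// conflicting pair is produced exactly once.
let stream = ConstraintFactory::<Solution, SimpleScore>::new()
    .for_each(|s: &Solution| s.shifts.as_slice())
    .filter(|shift: &Shift| shift.employee_id.is_some())
    .join_self(equal_bi(
        |a: &Shift| a.employee_id,
        |b: &Shift| b.employee_id,
    ));
```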
pub fn join<B, EB, K, KA, KB>(
    self,
    extractor_b: EB,
    joiner: EqualJoiner<KA, KB, K>,
) -> CrossBiConstraintStream<S, A, B, K, E, EB, KA, KB, UniLeftBiFilter<F, B>, Sc>
Joins this stream with another collection to create cross-entity pairs (zero-erasure).
Requires an EqualJoiner to enable key-based indexing for O(1) lookups.
Unlike join_self which pairs entities within the same collection,
join creates pairs from two different collections (e.g., Shift joined
with Employee).
Any filters accumulated on this stream are applied to the A entity before the join.
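A hedged sketch of the cross-entity case. That extractor_b yields the B collection as a slice (mirroring for_each) is an assumption; this page only documents the Vec-returning extractor shape for if_exists_filtered. The key closures mirror the equal_bi usage shown in the examples below:

```rust
use solverforge_scoring::stream::ConstraintFactory;
use solverforge_scoring::stream::joiner::equal_bi;
use solverforge_core::score::SimpleScore;

#[derive(Clone)]
struct Shift { employee_id: Option<usize> }
#[derive(Clone)]
struct Employee { id: usize }
#[derive(Clone)]
struct Schedule { shifts: Vec<Shift>, employees: Vec<Employee> }

// Pair each assigned Shift with the Employee whose id matches its key.
let stream = ConstraintFactory::<Schedule, SimpleScore>::new()
    .for_each(|s: &Schedule| s.shifts.as_slice())
    .join(
        |s: &Schedule| s.employees.as_slice(), // assumed extractor shape
        equal_bi(
            |shift: &Shift| shift.employee_id,
            |emp: &Employee| Some(emp.id),
        ),
    );
```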
pub fn group_by<K, KF, C>(
    self,
    key_fn: KF,
    collector: C,
) -> GroupedConstraintStream<S, A, K, E, KF, C, Sc>
Groups entities by key and aggregates with a collector.
Returns a zero-erasure GroupedConstraintStream that can be penalized
or rewarded based on the aggregated result for each group.
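A hedged sketch only: this page does not show the crate's collector constructors, so `collectors::count()` below is hypothetical and should be replaced with whatever collector API the crate actually provides. The factory calls mirror the balance example:

```rust
use solverforge_scoring::stream::ConstraintFactory;
use solverforge_core::score::SimpleScore;

#[derive(Clone)]
struct Shift { employee_id: usize }
#[derive(Clone)]
struct Solution { shifts: Vec<Shift> }

// Hypothetical collector: count shifts per employee; the returned
// GroupedConstraintStream can then be penalized or rewarded per group.
let grouped = ConstraintFactory::<Solution, SimpleScore>::new()
    .for_each(|s: &Solution| s.shifts.as_slice())
    .group_by(|shift: &Shift| shift.employee_id, collectors::count());
```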
pub fn balance<K, KF>(
    self,
    key_fn: KF,
) -> BalanceConstraintStream<S, A, K, E, F, KF, Sc>
Creates a balance constraint that penalizes uneven distribution across groups.
Unlike group_by which scores each group independently, balance computes
a GLOBAL standard deviation across all group counts and produces a single score.
The key_fn returns Option<K> to allow skipping entities (e.g., unassigned shifts).
Any filters accumulated on this stream are also applied.
§Example
use solverforge_scoring::stream::ConstraintFactory;
use solverforge_scoring::api::constraint_set::IncrementalConstraint;
use solverforge_core::score::SimpleScore;
#[derive(Clone)]
struct Shift { employee_id: Option<usize> }
#[derive(Clone)]
struct Solution { shifts: Vec<Shift> }
let constraint = ConstraintFactory::<Solution, SimpleScore>::new()
    .for_each(|s: &Solution| &s.shifts)
    .balance(|shift: &Shift| shift.employee_id)
    .penalize(SimpleScore::of(1000))
    .as_constraint("Balance workload");

let solution = Solution {
    shifts: vec![
        Shift { employee_id: Some(0) },
        Shift { employee_id: Some(0) },
        Shift { employee_id: Some(0) },
        Shift { employee_id: Some(1) },
    ],
};
// Employee 0: 3 shifts, Employee 1: 1 shift
// std_dev = 1.0, penalty = -1000
assert_eq!(constraint.evaluate(&solution), SimpleScore::of(-1000));

pub fn if_exists_filtered<B, EB, K, KA, KB>(
    self,
    extractor_b: EB,
    joiner: EqualJoiner<KA, KB, K>,
) -> IfExistsStream<S, A, B, K, E, EB, KA, KB, F, Sc>
Filters A entities based on whether a matching B entity exists.
Use this when the B collection needs filtering (e.g., only vacationing employees).
The extractor_b returns a Vec<B> to allow for filtering.
Any filters accumulated on this stream are applied to A entities.
§Example
use solverforge_scoring::stream::ConstraintFactory;
use solverforge_scoring::stream::joiner::equal_bi;
use solverforge_scoring::api::constraint_set::IncrementalConstraint;
use solverforge_core::score::SimpleScore;
#[derive(Clone)]
struct Shift { id: usize, employee_idx: Option<usize> }
#[derive(Clone)]
struct Employee { id: usize, on_vacation: bool }
#[derive(Clone)]
struct Schedule { shifts: Vec<Shift>, employees: Vec<Employee> }
// Penalize shifts assigned to employees who are on vacation
let constraint = ConstraintFactory::<Schedule, SimpleScore>::new()
    .for_each(|s: &Schedule| s.shifts.as_slice())
    .filter(|shift: &Shift| shift.employee_idx.is_some())
    .if_exists_filtered(
        |s: &Schedule| s.employees.iter().filter(|e| e.on_vacation).cloned().collect(),
        equal_bi(
            |shift: &Shift| shift.employee_idx,
            |emp: &Employee| Some(emp.id),
        ),
    )
    .penalize(SimpleScore::of(1))
    .as_constraint("Vacation conflict");

let schedule = Schedule {
    shifts: vec![
        Shift { id: 0, employee_idx: Some(0) }, // assigned to vacationing emp
        Shift { id: 1, employee_idx: Some(1) }, // assigned to working emp
        Shift { id: 2, employee_idx: None },    // unassigned (filtered out)
    ],
    employees: vec![
        Employee { id: 0, on_vacation: true },
        Employee { id: 1, on_vacation: false },
    ],
};
// Only shift 0 matches (assigned to employee 0 who is on vacation)
assert_eq!(constraint.evaluate(&schedule), SimpleScore::of(-1));

pub fn if_not_exists_filtered<B, EB, K, KA, KB>(
    self,
    extractor_b: EB,
    joiner: EqualJoiner<KA, KB, K>,
) -> IfExistsStream<S, A, B, K, E, EB, KA, KB, F, Sc>
Filters A entities based on whether NO matching B entity exists.
Use this when the B collection needs filtering.
The extractor_b returns a Vec<B> to allow for filtering.
Any filters accumulated on this stream are applied to A entities.
§Example
use solverforge_scoring::stream::ConstraintFactory;
use solverforge_scoring::stream::joiner::equal_bi;
use solverforge_scoring::api::constraint_set::IncrementalConstraint;
use solverforge_core::score::SimpleScore;
#[derive(Clone)]
struct Task { id: usize, assignee: Option<usize> }
#[derive(Clone)]
struct Worker { id: usize, available: bool }
#[derive(Clone)]
struct Schedule { tasks: Vec<Task>, workers: Vec<Worker> }
// Penalize tasks assigned to workers who are not available
let constraint = ConstraintFactory::<Schedule, SimpleScore>::new()
    .for_each(|s: &Schedule| s.tasks.as_slice())
    .filter(|task: &Task| task.assignee.is_some())
    .if_not_exists_filtered(
        |s: &Schedule| s.workers.iter().filter(|w| w.available).cloned().collect(),
        equal_bi(
            |task: &Task| task.assignee,
            |worker: &Worker| Some(worker.id),
        ),
    )
    .penalize(SimpleScore::of(1))
    .as_constraint("Unavailable worker");

let schedule = Schedule {
    tasks: vec![
        Task { id: 0, assignee: Some(0) }, // worker 0 is unavailable
        Task { id: 1, assignee: Some(1) }, // worker 1 is available
        Task { id: 2, assignee: None },    // unassigned (filtered out)
    ],
    workers: vec![
        Worker { id: 0, available: false },
        Worker { id: 1, available: true },
    ],
};
// Task 0's worker (id=0) is NOT in the available workers list
assert_eq!(constraint.evaluate(&schedule), SimpleScore::of(-1));

pub fn penalize(
    self,
    weight: Sc,
) -> UniConstraintBuilder<S, A, E, F, impl Fn(&A) -> Sc + Send + Sync, Sc>
where
    Sc: Clone,
Penalizes each matching entity with a fixed weight.
pub fn penalize_with<W>(
    self,
    weight_fn: W,
) -> UniConstraintBuilder<S, A, E, F, W, Sc>
Penalizes each matching entity with a dynamic weight.
Note: for dynamic weights, prefer penalize_hard_with to explicitly mark the constraint as hard, since the weight function cannot be evaluated at build time to classify it.
pub fn penalize_hard_with<W>(
    self,
    weight_fn: W,
) -> UniConstraintBuilder<S, A, E, F, W, Sc>
Penalizes each matching entity with a dynamic weight, explicitly marked as a hard constraint.
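A sketch of a dynamic weight. That the weight function has the Fn(&A) -> Sc shape is inferred from the fixed-weight builder signatures above, and the integer argument to SimpleScore::of is assumed from the examples on this page:

```rust
use solverforge_scoring::stream::ConstraintFactory;
use solverforge_scoring::api::constraint_set::IncrementalConstraint;
use solverforge_core::score::SimpleScore;

#[derive(Clone)]
struct Shift { overtime_hours: i64 }
#[derive(Clone)]
struct Solution { shifts: Vec<Shift> }

// Penalty scales with each matching shift's overtime; marked hard
// explicitly because the closure cannot be inspected at build time.
let constraint = ConstraintFactory::<Solution, SimpleScore>::new()
    .for_each(|s: &Solution| s.shifts.as_slice())
    .filter(|shift: &Shift| shift.overtime_hours > 0)
    .penalize_hard_with(|shift: &Shift| SimpleScore::of(shift.overtime_hours))
    .as_constraint("Overtime penalty");
```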
pub fn reward(
    self,
    weight: Sc,
) -> UniConstraintBuilder<S, A, E, F, impl Fn(&A) -> Sc + Send + Sync, Sc>
where
    Sc: Clone,
Rewards each matching entity with a fixed weight.
pub fn reward_with<W>(
    self,
    weight_fn: W,
) -> UniConstraintBuilder<S, A, E, F, W, Sc>
Rewards each matching entity with a dynamic weight.
Note: for dynamic weights, prefer reward_hard_with to explicitly mark the constraint as hard, since the weight function cannot be evaluated at build time to classify it.
pub fn reward_hard_with<W>(
    self,
    weight_fn: W,
) -> UniConstraintBuilder<S, A, E, F, W, Sc>
Rewards each matching entity with a dynamic weight, explicitly marked as a hard constraint.