A genetic algorithm implementation for Rust. Inspired by the book Genetic Algorithms in Elixir
There are three main elements to this approach:
- The Genotype (the search space)
- The Fitness function (the search goal)
- The Strategy (the search strategy)
Terminology:
- Population: a population has population_size number of individuals (called chromosomes)
- Chromosome: a chromosome has genes_size number of genes
- Allele: alleles are the possible values of the genes
- Gene: a gene is a combination of position in the chromosome and value of the gene (allele)
- Genes: storage trait of the genes for a chromosome, always Vec<Allele>
- Genotype: knows how to generate, mutate and crossover chromosomes efficiently
- Fitness: knows how to determine the fitness of a chromosome
Important: FitnessValue is isize (not f64). This
enables equality checks for staleness detection. For float-based fitness, scale manually:
Some((score / precision) as FitnessValue), or use the
fitness_value helper.
All multithreading mechanisms are implemented using rayon::iter and std::sync::mpsc.
§Quick Usage
use genetic_algorithm::strategy::evolve::prelude::*;
// the search space
let genotype = BinaryGenotype::builder() // boolean alleles
.with_genes_size(100) // 100 genes per chromosome
.build()
.unwrap();
println!("{}", genotype);
// the search goal to optimize towards (maximize or minimize)
#[derive(Clone, Debug)]
pub struct CountTrue;
impl Fitness for CountTrue {
type Genotype = BinaryGenotype; // Genes = Vec<bool>
fn calculate_for_chromosome(
&mut self,
chromosome: &FitnessChromosome<Self>,
_genotype: &FitnessGenotype<Self>
) -> Option<FitnessValue> {
Some(chromosome.genes.iter().filter(|&value| *value).count() as FitnessValue)
}
}
// the search strategy
let evolve = Evolve::builder()
.with_genotype(genotype)
.with_select(SelectElite::new(0.5, 0.02)) // sort the chromosomes by fitness to determine crossover order. Strive to replace 50% of the population with offspring. Allow 2% through the non-generational best chromosomes gate before selection and replacement
.with_crossover(CrossoverUniform::new(0.7, 0.8)) // crossover all individual genes between 2 chromosomes for offspring with 70% parent selection (30% do not produce offspring) and 80% chance of crossover (20% of parents just clone)
.with_mutate(MutateSingleGene::new(0.2)) // mutate offspring for a single gene with a 20% probability per chromosome
.with_fitness(CountTrue) // count the number of true values in the chromosomes
.with_fitness_ordering(FitnessOrdering::Maximize) // optional, default is Maximize, aim towards the most true values
.with_target_population_size(100) // evolve with 100 chromosomes
.with_target_fitness_score(100) // goal is 100 times true in the best chromosome
.with_reporter(EvolveReporterSimple::new(100)) // optional builder step, report every 100 generations
.call()
.unwrap();
println!("{}", evolve);
// it's all about the best genes after all
let (best_genes, best_fitness_score) = evolve.best_genes_and_fitness_score().unwrap();
assert_eq!(best_genes, vec![true; 100]);
assert_eq!(best_fitness_score, 100);
§Tests
Use the .with_rng_seed_from_u64(0) builder step to create deterministic test results.
Exact results may change between library versions (even minor ones), but are deterministic within a version.
§Examples
- N-Queens puzzle: https://en.wikipedia.org/wiki/Eight_queens_puzzle
  - See examples/evolve_nqueens
  - See examples/hill_climb_nqueens
  - UniqueGenotype<u8> with a 64x64 chess board setup
  - custom NQueensFitness fitness
- Knapsack problem: https://en.wikipedia.org/wiki/Knapsack_problem
  - See examples/evolve_knapsack
  - See examples/permutate_knapsack
  - BinaryGenotype<Item(weight, value)>, each gene encodes presence in the knapsack
  - custom KnapsackFitness(&items, weight_limit) fitness
- Infinite Monkey theorem: https://en.wikipedia.org/wiki/Infinite_monkey_theorem
  - See examples/evolve_monkeys
  - ListGenotype<char>, 100 monkeys randomly typing characters in a loop
  - custom fitness using hamming distance
- Permutation strategy instead of Evolve strategy for small search spaces, with a 100% guarantee
- HillClimb strategy instead of Evolve strategy, when crossover is impossible or inefficient
- Explore internal and external multithreading options
- Explore MutationType differences with visualization
  - See examples/visualize_evolve_mutation_types
  - See examples/visualize_permutate_mutation_types
  - Generates visualizations showing exploration patterns of different mutation strategies
- Heterogeneous Genotype example (bool, options, continuous and discrete in one genome)
- Use superset StrategyBuilder for easier switching in implementation
- Use fitness LRU cache
  - See examples/evolve_binary_cache_fitness
  - Note: doesn’t help performance much in this case (or most cases); better to fix your population diversity instead
- Custom Reporting implementation
- Custom Mutate implementation
§Heterogeneous Genotype Support
MultiRangeGenotype supports heterogeneous chromosomes that mix
different gene semantics (continuous values, numeric values, discrete choices, booleans) within a single
numeric type T.
§MutationType Visualization
The library supports various mutation strategies that affect how the genetic algorithm explores the search space. Random is the default, is supported by all Genotypes, and leads to the best results overall.
But for numeric genotypes (RangeGenotype and MultiRangeGenotype) there are several alternatives. These might converge faster, but are all more sensitive to local optima than Random. The visualization example shows how different mutation types explore a 2D search space when searching for a target point:
§Evolve Strategy (and HillClimb)
The visualization demonstrates:
- Random: Chaotic exploration, can jump anywhere in search space
- Range: Local search with fixed radius around current position
- RangeScaled: Adaptive exploration that starts broad and narrows down (funnel-like convergence)
- Step: Fixed-step local search in cardinal directions
- StepScaled: Grid-like exploration with progressively finer resolution
- Discrete: ListGenotype behaviour, for categories in heterogeneous genotypes
Run the example with cargo run --example visualize_evolve_mutation_types --release to generate the visualization.
§Permutate Strategy (and HillClimb)
For exhaustive search in smaller spaces, all genotypes have their own permutation implementation, which systematically explores all value combinations.
But for numeric/continuous genotypes (RangeGenotype and MultiRangeGenotype), permutation is only possible using the Step, StepScaled, and Discrete mutation types (as the search space needs additional restrictions to become countable):
- Step: Systematically explores grid points at fixed intervals
- StepScaled: Hierarchical search that refines around promising regions
- Discrete: Exhaustive exploration of all round-to-integer value combinations
Run the example with cargo run --example visualize_permutate_mutation_types --release to generate this visualization.
§Performance considerations
For the Evolve strategy:
- Reporting: start with EvolveReporterSimple for basic understanding of:
- fitness v. framework overhead
- staleness and population characteristics (cardinality etc.)
- Select: no considerations. All selects are basically some form of in-place sorting based on chromosome metadata. This is relatively fast compared to the rest of the operations.
- Crossover: the workhorse of internal parts. Crossover touches most genes each generation, calculates genes hashes and clones up to the whole population to produce offspring (depending on selection-rate).
- Mutate: no considerations. It touches genes like crossover does, but should be used sparingly anyway: with a low fraction of genes mutated (<10%) and low probability (5-20%)
- Fitness: can be anything, but usually very dominant (>80% of total time). This fully
depends on the user domain. Parallelize it using with_par_fitness() in the Builder.
But beware that parallelization has its own overhead and is not always faster.
So framework overhead is mostly Crossover. Practical overhead is mostly Fitness.
Regarding the optionality of genes hashing and chromosomes recycling: For large chromosomes, disabling chromosome recycling and enabling genes hashing leads to a 3x factor in framework overhead. For small chromosomes, neither feature has overhead effects. But do keep in mind that for large chromosomes the Fitness calculation will be even more dominant with regards to the framework overhead as it already is. See examples/evolve_large_genotype
Default configuration for correctness AND performance:
- .with_genes_hashing(true) // Required for proper GA dynamics
- .with_chromosome_recycling(true) // Still worth it for large chromosomes, maybe disable for easier custom implementations
§AGENTS.md — AI Agent Guide for genetic-algorithm
This file helps AI coding agents use this library correctly. It covers decision guidance, API reference, gotchas, and copy-paste templates.
§Table of Contents
- Quick Start
- Critical: FitnessValue is isize
- Copy-Paste Templates (more in AGENTS_TEMPLATES.md)
- Decision Matrix: Problem Type → Configuration
- Constructor Parameter Reference
- Builder Methods (Evolve)
- Builder Methods (HillClimb)
- Builder Methods (Permutate)
- Builder Methods (StrategyBuilder)
- Running the Strategy
- Common Mistakes
- Retrieving Results
- Implementing Custom Fitness
- Troubleshooting
- Gotchas
§Quick Start
Add to your Cargo.toml:
[dependencies]
genetic_algorithm = "0.27.1"

use genetic_algorithm::strategy::evolve::prelude::*;

This single import brings in all types needed for the Evolve strategy. Similar preludes exist for other strategies:
- genetic_algorithm::strategy::hill_climb::prelude::*
- genetic_algorithm::strategy::permutate::prelude::*
- genetic_algorithm::strategy::prelude::* (superset, all strategies)
Logging: Examples use env_logger::init() for log output. Add env_logger
to your [dependencies] and call env_logger::init() in main() to see
reporter output. Set RUST_LOG=info (or debug) when running.
Critical gotchas (see Gotchas for full list):
- FitnessValue is isize, not f64. Scale floats: Some((score / precision) as FitnessValue).
- Evolve/HillClimb require at least one ending condition (target_fitness_score, max_stale_generations, or max_generations).
§Critical: FitnessValue is isize
FitnessValue is isize, not f64. This is by design: isize enables
equality checks needed for staleness detection (stale generations are detected
when the best fitness score stops changing).
This library uses f32 as the default float type (e.g. RangeGenotype<f32>,
MultiRangeGenotype<f32>). GAs don’t need f64 precision — the stochastic search
process dominates any floating-point rounding.
For float-based fitness, scale to isize manually:
// divide by desired precision, then cast
let precision = 1e-5;
Some((score / precision) as FitnessValue)
// or use the helper function (accepts f32 and f64)
Some(fitness_value(score, precision))

Return None from calculate_for_chromosome to mark a chromosome as invalid
(it will rank last in selection regardless of fitness ordering).
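The scaling cast behaves like bucketing: scores closer together than the chosen precision collapse to the same FitnessValue, which is what makes staleness detection via equality checks work. A standalone sketch in plain Rust (no crate types; the FitnessValue alias is redeclared locally, and the precision value is just an example):

```rust
// FitnessValue is an alias for isize in the library; redeclared here so
// the snippet runs standalone.
type FitnessValue = isize;

fn scaled_fitness(score: f32, precision: f32) -> Option<FitnessValue> {
    // Divide by the desired precision, then truncate-cast to isize.
    Some((score / precision) as FitnessValue)
}

fn main() {
    let precision = 1e-2_f32;
    // Scores within the same precision bucket collapse to one FitnessValue:
    assert_eq!(scaled_fitness(0.1234, precision), Some(12));
    assert_eq!(scaled_fitness(0.1239, precision), Some(12));
    // A score a bucket away is distinguishable again:
    assert_eq!(scaled_fitness(0.1301, precision), Some(13));
}
```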
§Copy-Paste Templates
Minimal end-to-end example showing the general flow (genotype → fitness → strategy → result):
use genetic_algorithm::strategy::evolve::prelude::*;
#[derive(Clone, Debug)]
struct CountTrue;
impl Fitness for CountTrue {
type Genotype = BinaryGenotype;
fn calculate_for_chromosome(
&mut self,
chromosome: &FitnessChromosome<Self>,
_genotype: &FitnessGenotype<Self>,
) -> Option<FitnessValue> {
Some(chromosome.genes.iter().filter(|&&v| v).count() as FitnessValue)
}
}
fn main() {
let genotype = BinaryGenotype::builder()
.with_genes_size(10)
.build()
.unwrap();
let evolve = Evolve::builder()
.with_genotype(genotype)
.with_target_population_size(100)
.with_max_stale_generations(100)
.with_fitness(CountTrue)
.with_select(SelectTournament::new(0.5, 0.02, 4))
.with_crossover(CrossoverUniform::new(0.7, 0.8))
.with_mutate(MutateSingleGene::new(0.2))
.call()
.unwrap();
let (best_genes, best_fitness_score) = evolve.best_genes_and_fitness_score().unwrap();
println!("genes: {:?}, score: {}", best_genes, best_fitness_score);
}

See AGENTS_TEMPLATES.md for more examples covering all genotype types, strategies, and patterns (call_repeatedly, HillClimb, Permutate, etc.).
§Decision Matrix: Problem Type → Configuration
§Which Genotype?
| Problem Type | Genotype | Example |
|---|---|---|
| Binary choices (include/exclude) | BinaryGenotype | Knapsack |
| Values from a fixed set | ListGenotype<T> | Monkeys typing Shakespeare |
| Permutation / ordering | UniqueGenotype<T> | N-Queens, TSP |
| Continuous values (uniform range) | RangeGenotype<T> | Function optimization |
| Per-gene value sets | MultiListGenotype<T> | Mixed categorical |
| Multiple permutation groups | MultiUniqueGenotype<T> | Multi-group assignment |
| Per-gene numeric ranges | MultiRangeGenotype<T> | Heterogeneous optimization |
When to use Multi* variants: Use Multi* when each gene needs different
settings (different allele lists, different ranges, different mutation types).
Use the regular variant when all genes share the same configuration.
§Gene Types in Fitness Functions
When implementing calculate_for_chromosome, access genes via chromosome.genes:
| Genotype | chromosome.genes type | Notes |
|---|---|---|
BinaryGenotype | Vec<bool> | |
ListGenotype<T> | Vec<T> | Default T = usize |
UniqueGenotype<T> | Vec<T> | Default T = usize, positional permutation (swap-only mutation), values do not have to be unique, they are only treated as such |
RangeGenotype<T> | Vec<T> | Default T = f32 |
MultiListGenotype<T> | Vec<T> | Default T = usize, each gene has its own set of possible values |
MultiUniqueGenotype<T> | Vec<T> | Default T = usize, positional permutation (swap-only mutation) within each group, values do not have to be unique, they are only treated as such |
MultiRangeGenotype<T> | Vec<T> | Default T = f32, each gene has its own range |
§Which Strategy?
| Situation | Strategy | Why |
|---|---|---|
| General optimization | Evolve | Full GA with crossover + mutation |
| Convex search space | HillClimb | Local search suffices |
| Small search space | Permutate | Exhaustive, 100% guarantee |
Scaling warning: Permutate evaluates all possible gene combinations in a stream, so there are no memory issues, but duration quickly becomes the limiting factor.
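Whether Permutate is feasible is mostly a question of raw search-space size. A standalone back-of-envelope sketch (plain Rust, no crate types; the counting formulas are standard combinatorics, not crate API, and the sizes are illustrative):

```rust
fn main() {
    // BinaryGenotype: 2^genes_size combinations.
    let binary_100 = 2u128.pow(100);
    // ListGenotype: allele_count^genes_size combinations.
    let list_10x8 = 10u128.pow(8); // 10 alleles, 8 genes
    // UniqueGenotype of size n: n! orderings.
    let unique_10: u128 = (1..=10u128).product();

    println!("binary, 100 genes: {binary_100}"); // ~1.27e30: hopeless to enumerate
    println!("list, 10 alleles x 8 genes: {list_10x8}"); // 100_000_000: feasible
    println!("unique, 10 genes: {unique_10}"); // 3_628_800: feasible
}
```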
§Which HillClimb Variant?
| Situation | Variant | Why |
|---|---|---|
| Large genome | Stochastic (default) | One random neighbor per step, fast iterations |
| Small genome | SteepestAscent | Evaluates all neighbors, guarantees best local move |
| Plateau traversal needed | Stochastic + high max_stale_generations | Stochastic can escape via replace_on_equal_fitness |
| Exact local optimum needed | SteepestAscent + call_repeatedly(n) | Deterministic per start, multiple random restarts |
Scaling warning: SteepestAscent evaluates n*(n-1)/2 neighbors for
UniqueGenotype of size n (e.g., 2016 neighbors for n=64). Use Stochastic
with call_repeatedly for genomes >20 genes.
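The neighbor count quoted above is just the number of index pairs available for swapping; a quick standalone check:

```rust
// SteepestAscent on UniqueGenotype evaluates every pairwise swap of gene
// positions, i.e. n choose 2 = n * (n - 1) / 2 neighbors per generation.
fn neighbor_count(n: u64) -> u64 {
    n * (n - 1) / 2
}

fn main() {
    assert_eq!(neighbor_count(64), 2016); // the 64-queens figure quoted above
    assert_eq!(neighbor_count(20), 190);  // roughly where SteepestAscent stops paying off
}
```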
§Which Crossover? (Evolve only)
| Genotype | Compatible Crossovers | Recommended |
|---|---|---|
BinaryGenotype | All | CrossoverUniform or CrossoverSinglePoint |
ListGenotype<T> | All | CrossoverUniform or CrossoverSinglePoint |
MultiListGenotype<T> | All | CrossoverUniform or CrossoverSinglePoint |
UniqueGenotype<T> | CrossoverClone, CrossoverRejuvenate ONLY (others are compile errors) | CrossoverClone |
MultiUniqueGenotype<T> | Point-based + CrossoverClone, CrossoverRejuvenate (gene-based are compile errors) | CrossoverSinglePoint |
RangeGenotype<T> | All | CrossoverUniform or CrossoverSinglePoint |
MultiRangeGenotype<T> | All | CrossoverUniform or CrossoverSinglePoint |
Compile-time safety: UniqueGenotype does not implement SupportsGeneCrossover
or SupportsPointCrossover, so incompatible crossovers are compile errors.
Use CrossoverClone (clones parents, relies on mutation for diversity) or
CrossoverRejuvenate (like Clone but optimized for less memory copying).
MultiUniqueGenotype implements SupportsPointCrossover only, so gene-based
crossovers (CrossoverUniform, CrossoverSingleGene, CrossoverMultiGene) are
compile errors.
Note on UniqueGenotype + Evolve: CrossoverClone with UniqueGenotype
produces valid code but is almost always less efficient than HillClimb +
call_repeatedly(n). Only use Evolve + CrossoverClone for UniqueGenotype
when you need Extensions or speciation.
§Which Select?
| Type | When to use |
|---|---|
SelectElite | Deterministic, sorts by fitness. Less diversity. |
SelectTournament | Default choice. When you want stochastic pressure. Better diversity. |
§Which Mutate?
| Type | When to use |
|---|---|
MutateSingleGene | Default choice. Simple, one gene per chromosome. |
MutateMultiGene | When faster exploration is needed. Multiple genes. |
MutateMultiGeneRange | When you want random variation in mutation count. |
MutateSingleGeneDynamic | Auto-adjusts probability based on population cardinality. |
MutateMultiGeneDynamic | Auto-adjusts probability based on cardinality. Multiple genes. |
§Which MutationType?
MutationType controls how gene values change during mutation (Range/MultiRange
genotypes only). Other genotypes ignore this setting.
| Variant | Behavior |
|---|---|
Random (default) | Uniform random within allele range. |
Range(bandwidth) | Uniform random within ±bandwidth of current value, clamped to range. |
Step(step) | Exactly +step or -step, clamped to range. |
Discrete | Integer-only mutations. Required for Permutate with Range genotypes. |
RangeScaled(vec) | Like Range, but advances through multiple bandwidths. |
StepScaled(vec) | Like Step, but advances through multiple step sizes. |
Random is the default when unspecified. For float genomes >50 genes, prefer
RangeScaled or StepScaled for progressive refinement.
Scale advancement: Scaled variants advance to the next phase when
max_stale_generations or max_generations is reached. Counters reset per
phase. A run with 3 phases and max_stale_generations(100) can run up to 300
stale generations before terminating.
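The worst-case stale budget is simply phases times max_stale_generations, since each phase resets the counter; a trivial standalone check:

```rust
// Scaled mutation variants (RangeScaled/StepScaled) reset the stale counter
// when advancing to the next phase, so the worst case before termination is
// phases * max_stale_generations.
fn worst_case_stale(phases: u32, max_stale_generations: u32) -> u32 {
    phases * max_stale_generations
}

fn main() {
    // The 3-phase example from above with max_stale_generations(100):
    assert_eq!(worst_case_stale(3, 100), 300);
}
```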
§If unsure, start here
For binary/list genotypes:
// also requires: genotype, fitness, target_population_size, ending condition
.with_select(SelectTournament::new(0.5, 0.02, 4))
.with_crossover(CrossoverUniform::new(0.7, 0.8))
.with_mutate(MutateSingleGene::new(0.2))

For range/float genotypes (>50 genes, see Troubleshooting for tuning):
// also requires: genotype, fitness, target_population_size, ending condition
.with_select(SelectTournament::new(0.5, 0.02, 4))
.with_crossover(CrossoverUniform::new(0.7, 0.8))
.with_mutate(MutateMultiGene::new(10, 0.8))

For unique genotypes (swap-only problems):
// also requires: genotype, fitness, target_population_size, ending condition
.with_select(SelectTournament::new(0.5, 0.02, 4))
.with_crossover(CrossoverClone::new(0.7))
.with_mutate(MutateSingleGene::new(0.8))

§Constructor Parameter Reference
§Select
SelectElite::new(
replacement_rate: f32, // 0.3-0.7 typical. Fraction of population replaced by offspring.
elitism_rate: f32, // 0.01-0.05 typical. Fraction preserved as elite (bypass selection gate).
)
SelectTournament::new(
replacement_rate: f32, // 0.3-0.7 typical.
elitism_rate: f32, // 0.01-0.05 typical.
tournament_size: usize, // 2-8 typical. Chromosomes compared per tournament.
)

§Crossover
CrossoverUniform::new(
selection_rate: f32, // 0.5-0.8 typical. Fraction of parents selected for reproduction.
crossover_rate: f32, // 0.5-0.9 typical. Probability parent pair actually crosses over (vs clone).
)
CrossoverSinglePoint::new(selection_rate: f32, crossover_rate: f32)
CrossoverSingleGene::new(selection_rate: f32, crossover_rate: f32)
CrossoverMultiPoint::new(
selection_rate: f32,
crossover_rate: f32,
number_of_crossovers: usize, // Number of crossover points.
allow_duplicates: bool, // Allow same crossover point twice.
)
CrossoverMultiGene::new(
selection_rate: f32,
crossover_rate: f32,
number_of_crossovers: usize, // Number of genes to exchange.
allow_duplicates: bool, // Allow same gene index twice.
)
CrossoverClone::new(
selection_rate: f32, // No actual crossover, parents are cloned. Use with UniqueGenotype.
)
CrossoverRejuvenate::new(
selection_rate: f32, // Like Clone but drops non-selected first, then refills. Less memory copying.
)

§Mutate
Rate guidance depends on genotype size and type — see “Mutation tuning” in
Troubleshooting. For float genomes >50 genes: use MutateMultiGene with
mutation_probability near 1.0 and scale number_of_mutations with genome
size. The “typical” mutation rates assume small genomes. For large genomes
(hundreds to thousands of genes), the effective per-gene mutation rate matters
more than the per-chromosome probability. Think in terms of what fraction of
all genes in the population actually change per generation.
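As a back-of-envelope for that per-gene framing (standalone sketch; the uniform 1..=n sampling is taken from the MutateMultiGene description below, and the numbers are illustrative):

```rust
// Expected fraction of all genes that mutate per generation, assuming
// MutateMultiGene samples its mutation count uniformly from 1..=n and
// applies mutation with probability p per chromosome.
fn expected_per_gene_rate(n: usize, p: f64, genes_size: usize) -> f64 {
    let expected_mutations = (1.0 + n as f64) / 2.0; // mean of uniform 1..=n
    p * expected_mutations / genes_size as f64
}

fn main() {
    // MutateMultiGene::new(10, 0.8) on a 1000-gene float genome:
    let rate = expected_per_gene_rate(10, 0.8, 1000);
    println!("~{:.2}% of genes change per generation", rate * 100.0); // ~0.44%
}
```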
MutateSingleGene::new(
mutation_probability: f32, // 0.05-0.3 typical for binary. See note above for floats.
)
MutateMultiGene::new(
number_of_mutations: usize, // Max genes mutated (sampled uniformly from 1..=n).
mutation_probability: f32, // Probability per chromosome.
)
MutateMultiGeneRange::new(
number_of_mutations_range: RangeInclusive<usize>, // e.g. 1..=5
mutation_probability: f32,
)
MutateSingleGeneDynamic::new(
mutation_probability_step: f32, // Step size for probability adjustment.
target_cardinality: usize, // Target unique chromosomes in population.
)
MutateMultiGeneDynamic::new(
number_of_mutations: usize, // Max genes mutated.
mutation_probability_step: f32, // Step size for adjustment.
target_cardinality: usize, // Target unique chromosomes.
)

§Extension (Evolve only, all optional)
Extensions should not be needed when hyperparameters are properly tuned. They are
a fallback when the population keeps collapsing to clones despite reasonable
mutation/selection settings. Escalation order: MassDegeneration (most bang for the buck)
→ MassDeduplication/MassExtinction/MassGenesis (mostly for completeness and the Cambrian explosion metaphor).
ExtensionMassExtinction::new(
cardinality_threshold: usize, // Trigger when unique chromosomes drop below this.
survival_rate: f32, // Fraction that survives (random selection + elite).
elitism_rate: f32, // Fraction of elite preserved before random reduction.
)
// Randomly trims population. Recovery happens naturally through offspring in following generations.
ExtensionMassGenesis::new(
cardinality_threshold: usize, // Trims to only 2 unique best (Adam & Eve).
)
// Extreme version of MassExtinction. Population recovers through offspring in following generations.
ExtensionMassDegeneration::new(
cardinality_threshold: usize,
number_of_mutations: usize, // Number of gene mutations applied per chromosome.
elitism_rate: f32, // Fraction of elite preserved before mutation.
)
// Only extension that actually mutates genes. No population trim, same size throughout.
ExtensionMassDeduplication::new(
cardinality_threshold: usize, // Trims to only unique chromosomes (by genes hash).
)
// Removes duplicates. Population recovers through offspring in following generations.

§Builder Methods (Evolve)
Builder method order does not matter.
Required:
- .with_genotype(genotype) — the search space
- .with_fitness(fitness) — the evaluation function
- .with_select(select) — parent selection strategy
- .with_crossover(crossover) — how parents combine
- .with_mutate(mutate) — how offspring are varied
- At least ONE ending condition (see below)
Ending conditions (at least one required):
- .with_target_fitness_score(score) — stop when best fitness reaches this value
- .with_max_stale_generations(n) — stop after n generations without improvement
- .with_max_generations(n) — stop after n total generations
max_stale_generations guidance:
- HillClimb SteepestAscent without plateaus: 1-2 (use call_repeatedly for restarts)
- HillClimb SteepestAscent with plateaus (e.g. N-Queens): 1000+ (needs room to traverse equal-fitness states via replace_on_equal_fitness)
- Binary/list problems: 100-1000
- Permutation problems: 1000-10000
- Range with StepScaled/RangeScaled: multiply by number of phases (each phase resets the stale counter)
Optional:
- .with_target_population_size(n) — number of chromosomes (defaults to 100). Heuristic: small genomes (<50 genes): 100, medium (50-500): 200-500, large/complex: 1000+. Must be explicitly set for StrategyBuilder (defaults to 0). HillClimb does not use population size.
- .with_fitness_ordering(FitnessOrdering::Minimize) — default is Maximize
- .with_par_fitness(true) — parallelize fitness calculation
- .with_fitness_cache(size) — LRU cache for expensive fitness
- .with_replace_on_equal_fitness(bool) — replace best even on equal score (default: true)
- .with_extension(extension) — diversity management (fallback; needing one usually signals poorly tuned hyperparameters)
- .with_reporter(reporter) — progress monitoring. Use EvolveReporterSimple::new(100) for progress every 100 generations, EvolveReporterDuration::new() for performance profiling. Each strategy has its own reporter types: EvolveReporterSimple/Duration, HillClimbReporterSimple/Duration, PermutateReporterSimple/Duration. Use *ReporterNoop for no reporting.
- .with_rng_seed_from_u64(seed) — deterministic results (use 0 for tests)
- .with_valid_fitness_score(score) — gates all ending conditions: no ending condition fires until best fitness reaches this threshold
- .with_max_chromosome_age(n) — removes chromosomes with age >= n from the selection pool. Age resets to 0 for offspring, increments each generation.
- .with_seed_genes_list(genes_list) — seed initial population with known solutions (set on the genotype builder, not the strategy builder)
§Builder Methods (HillClimb)
Required:
- .with_genotype(genotype)
- .with_fitness(fitness)
- At least ONE ending condition (see below)
Ending conditions (at least one required):
- .with_target_fitness_score(score) — stop when best fitness reaches this value
- .with_max_stale_generations(n) — stop after n generations without improvement
- .with_max_generations(n) — stop after n total generations
Optional:
- .with_variant(HillClimbVariant::SteepestAscent) — default is Stochastic. Stochastic: fast, one random neighbor per generation, good for large genomes. SteepestAscent: evaluates all neighbors, finds the best improvement, slow for large genomes. Warning: SteepestAscent evaluates n*(n-1)/2 neighbors for UniqueGenotype of size n. Use only for small genomes (<20 genes).
- .with_fitness_ordering(FitnessOrdering::Minimize) — default is Maximize
- .with_par_fitness(true) — parallelize fitness calculation. Note: par_fitness has no effect with HillClimbVariant::Stochastic (sequential by nature). Only useful with SteepestAscent.
- .with_fitness_cache(size) — LRU cache for expensive fitness
- .with_replace_on_equal_fitness(bool) — replace best even on equal score (default: true). For minimization problems with plateaus (many solutions share the same fitness), this default is essential — without it, HillClimb cannot traverse plateaus to find improvements.
- .with_valid_fitness_score(score) — only solutions with this score or better are valid
- .with_reporter(reporter) — progress monitoring
- .with_rng_seed_from_u64(seed) — deterministic results
HillClimb auto-disables genes_hashing (unless fitness_cache is set), so you
don’t need to set it manually.
HillClimb has no Select/Crossover/Mutate — it generates neighbors from the genotype directly.
§Builder Methods (Permutate)
Required:
- .with_genotype(genotype)
- .with_fitness(fitness)
Optional:
- .with_fitness_ordering(FitnessOrdering::Minimize) — default is Maximize
- .with_par_fitness(true) — parallelize fitness calculation
- .with_replace_on_equal_fitness(bool) — replace best even on equal score (default: true)
- .with_reporter(reporter) — progress monitoring
Permutate has no ending conditions — it exhaustively evaluates all possibilities.
Note: RangeGenotype/MultiRangeGenotype only support Permutate with
MutationType::Step, MutationType::StepScaled, or MutationType::Discrete
(these make the search space countable).
§Builder Methods (StrategyBuilder)
StrategyBuilder is a superset builder supporting all three strategies from one
configuration. Useful for dynamically sized problems where small spaces use
Permutate and larger ones use Evolve.
Requires use genetic_algorithm::strategy::prelude::*; and a genotype that
implements all three strategy traits. All genotypes are compatible:
BinaryGenotype, ListGenotype, UniqueGenotype, MultiListGenotype,
MultiUniqueGenotype, RangeGenotype, MultiRangeGenotype. Note that
RangeGenotype/MultiRangeGenotype only support the Permutate variant with
MutationType::Step, StepScaled, or Discrete (runtime check via
allows_permutation()).
Switch strategy with .with_variant(variant):
use genetic_algorithm::strategy::prelude::*;
let builder = StrategyBuilder::new()
.with_genotype(genotype)
.with_target_population_size(100)
.with_max_stale_generations(100)
.with_fitness(my_fitness)
.with_select(SelectTournament::new(0.5, 0.02, 4))
.with_crossover(CrossoverUniform::new(0.7, 0.8))
.with_mutate(MutateSingleGene::new(0.2));
// Switch strategy based on problem size
let variant = if search_space_size < 1_000_000 {
StrategyVariant::Permutate(PermutateVariant::Standard)
} else if is_permutation_problem {
StrategyVariant::HillClimb(HillClimbVariant::Stochastic)
} else {
StrategyVariant::Evolve(EvolveVariant::Standard)
};
let strategy = builder.with_variant(variant).call().unwrap();
println!("best: {:?}", strategy.best_fitness_score());

Supports all 5 call variants (see “Running the Strategy” for availability table).
call_speciated/call_par_speciated fall back to call_repeatedly/call_par_repeatedly
for non-Evolve strategies.
§Running the Strategy
Two patterns:
// Pattern 1: One-shot (build + run in one call)
let evolve = Evolve::builder()
.with_genotype(genotype)
// ... other builder steps ...
.call() // builds AND runs
.unwrap();
// Pattern 2: Build then run (inspect state after)
let mut evolve = Evolve::builder()
.with_genotype(genotype)
// ... other builder steps ...
.build() // only builds
.unwrap();
evolve.call(); // runs separately

Additional Evolve builder call variants (return (best_run, all_runs) tuple):
- .call_repeatedly(n) — n independent runs, return best + all
- .call_par_repeatedly(n) — parallel version; combine with with_par_fitness(false) to avoid double parallelization
- .call_speciated(n) — n species runs, then a final run seeded with the best genes
- .call_par_speciated(n) — parallel version; combine with with_par_fitness(false) to avoid double parallelization, though this will hurt the final run
let (best, _all_runs) = Evolve::builder()
// ... builder steps ...
.call_repeatedly(5)
.unwrap();

Note: The best run is extracted from the runs vector — all_runs contains
N-1 results.
§Choosing a call variant
| Variant | When to use |
|---|---|
call() | Default. Sufficient for most problems. |
call_repeatedly(n) | Results vary across runs (local optima). Typical n: 10. |
call_par_repeatedly(n) | Parallel version of above. Beware of double parallelization with with_par_fitness |
call_speciated(n) | Multiple runs seed a final refinement pass. Best for complex combinatorial problems, where characteristics of alternative refined solutions need combining |
call_par_speciated(n) | Parallel version of above. Beware of double parallelization with with_par_fitness |
Call variant availability by builder:
| Variant | Evolve::builder() | HillClimb::builder() | Permutate::builder() | StrategyBuilder |
|---|---|---|---|---|
call() | yes | yes | yes | yes |
call_repeatedly(n) | yes | yes | no | yes |
call_par_repeatedly(n) | yes | yes | no | yes |
call_speciated(n) | yes | no | no | yes (falls back to call_repeatedly) |
call_par_speciated(n) | yes | no | no | yes (falls back to call_par_repeatedly) |
Only Evolve performs true speciation (seeding a final run with best genes from prior runs). Each run before the final run starts from a random population.
Permutate always falls back to call() for all variants.
Both .call() and .build() return Result<_, TryFromEvolveBuilderError>.
Builder validation catches: missing required fields and missing ending conditions.
Incompatible genotype + crossover combinations are caught at compile time via
trait bounds (SupportsGeneCrossover, SupportsPointCrossover).
§Common Mistakes
WRONG: UniqueGenotype + CrossoverUniform = COMPILE ERROR
FIX: Use CrossoverClone or CrossoverRejuvenate
WRONG: No ending condition = BUILD ERROR
FIX: Add .with_max_stale_generations(1000)
WRONG: Fitness returns f64 = TYPE ERROR
FIX: Return Some((score / precision) as FitnessValue)
WRONG: MutateSingleGene(0.2) with 1000+ float genes = DIVERSITY COLLAPSE
FIX: Use MutateMultiGene with a higher mutation count, see Troubleshooting

§Retrieving Results
These methods are available on all strategy types (Evolve, HillClimb,
Permutate):
// Best genes and fitness score (returns None if no valid fitness was found)
let (best_genes, best_fitness_score) = evolve.best_genes_and_fitness_score().unwrap();
// Or separately (gene type depends on genotype, e.g. Vec<bool> for BinaryGenotype)
let best_genes = evolve.best_genes();
let best_fitness_score = evolve.best_fitness_score();
// Generation when best was found (available on all strategies)
let best_generation = evolve.best_generation();

§Implementing Custom Fitness
use genetic_algorithm::strategy::evolve::prelude::*;

#[derive(Clone, Debug)]
struct MyFitness;
impl Fitness for MyFitness {
type Genotype = BinaryGenotype; // or any genotype type
fn calculate_for_chromosome(
&mut self,
chromosome: &FitnessChromosome<Self>,
_genotype: &FitnessGenotype<Self>,
) -> Option<FitnessValue> {
// Access genes via chromosome.genes (Vec<bool> for BinaryGenotype)
// Return Some(score) for valid solutions
// Return None for invalid solutions (ranks last)
let score = chromosome.genes.iter().filter(|&&v| v).count();
Some(score as FitnessValue)
}
}

FitnessChromosome<Self> is Chromosome<Allele> — access genes via
chromosome.genes. FitnessGenotype<Self> provides genotype metadata (allele
ranges, lists).
Prefer penalties over None: Return Some(score_with_penalty) rather than
None for invalid solutions. None provides no gradient signal and ranks last
unconditionally, while penalties let the algorithm converge incrementally out of
invalid space.
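As a minimal sketch of the penalty approach — a hypothetical knapsack-style scoring function, not part of this crate; the penalty factor of 10 is an arbitrary illustration:

```rust
/// Hypothetical knapsack-style scoring: sum the values of the selected
/// items, then subtract a penalty proportional to how far the total weight
/// exceeds the capacity. Unlike returning None, the penalized score still
/// provides a gradient that leads the search back into the valid region.
fn penalized_score(values: &[isize], weights: &[isize], selected: &[bool], capacity: isize) -> isize {
    let mut value = 0;
    let mut weight = 0;
    for i in 0..selected.len() {
        if selected[i] {
            value += values[i];
            weight += weights[i];
        }
    }
    let overweight = (weight - capacity).max(0);
    value - 10 * overweight // penalty factor of 10 is an arbitrary illustration
}

fn main() {
    // valid selection: no penalty applied
    println!("{}", penalized_score(&[5, 4, 3], &[4, 3, 2], &[true, false, false], 5)); // 5
    // overweight selection: penalized, but still comparable to other invalid solutions
    println!("{}", penalized_score(&[5, 4, 3], &[4, 3, 2], &[true, true, true], 5)); // -28
}
```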
The fitness struct must implement Clone + Send + Sync + Debug. Most structs
auto-derive Send + Sync; use Arc instead of Rc if you need shared references.
Why &mut self? The calculate_for_chromosome method takes &mut self so
you can pre-allocate buffers and reuse them across evaluations for performance.
For simple fitness functions, just ignore the mutability. When using
par_fitness(true), each thread gets its own clone via ThreadLocal.
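The buffer-reuse pattern that &mut self enables looks like this in plain Rust — a standalone sketch with a hypothetical scoring rule, not tied to the crate’s Fitness trait:

```rust
/// Standalone sketch of the buffer-reuse pattern enabled by `&mut self`:
/// the scratch buffer is allocated once and reused across evaluations,
/// avoiding a fresh allocation per chromosome.
struct BufferedScorer {
    scratch: Vec<f64>, // reused across calls, keeps its allocation
}

impl BufferedScorer {
    fn new(genes_size: usize) -> Self {
        Self { scratch: Vec::with_capacity(genes_size) }
    }

    /// Hypothetical scoring: square each gene into the scratch buffer, sum,
    /// then scale the float sum into an isize fitness value.
    fn score(&mut self, genes: &[f64]) -> isize {
        self.scratch.clear(); // keeps capacity, drops old contents
        self.scratch.extend(genes.iter().map(|g| g * g));
        let sum: f64 = self.scratch.iter().sum();
        (sum * 1000.0) as isize
    }
}

fn main() {
    let mut scorer = BufferedScorer::new(3);
    println!("{}", scorer.score(&[1.0, 2.0, 0.5])); // (1 + 4 + 0.25) * 1000 = 5250
}
```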
§Using Custom Allele Types
ListGenotype<T> and UniqueGenotype<T> accept custom types as alleles. Allele
trait bounds: Clone + Copy + Send + Sync + Debug. Additionally, types must
implement Hash (for genes hashing) via the impl_allele! macro.
use genetic_algorithm::strategy::evolve::prelude::*;
#[derive(Clone, Copy, PartialEq, Hash, Debug)]
enum Color { Red, Green, Blue }
genetic_algorithm::impl_allele!(Color);
let genotype = ListGenotype::<Color>::builder()
.with_genes_size(10)
.with_allele_list(vec![Color::Red, Color::Green, Color::Blue])
.build()
    .unwrap();

§Genotype Builder Options
| Genotype | Required Builder Methods | genes_size |
|---|---|---|
| BinaryGenotype | with_genes_size(n) | explicit |
| ListGenotype<T> | with_genes_size(n), with_allele_list(vec) | explicit |
| UniqueGenotype<T> | with_allele_list(vec) | derived from list length |
| RangeGenotype<T> | with_genes_size(n), with_allele_range(range) | explicit |
| MultiListGenotype<T> | with_allele_lists(vec) | derived from lists length |
| MultiUniqueGenotype<T> | with_allele_lists(vec) | derived from lists length |
| MultiRangeGenotype<T> | with_allele_ranges(vec) | derived from ranges length |
Notes: Setting with_genes_size() to a value conflicting with the derived
size is a build error. Use with_allele_list (singular) for ListGenotype and
UniqueGenotype; use with_allele_lists (plural) for Multi* variants. Using
the wrong variant gives a helpful build error.
All genotype builders support these optional settings:
- .with_genes_hashing(true) (default) — required for fitness_cache and Extension & Mutation cardinality_thresholds. Auto-disabled by HillClimb (unless fitness_cache is set).
- .with_chromosome_recycling(true) (default) — reuses chromosome memory allocations. Generally leave at default.
§Troubleshooting
Fitness not improving?
- Increase target_population_size (more diversity)
- Increase mutation_probability (more exploration)
- Try MutateSingleGeneDynamic or MutateMultiGeneDynamic (auto-adjusts)
- For RangeGenotype/MultiRangeGenotype: use MutationType::RangeScaled for progressive refinement instead of MutationType::Random
- Add an extension like ExtensionMassDegeneration to escape local optima when diversity drops
Mutation tuning
The “typical” mutation rates assume small binary genomes. For large float genomes (hundreds to thousands of genes), the effective per-gene mutation rate matters more than the per-chromosome probability. Think in terms of what fraction of all genes in the population actually change per generation.
- Binary genes: mutation flips 0↔1, which is a large relative change. Low rates (1-5% per chromosome) suffice.
- Float genes: mutation nudges a continuous value. Each mutation has less relative impact, so you need far more mutations to maintain diversity.
Concrete example for a 2000-gene float genome, population 100:
- MutateSingleGene(0.2) → 1 gene × 20% of offspring = effectively 0.01% of all genes change per generation. The population will collapse to near-clones.
- MutateMultiGene(10, 1.0) → ~5.5 genes × 100% of offspring = effectively 0.28% of all genes change per generation. Maintains diversity.
Rule of thumb for float genomes: target 0.1%-1.0% effective per-gene mutation
rate across the population. Use MutateMultiGene with high mutation_probability
and scale number_of_mutations with genome size.
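The effective rate is just arithmetic; a small helper to sanity-check a configuration (the function name is illustrative, not crate API):

```rust
/// Effective per-gene mutation rate across the population:
/// expected mutated genes per offspring chromosome × mutation probability,
/// divided by the genome size.
fn effective_gene_mutation_rate(
    mutations_per_chromosome: f64,
    mutation_probability: f64,
    genes_size: f64,
) -> f64 {
    mutations_per_chromosome * mutation_probability / genes_size
}

fn main() {
    // MutateSingleGene(0.2) on a 2000-gene genome: 0.01% — far too low for floats
    println!("{:.4}%", 100.0 * effective_gene_mutation_rate(1.0, 0.2, 2000.0));
    // MutateMultiGene(10, 1.0) with ~5.5 effective mutations: ~0.28% — in range
    println!("{:.2}%", 100.0 * effective_gene_mutation_rate(5.5, 1.0, 2000.0));
}
```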
But high mutation prevents convergence. Use scaled mutation types to get both exploration early and convergence late:
- MutationType::RangeScaled(vec![100.0, 100.0, 50.0, 10.0, 1.0]) — starts with full-range Random mutations (100% of the allele range = effectively Random), then progressively narrows the mutation bandwidth. Best for float genomes: wide exploration phases first, then tight range-bound convergence.
- MutationType::StepScaled(vec![10.0, 1.0, 0.1]) — fixed step sizes that decrease through phases. Better for grid-like or discrete problems.
Combine with max_stale_generations to trigger phase transitions automatically
(advances to next phase when fitness plateaus).
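As an illustrative model of that phase-transition logic (a sketch of the idea, not the crate’s internal implementation):

```rust
/// Illustrative model of scale-phase advancement: stay at the current scale
/// while fitness improves; when it has been stale for max_stale_generations,
/// advance to the next (narrower) scale; stop when the last scale also
/// goes stale.
fn advance_phase(
    scale_index: usize,
    stale_generations: usize,
    max_stale_generations: usize,
    scales_len: usize,
) -> Option<usize> {
    if stale_generations < max_stale_generations {
        Some(scale_index) // keep refining at the current scale
    } else if scale_index + 1 < scales_len {
        Some(scale_index + 1) // plateau: narrow the mutation bandwidth
    } else {
        None // last scale exhausted: end the run
    }
}

fn main() {
    println!("{:?}", advance_phase(0, 10, 100, 5)); // Some(0): still improving
    println!("{:?}", advance_phase(0, 100, 100, 5)); // Some(1): advance a phase
    println!("{:?}", advance_phase(4, 100, 100, 5)); // None: done
}
```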
Runtime too slow?
- Use .with_par_fitness(true) for expensive fitness calculations
- Use .with_fitness_cache(size) if many chromosomes share the same genes (a hyperparameter smell, though)
- Reduce target_population_size if fitness is cheap but framework overhead is high
- For HillClimb::SteepestAscent: the neighbourhood can be very large, consider the Stochastic variant instead
- par_fitness has no effect with HillClimbVariant::Stochastic (sequential by nature). Use call_par_repeatedly to parallelize independent Stochastic runs, and disable par_fitness to avoid double parallelization.
Getting None as best fitness?
- All chromosomes returned None from calculate_for_chromosome
- Check your fitness function’s validity constraints — they may be too strict
- Increase the population size so some valid solutions appear in the initial population
- Prefer large penalties over None (see “Implementing Custom Fitness”)
Multi-objective optimization?
This library optimizes a single FitnessValue. For multiple objectives, combine
them into a weighted sum: Some(((w1 * obj1 + w2 * obj2) / precision) as FitnessValue).
Adjust weights to control tradeoffs.
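Sketched as a plain function — the weights and precision are problem-specific assumptions, not library defaults:

```rust
/// Combine two objectives into a single isize FitnessValue via a weighted
/// sum. Weights express the tradeoff between objectives; precision controls
/// how much float detail survives the cast to isize.
fn combined_fitness(obj1: f64, obj2: f64, w1: f64, w2: f64, precision: f64) -> isize {
    ((w1 * obj1 + w2 * obj2) / precision) as isize
}

fn main() {
    // e.g. objective 1 weighted 3:1 against objective 2, precision 0.5
    println!("{}", combined_fitness(2.0, 1.0, 3.0, 1.0, 0.5)); // (6 + 1) / 0.5 = 14
}
```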
§Gotchas
- FitnessValue is isize. Scale floats manually: (score / precision) as FitnessValue.
- Ending condition required. Evolve/HillClimb need at least one of: target_fitness_score, max_stale_generations, max_generations.
- The fitness struct must be Clone + Send + Sync + Debug. Most structs auto-derive Send + Sync; use Arc instead of Rc if needed.
- Permutate + RangeGenotype requires MutationType::Step, StepScaled, or Discrete.
- target_population_size defaults to 100. Override with .with_target_population_size(n) if needed.
- Custom Crossover/Mutate/Extension implementations must call chromosome.reset_metadata(genotype.genes_hashing) after modifying genes directly.
- For deterministic tests: use .with_rng_seed_from_u64(0). Exact results may change between library versions, but are deterministic within a version.
Modules§
- allele
- The possible values for a single gene
- chromosome
- The chromosome is a container for the genes and stores some useful values
- crossover
- The crossover phase, where two parent chromosomes create two child chromosomes. The selection phase determines the order of the parent pairing (overall with fitter first).
- errors
- extension
- When approaching a (local) optimum in the fitness score, the variation in the population drops dramatically. The offspring become clones of the parents, and the only factor seeding randomness is the mutation of the offspring. But this remaining randomness might not be selected for, killing off the offspring again. This reduces efficiency and also risks local-optimum lock-in. To increase the variation in the population, an extension mechanism can optionally be used.
- fitness
- The search goal to optimize towards (maximize or minimize).
- genotype
- The search space for the algorithm.
- mutate
- The mutation strategy, very important for avoiding local optimum lock-in. But don’t overdo it, as it degenerates the population too much if overused. Use a mutation probability generally between 5% and 20%.
- population
- The population is a container for Chromosomes and handles optional chromosome recycling
- select
- The selection phase, where chromosomes are lined up for pairing in the crossover phase, dropping the chromosomes outside of the target_population_size.
- strategy
- Solution strategies for finding the best chromosomes.
Macros§
- impl_allele
- Macro for implementing Allele with a default hash_slice. Use this for any type that implements Hash and needs the standard hashing behavior.