# anapao

anapao is a deterministic Rust testing utility for simulation and stochastic workflows.

This README is a linear tutorial for new users: you will build one scenario, run it deterministically, add expectations, run Monte Carlo batches, and persist CI-friendly artifacts.
## What You Will Build

By the end, you will have a repeatable testing flow that can:

- compile a `ScenarioSpec` into a validated executable model,
- execute seeded deterministic single runs,
- execute deterministic Monte Carlo batches,
- evaluate typed assertions with evidence,
- persist artifact packs (`manifest.json`, `events.jsonl`, `series.csv`, and more).
## Prerequisites

- Rust 1.70+
- Cargo
- A Rust test project where you want deterministic simulation checks

Add the dependency:

```toml
[dependencies]
anapao = "0.1.0"
```
## Step 1: Create ScenarioSpec

`ScenarioSpec` is your declarative model: nodes, edges, end conditions, and tracked metrics.

### Snippet S01 — Build a Minimal Scenario

A sketch of the snippet; the exact type paths, argument shapes, and values are assumptions, so adjust them to the published API.

```rust
use anapao::{EndCondition, MetricKey, ScenarioSpec};

// Bootstrap a minimal source -> sink scenario with the convenience constructor.
let mut scenario = ScenarioSpec::source_sink("source", "sink")
    .with_end_condition(EndCondition::MaxSteps(100)); // condition illustrative
scenario.tracked_metrics.insert(MetricKey::new("throughput")); // key illustrative

assert_eq!(scenario.nodes.len(), 2);
assert_eq!(scenario.tracked_metrics.len(), 1);
```

What you learned:
- how to bootstrap a minimal source -> sink scenario with a convenience constructor,
- how end conditions and tracked metrics are attached.
## Step 2: Compile with Simulator::compile

Compilation validates and transforms your scenario into deterministic execution indexes.

### Snippet S02 — Compile a Scenario

Another sketch under the same assumptions about paths and signatures:

```rust
use anapao::{EndCondition, ScenarioSpec, Simulator};

let scenario = ScenarioSpec::source_sink("source", "sink")
    .with_end_condition(EndCondition::MaxSteps(100));
let compiled = Simulator::compile(&scenario).unwrap();

assert_eq!(compiled.node_count(), 2); // accessor name assumed
```

What you learned:
- compilation is explicit and deterministic,
- you should compile once and reuse the compiled form for runs.
## Step 3: Configure RunConfig

`RunConfig` controls deterministic single-run execution (seed, max_steps, capture policy).

### Snippet S03 — Create a Deterministic RunConfig

A sketch; the builder names survive from the original, the argument values and capture type are assumptions.

```rust
use anapao::{CaptureConfig, RunConfig};

let run = RunConfig::for_seed(42)
    .with_max_steps(1_000)
    .with_capture(CaptureConfig::every_n_steps(10)); // capture policy illustrative

assert_eq!(run.seed, 42);
assert_eq!(run.max_steps, 1_000);
assert_eq!(run.capture.every_n_steps, 10); // field path assumed
```

What you learned:
- seeds pin determinism,
- capture configuration controls trace granularity.
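The capture idea can be sketched independently of the crate: an every-n policy decides which steps get snapshots, with optional step-zero and final-step flags. A std-only sketch (the names here are illustrative, not anapao's):

```rust
/// Steps that receive a captured snapshot under an every-n policy,
/// with optional step-zero and final-step flags.
fn captured_steps(total: u32, every_n: u32, step_zero: bool, final_step: bool) -> Vec<u32> {
    let mut steps: Vec<u32> = (0..=total).filter(|s| s % every_n == 0).collect();
    if step_zero && !steps.contains(&0) {
        steps.insert(0, 0);
    }
    if final_step && !steps.contains(&total) {
        steps.push(total);
    }
    steps
}

fn main() {
    // Snapshot every 3 steps of a 10-step run, plus the final step.
    assert_eq!(captured_steps(10, 3, true, true), vec![0, 3, 6, 9, 10]);
    // Coarser capture: every 5 steps only.
    assert_eq!(captured_steps(10, 5, false, false), vec![0, 5, 10]);
    println!("capture policy ok");
}
```

Coarser capture means cheaper runs but fewer diagnostics; Step 4's assertions only need final values, so sparse capture is fine there.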
## Step 4: Execute a Deterministic Single Run

Now run one deterministic simulation and assert expected outputs.

### Snippet S04 — Run Once and Verify Outputs

A sketch; accessor names and asserted values are assumptions.

```rust
use anapao::{MetricKey, RunConfig, Simulator};

// `scenario` as built in Snippet S01.
let compiled = Simulator::compile(&scenario).unwrap();
let report = Simulator::run(&compiled, &RunConfig::for_seed(42)).unwrap();

assert!(report.completed);
assert_eq!(report.seed, 42);
assert_eq!(report.final_value(&MetricKey::new("throughput")), Some(100.0)); // value illustrative
```

What you learned:
- deterministic single-run output can be asserted directly in tests.
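The property being exploited here can be shown without the library: when a run's only input is its seed, the same seed always reproduces the same output, so plain equality assertions are safe in tests. A minimal stand-in (a small xorshift generator, not anapao's engine):

```rust
/// A tiny xorshift64 PRNG: stand-in for a seeded simulation step.
fn xorshift64(mut state: u64) -> u64 {
    state ^= state << 13;
    state ^= state >> 7;
    state ^= state << 17;
    state
}

/// Run a fixed number of steps from a seed and return the final state.
fn run_once(seed: u64, steps: u32) -> u64 {
    (0..steps).fold(seed, |s, _| xorshift64(s))
}

fn main() {
    // Same seed, same step count -> identical output, every time.
    assert_eq!(run_once(42, 1_000), run_once(42, 1_000));
    // A different seed diverges (each xorshift step is a bijection,
    // so distinct seeds can never collide after the same step count).
    assert_ne!(run_once(42, 1_000), run_once(43, 1_000));
    println!("deterministic: ok");
}
```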
## Step 5: Create an Expectation Set

`Expectation` provides typed assertion semantics for run and batch reports.

### Snippet S05 — Declare Expectations

A sketch; the constructor names and values are assumptions.

```rust
use anapao::{Expectation, MetricKey};

let metric = MetricKey::new("throughput"); // key illustrative
let expectations = vec![
    Expectation::final_value(metric.clone(), 100.0), // validate the final value
    Expectation::at_step(metric, 10, 1.0),           // validate a specific step
];

assert_eq!(expectations.len(), 2);
```

What you learned:
- expectations are data, not ad-hoc assertion code,
- the selector controls whether you validate the final value or a specific step.
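The "expectations are data" idea generalizes: an expectation is a plain value you can store, count, and evaluate later, and a missing observation is a reportable outcome rather than a crash. A self-contained sketch (these types are illustrative, not anapao's):

```rust
/// Which observation of a metric series an expectation validates.
#[derive(Clone, Copy)]
enum Selector {
    FinalValue,
    AtStep(usize),
}

/// A typed expectation is plain data: a selector plus the value it must match.
struct Expectation {
    selector: Selector,
    expected: f64,
}

impl Expectation {
    /// Evaluate against an observed series; `None` means "missing observed value".
    fn check(&self, series: &[f64]) -> Option<bool> {
        let observed = match self.selector {
            Selector::FinalValue => *series.last()?,
            Selector::AtStep(i) => *series.get(i)?,
        };
        Some((observed - self.expected).abs() < 1e-9)
    }
}

fn main() {
    let series = [1.0, 2.0, 3.0];
    let final_ok = Expectation { selector: Selector::FinalValue, expected: 3.0 };
    let step_ok = Expectation { selector: Selector::AtStep(1), expected: 2.0 };
    let missing = Expectation { selector: Selector::AtStep(9), expected: 0.0 };
    assert_eq!(final_ok.check(&series), Some(true));
    assert_eq!(step_ok.check(&series), Some(true));
    // An out-of-range step surfaces as missing data, not a panic.
    assert_eq!(missing.check(&series), None);
    println!("expectations ok");
}
```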
## Step 6: Run with Assertions and Event Sink

Use the integrated assertion path and capture ordered events for diagnostics.

### Snippet S06 — run_with_assertions_and_sink + VecEventSink

A sketch; the argument order and accessor names are assumptions.

```rust
use anapao::events::VecEventSink;
use anapao::{Expectation, MetricKey, RunConfig, Simulator};

// `scenario` as built in Snippet S01; expectations as in Snippet S05.
let compiled = Simulator::compile(&scenario).unwrap();
let expectations = vec![Expectation::final_value(MetricKey::new("throughput"), 100.0)];
let mut sink = VecEventSink::new();
let (_report, outcome) = Simulator::run_with_assertions_and_sink(
    &compiled,
    &RunConfig::for_seed(42),
    &expectations,
    &mut sink,
)
.unwrap();

assert!(outcome.all_passed());
assert!(!sink.events().is_empty());
```

What you learned:
- assertions and execution can be done in one call,
- event streams provide structured debugging context.
## Step 7: Configure BatchConfig

`BatchConfig` controls deterministic Monte Carlo execution.

### Snippet S07 — Create BatchConfig

A sketch; the builder names survive from the original, the values and field paths are assumptions.

```rust
use anapao::{BatchConfig, ExecutionMode, RunConfig};

let batch = BatchConfig::for_runs(256)
    .with_execution_mode(ExecutionMode::Sequential)
    .with_base_seed(7)
    .with_run_template(RunConfig::for_seed(0))
    .with_max_steps(1_000);

assert_eq!(batch.runs, 256);
assert_eq!(batch.base_seed, 7);
assert_eq!(batch.max_steps, 1_000);
```

What you learned:
- `runs` scales the Monte Carlo sample size,
- `base_seed` + run-index derivation preserves reproducibility.
## Step 8: Execute a Deterministic Batch Run

Run many deterministic simulations and check aggregate outputs.

### Snippet S08 — Run Batch and Verify Ordering/Aggregates

A sketch; the report field names are assumptions.

```rust
use anapao::{BatchConfig, Simulator};

// `scenario` as built in Snippet S01.
let compiled = Simulator::compile(&scenario).unwrap();
let batch = Simulator::run_batch(&compiled, &BatchConfig::for_runs(256).with_base_seed(7)).unwrap();

assert_eq!(batch.completed_runs, 256);
assert!(batch.runs.windows(2).all(|w| w[0].run_index < w[1].run_index)); // index-ordered
assert!(batch.runs.iter().all(|r| r.completed));
```

What you learned:
- batch summaries are deterministic and index-ordered,
- `completed_runs` counts reported run summaries; inspect each `run.completed` for semantic completion.
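The batch shape can be sketched without the library: derive each run's seed from the base seed plus the run index, collect index-ordered summaries, and the whole batch becomes reproducible from one number. A stand-in (not anapao's batch engine; the seed mixing here is a plain addition, chosen for illustration):

```rust
/// Stand-in for one run's summary in a batch report.
struct RunSummary {
    run_index: u64,
    completed: bool,
    final_value: u64,
}

/// Deterministic stand-in run: the final value depends only on the seed.
fn run_once(seed: u64) -> u64 {
    // 100 steps of a 64-bit LCG (Knuth's MMIX constants).
    (0..100).fold(seed, |s, _| s.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407))
}

/// A sequential batch: each run's seed derives from base_seed + run index.
fn run_batch(base_seed: u64, runs: u64) -> Vec<RunSummary> {
    (0..runs)
        .map(|i| RunSummary {
            run_index: i,
            completed: true,
            final_value: run_once(base_seed.wrapping_add(i)),
        })
        .collect()
}

fn main() {
    let batch = run_batch(7, 16);
    assert_eq!(batch.len(), 16); // analogous to completed_runs
    assert!(batch.iter().all(|r| r.completed));
    // Summaries are index-ordered.
    assert!(batch.windows(2).all(|w| w[0].run_index < w[1].run_index));
    // Re-running the batch reproduces every final value.
    let again = run_batch(7, 16);
    assert!(batch.iter().zip(&again).all(|(a, b)| a.final_value == b.final_value));
    println!("batch ok");
}
```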
## Step 9: Persist Artifacts and Inspect ManifestRef

Persist reports for CI diffing and post-run diagnostics.

### Snippet S09 — Full Playbook (Setup -> Run -> Assert -> Artifacts)

A sketch; the argument order of the writer, the `ManifestRef` accessors, and the output directory name are assumptions.

```rust
use anapao::artifact::write_run_artifacts_with_assertions;
use anapao::events::VecEventSink;
use anapao::{Expectation, MetricKey, RunConfig, Simulator};
use std::env::temp_dir;

// `scenario` as built in Snippet S01; expectations as in Snippet S05.
let compiled = Simulator::compile(&scenario).unwrap();
let expectations = vec![Expectation::final_value(MetricKey::new("throughput"), 100.0)];
let mut sink = VecEventSink::new();
let (report, outcome) = Simulator::run_with_assertions_and_sink(
    &compiled,
    &RunConfig::for_seed(42),
    &expectations,
    &mut sink,
)
.unwrap();
assert!(outcome.all_passed());
assert!(!sink.events().is_empty());

let output_dir = temp_dir().join("anapao-tutorial"); // directory name illustrative
let manifest = write_run_artifacts_with_assertions(&output_dir, &report, &outcome, sink.events())
    .unwrap(); // returns a ManifestRef
assert!(manifest.contains_key("manifest.json"));
assert!(manifest.contains_key("events.jsonl"));
assert!(manifest.contains_key("series.csv"));
```

What you learned:
- persisted artifacts become your CI and debugging contract,
- manifest keys are stable assertions for artifact expectations.
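The artifact-pack contract can be mimicked with std alone: write a manifest plus line-delimited events, then assert on stable file names in CI. A sketch (the file names mirror the pack above; this writer function itself is invented for illustration):

```rust
use std::fs;
use std::path::Path;

/// Write a minimal artifact pack: a manifest plus line-delimited events.
/// Illustrative only; not anapao's writer.
fn write_artifacts(dir: &Path, events: &[&str]) -> std::io::Result<Vec<String>> {
    fs::create_dir_all(dir)?;
    fs::write(dir.join("manifest.json"), r#"{"schema":1,"artifacts":["events.jsonl"]}"#)?;
    fs::write(dir.join("events.jsonl"), events.join("\n"))?;
    Ok(vec!["manifest.json".to_string(), "events.jsonl".to_string()])
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("artifact-pack-demo");
    let manifest = write_artifacts(
        &dir,
        &[r#"{"step":0,"kind":"start"}"#, r#"{"step":1,"kind":"end"}"#],
    )?;
    // Stable manifest keys are what CI asserts on across runs.
    assert!(manifest.contains(&"manifest.json".to_string()));
    assert!(dir.join("events.jsonl").exists());
    println!("artifacts written to {}", dir.display());
    Ok(())
}
```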
## Step 10: Fixture-First Testing with testkit (and rstest)

Use testkit helpers to avoid duplicating setup across tests.

### Snippet S10 — Reusable Fixture-Style Test Pattern

A sketch; the helper's signature is assumed.

```rust
use anapao::testkit::deterministic_fixture_smoke;
use anapao::MetricKey;

// Runs a pre-wired deterministic scenario and asserts its baseline metrics.
deterministic_fixture_smoke(MetricKey::new("throughput")); // signature assumed
```

What you learned:
- fixture helpers keep tests concise and deterministic,
- you can wrap these helpers in your own `rstest` fixture macros for larger matrices.
## Common Failure Modes and Debugging Hints

- Missing tracked metric:
  - symptom: expectation fails with a missing observed value.
  - fix: ensure the metric key is in `scenario.tracked_metrics`.
- Non-terminating scenarios:
  - symptom: run ends at `max_steps` unexpectedly.
  - fix: verify `end_conditions` are configured and reachable.
- Seed confusion:
  - symptom: output differs between runs.
  - fix: pin `RunConfig.seed` for single runs and keep the batch `base_seed` stable (batch seeds derive from `base_seed` + run index).
- Sparse traces:
  - symptom: insufficient snapshots for diagnostics.
  - fix: adjust `RunConfig.capture` (`every_n_steps`, step-zero/final flags).
## Feature Flags

- `parallel`: enables Rayon-backed batch execution mode (`ExecutionMode::Rayon`).
- `analysis-polars`: enables Polars DataFrame shaping helpers.
- `assertions-extended`: enables extra assertion/snapshot/property helper crates.
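Flags are opted into from the consumer crate's manifest. For example, enabling parallel batch execution (version as in Prerequisites, standard Cargo feature syntax):

```toml
[dependencies]
anapao = { version = "0.1.0", features = ["parallel"] }
```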
## Module Surface (Reference)

anapao exports:

- `types`, `error`, `rng`, `validation`, `engine`, `stochastic`, `events`, `batch`, `stats`, `artifact`, `assertions`, `testkit`
- `analysis` (only with `analysis-polars`)
- `Simulator` (compile/run/batch facade)
## Validation Commands
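A conventional validation pass for a crate with these feature flags, assuming standard Cargo tooling (adjust the feature combinations to your project):

```sh
cargo fmt --check
cargo clippy --all-targets --all-features -- -D warnings
cargo test
cargo test --features parallel
```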
## Performance Workflow (Manual Compare)

```sh
# capture baseline matrix
# compare matrix
# manual non-failing regression summary (+7% threshold)
# flamegraphs and csv summaries
BENCH_FEATURES=parallel
```