# RSPOW
A simple multi-algorithm proof-of-work library for Rust.
## Algorithms
- SHA-256
- SHA-512
- RIPEMD-320
- Scrypt
- Argon2id
- EquiX (Tor's Equi‑X puzzle; hash = sha256(solution-bytes))
API references are available at docs.rs/rspow.
## Difficulty Modes
RSPOW supports two difficulty modes:
- `AsciiZeroPrefix` (default): the hash must start with `difficulty` bytes of ASCII `'0'` (0x30).
  - Expected attempts grow by ~256x per additional byte.
  - Simple to explain, but coarse-grained and often too steep for memory-hard hashes.
- `LeadingZeroBits`: the hash must have at least `difficulty` leading zero bits.
  - Expected attempts ≈ `2^difficulty`.
  - Fine-grained control suitable for tuning across a wide range.
Notes:
- `PoW::calculate_target()` returns the ASCII `'0'` prefix and is meaningful only for `AsciiZeroPrefix`.
- In `LeadingZeroBits` mode, the `target` slice is ignored; pass an empty slice for clarity.
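To make the scaling concrete, the expected attempt counts for the two modes can be computed directly (illustrative arithmetic only, not part of the crate API):

```rust
// Expected attempts for each difficulty mode (illustrative only).
fn expected_attempts_ascii(prefix_bytes: u32) -> u64 {
    // Each ASCII '0' byte fixes 8 bits of the hash output: ~256x per byte.
    256u64.pow(prefix_bytes)
}

fn expected_attempts_bits(bits: u32) -> u64 {
    // Each leading zero bit halves the success probability: ~2x per bit.
    2u64.pow(bits)
}

fn main() {
    assert_eq!(expected_attempts_ascii(2), 65_536); // "00" prefix
    assert_eq!(expected_attempts_bits(12), 4_096);
    // A 2-byte ASCII prefix costs as much as 16 leading zero bits:
    assert_eq!(expected_attempts_ascii(2), expected_attempts_bits(16));
}
```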
## Examples

### ASCII '0' prefix (default)
```rust
use rspow::{PoW, Algorithm::*}; // import paths abbreviated; see docs.rs/rspow

let data = "hello world";
let difficulty = 2; // requires prefix "00"
let algorithm = Sha2_512;
let pow = PoW::new(data, difficulty, algorithm).unwrap();
let target = pow.calculate_target(); // [0x30; difficulty]
let (hash, _nonce) = pow.calculate_pow();
assert!(hash.starts_with(&target));
assert_eq!(target, vec![0x30; difficulty]);
```
### Leading zero bits (fine-grained)
```rust
use rspow::{DifficultyMode, PoW, Algorithm::*};

let data = "hello world";
let bits = 12; // expected attempts ~ 2^12 = 4096
let algorithm = Sha2_256;
let pow = PoW::with_mode(data, bits, algorithm, DifficultyMode::LeadingZeroBits).unwrap();
let (hash, _nonce) = pow.calculate_pow(); // target is ignored in bits mode
assert_eq!(hash[0], 0); // first 8 of the 12 zero bits
assert!(hash[1] < 0x10); // remaining 4 zero bits at the top of the second byte
```
### EquiX (Tor Equi‑X puzzle)
```rust
use rspow::{DifficultyMode, PoW, Algorithm::*};

let data = b"hello world";
let bits = 1; // expected attempts ≈ 2^bits; the EquiX solver may yield 0+ solutions per challenge
let pow = PoW::with_mode(data, bits, EquiX, DifficultyMode::LeadingZeroBits).unwrap();
// For demonstrations/tests you can also use bits = 0 to avoid long loops
let (hash, _nonce) = pow.calculate_pow();
assert!(hash[0] < 0x80); // at least 1 leading zero bit in sha256(solution-bytes)
```
### EquiX proof-carrying API (O(1) verification)
For production use, prefer a proof that carries the EquiX solution bytes so the server verifies in constant time without solving:
```rust
use rspow::{equix_check_bits, equix_solve_with_bits, equix_verify_solution};
use sha2::{Digest, Sha256};

// Derive a domain-separated seed once per request
let server_nonce = b"signed-token"; // signed & time-limited by the server
let data = b"payload";
let mut h = Sha256::new();
h.update(b"rspow:equix:v1|");
h.update(&(server_nonce.len() as u64).to_le_bytes());
h.update(server_nonce);
h.update(&(data.len() as u64).to_le_bytes());
h.update(data);
let seed: [u8; 32] = h.finalize().into();

// Client: search by varying work_nonce
let bits: u32 = 8;
let (proof, hash) = equix_solve_with_bits(&seed, bits, /* start work_nonce */ 0)?;

// Server: verify in O(1)
let vhash = equix_verify_solution(&seed, proof.work_nonce, &proof.solution_bytes)?;
assert_eq!(vhash, hash);
assert!(equix_check_bits(&vhash, bits));
```
Notes:
- Recommended seed: `SHA256("rspow:equix:v1|" || encode(server_nonce) || encode(data))`. Then build `challenge = seed || LE(work_nonce)` per attempt.
- Submit `{ server_nonce, work_nonce, solution_bytes }`. The server rebuilds `seed` and verifies via `equix_verify_solution` and `equix_check_bits`.
- To increase pressure under attack, require `m` independent proofs with distinct `work_nonce` values; each proof still verifies in O(1).
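The per-attempt challenge layout from the notes above can be sketched as follows (an illustration of `seed || LE(work_nonce)`, not crate code):

```rust
// challenge = seed || LE(work_nonce), rebuilt for every attempted work_nonce.
fn challenge(seed: &[u8; 32], work_nonce: u64) -> Vec<u8> {
    let mut c = Vec::with_capacity(32 + 8);
    c.extend_from_slice(seed);
    c.extend_from_slice(&work_nonce.to_le_bytes());
    c
}

fn main() {
    let seed = [0x11u8; 32];
    let c = challenge(&seed, 7);
    assert_eq!(c.len(), 40);
    assert_eq!(&c[..32], &seed);
    assert_eq!(&c[32..], &7u64.to_le_bytes());
}
```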
### EquiX proof bundles (batch verify + storage‑efficient anti‑replay)
For multiple concurrent proofs, bundle them so the server verifies all in O(1) per proof while storing only a single anti‑replay key:
```rust
use rspow::{equix_solve_parallel_hits, EquixProof, EquixProofBundle};

let (bits, hits, threads) = (1u32, 4usize, 8usize);
// Client derives the seed once, domain-separated as in the proof-carrying
// example above (`derive_seed` stands in for that SHA-256 construction).
let seed: [u8; 32] = derive_seed(b"sn", b"hello");
// Client solves in parallel until `hits` proofs are found
let results = equix_solve_parallel_hits(&seed, bits, hits, threads)?; // Vec<(EquixProof, [u8; 32])>
// Base tag from the first proof (the server stores only this); the rest are derived
let (_, first_hash) = &results[0];
let base_tag: [u8; 32] = *first_hash;
let proofs: Vec<EquixProof> = results.into_iter().map(|(p, _)| p).collect();
let bundle = EquixProofBundle { proofs, base_tag }; // field names per docs.rs/rspow
// Server: re-derive seed, verify all proofs in O(1) each, and derive follow-up
// tags to avoid storing multiple anti-replay keys
let oks = bundle.verify_all(&seed, bits)?; // all true
let derived = bundle.derived_tags(); // tag[1..]
```
Example:

```bash
cargo run --release --example equix_bundle_demo -- --data hello --server-nonce sn --bits 1 --hits 4 --threads 8
```
## Examples and Benchmarks
- Proof-carrying EquiX demo:

  ```bash
  cargo run --release --example equix_proof_demo -- --data hello --server-nonce sn --bits 1 --start 0
  ```

- General PoW benchmark (select algorithm and mode):

  ```bash
  # Default repeats is 300 to reduce measurement noise.
  cargo run --release --example pow_bench -- --algo sha2_256 --mode bits --difficulty 12 --data hello
  cargo run --release --example pow_bench -- --algo scrypt --mode ascii --difficulty 2 --scrypt-logn 10 --scrypt-r 8 --scrypt-p 1
  cargo run --release --example pow_bench -- --algo argon2id --mode bits --difficulty 8 --argon2-m-kib 65536 --argon2-t 3 --argon2-p 1
  cargo run --release --example pow_bench -- --algo equix --mode bits --difficulty 1 --server-nonce sn --start-work-nonce 0
  ```

- CSV columns:
  - Per-run rows: `kind,algo,mode,difficulty,data_len,run_idx,time_ms,tries,nonce_or_work,hash_hex`. For EquiX, `tries` equals the number of challenges (`work_nonce` values) attempted per found solution; this directly measures "attempts per solution".
  - Summary row (appended at the end with its own header): `kind,algo,mode,difficulty,data_len,mean_time_ms,std_time_ms,stderr_time_ms,ci95_low_time_ms,ci95_high_time_ms,mean_tries,std_tries,stderr_tries,ci95_low_tries,ci95_high_tries`.
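The summary columns follow the usual sample statistics. A sketch of how they relate (assuming a normal-approximation 95% interval; the examples' exact estimator may differ):

```rust
// mean, sample std, standard error, and 95% CI bounds for a set of runs.
fn summary(samples: &[f64]) -> (f64, f64, f64, f64, f64) {
    let n = samples.len() as f64;
    let mean = samples.iter().sum::<f64>() / n;
    // Sample variance (Bessel's correction), then std and standard error.
    let var = samples.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / (n - 1.0);
    let std = var.sqrt();
    let stderr = std / n.sqrt();
    (mean, std, stderr, mean - 1.96 * stderr, mean + 1.96 * stderr)
}

fn main() {
    let time_ms = [10.0, 12.0, 11.0, 13.0, 9.0];
    let (mean, _std, _stderr, ci_lo, ci_hi) = summary(&time_ms);
    assert!((mean - 11.0).abs() < 1e-9);
    assert!(ci_lo < mean && mean < ci_hi);
}
```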
### Argon2id with custom parameters
```rust
use rspow::{Argon2Params, DifficultyMode, PoW, Algorithm::*};

let data = b"hello world";
// Example parameters only; tune for your threat model.
let params = Argon2Params::new(65536, 3, 1).unwrap(); // m_kib, t_cost, p_cost
let algorithm = Argon2id(params);
// Prefer LeadingZeroBits for smoother tuning with memory-hard functions
let bits = 8; // expected attempts ~ 256
let pow = PoW::with_mode(data, bits, algorithm, DifficultyMode::LeadingZeroBits).unwrap();
let (hash, _nonce) = pow.calculate_pow();
assert_eq!(hash[0], 0); // at least 8 leading zero bits
```
## Benchmarking

### CLI benchmark (Argon2id, leading zero bits)
The crate ships an example that measures Argon2id proof-of-work time across bit difficulties with configurable parameters.
```bash
cargo run --release --example bench_argon2_leading_bits -- \
  --start-bits 1 --max-bits 12 --repeats 5 \
  --m-mib 128 --t-cost 3 --p-cost 1 \
  --data "hello world"
```
- Results stream as CSV: each run emits a `run` row immediately, followed by a `summary` row per difficulty.
- `--random-start=true` (default) draws a random starting nonce for every repetition so that tries follow the expected geometric distribution. Disable with `--random-start false` if you only want runtime variation. `--seed <u64>` fixes the random sequence for reproducibility.
- Additional options: `--m-kib`, `--repeats`, `--start-bits`, `--max-bits`, `--data`. Run with `--help` for the full list.
## WASM build & browser demo
Use the helper script to drive formatting/tests, build the wasm bundle, and (optionally) launch a local server:
```bash
./scripts/wasm_pipeline.sh --offline --serve --port 8080
```
Flags:
- `--offline` keeps Cargo/wasm-pack from hitting the network (`CARGO_NET_OFFLINE=1`).
- `--dev` switches to the debug profile (default is release).
- `--skip-test` skips `cargo test`.
- `--serve` launches `python3 -m http.server` inside `wasm-demo/www`.
After the script completes, open http://127.0.0.1:8080 (or your chosen port). The browser UI lets you configure start/max bits, repeats, Argon2 parameters, and whether to randomize the nonce. Results append to the textarea as CSV and include mean, standard deviation, standard error, plus 95% and 99% confidence intervals for both time (ms) and tries.
## KPoW (k-of-puzzles) — concurrent PoW with predictable wall time
KPoW lets you solve `k` independent puzzles concurrently with a worker pool of size `workers` (alpha), collecting the first `k` successes. This keeps verification cheap (≈ one Argon2 per proof) while improving wall-time predictability (the relative spread of solve time shrinks roughly as 1/√k) and utilizing multiple cores.
- Library API
```rust
use rspow::kpow::KPoW;
use rspow::Argon2Params;

let bits = 5; // compute/verify ratio ≈ 2^bits = 32x
let params = Argon2Params::new(65536, 3, 1)?; // 64 MiB, t=3, p=1
let workers = 4;
let k = 8;
let seed = [0u8; 32]; // derive a domain-separated seed per request
let payload = b"ctx".to_vec();
let kpow = KPoW::new(bits, params, workers); // argument order illustrative; see docs.rs/rspow

// Production: compute k proofs (no timing/tries overhead)
let proofs = kpow.solve_proofs(&seed, &payload, k)?;
assert_eq!(proofs.len(), k);

// Benchmarking: compute proofs and get total stats
let (proofs, stats) = kpow.solve_proofs_with_stats(&seed, &payload, k)?;
println!("{stats:?}");
```
- Demo example

  ```bash
  cargo run --release --example kpow_demo
  # Environment overrides (optional):
  #   KPOW_WORKERS=<usize>  number of worker threads (default 4)
  #   KPOW_K=<usize>        number of proofs to collect (default 8)
  ```
- KPoW benchmark example (CSV streaming + summary)

  ```bash
  cargo run --release --example kpow_bench_argon2_leading_bits -- \
    --bits 5 --k 8 --workers 4 --repeats 10 \
    --m-mib 64 --t-cost 3 --p-cost 1 --payload demo | tee kpow_64mib.csv
  ```
Notes:
- The compute/verify ratio is governed by `bits`: ≈ `2^bits`, independent of the Argon2 params. With `bits = 5`, the ratio is ≈ 32x.
- Wall-time predictability improves with `k` (relative spread roughly ∝ 1/√k). Verification cost grows linearly with `k` (≈ `k` Argon2 runs).
- `m_kib`/`t_cost`/`p_cost` decide the per-hash cost `c`; larger memory or `t_cost` increases `c` roughly linearly.
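A back-of-the-envelope model of these ratios (illustrative assumptions only: `c` stands for the measured single Argon2 hash time, which depends on `m_kib`/`t_cost`/`p_cost`):

```rust
// Per-proof cost model for KPoW (sketch, not crate API).
fn solve_cost_per_proof(c: f64, bits: u32) -> f64 {
    // Expected ~2^bits hashes to find one proof.
    c * 2f64.powi(bits as i32)
}

fn verify_cost(c: f64, k: usize) -> f64 {
    // One Argon2 run per proof, independent of difficulty.
    c * k as f64
}

fn main() {
    let c = 0.1; // assumed: 100 ms per Argon2 hash
    let (bits, k) = (5, 8);
    let ratio = solve_cost_per_proof(c, bits) / (verify_cost(c, k) / k as f64);
    assert_eq!(ratio as u32, 32); // compute/verify ratio = 2^bits
    assert!((verify_cost(c, k) - 0.8).abs() < 1e-9); // verification grows linearly with k
}
```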
### WASM (browser) threading quick note
To use KPoW with true threads in the browser (`std::thread` over Web Workers):
- Build with target features `+atomics,+bulk-memory,+mutable-globals` for `wasm32-unknown-unknown`.
- Serve pages under cross-origin isolation (`COOP: same-origin`, `COEP: require-corp`) so `SharedArrayBuffer` is enabled.
- This crate enforces threaded WASM by default; the single-thread fallback on wasm32 is only allowed if you explicitly build with `--cfg kpow_allow_single_thread`.
## Tuning Guidance
- `LeadingZeroBits`: each additional bit doubles expected attempts; choose `bits` to match your time budget.
- `AsciiZeroPrefix`: each additional byte multiplies attempts by ~256; easy to explain but coarse.
- Memory-hard algorithms (e.g., Argon2id, Scrypt) may make multi-byte ASCII prefix targets impractical; prefer `LeadingZeroBits`.
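To turn this guidance into a concrete choice of `bits`, one can search for the largest difficulty whose expected solve time fits a budget (a sketch; the `per_hash_ms` values are assumptions, measure with `pow_bench` for real numbers):

```rust
// Largest `bits` whose expected solve time (per_hash_ms * 2^bits) fits the budget.
fn max_bits_for_budget(per_hash_ms: f64, budget_ms: f64) -> u32 {
    let mut bits = 0u32;
    while per_hash_ms * 2f64.powi((bits + 1) as i32) <= budget_ms {
        bits += 1;
    }
    bits
}

fn main() {
    // Fast hash (assumed ~0.001 ms for SHA-256) with a 10 ms budget:
    assert_eq!(max_bits_for_budget(0.001, 10.0), 13);
    // Memory-hard hash (assumed ~100 ms for Argon2id) with a 10 s budget:
    assert_eq!(max_bits_for_budget(100.0, 10_000.0), 6);
}
```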
## Compatibility
- Existing code using `PoW::new` and `calculate_target()` keeps the legacy behavior by default.
- New code is encouraged to adopt `DifficultyMode::LeadingZeroBits` for precise difficulty control.
## Parallel client PoW (latency/throughput trade-off)
The `parallel_bench` example explores per-device scale-up by varying `--threads` (default `nproc - 1`) and measuring both time-to-first-hit and time-to-H-hits:
```bash
cargo run --release --example parallel_bench -- --algo equix --mode bits --difficulty 1 --hits 8 --threads 8
cargo run --release --example parallel_bench -- --algo equix --mode bits --difficulty 1 --hits 8 --threads-list 1,2,4,8
cargo run --release --example parallel_bench -- --algo sha2_256 --mode bits --difficulty 12 --hits 16
```
Output shows `first_time_ms`, `total_time_ms`, and `throughput_hits_per_s`. Increasing parallelism often raises the latency of a single task slightly (contention, scheduling) yet raises total throughput substantially.
CSV layout (stdout; use `| tee file.csv` to save):
- Per-run rows (one per repeat per `threads` value):
  - Header: `kind,algo,mode,bits_or_len,hits,threads,repeat_idx,first_time_ms,total_time_ms,throughput_hits_per_s`.
  - Semantics:
    - `first_time_ms`: time to the first successful proof with the given parallelism.
    - `total_time_ms`: wall time to collect `hits` proofs.
    - `throughput_hits_per_s`: `hits / (total_time_ms / 1000)`.
- Per-threads summary row (one per `threads` value, preceded by its own header):
  - Header: `kind,algo,mode,bits_or_len,hits,threads,mean_first_ms,std_first_ms,stderr_first_ms,ci95_low_first_ms,ci95_high_first_ms,mean_total_ms,std_total_ms,stderr_total_ms,ci95_low_total_ms,ci95_high_total_ms,mean_throughput,std_throughput,stderr_throughput,ci95_low_throughput,ci95_high_throughput`.
  - These statistics are computed over `--repeats` samples at the same `threads` value.
Tips for more stable measurements:
- Use `--repeats 5` (or higher) for each `threads` value.
- Keep system thermals and CPU frequency scaling steady; avoid heavy background load.
- Expect some increase in `first_time_ms` as `threads` grows, while `throughput_hits_per_s` typically improves.