# coren

Compute and resource normalization.

Measures what your machine can do. Built for irohds, to tell you whether a computation is faster to run locally or to fetch from the network.
Any two machines looking at the same function can independently agree on how much work it requires (an almost deterministic op count). Each machine knows its own capabilities (benchmarked once at startup). The verdict is local arithmetic: compare estimated compute time against estimated fetch time.
coren was originally built for integration into irohds, a decentralized memoization system for scientific computing, but it also works as a standalone tool for roofline analysis, ETL buffer sizing, and build-parallelism decisions.
## Install
Or from source:
## Usage
```python
from coren import FnCost, MachCap

# Describe the work (deterministic, same on every machine)
cost = FnCost.sort(1_000_000, item_bytes=8, result_bytes=8_000_000)

# Measure this machine (benchmarks run once, cached)
cap = MachCap.read()

# Get the answer
print(cap.verdict(cost))
# "compute (saves 0.712s)" or "fetch (saves 2.3s)"
```
## How it works
Two layers:
`FnCost` describes a function's resource requirements in absolute physical units. Four integers: `ops` (total arithmetic operations), `mem_bytes` (memory traffic), `peak_mem` (peak RAM footprint), `result_bytes` (output size). These are properties of the algorithm and its inputs, bitwise identical on every machine.

`MachCap` describes what this machine can do. Measured via micro-benchmarks (FMA throughput, STREAM triad, sequential disk I/O) and OS queries (NIC link speed, battery state, core count, RAM). Produces a roofline model: peak ops/s and memory bandwidth.

The verdict compares estimated compute time (from the roofline model) against estimated fetch time (`result_bytes` / NIC bandwidth). The score is the difference in seconds: positive means fetch is faster, negative means compute is faster. Infinity means one option is impossible (not enough RAM, or no network).
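The verdict arithmetic fits in a few lines. A minimal sketch, assuming `peak_ops_s`, `mem_bw`, and `nic_bw` stand in for the benchmarked MachCap numbers (the function name and signature here are hypothetical, not coren's API):

```python
def verdict_score(ops, mem_bytes, result_bytes, peak_ops_s, mem_bw, nic_bw):
    # Roofline estimate: compute time is bounded by whichever of
    # arithmetic throughput or memory bandwidth saturates first.
    compute_s = max(ops / peak_ops_s, mem_bytes / mem_bw)
    fetch_s = result_bytes / nic_bw
    # Positive: fetching saves that many seconds. Negative: compute locally.
    return compute_s - fetch_s
```

For example, a 2 GFLOP job on a 100 GFLOP/s machine takes about 0.02 s to compute, while an 8 MB result over a ~125 MB/s link takes about 0.064 s to fetch, so the score is negative and the verdict is "compute".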
## FnCost constructors

| Constructor | Models |
| --- | --- |
| `FnCost.new(ops, mem_bytes, peak_mem, result_bytes)` | raw values |
| `FnCost.scan(n_bytes, result_bytes)` | linear scan |
| `FnCost.sort(n, item_bytes, result_bytes)` | merge sort |
| `FnCost.hash(n_bytes)` | crypto hash |
| `FnCost.matmul(m, n, k, result_bytes)` | dense GEMM |
| `FnCost.etl(rows, row_bytes, ops_per_row, result_bytes)` | row processing |
| `FnCost.copy(size)` | file copy (`ops = 0`) |
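To illustrate what a constructor derives (a sketch of the standard GEMM accounting, not coren's actual formulas; `dtype_bytes` is an assumed parameter), `matmul` might compute its four integers like this:

```python
def matmul_cost(m, n, k, result_bytes, dtype_bytes=8):
    # Dense GEMM performs one multiply and one add per (i, j, l) triple.
    ops = 2 * m * n * k
    # Traffic lower bound: read A (m*k) and B (k*n), write C (m*n) once.
    mem_bytes = dtype_bytes * (m * k + k * n + m * n)
    peak_mem = mem_bytes  # all three matrices resident at once
    return dict(ops=ops, mem_bytes=mem_bytes,
                peak_mem=peak_mem, result_bytes=result_bytes)
```

Because the fields depend only on the shape parameters, every machine computing this cost for the same call gets bitwise-identical integers.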
## Combinators

- `then` (also available as `+`): sequential composition; `ops` sum, `peak_mem` is the max, result is b's output
- parallel: `ops` is the max, `peak_mem` sums, results sum
- repeat: k iterations; `peak_mem` unchanged
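The combinator rules amount to field-wise arithmetic. A sketch with plain dicts standing in for FnCost (the function names are illustrative, and the treatment of `mem_bytes` and of `result_bytes` under repetition is an assumption, not stated by coren):

```python
def seq(a, b):
    # Sequential: totals add, peak footprint is the larger phase,
    # and the pipeline's output is b's output.
    return dict(ops=a["ops"] + b["ops"],
                mem_bytes=a["mem_bytes"] + b["mem_bytes"],
                peak_mem=max(a["peak_mem"], b["peak_mem"]),
                result_bytes=b["result_bytes"])

def par(a, b):
    # Parallel: the slower branch dominates the op count,
    # but footprints and outputs coexist, so they sum.
    return dict(ops=max(a["ops"], b["ops"]),
                mem_bytes=a["mem_bytes"] + b["mem_bytes"],
                peak_mem=a["peak_mem"] + b["peak_mem"],
                result_bytes=a["result_bytes"] + b["result_bytes"])

def repeat(a, k):
    # k iterations: work scales, footprint does not.
    return dict(ops=a["ops"] * k,
                mem_bytes=a["mem_bytes"] * k,
                peak_mem=a["peak_mem"],
                result_bytes=a["result_bytes"])
```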
## Normalizing wall-clock measurements

When a function has already executed and you only know its wall-clock time (not its algorithmic complexity), `MachCap.normalize()` converts the measurement into a `FnCost` suitable for local verdict computation.

**WARNING:** `normalize()` output is NOT deterministic across machines. Different machines produce different `ops`/`mem_bytes` values for the same function. Do NOT use a `normalize()`-produced `FnCost` as a cache key, content address, or any identifier that must match across peers. For cache keys, use the static constructors (`sort`, `hash`, `matmul`, etc.) or `FnCost.new()` with values derived from the function's definition and parameters.
```python
cost = cap.normalize(elapsed_s)
# cost is a safe overestimate, suitable for verdict() but NOT for cache keys
```
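Why the result is a safe overestimate: with only wall-clock time available, the conservative assumption is that the function ran at this machine's peak on every axis for its whole duration. A sketch of that reasoning (a hypothetical free function with assumed parameters, not coren's actual implementation):

```python
def normalize(elapsed_s, result_bytes, peak_ops_s, mem_bw):
    # Assume the run was simultaneously at peak arithmetic throughput AND
    # peak memory bandwidth: this can only overstate the true work done,
    # never understate it.
    ops = int(elapsed_s * peak_ops_s)
    mem_bytes = int(elapsed_s * mem_bw)
    # Peak footprint is unknowable from wall clock alone; total traffic
    # serves as a crude upper bound here.
    return dict(ops=ops, mem_bytes=mem_bytes,
                peak_mem=mem_bytes, result_bytes=result_bytes)
```

Since `peak_ops_s` and `mem_bw` come from this machine's benchmarks, two machines normalizing the same run disagree on every field, which is exactly why these values must never cross the network as identifiers.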
## CLI
## Rust
```rust
use coren::{FnCost, MachCap};

let cost = FnCost::sort(1_000_000, 8, 8_000_000);
let cap = MachCap::read();
let v = cap.verdict(&cost);
if v.should_fetch { /* ask the network */ } else { /* run locally */ }
```
## License
MIT OR Apache-2.0