
About
A command-line wrapper for repeated runs, with metadata and lightweight tracking. Wraps any command, captures stdout/stderr with timestamps, writes compressed gzip logs, and optionally stores results in MongoDB.
Why?
During development, a full interface to HPC-oriented workflow engines like
AiiDA, FireWorks, Jobflow, and the like is typically too heavy, and, more
importantly, their APIs are often unstable. bless provides a minimal layer that
records what was run, when, and what it produced, without imposing any workflow
structure. It can also be used alongside pychum and workflow runners like
Snakemake to store metadata more generically.
Design
bless uses tokio::process for async subprocess execution with concurrent
stdout/stderr streaming. The wrapped command's exit code is passed through via
ExitCode, so scripts and CI can inspect the real status. All errors are
represented as BlessError (via thiserror), covering I/O, MongoDB, logger
init, and command failure variants.
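The exit-code passthrough can be sanity-checked from the shell. This is a sketch, assuming bless is installed and on PATH, and that a `--` separator divides bless options from the wrapped command:

```shell
bless -- false   # wrapped command exits non-zero
echo $?          # prints the wrapped command's exit code, not bless's
```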
Log levels
- `TRACE` -- additional metadata written by bless (label, uuid, duration)
- `INFO` -- stdout of the wrapped command
- `WARN` -- stderr of the wrapped command
- `ERROR` -- bless-level errors (command failure, I/O errors)
Output formats
The --format flag controls stdout rendering:
- `log` (default) -- `[timestamp LEVEL] message`
- `jsonl` -- one JSON object per line with `ts`, `level`, `msg` fields
The gzip file always uses the timestamped log format regardless of --format.
Installation
From source:

```shell
cargo build --release
# Binary at ./target/release/bless
```
Or install directly:
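A sketch of a direct install, assuming installation from a local checkout of the repository:

```shell
# From a cloned working copy (path is an assumption)
cargo install --path .
```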
To include serve mode (capnp RPC log aggregation):
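Under the same local-checkout assumption, the feature flag is passed to cargo:

```shell
cargo install --path . --features serve
```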
Usage
Basic usage, wrapping a build command:
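A sketch of the invocation; the `--label` flag name and the `--` separator before the wrapped command are assumptions:

```shell
bless --label myproject -- make all
```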
This creates myproject_{uuid}.log.gz with the full captured output.
Suppress timestamps on stdout (gzip file still has them):
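A sketch, assuming a hypothetical `--no-timestamps` flag:

```shell
# --no-timestamps is a hypothetical flag name
bless --no-timestamps -- make all
```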
JSONL output for structured log processing:
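The `--format` flag is documented above; the rest of the invocation is a sketch:

```shell
bless --format jsonl -- make all
```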
Custom output path:
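A sketch, assuming a hypothetical `--output` flag:

```shell
# --output is a hypothetical flag name
bless --output logs/build.log.gz -- make all
```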
Stdout only (no gzip file):
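A sketch, assuming a hypothetical `--no-file` flag:

```shell
# --no-file is a hypothetical flag name
bless --no-file -- make all
```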
Split stdout and stderr into separate gzip files (the `--split` flag name below is an assumption):

```shell
bless --split --label run -- make all
# Produces run_{uuid}_stdout.log.gz and run_{uuid}_stderr.log.gz
```
Serve mode (feature-gated)
When built with --features serve, two additional flags are available.
Start a log aggregation server (capnp RPC):
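A sketch; the `serve` subcommand shape and the `--listen` flag are assumptions:

```shell
bless serve --listen 127.0.0.1:7070
```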
Stream logs from a run to a remote bless server:
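A sketch, assuming a hypothetical `--remote` flag taking the server address:

```shell
# --remote is a hypothetical flag name
bless --remote 127.0.0.1:7070 -- make all
```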
Add --local to also write a local gzip alongside remote streaming:
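`--local` is documented above; the `--remote` flag name is an assumption:

```shell
bless --remote 127.0.0.1:7070 --local -- make all
```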
Session data is stored under $XDG_DATA_HOME/bless/sessions/ (or
$HOME/.local/share/bless/sessions/).
MongoDB
Store run output and metadata in MongoDB by pointing `MONGODB_URI` at a running instance (the wrapped command below is a placeholder):

```shell
MONGODB_URI="mongodb://localhost:27017/" bless -- make all
```
The gzip blob, command args, label, uuid, timestamps, and duration are saved to
the commands collection in the local database.
Assuming pixi is used to get an instance of mongod (the exact mongod arguments are an assumption):

```shell
pixi exec mongod --dbpath ./db &
MONGODB_URI="mongodb://localhost:27017/" bless -- make all
```
Inspect results with npx mongosh (the queries below are a sketch against the `commands` collection in the `local` database):

```shell
# Show all entries
npx mongosh "mongodb://localhost:27017/local" --eval 'db.commands.find()'
# Drop all entries
npx mongosh "mongodb://localhost:27017/local" --eval 'db.commands.drop()'
```
Extracting run output
Since the gzip log is stored as binary data keyed to each entry, a small helper script is provided:
Documentation
The docs site can be built with:
License
MIT. However, this is an academic resource, so please cite it wherever possible via:
- The Zenodo DOI for general use.
- TBD: a publication.