AI Provenance Definition
A Provenance records how a Run was executed:
which LLM provider, model, and parameters were used, and how many
tokens were consumed. It is the “lab notebook” for AI execution —
capturing the exact configuration so results can be reproduced,
compared, and accounted for.
§Position in Lifecycle
Run ──(1:1)──▶ Provenance
 │
 ├── patchsets ──▶ [PatchSet₀, ...]
 ├── evidence  ──▶ [Evidence₀, ...]
 └── decision  ──▶ Decision

A Provenance is created once per Run, typically at run start
when the orchestrator selects the model and provider. Token usage
(token_usage) is populated after the Run completes. The
Provenance is a sibling of PatchSet, Evidence, and Decision —
all attached to the same Run but serving different purposes.
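The two-phase lifecycle above (created at run start, usage filled in after completion) can be sketched as follows. All type and field names here are illustrative stand-ins, not this crate's actual API:

```rust
// Hypothetical sketch of the Run/Provenance relationship; the sibling
// collections are simplified to strings for illustration.
#[derive(Debug, Default)]
struct TokenUsage {
    input_tokens: u64,
    output_tokens: u64,
    cost_usd: f64,
}

#[derive(Debug, Default)]
struct Provenance {
    provider: String,
    model: String,
    token_usage: Option<TokenUsage>, // None until the Run completes
}

#[derive(Debug, Default)]
struct Run {
    provenance: Provenance,   // 1:1, created at run start
    patchsets: Vec<String>,   // stand-in for [PatchSet₀, ...]
    evidence: Vec<String>,    // stand-in for [Evidence₀, ...]
    decision: Option<String>, // stand-in for Decision
}

/// The orchestrator selects provider and model up front;
/// token usage is not yet known at this point.
fn start_run(provider: &str, model: &str) -> Run {
    Run {
        provenance: Provenance {
            provider: provider.to_string(),
            model: model.to_string(),
            token_usage: None,
        },
        ..Default::default()
    }
}

/// After the Run completes, usage is recorded on its Provenance.
fn record_usage(run: &mut Run, usage: TokenUsage) {
    run.provenance.token_usage = Some(usage);
}
```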
§Purpose
- Reproducibility: Given the same model, parameters, and
  ContextSnapshot, the agent should produce equivalent results.
- Cost Accounting: token_usage.cost_usd enables per-Run and
  per-Task cost tracking and budgeting.
- Optimization: Comparing Provenance across Runs of the same Task
  reveals which model/parameter combinations yield better results
  or lower cost.
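A minimal sketch of the cost-accounting use case, summing token_usage.cost_usd over a Task's Runs. The helper function and simplified type shapes are assumptions for illustration, not part of this crate:

```rust
// Simplified shapes; only the fields needed for cost accounting.
struct TokenUsage { cost_usd: f64 }
struct Provenance { token_usage: Option<TokenUsage> }

/// Total USD cost across the Provenance records of one Task's Runs.
/// Runs whose usage has not been populated yet contribute nothing.
fn task_cost_usd(provenances: &[Provenance]) -> f64 {
    provenances
        .iter()
        .filter_map(|p| p.token_usage.as_ref())
        .map(|u| u.cost_usd)
        .sum()
}
```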
Structs§
- Provenance
  LLM provider/model configuration and usage for a single Run.
- TokenUsage
  Normalized token usage across providers.
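"Normalized token usage across providers" suggests mapping each provider's differently named counters onto one shape. A sketch of that idea, where the provider-specific field names are assumptions rather than any real provider SDK:

```rust
// Illustrative normalization into a common TokenUsage shape.
#[derive(Debug, PartialEq)]
struct TokenUsage {
    input_tokens: u64,
    output_tokens: u64,
}

// One hypothetical provider reports "prompt"/"completion" counts...
fn from_prompt_completion(prompt: u64, completion: u64) -> TokenUsage {
    TokenUsage { input_tokens: prompt, output_tokens: completion }
}

// ...another reports "input"/"output"; both normalize identically,
// so Runs can be compared regardless of which provider served them.
fn from_input_output(input: u64, output: u64) -> TokenUsage {
    TokenUsage { input_tokens: input, output_tokens: output }
}
```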