AI Evidence Definition
An Evidence captures the output of a single validation or quality
assurance step — running tests, linting code, compiling the project,
etc. It is the objective data that supports (or contradicts) the
agent’s proposed changes.
§Position in Lifecycle
⑥ ToolInvocation / ⑦ PatchSet
      │                │
      │                ▼
      └──────────▶ Evidence (run_id + optional patchset_id)
                       │
                       ▼
                 ⑨ Decision (verdict justification)

Evidence is produced during a Run, typically after a PatchSet is
generated. The orchestrator runs validation tools against the
PatchSet and creates one Evidence per tool invocation. A single
PatchSet may have multiple Evidence objects (e.g. test + lint +
build). Evidence that is not tied to a specific PatchSet (e.g. a
pre-run environment check) sets patchset_id to None.
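The shape described above can be sketched as a minimal Rust snippet. This is an illustration, not the crate's actual definition: the field types, the EnvCheck variant, and the literal identifiers ("run-001", "ps-42") are assumptions; only the field names (run_id, patchset_id, summary, exit_code, report_artifacts) come from the prose.

```rust
// Hypothetical sketch of the Evidence shape; types are assumptions.
#[derive(Debug, Clone, PartialEq)]
enum EvidenceKind {
    Test,
    Lint,
    Build,
    EnvCheck, // assumed variant for pre-run environment checks
}

#[derive(Debug, Clone)]
struct Evidence {
    run_id: String,
    patchset_id: Option<String>, // None when not tied to a PatchSet
    kind: EvidenceKind,
    summary: String,
    exit_code: i32,
    report_artifacts: Vec<String>,
}

fn main() {
    // One Evidence per tool invocation against the same PatchSet.
    let test_ev = Evidence {
        run_id: "run-001".into(),
        patchset_id: Some("ps-42".into()),
        kind: EvidenceKind::Test,
        summary: "128 passed, 0 failed".into(),
        exit_code: 0,
        report_artifacts: vec!["reports/junit.xml".into()],
    };
    // A pre-run environment check is not tied to any PatchSet.
    let env_ev = Evidence {
        run_id: "run-001".into(),
        patchset_id: None,
        kind: EvidenceKind::EnvCheck,
        summary: "toolchain present".into(),
        exit_code: 0,
        report_artifacts: vec![],
    };
    assert!(test_ev.patchset_id.is_some());
    assert!(env_ev.patchset_id.is_none());
    println!("{:?} / {:?}", test_ev.kind, env_ev.kind);
}
```

Note that both Evidence objects share one run_id: a single Run can accumulate many Evidence records, each from a different tool invocation.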
§Purpose
- Validation: Proves that a PatchSet works as expected (tests pass, code compiles, lint clean).
- Feedback: Provides error messages, logs, and exit codes to the agent so it can fix issues and produce a better PatchSet.
- Decision Support: The Decision references Evidence to justify committing or rejecting changes. Reviewers can inspect Evidence to understand why a verdict was reached.
§How Libra should use this object
- Create one Evidence object per validation tool execution or report.
- Attach patchset_id when the validation targets a specific candidate diff.
- Use summary, exit_code, and report_artifacts for the durable audit record.
- Derive pass/fail dashboards and gating status in Libra; do not rewrite PatchSet or Run snapshots with validation summaries.
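The "derive gating status in Libra" guidance can be sketched as a small pure function. This is an assumption about how a gate might be computed, not Libra's actual API: the gate_passes name and the exit-code-only rule are illustrative, and the Evidence struct is reduced to the one field the check needs.

```rust
// Hypothetical gating sketch: a PatchSet's gate passes only if every
// Evidence attached to it reports exit code 0. Treating an empty
// evidence set as a failure is a deliberate (assumed) design choice,
// so an unvalidated PatchSet cannot slip through the gate.
struct Evidence {
    exit_code: i32,
}

fn gate_passes(evidence: &[Evidence]) -> bool {
    !evidence.is_empty() && evidence.iter().all(|e| e.exit_code == 0)
}

fn main() {
    // test passed, lint failed: the gate must reject the PatchSet.
    let mixed = vec![Evidence { exit_code: 0 }, Evidence { exit_code: 1 }];
    assert!(!gate_passes(&mixed));

    // every validation step succeeded: the gate passes.
    let clean = vec![Evidence { exit_code: 0 }, Evidence { exit_code: 0 }];
    assert!(gate_passes(&clean));

    println!("gate checks ok");
}
```

Because the function only reads Evidence, the underlying Run and PatchSet snapshots stay immutable, which matches the audit-record intent above.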
§Structs
- Evidence: Output of a single validation step (test, lint, build, etc.).
§Enums
- EvidenceKind: Kind of evidence.