Module evidence


§AI Evidence Definition

An Evidence captures the output of a single validation or quality assurance step — running tests, linting code, compiling the project, etc. It is the objective data that supports (or contradicts) the agent’s proposed changes.

§Position in Lifecycle

Run ──patchsets──▶ [PatchSet₀, PatchSet₁, ...]
 │                       │
 │                       ▼
 └──────────────▶ Evidence (run_id + optional patchset_id)
                      │
                      ▼
                  Decision (uses Evidence to justify verdict)

Evidence is produced during a Run, typically after a PatchSet is generated. The orchestrator runs validation tools against the PatchSet and creates one Evidence per tool invocation. A single PatchSet may have multiple Evidence objects (e.g. test + lint + build). Evidence that is not tied to a specific PatchSet (e.g. a pre-run environment check) sets patchset_id to None.
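The shape described above can be sketched as follows. This is a hedged illustration only: the field names, types, and the `is_pre_run_check` helper are assumptions for this example, not the crate's actual definitions.

```rust
// Hedged sketch — field names and types are assumptions, not the crate's
// actual definitions.
#[derive(Debug, Clone, PartialEq)]
pub enum EvidenceKind {
    Test,
    Lint,
    Build,
}

#[derive(Debug, Clone)]
pub struct Evidence {
    pub run_id: String,
    /// `None` for evidence not tied to a specific PatchSet,
    /// e.g. a pre-run environment check.
    pub patchset_id: Option<String>,
    pub kind: EvidenceKind,
    pub exit_code: i32,
    pub log: String,
}

/// Evidence without a `patchset_id` belongs to the run as a whole.
pub fn is_pre_run_check(ev: &Evidence) -> bool {
    ev.patchset_id.is_none()
}

fn main() {
    // One Evidence per tool invocation against a PatchSet:
    let test_ev = Evidence {
        run_id: "run-1".to_string(),
        patchset_id: Some("patchset-0".to_string()),
        kind: EvidenceKind::Test,
        exit_code: 0,
        log: "all tests passed".to_string(),
    };
    assert!(!is_pre_run_check(&test_ev));

    // A pre-run environment check is tied only to the run:
    let env_ev = Evidence {
        run_id: "run-1".to_string(),
        patchset_id: None,
        kind: EvidenceKind::Build,
        exit_code: 0,
        log: "toolchain ok".to_string(),
    };
    assert!(is_pre_run_check(&env_ev));
}
```

Modelling `patchset_id` as an `Option` makes the "not tied to a specific PatchSet" case explicit in the type rather than relying on a sentinel value.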

§Purpose

  • Validation: Proves that a PatchSet works as expected (tests pass, code compiles, lint clean).
  • Feedback: Provides error messages, logs, and exit codes to the agent so it can fix issues and produce a better PatchSet.
  • Decision Support: The Decision references Evidence to justify committing or rejecting changes. Reviewers can inspect Evidence to understand why a verdict was made.
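A minimal sketch of the decision-support role: a verdict justified by the exit codes of the Evidence gathered for a PatchSet. The `decide` function and the zero-exit-code convention are assumptions for illustration, not the crate's API.

```rust
// Hypothetical sketch: a Decision's verdict justified by Evidence outcomes.
// `decide` and the zero-means-success convention are illustrative assumptions.
fn decide(evidence_exit_codes: &[i32]) -> bool {
    // Commit only if every validation step (test, lint, build, ...) succeeded.
    evidence_exit_codes.iter().all(|&code| code == 0)
}

fn main() {
    assert!(decide(&[0, 0, 0]));  // test + lint + build all passed → commit
    assert!(!decide(&[0, 1, 0])); // one step failed → reject; its log and
                                  // exit code feed back to the agent
}
```

A reviewer inspecting the Evidence list sees exactly which invocation failed, which is what makes the verdict auditable.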

Structs§

Evidence
Output of a single validation step (test, lint, build, etc.).

Enums§

EvidenceKind
Kind of evidence.