
Crate llm_assisted_api_debugging_lab


Library entrypoint.

§Layering

Case + log  ->  Vec<Evidence>  ->  Diagnosis  -+->  Report (human)
                                               |->  Prompt (LLM)
                                               +->  Prompt JSON (LLM)
  1. cases loads a Case (a sanitized HTTP transaction) from a fixture file.
  2. evidence::collect_evidence normalizes the case and its matching log file into a Vec<Evidence>. The log parser (evidence::parse_log) is also exposed so the diagnose-log subcommand can run against a bare log without a JSON fixture.
  3. diagnose::diagnose is a pure function over (name, &[Evidence]) that produces a Diagnosis. There is no clock, no env, no fs, no randomness inside the rules — every snapshot test is reproducible on any machine.
  4. The renderers (render_report, render_short, render_prompt, render_prompt_json) each consume a &Diagnosis and produce user-visible output. None of them can reach back into the raw Case. This is the architectural guarantee that the LLM-facing surface cannot influence diagnostic truth.
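The shape of this layering can be sketched with stand-in types. Note the `Evidence` and `Diagnosis` structs below are simplified placeholders, not the crate's real definitions; only the signatures' spirit — a pure `diagnose` over `(name, &[Evidence])`, and renderers that see nothing but `&Diagnosis` — is taken from the description above:

```rust
// Minimal stand-ins; the real crate's types carry more fields.
#[derive(Debug, Clone, PartialEq)]
struct Evidence {
    kind: String,
    detail: String,
}

#[derive(Debug, PartialEq)]
struct Diagnosis {
    case_name: String,
    findings: Vec<String>,
}

// Pure rule: depends only on its arguments -- no clock, env, fs, or
// randomness -- so the same input always yields the same Diagnosis,
// which is what makes snapshot tests reproducible on any machine.
fn diagnose(name: &str, evidence: &[Evidence]) -> Diagnosis {
    Diagnosis {
        case_name: name.to_string(),
        findings: evidence
            .iter()
            .filter(|e| e.kind == "http_status")
            .map(|e| format!("status anomaly: {}", e.detail))
            .collect(),
    }
}

// A renderer consumes only &Diagnosis; it has no path back to the raw
// Case, so the LLM-facing surface cannot influence diagnostic truth.
fn render_short(d: &Diagnosis) -> String {
    format!("{}: {} finding(s)", d.case_name, d.findings.len())
}

fn main() {
    let ev = vec![Evidence { kind: "http_status".into(), detail: "502".into() }];
    let d = diagnose("gateway-timeout", &ev);
    println!("{}", render_short(&d)); // -> "gateway-timeout: 1 finding(s)"
    assert_eq!(diagnose("gateway-timeout", &ev), d); // deterministic
}
```

The point of the sketch is the type boundary: because `render_short` borrows a `Diagnosis` and nothing else, the compiler itself enforces the architectural guarantee.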

§Re-exports

Every public item a downstream caller needs is re-exported from the crate root, so use llm_assisted_api_debugging_lab::diagnose; works without naming the module. The modules themselves remain pub for callers who want to reach internal helpers (e.g. report::render_evidence used by the tests).
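The re-export convention can be illustrated with a toy module standing in for the real crate (the module name and function body here are invented for the sketch):

```rust
// Toy module standing in for a crate-internal module like `diagnose`.
mod diagnose {
    pub fn diagnose(name: &str) -> String {
        format!("diagnosis for {name}")
    }
}

// Re-export at the root: callers can name the item without naming the
// module, while the module path stays available for internal helpers.
// (A function and a module can share a name; they live in different
// namespaces.)
pub use diagnose::diagnose;

fn main() {
    // Both paths resolve to the same item.
    assert_eq!(diagnose("case-1"), crate::diagnose::diagnose("case-1"));
    println!("{}", diagnose("case-1"));
}
```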

§Re-exports

pub use cases::Case;
pub use cases::CaseError;
pub use cases::KNOWN_CASES;
pub use diagnose::diagnose;
pub use diagnose::Diagnosis;
pub use diagnose::Severity;
pub use diagnose::SeveritySource;
pub use evidence::collect_evidence;
pub use evidence::parse_log;
pub use evidence::Evidence;
pub use llm_prompt::render_prompt;
pub use llm_prompt::render_prompt_json;
pub use report::render_report;
pub use report::render_short;

§Modules

cases
Case fixture model and JSON loader.
diagnose
Deterministic rules engine.
evidence
Evidence model and collectors.
llm_prompt
LLM prompt template renderer.
prose
Per-rule prose loader.
report
Human-readable report renderer.