MockLlm — deterministic backend that replays recorded responses.
Given a baseline trace, MockLlm indexes every chat_response record by
the id of its parent chat_request and serves it back on demand. Because
the request id is sha256(canonical_json(payload)) (SPEC §6), a request
whose payload matches one in the baseline always hits the mock — no
fuzzy matching or fallback is attempted in strict mode.
Use this in CI, tests, and the offline demo. For running new
configurations against live providers, see future backends in
python/src/shadow/llm/.
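The replay behavior described above can be sketched as follows. This is a hypothetical simplification, not the crate's actual API: ids and responses are plain strings, and the placeholder ids stand in for the real sha256(canonical_json(payload)) digests.

```rust
use std::collections::HashMap;

/// Simplified sketch of a replay backend: an index from request id
/// to the recorded response for that request.
struct MockLlm {
    responses: HashMap<String, String>,
}

impl MockLlm {
    /// Build the index from (request id, recorded response) pairs
    /// taken from a baseline trace.
    fn from_baseline(records: Vec<(String, String)>) -> Self {
        MockLlm {
            responses: records.into_iter().collect(),
        }
    }

    /// Strict mode: an exact id match returns the recorded response;
    /// anything else is a miss, with no fuzzy fallback.
    fn replay(&self, request_id: &str) -> Option<&String> {
        self.responses.get(request_id)
    }
}

fn main() {
    // "id-1" is a placeholder; real ids are sha256 hex digests.
    let mock = MockLlm::from_baseline(vec![(
        "id-1".to_string(),
        "recorded reply".to_string(),
    )]);
    assert_eq!(mock.replay("id-1"), Some(&"recorded reply".to_string()));
    assert_eq!(mock.replay("unknown-id"), None);
    println!("ok");
}
```

Because the id is a deterministic hash of the canonical payload, a repeated request needs no matching heuristics: either the exact entry exists in the index or the lookup fails outright.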
Structs
- MockLlm
- Deterministic backend that replays recorded responses.