# Checkmate API Testing Framework and Toolkit

`checkmate-cli` 0.4.1: an API testing framework CLI.

API testing is a rich area, where the number of possible testing scenarios is
essentially infinite.

For example, our fraud prevention API takes 5 fields of order details and
returns 500 fields: counters over different timeframes, normalized data, checks
against databases, unique counts of {field 1} for the value in {field 2}, and
so on.

For our Kibana replacement platform, the API takes URL params, a request body,
or both, and returns selected fields of data.

To test them, we need to either:
- manually verify output schema and content (most reliable, but slowest)
- automate the fraud prevention API checks in heavy scripts with error margins
  (somewhat reliable, slow to develop, needs maintenance)
- accept built-in checks (these rarely address the actual output sent, are a
  "black box" from the consumer side, and are less reliable)

Here comes AI:
- capable of writing test cases and recognizing API interfaces accurately and fast
- capable of inferring the required strategies from a description, frontend
  consumer code, or the API code itself, and translating them into test cases
- capable of using those inferred strategies to decide how comprehensive a test
  needs to be (1 run for a schema check, 200 runs for count accuracy)

What's missing? A tool that lets humans and AI agents run the tests while
developing, to ensure continued compliance; a tool with a well-defined, useful,
and flexible schema for tests that can house both simple and complex "checks",
e.g. a check for consistently incrementing counts, a check for the presence of
a field, a check for a single field's value, etc.
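As a rough sketch of what such a test schema could look like (every field name
below is hypothetical, not a finalized format):

```yaml
# Illustrative test definition only; field names are assumptions.
suite: fraud-prevention
tests:
  - name: order-counters
    request:
      method: POST
      path: /v1/score
      body: { order_id: "A-1001", email: "a@example.com" }
    checks:
      - type: field_present          # simple check
        field: counters.24h.orders
      - type: field_equals           # single-field value check
        field: normalized.country
        equals: "US"
      - type: monotonic_increase     # complex, stateful check across runs
        field: counters.24h.orders
        runs: 200
```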

Altogether, I want a tool that lets you:
1) Use its CLI to test the API and generate a report / get feedback on stdout
that can be used during development, in CI/CD, in code review, etc.

2) Use its CLI to print detailed instructions to AI agents that enable them to
write API tests with full flexibility, compatible with whatever level of
complexity or simplicity the API presents

3) File-based configuration in project folders, which can be edited directly
or, once created, modified by CLI commands for simple changes like endpoints.
Modifiability via the CLI is not a hard requirement, but some safeguard against
non-version-controlled config files being edited directly would be nice. For
example, if the AI could edit the file via commands like
`checkmate config set <key.of.field>=<value>`, checkmate could maintain a
"change control" file that tracks the changes made over time, allowing
`checkmate config undo <number of changes, default 1>`. I think that's
reasonable to do.
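One way the change-control mechanism could work, sketched in Python (the file
names, config layout, and function names here are all assumptions based on the
description above, not an existing implementation):

```python
# Sketch of `checkmate config set` / `checkmate config undo` with a
# change-control journal. CONFIG and JOURNAL paths are hypothetical.
import json
from pathlib import Path

CONFIG = Path("checkmate.json")       # hypothetical project config file
JOURNAL = Path(".checkmate-changes")  # hypothetical change-control log

def config_set(key: str, value) -> None:
    """Set a dotted config key, recording the old value in the journal."""
    cfg = json.loads(CONFIG.read_text()) if CONFIG.exists() else {}
    *parents, leaf = key.split(".")   # e.g. "endpoints.fraud.url"
    node = cfg
    for p in parents:
        node = node.setdefault(p, {})
    entry = {"key": key, "old": node.get(leaf), "new": value}
    node[leaf] = value
    CONFIG.write_text(json.dumps(cfg, indent=2))
    with JOURNAL.open("a") as j:
        j.write(json.dumps(entry) + "\n")

def config_undo(n: int = 1) -> None:
    """Revert the last n recorded changes, newest first."""
    lines = JOURNAL.read_text().splitlines()
    cfg = json.loads(CONFIG.read_text())
    for raw in reversed(lines[-n:]):
        entry = json.loads(raw)
        *parents, leaf = entry["key"].split(".")
        node = cfg
        for p in parents:
            node = node.setdefault(p, {})
        if entry["old"] is None:
            node.pop(leaf, None)      # key did not exist before: remove it
        else:
            node[leaf] = entry["old"]
    remaining = lines[:-n]
    JOURNAL.write_text("\n".join(remaining) + ("\n" if remaining else ""))
    CONFIG.write_text(json.dumps(cfg, indent=2))
```

Keeping the journal append-only makes each undo a pure replay of recorded old
values, so the CLI never needs to diff the config file itself.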

4) Scriptability: to check that counts are increasing, that the number of
unique emails per name is increasing, or that some value is above a threshold,
we need some sort of scripting capability. We don't need to integrate a full
programming language, although embedding e.g. Lua may be less complex than
working out a large set of scripts and commands that compose well with each
other. Plus, the mapping between Lua tables and the API's JSON output is very
intuitive, right?
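Whatever the embedded language ends up being, the shape of a scripted check is
roughly "a function from the sequence of observed outputs to pass/fail". A
minimal sketch in Python (the check helpers and dotted-path resolver are
invented for illustration, not a real Checkmate API):

```python
# Illustrative scripted checks over JSON API output across multiple runs.

def get_path(data: dict, path: str):
    """Resolve a dotted path like 'counters.unique_emails' in a JSON object."""
    for part in path.split("."):
        data = data[part]
    return data

def monotonic_increase(path: str):
    """Check that a counter field never decreases across successive runs."""
    def check(runs: list[dict]) -> bool:
        values = [get_path(r, path) for r in runs]
        return all(a <= b for a, b in zip(values, values[1:]))
    return check

def above_threshold(path: str, threshold: float):
    """Check that the field's value in the latest run exceeds a threshold."""
    def check(runs: list[dict]) -> bool:
        return get_path(runs[-1], path) > threshold
    return check

# Example: three successive (mock) responses from the fraud API
runs = [
    {"counters": {"unique_emails": 3}, "score": 0.91},
    {"counters": {"unique_emails": 4}, "score": 0.93},
    {"counters": {"unique_emails": 6}, "score": 0.95},
]
assert monotonic_increase("counters.unique_emails")(runs)
assert above_threshold("score", 0.9)(runs)
```

A Lua version would look nearly identical, with the JSON output exposed as a
table and each check returning a boolean.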

5) Built-in report generation for AIs, CI/CD pipelines, and humans, including a
structured version of the report that tracks changes observed in outputs,
endpoint names, etc.
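The structured "changes observed" part of such a report could be as simple as a
diff between flattened response schemas from two runs. A Python sketch (the
function names and report fields are assumptions, not a defined format):

```python
# Sketch of structured change tracking between two runs of an endpoint.

def schema_of(value, prefix: str = "") -> dict:
    """Flatten a JSON response into {dotted_path: type_name} for diffing."""
    if isinstance(value, dict):
        out = {}
        for k, v in value.items():
            out.update(schema_of(v, f"{prefix}{k}."))
        return out
    return {prefix.rstrip("."): type(value).__name__}

def diff_report(previous: dict, current: dict) -> dict:
    """Structured report of output changes between two runs."""
    old, new = schema_of(previous), schema_of(current)
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "type_changed": sorted(k for k in set(old) & set(new)
                               if old[k] != new[k]),
    }

prev = {"score": 0.91, "counters": {"orders": 3}}
curr = {"score": "0.93", "counters": {"orders": 4, "emails": 2}}
report = diff_report(prev, curr)
# report["added"] == ["counters.emails"]; report["type_changed"] == ["score"]
```

The same dict can be rendered as prose for humans or emitted as JSON for AI
agents and CI/CD pipelines.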

6) Any other features that you, as an AI, would consider useful for your
workflow of working out API tests.