Crate deqp_runner


deqp_runner consists of this library crate, which supports parallel testing of various test suites (dEQP, piglit, gtest, IGT GPU Tools) using baseline expectations plus known flakes, and of several binary commands that invoke this library.

Adding a new test suite should be a matter of:

  • Add a parse_testsuite module for parsing the output of your test suite and turning it into a TestStatus.
  • Add a testsuite_command module implementing TestCommand for invoking your test suite.
  • Add at least one of:
    • bin/testsuite.rs binary to call parallel_test on your TestCommand.
    • bin/deqp.rs support to read a description of how to run your testsuite from a suite.toml file and generate test groups for it along with the other testsuites in SuiteConfig. This is useful if a user wants to run your testsuite along with others all at once, with one summary at the end.
  • Probably:
    • Add a mock subcommand in your bin/testsuite.rs that pretends to be your testsuite binary (for integration testing without having the testsuite installed), covering whatever interesting behaviors of your testsuite you might want to test (timeouts, crashes, flakiness).
    • Add a tests/integration/testsuite_runner.rs module that tests calling your bin/testsuite.rs command, passing in testsuite.rs’s built binary path as the testsuite to invoke with the mock subcommand.
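The first step above, a parse_testsuite module, boils down to turning one chunk of suite output into a test name plus status. The sketch below is purely illustrative: the stand-in TestStatus, the output line format, and parse_mysuite_line are all hypothetical, not this crate’s actual types or signatures.

```rust
/// Stand-in for deqp_runner::TestStatus (illustrative subset only).
#[derive(Debug, PartialEq)]
enum TestStatus {
    Pass,
    Fail,
}

/// Hypothetical parser for a suite whose output lines look like
/// "TEST my.test.name: PASS" -- the shape of a parse_testsuite module.
fn parse_mysuite_line(line: &str) -> Option<(String, TestStatus)> {
    let rest = line.strip_prefix("TEST ")?;
    let (name, status) = rest.split_once(": ")?;
    let status = match status.trim() {
        "PASS" => TestStatus::Pass,
        _ => TestStatus::Fail,
    };
    Some((name.to_string(), status))
}

fn main() {
    let (name, status) = parse_mysuite_line("TEST my.test.name: PASS").unwrap();
    assert_eq!(name, "my.test.name");
    assert_eq!(status, TestStatus::Pass);
    // Non-matching lines (banners, progress output) are simply skipped.
    assert_eq!(parse_mysuite_line("random banner text"), None);
}
```

The corresponding testsuite_command module would then feed such parsed statuses back through TestCommand for parallel_test to collect.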

Modules

cl_cts_command
deqp_command
Module for invoking VK-GL-CTS (aka dEQP) tests.
fluster_command
Module for running fluster tests.
gtest_command
Module for invoking googletest tests.
igt_command
Module for invoking IGT GPU Tools tests.
mock_deqp
mock_fluster
mock_gtest
mock_igt
mock_piglit
mock_skqp
parse
parse_cl_cts
Parses the output of OpenCL CTS test binaries into a deqp-runner status.
parse_fluster
parse_igt
parse_piglit
parse_skqp
piglit_command
Module for invoking piglit tests.
skqp_command

Structs

BinaryTest
TestCase struct for test commands where each test is a separate binary and set of args.
CaselistResult
The collection of results from invoking a binary to run a bunch of tests (or parsing an old results.csv back off disk).
CaselistState
CommandLineRunOptions
Expectation
Expectations
FailCounter
JunitGeneratorOptions
ResultCounts
RunnerResult
RunnerResultNameHash
RunnerResults
SubRunConfig
TestConfiguration
TestResult
A test result entry in a CaselistResult from running a group of tests.

Enums

RunnerStatus
The result types logged to the console: like TestStatus but with ExpectedFail (yeah, it failed, but you knew it failed in the baseline anyway and weren’t expecting to have fixed it) and UnexpectedImprovement (congrats, you’ve fixed a test, go clear the xfail out of your baseline!).
TestCase
Some TestCommand implementations need more than a string test name to invoke their test binary. All of those variants are in this enum so that the core parallel_test can talk about lists of TestCases, while the specific TestCommand implementation can match the subtype out to get at the test’s details.
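The idea can be sketched with a toy version of the enum; the definitions below are illustrative stand-ins, not this crate’s actual TestCase or BinaryTest (whose fields and variants differ):

```rust
/// Stand-in for BinaryTest: each test is its own binary plus args.
#[derive(Debug)]
struct BinaryTest {
    binary: String,
    args: Vec<String>,
}

/// Stand-in for TestCase: parallel_test passes these around opaquely,
/// and each TestCommand implementation matches out the variant it expects.
#[derive(Debug)]
enum TestCase {
    /// A plain test name, as used by caselist-style suites.
    Named(String),
    /// A per-test binary and argument list.
    Binary(BinaryTest),
}

/// Example of a consumer matching the subtype back out of the enum.
fn display_name(case: &TestCase) -> String {
    match case {
        TestCase::Named(name) => name.clone(),
        TestCase::Binary(b) => format!("{} {}", b.binary, b.args.join(" ")),
    }
}

fn main() {
    let named = TestCase::Named("suite.group.test".to_string());
    assert_eq!(display_name(&named), "suite.group.test");

    let bin = TestCase::Binary(BinaryTest {
        binary: "some_test".to_string(),
        args: vec!["-auto".to_string()],
    });
    assert_eq!(display_name(&bin), "some_test -auto");
}
```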
TestStatus
Enum for the basic types of test result that we track

Traits

SingleBinaryTestCommand
Trait for TestCommands that run a single BinaryTest per test group.
SingleNamedTestCommand
Trait for TestCommands that run a single TestCase::Named() per test group.
SingleTestCommand
Trait for TestCommands that run a single test per test group.
TestCommand
This is implemented by each supported test suite.

Functions

parallel_test
The heart of deqp-runner: this distributes the list of test groups (each a TestCommand and a list of tests to run in one invocation of that command) to the thread pool to parallelize running them across cores, collecting the results into the RunnerResults and logging incremental status to the console along the way.
parse_regex_set
Parses a deqp-runner regex set list, used for the skips/flakes lists that the user specifies. We ignore empty lines and lines starting with “#”, so you can keep notes about why tests are disabled or flaky.
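The comment/blank-line filtering described here can be sketched in isolation. This is only the filtering half: the real parse_regex_set additionally compiles the surviving lines into a regex set, which is omitted here to keep the sketch dependency-free.

```rust
/// Illustrative sketch of the skips/flakes list rule: empty lines and
/// lines starting with "#" are ignored, so files can carry notes about
/// why tests are disabled or flaky.
fn active_patterns(contents: &str) -> Vec<&str> {
    contents
        .lines()
        .map(str::trim)
        .filter(|l| !l.is_empty() && !l.starts_with('#'))
        .collect()
}

fn main() {
    let skips = "\
# Hangs the GPU on gen9, see issue tracker.
dEQP-GLES2.functional.flush_finish.*

# Temporarily disabled:
dEQP-GLES2.info.vendor
";
    assert_eq!(
        active_patterns(skips),
        vec!["dEQP-GLES2.functional.flush_finish.*", "dEQP-GLES2.info.vendor"]
    );
}
```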
process_results
Called with the results of a parallel_test, this outputs the results.csv (all results with timings) and the failures.csv (unexpected results the user needs to debug, or to use as their --baseline for future regression testing). Also prints a summary of failures, flakes, and runtimes for managing test runs.
read_baseline
Reads in the --baseline argument of expected-failing tests. Baselines are of the same format as a failures.csv so you can use your first run as the baseline for future runs.
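For illustration, a baseline reused from a previous failures.csv might look like the fragment below; the test names are made up, and the exact columns/status spellings here are an assumption rather than a documented format:

```text
dEQP-GLES2.functional.example.a,Fail
dEQP-GLES2.functional.example.b,Crash
```

Tests listed here that fail again are reported as ExpectedFail rather than regressions.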
read_lines
Common helper for reading the lines of a caselist/baseline/skips/flakes file, producing useful error context.
runner_thread_index
Returns the thread index, used in various TestCommand implementations to have separate paths per thread to avoid races between tests being run in parallel.
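The per-thread-path pattern this enables can be sketched as follows; scratch_dir is a hypothetical helper (the thread index is taken as a plain parameter here, whereas the real runner_thread_index derives it from the thread pool):

```rust
use std::path::{Path, PathBuf};

/// Illustrative per-thread scratch path: giving each runner thread its
/// own directory avoids races between tests run in parallel.
fn scratch_dir(base: &Path, thread_index: usize) -> PathBuf {
    base.join(format!("runner-{}", thread_index))
}

fn main() {
    let dir = scratch_dir(Path::new("/tmp/deqp"), 3);
    assert_eq!(dir, PathBuf::from("/tmp/deqp/runner-3"));
}
```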