Testrunner
Usage

Command syntax:

testrunner [options] path
Run the test(s) specified at path. If path is a directory, it is searched
recursively for test files.
Options:
--backends:"c cpp js objc"  Run tests for specified targets
--include:"test1 test2"     Run only the listed tests (space/comma separated)
--exclude:"test1 test2"     Skip the listed tests (space/comma separated)
--update                    Rewrite failed tests with new output
--sort:"source,test"        Sort the tests by program and/or test mtime
--reverse                   Reverse the order of tests
--random                    Shuffle the order of tests
--help                      Display this help and exit

The runner looks recursively for all *.test files at the given path.
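For example, to run two specific tests (here hypothetically named test1 and test2) against both the C and C++ backends:

$ testrunner --backends:"c cpp" --include:"test1 test2" tests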

Test file options

The test files follow the configuration file syntax (similar to .ini); see also Nim's parsecfg module.

Required

  • program: A test file must have at minimum a program name. This is the name of the Nim source file minus the .nim extension (see the minimal example below).
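Assuming a hypothetical source file hello.nim, the corresponding minimal test file would contain just:

program="hello"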

Optional

  • max_size: The maximum allowed size of the compiled binary, in bytes.
  • timestamp_peg: If you don't want to use the default timestamps, you can define your own timestamp peg here.
  • compile_error: When expecting a compilation failure, the expected error message (see the example after this list).
  • error_file: When expecting a compilation failure, the source file where the error should occur.
  • os: Space and/or comma separated list of operating systems for which the test should be run. Defaults to "linux, macosx, windows". Tests meant for a different OS than the host will be marked as SKIPPED.
  • --skip: This will simply skip the test (it will not be marked as a failure).
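For instance, a test expecting a compilation failure might look like this (the program name, error message, and error file are purely illustrative):

program="bad_code"
compile_error="type mismatch"
error_file="bad_code.nim"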

Forwarded Options

Any other options or key-value pairs will be forwarded to the nim compiler.

A key-value pair becomes a conditional symbol plus value (-d:SYMBOL[=VAL]) for the Nim compiler, e.g. to pass -d:chronicles_timestamps="UnixTime", the test file should contain:

chronicles_timestamps="UnixTime"

If only a key is given, an empty value will be forwarded.

An option is forwarded as-is to the Nim compiler, e.g. this can be added to a test file:

--opt:size
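Putting both together, a hypothetical test file that forwards a define and a compiler option might read:

program="tlogging"
chronicles_timestamps="UnixTime"
--opt:size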

Verifying Expected Output

For outputs to be compared, assign the expected string to the output name (stdout or a filename) within an Output section:

[Output]
stdout="""expected stdout output"""
file.log="""expected file output"""

Triple quotes can be used for multiple lines.
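For example (the expected lines here are illustrative):

[Output]
stdout="""first line
second line"""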

Supplying Command-line Arguments

Optionally, specify command-line arguments as an escaped string inside any Output section, using the following syntax:

[Output]
args = "--title \"useful title\""

Multiple Invocations

Multiple Output sections denote multiple test program invocations. Any failure of the test program to match its expected outputs will short-circuit and fail the test.

[Output]
stdout = ""
args = "--no-output"

[Output_newlines]
stdout = "\n\n"
args = "--newlines"

Updating Expected Outputs

Pass the --update argument to testrunner to rewrite any failing test file with the new outputs of the test program.
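For example:

$ testrunner --update tests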

Concurrent Test Execution

When built with threads, testrunner will run multiple test invocations defined in each test file simultaneously. You can specify nothreads in the preamble to disable this behavior.

nothreads = true

[Output_1st_serial]
args = "--first"

[Output_2nd_serial]
args = "--second"

The failure of any test will, when possible, short-circuit all other tests defined in the same file.

CPU Affinity

Specify affinity to clamp the first N concurrent test threads to the first N CPU cores.

affinity = true

[Output_1st_core]
args = "--first"

[Output_2nd_core]
args = "--second"

Testing Alternate Backends

By default, testrunner builds tests using Nim's C backend. Specify the --backends command-line option to build and run tests with the backends of your choice.

$ testrunner --backends="c cpp" tests

Setting the Order of Tests

By default, testrunner orders test compilation and execution according to the modification time of the test program's source. You can choose to sort by the mtime of the .test file instead.

$ testrunner --sort:test suite/

You can also reverse (--reverse) or shuffle (--random) the order of tests.
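For example, to shuffle the test order:

$ testrunner --random suite/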

More Examples

See chronicles, where testutils was born.

License

Apache2 or MIT