Authors: Zahary Karadjov, Ștefan Talpalaru
This module implements boilerplate to make unit testing easy.
The test status and name are printed after any output or traceback.
Tests can be nested; however, failure of a nested test will not mark the parent test as failed. Setup and teardown are inherited. Setup can be overridden locally.
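A minimal sketch of how these rules read in practice (the suite and test names are illustrative):

import unittest

suite "outer":
  setup:
    echo "outer setup"   # inherited by nested scopes

  suite "inner":
    setup:
      echo "inner setup" # overrides the inherited setup locally

    test "inner test":
      check(true)

  test "parent test":
    test "nested test":
      check(false)       # fails only this nested test
    check(true)          # "parent test" can still pass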
Compiled test files return the number of failed tests as their exit code, while
nim c -r testfile.nim
exits with 0 or 1.
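For example, under a POSIX shell (a sketch; testfile stands for a previously compiled test binary):

./testfile
echo $?   # number of failed tests; 0 means everything passed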
Running individual tests
Specify the test names as command line arguments.
nim c -r test "my test name" "another test"
Multiple arguments can be used.
Running a single test suite
Specify the suite name delimited by "::".
nim c -r test "my suite name::"
Selecting tests by pattern
A single "*" can be used for globbing.
Delimit the end of a suite name with "::".
Tests matching any of the arguments are executed.
nim c -r test fast_suite::mytest1 fast_suite::mytest2
nim c -r test "fast_suite::mytest*"
nim c -r test "auth*::" "crypto::hashing*"
# Run suites starting with 'bug #' and standalone tests starting with '#'
nim c -r test 'bug #*::' '::#*'
Running tests in parallel
To enable the threadpool-based test parallelisation, "--threads:on" needs to be passed to the compiler, along with "-d:nimtestParallel" or the NIMTEST_PARALLEL environment variable:
nim c -r --threads:on -d:nimtestParallel testfile.nim
# or
NIMTEST_PARALLEL=1 nim c -r --threads:on testfile.nim
There are some implicit barriers where we wait for all the spawned jobs to complete: before and after each test suite and at the main thread's exit.
The suite-related barriers are there to avoid mixing test output, but they also affect which groups of tests can run in parallel, so keep them in mind when deciding how many tests to place in each suite (or outside any suite).
You may sometimes need to disable test parallelisation for a specific test, even though it was enabled in some configuration file in a parent directory. Do this with "-d:nimtestParallelDisabled", which overrides everything else.
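For instance, a per-file configuration can switch parallelisation off again when a shared nim.cfg enables it (a sketch; the file names are illustrative):

# nim.cfg in a parent directory (shared settings)
--threads:on
-d:nimtestParallel

# flaky_test.nim.cfg next to the test file (overrides for that file only)
-d:nimtestParallelDisabled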
Example
suite "description for this stuff": echo "suite setup: run once before the tests" setup: echo "run before each test" teardown: echo "run after each test" test "essential truths": # give up and stop if this fails require(true) test "slightly less obvious stuff": # print a nasty message and move on, skipping # the remainder of this block check(1 != 1) check("asd"[2] == 'd') test "out of bounds error is thrown on bad access": let v = @[1, 2, 3] # you can do initialization here expect(IndexError): discard v[4] suiteTeardown: echo "suite teardown: run once after the tests"
Types
TestStatus = enum OK, FAILED, SKIPPED
- The status of a test when it is done.
OutputLevel = enum
  PRINT_ALL,      ## Print as much as possible.
  PRINT_FAILURES, ## Print only the failed tests.
  PRINT_NONE      ## Print nothing.
- The output verbosity of the tests.
TestResult = object
  suiteName*: string  ## Name of the test suite that contains this test case.
                      ## Can be ``nil`` if the test case is not in a suite.
  testName*: string   ## Name of the test case.
  status*: TestStatus
OutputFormatter = ref object of RootObj
ConsoleOutputFormatter = ref object of OutputFormatter
  colorOutput: bool        ## Have test results printed in color.
                           ## Default is true for the non-js target,
                           ## for which ``stdout`` is a tty.
                           ## Setting the environment variable
                           ## ``NIMTEST_COLOR`` to ``always`` or
                           ## ``never`` changes the default for the
                           ## non-js target to true or false respectively.
                           ## The deprecated environment variable
                           ## ``NIMTEST_NO_COLOR``, when set,
                           ## changes the default to false, if
                           ## ``NIMTEST_COLOR`` is undefined.
  outputLevel: OutputLevel ## Set the verbosity of test results.
                           ## Default is ``PRINT_ALL``, unless
                           ## the ``NIMTEST_OUTPUT_LVL`` environment
                           ## variable is set for the non-js target.
  isInSuite: bool
  isInTest: bool
JUnitOutputFormatter = ref object of OutputFormatter
  stream: Stream
  testErrors: seq[string]
  testStartTime: float
  testStackTrace: string
Procs
proc clearOutputFormatters() {.raises: [], tags: [].}
proc addOutputFormatter(formatter: OutputFormatter) {.raises: [], tags: [].}
proc newConsoleOutputFormatter(outputLevel: OutputLevel = PRINT_ALL;
                               colorOutput = true): ConsoleOutputFormatter {.raises: [], tags: [].}
proc defaultConsoleFormatter(): ConsoleOutputFormatter {.raises: [], tags: [ReadEnvEffect].}
proc newJUnitOutputFormatter(stream: Stream): JUnitOutputFormatter {.raises: [Exception], tags: [WriteIOEffect].}
- Creates a formatter that writes the report to the specified stream in JUnit format. The stream is NOT closed automatically when the tests are finished, because the formatter has no way of knowing when all tests have finished. You should invoke formatter.close() to finalize the report.
proc close(formatter: JUnitOutputFormatter) {.raises: [Exception], tags: [WriteIOEffect].}
- Completes the report and closes the underlying stream.
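A typical usage pattern registers the formatter, runs the tests, and then finalizes the report (a sketch; the output file name is illustrative):

import unittest, streams

let stream = newFileStream("test_results.xml", fmWrite)
let formatter = newJUnitOutputFormatter(stream)
addOutputFormatter(formatter)

test "arithmetic still works":
  check(2 + 2 == 4)

# completes the XML report and closes the underlying stream
formatter.close()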
proc checkpoint(msg: string) {.raises: [], tags: [].}
- Set a checkpoint identified by msg. Upon test failure all checkpoints encountered so far are printed out. Example:
checkpoint("Checkpoint A") check((42, "the Answer to life and everything") == (1, "a")) checkpoint("Checkpoint B")
outputs "Checkpoint A" once it fails.
proc disableParamFiltering() {.raises: [], tags: [].}
- Disables filtering of tests via command-line parameters.
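This is useful when the test binary accepts its own command-line arguments that should not be treated as test name patterns (a minimal sketch):

import unittest, os

# call this before the tests run, i.e. near the top of the file,
# so the arguments are left for the program itself to interpret
disableParamFiltering()

test "program sees its own arguments":
  check(paramCount() >= 0)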
Methods
method suiteStarted(formatter: OutputFormatter; suiteName: string) {.base, gcsafe, raises: [], tags: [].}
method testStarted(formatter: OutputFormatter; testName: string) {.base, gcsafe, raises: [], tags: [].}
method failureOccurred(formatter: OutputFormatter; checkpoints: seq[string]; stackTrace: string) {.base, gcsafe, raises: [], tags: [].}
- stackTrace is provided only if the failure occurred due to an exception. checkpoints is never nil.
method testEnded(formatter: OutputFormatter; testResult: TestResult) {.base, gcsafe, raises: [], tags: [].}
method suiteEnded(formatter: OutputFormatter) {.base, gcsafe, raises: [], tags: [].}
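These base methods are the hooks a custom reporter overrides. A minimal sketch of a custom formatter (the TAP-style type and its output are illustrative, not part of this module):

import unittest

type
  TapOutputFormatter = ref object of OutputFormatter
    testIndex: int   # hypothetical counter for TAP-style numbering

method testEnded(formatter: TapOutputFormatter; testResult: TestResult) =
  inc formatter.testIndex
  let verdict = if testResult.status == OK: "ok" else: "not ok"
  echo verdict, " ", formatter.testIndex, " - ", testResult.testName

addOutputFormatter(TapOutputFormatter())

test "one plus one":
  check(1 + 1 == 2)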
method suiteStarted(formatter: ConsoleOutputFormatter; suiteName: string) {.raises: [IOError], tags: [WriteIOEffect].}
method testStarted(formatter: ConsoleOutputFormatter; testName: string) {.raises: [], tags: [].}
method failureOccurred(formatter: ConsoleOutputFormatter; checkpoints: seq[string]; stackTrace: string) {.raises: [], tags: [].}
method testEnded(formatter: ConsoleOutputFormatter; testResult: TestResult) {.raises: [IOError], tags: [WriteIOEffect].}
method suiteEnded(formatter: ConsoleOutputFormatter) {.raises: [], tags: [].}
method suiteStarted(formatter: JUnitOutputFormatter; suiteName: string) {.raises: [Exception, ValueError], tags: [WriteIOEffect].}
method testStarted(formatter: JUnitOutputFormatter; testName: string) {.raises: [], tags: [TimeEffect].}
method failureOccurred(formatter: JUnitOutputFormatter; checkpoints: seq[string]; stackTrace: string) {.raises: [], tags: [].}
- stackTrace is provided only if the failure occurred due to an exception. checkpoints is never nil.
method testEnded(formatter: JUnitOutputFormatter; testResult: TestResult) {.raises: [Exception, ValueError], tags: [TimeEffect, WriteIOEffect].}
method suiteEnded(formatter: JUnitOutputFormatter) {.raises: [Exception], tags: [WriteIOEffect].}
Macros
macro check(conditions: untyped): untyped
- Verify if a statement or a list of statements is true. A helpful error message and set checkpoints are printed out on failure (if outputLevel is not PRINT_NONE). Example:
import strutils

check("AKB48".toLowerAscii() == "akb48")

let teams = {'A', 'K', 'B', '4', '8'}

check:
  "AKB48".toLowerAscii() == "akb48"
  'C' in teams
macro expect(exceptions: varargs[typed]; body: untyped): untyped
- Test if body raises an exception found in the passed exceptions. The test passes if the raised exception is part of the acceptable exceptions. Otherwise, it fails. Example:
import math, random, strutils

proc defectiveRobot() =
  randomize()
  case random(1..4)
  of 1: raise newException(OSError, "CANNOT COMPUTE!")
  of 2: discard parseInt("Hello World!")
  of 3: raise newException(IOError, "I can't do that Dave.")
  else: assert 2 + 2 == 5

expect IOError, OSError, ValueError, AssertionError:
  defectiveRobot()
Templates
template suite(name, body) {.dirty.}
- Declare a test suite identified by name with optional setup and/or teardown section.
A test suite is a series of one or more related tests sharing a common fixture (setup, teardown). The fixture is executed for EACH test.
suite "test suite for addition": setup: let result = 4 test "2 + 2 = 4": check(2+2 == result) test "(2 + -2) != 4": check(2 + -2 != result) # No teardown needed
The suite will run the individual test cases in the order in which they were listed. With default global settings the above code prints:
[Suite] test suite for addition
  [OK] 2 + 2 = 4
  [OK] (2 + -2) != 4
template test(name, body)
- Define a single test case identified by name.
test "roses are red": let roses = "red" check(roses == "red")
The above code outputs:
[OK] roses are red
template fail()
- Print out the checkpoints encountered so far and quit if abortOnError is true. Otherwise, erase the checkpoints and indicate the test has failed (change exit code and test status). This template is useful for debugging, but is otherwise mostly used internally. Example:
checkpoint("Checkpoint A") complicatedProcInThread() fail()
outputs "Checkpoint A" before quitting.
template skip()
- Mark the test as skipped. Use it directly when a test cannot be performed for reasons that depend on the outer environment, or on particular application logic, conditions, or configurations. The test code is still executed.
if not isGLContextCreated():
  skip()
template require(conditions: untyped)
- Same as check, except that a failed test causes the program to quit immediately. Any teardown statements are not executed and the failed test output is not generated.
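A minimal sketch contrasting require with check (the file path is illustrative):

import unittest, os

test "config file is usable":
  # quits the whole run immediately if this precondition fails;
  # the check below and any teardown would not execute
  require(fileExists("tests/fixtures/config.ini"))
  # a failing check is reported and execution continues
  check(getFileSize("tests/fixtures/config.ini") > 0)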