unittest2

Authors: Zahary Karadjov, Ștefan Talpalaru

This module implements boilerplate to make unit testing easy.

The test status and name are printed after any output or traceback.

Tests can be nested; however, the failure of a nested test will not mark the parent test as failed. Setup and teardown are inherited, and setup can be overridden locally.
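
For instance (a minimal sketch; names and values are illustrative):

suite "outer suite":
  setup:
    let fixture = 42       # inherited by every test in this suite

  test "parent test":
    test "nested test":
      # A nested test: if this check fails, "nested test" is marked
      # FAILED, but "parent test" is not.
      check fixture == 42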

Compiled test files return the number of failed tests as their exit code, while

nim c -r testfile.nim

exits with 0 or 1.

Running individual tests

Specify the test names as command line arguments.

nim c -r test "my test name" "another test"

Multiple arguments can be used.

Running a single test suite

Specify the suite name delimited by "::".

nim c -r test "my suite name::"

Selecting tests by pattern

A single "*" can be used for globbing.

Delimit the end of a suite name with "::".

Tests matching any of the arguments are executed.

nim c -r test fast_suite::mytest1 fast_suite::mytest2
nim c -r test "fast_suite::mytest*"
nim c -r test "auth*::" "crypto::hashing*"
# Run suites starting with 'bug #' and standalone tests starting with '#'
nim c -r test 'bug #*::' '::#*'

Command line arguments

--help          Print short help and quit
--xml:file      Write JUnit-compatible XML report to file
--console       Write report to the console (default, when no other output is selected)

Command line parsing can be disabled with -d:unittest2DisableParamFiltering.
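
This is useful when the test program parses its own arguments. For example (the flag after the file name is a hypothetical application option, left for your own code to handle):

nim c -r -d:unittest2DisableParamFiltering testfile.nim --my-app-option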

Running tests in parallel

To enable the threadpool-based test parallelisation, "--threads:on" needs to be passed to the compiler, along with "-d:nimtestParallel" or the NIMTEST_PARALLEL environment variable:

nim c -r --threads:on -d:nimtestParallel testfile.nim
# or
NIMTEST_PARALLEL=1 nim c -r --threads:on testfile.nim

There are some implicit barriers where we wait for all the spawned jobs to complete: before and after each test suite and at the main thread's exit.

The suite-related barriers are there to avoid mixing test output, but they also affect which groups of tests can be run in parallel, so keep them in mind when deciding how many tests to place in different suites (or between suites).

You may sometimes need to disable test parallelisation for a specific test file, even though it was enabled in a configuration file in a parent directory. Do this with "-d:nimtestParallelDisabled", which overrides everything else.
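
For example:

# parallelisation may be enabled in a parent configuration file;
# this disables it regardless
nim c -r --threads:on -d:nimtestParallelDisabled testfile.nim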

Example

suite "description for this stuff":
  echo "suite setup: run once before the tests"
  
  setup:
    echo "run before each test"
  
  teardown:
    echo "run after each test"
  
  test "essential truths":
    # give up and stop if this fails
    require(true)
  
  test "slightly less obvious stuff":
    # print a nasty message and move on, skipping
    # the remainder of this block
    check(1 != 1)
    check("asd"[2] == 'd')
  
  test "out of bounds error is thrown on bad access":
    let v = @[1, 2, 3]  # you can do initialization here
    expect(IndexError):
      discard v[4]
  
  suiteTeardown:
    echo "suite teardown: run once after the tests"

Types

TestStatus = enum
  OK, FAILED, SKIPPED
The status of a test when it is done.
OutputLevel = enum
  PRINT_ALL,                ## Print as much as possible.
  PRINT_FAILURES,           ## Print only the failed tests.
  PRINT_NONE                 ## Print nothing.
The output verbosity of the tests.
TestResult = object
  suiteName*: string ## Name of the test suite that contains this test case.
                     ## Can be ``nil`` if the test case is not in a suite.
  testName*: string          ## Name of the test case
  status*: TestStatus
  duration*: Duration
OutputFormatter = ref object of RootObj
ConsoleOutputFormatter = ref object of OutputFormatter
  colorOutput: bool ## Have test results printed in color.
                    ## Default is `auto` depending on `isatty(stdout)`, or override it with
                    ## `-d:nimUnittestColor:auto|on|off`.
                    ## 
                    ## Deprecated: Setting the environment variable `NIMTEST_COLOR` to `always`
                    ## or `never` changes the default for the non-js target to true or false respectively.
                    ## Deprecated: the environment variable `NIMTEST_NO_COLOR`, when set, changes the
                    ## default to true, if `NIMTEST_COLOR` is undefined.
  outputLevel: OutputLevel ## Set the verbosity of test results.
                           ## Default is `PRINT_ALL`, or override with:
                           ## `-d:nimUnittestOutputLevel:PRINT_ALL|PRINT_FAILURES|PRINT_NONE`.
                           ## 
                           ## Deprecated: the `NIMTEST_OUTPUT_LVL` environment variable is set for the non-js target.
  isInSuite: bool
  isInTest: bool
JUnitOutputFormatter = ref object of OutputFormatter
  stream: Stream
  defaultSuite: JUnitSuite
  suites: seq[JUnitSuite]
  currentSuite: int

Vars

abortOnError: bool

Set to true in order to quit immediately on a test failure. Default is false, or override with -d:nimUnittestAbortOnError:on|off.

Deprecated: can also override depending on whether NIMTEST_ABORT_ON_ERROR environment variable is set.

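It can also be assigned at run time, near the top of the test file, before any tests execute. A minimal sketch:

abortOnError = true  # quit the whole run on the first failure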

Consts

paralleliseTests = false
Whether parallel test running was enabled (set at compile time). This constant might be useful in custom output formatters.

Procs

proc addOutputFormatter(formatter: OutputFormatter) {.raises: [Defect], tags: [].}
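
Registers an extra formatter so it receives test events. As an illustration, a minimal custom formatter might look like this (DotFormatter and its behaviour are hypothetical; OutputFormatter, TestResult and addOutputFormatter are from this module):

import unittest2

type DotFormatter = ref object of OutputFormatter
  ## Hypothetical formatter: one line per finished test.

method testEnded(formatter: DotFormatter; testResult: TestResult) =
  # Print "." for a passing test, "F" for anything else.
  echo (if testResult.status == TestStatus.OK: "." else: "F")

addOutputFormatter(DotFormatter())
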
proc resetOutputFormatters() {.raises: [Defect], tags: [].}
proc newConsoleOutputFormatter(outputLevel: OutputLevel = outputLevelDefault;
                               colorOutput = true): ConsoleOutputFormatter {.
    raises: [Defect], tags: [].}
proc defaultConsoleFormatter(): ConsoleOutputFormatter {.raises: [Defect],
    tags: [ReadEnvEffect].}
proc newJUnitOutputFormatter(stream: Stream): JUnitOutputFormatter {.
    raises: [Defect], tags: [WriteIOEffect].}
Creates a formatter that writes the report to the specified stream in JUnit format. The stream is NOT closed automatically when the tests are finished, because the formatter has no way of knowing when all tests are done. Invoke formatter.close() to finalize the report.
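
A sketch of typical usage, assuming the report goes to a file stream:

import unittest2, streams

let stream = newFileStream("report.xml", fmWrite)
let formatter = newJUnitOutputFormatter(stream)
addOutputFormatter(formatter)

# ... tests run here ...

formatter.close()  # finalizes the report
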
proc parseParameters(args: openArray[string]) {.raises: [Defect],
    tags: [WriteIOEffect, ReadEnvEffect].}
proc checkpoint(msg: string) {.raises: [Defect], tags: [].}
Set a checkpoint identified by msg. Upon test failure all checkpoints encountered so far are printed out. Example:
checkpoint("Checkpoint A")
check((42, "the Answer to life and everything") == (1, "a"))
checkpoint("Checkpoint B")

outputs "Checkpoint A" once it fails.

proc disableParamFiltering() {.deprecated: "Compile with -d:unittest2DisableParamFiltering instead",
                               raises: [], tags: [].}
Deprecated: Compile with -d:unittest2DisableParamFiltering instead.

Methods

method suiteStarted(formatter: OutputFormatter; suiteName: string) {.base,
    gcsafe, raises: [Defect], tags: [].}
method testStarted(formatter: OutputFormatter; testName: string) {.base, gcsafe,
    raises: [Defect], tags: [].}
method failureOccurred(formatter: OutputFormatter; checkpoints: seq[string];
                       stackTrace: string) {.base, gcsafe, raises: [Defect],
    tags: [].}
stackTrace is provided only if the failure occurred due to an exception. checkpoints is never nil.
method testEnded(formatter: OutputFormatter; testResult: TestResult) {.base,
    gcsafe, raises: [Defect], tags: [].}
method suiteEnded(formatter: OutputFormatter) {.base, gcsafe, raises: [Defect],
    tags: [].}
method testRunEnded(formatter: OutputFormatter) {.base, gcsafe,
    raises: [Defect], tags: [].}
method suiteStarted(formatter: ConsoleOutputFormatter; suiteName: string) {.
    raises: [Defect], tags: [WriteIOEffect].}
method testStarted(formatter: ConsoleOutputFormatter; testName: string) {.
    raises: [Defect], tags: [].}
method failureOccurred(formatter: ConsoleOutputFormatter;
                       checkpoints: seq[string]; stackTrace: string) {.
    raises: [Defect], tags: [].}
method testEnded(formatter: ConsoleOutputFormatter; testResult: TestResult) {.
    raises: [Defect], tags: [WriteIOEffect].}
method suiteEnded(formatter: ConsoleOutputFormatter) {.raises: [Defect],
    tags: [].}
method suiteStarted(formatter: JUnitOutputFormatter; suiteName: string) {.
    raises: [Defect], tags: [].}
method testStarted(formatter: JUnitOutputFormatter; testName: string) {.
    raises: [Defect], tags: [].}
method failureOccurred(formatter: JUnitOutputFormatter;
                       checkpoints: seq[string]; stackTrace: string) {.
    raises: [Defect], tags: [].}
stackTrace is provided only if the failure occurred due to an exception. checkpoints is never nil.
method testEnded(formatter: JUnitOutputFormatter; testResult: TestResult) {.
    raises: [Defect], tags: [].}
method suiteEnded(formatter: JUnitOutputFormatter) {.raises: [Defect], tags: [].}
method testRunEnded(formatter: JUnitOutputFormatter) {.raises: [Defect],
    tags: [WriteIOEffect].}
Completes the report and closes the underlying stream.

Macros

macro check(conditions: untyped): untyped
Verify if a statement or a list of statements is true. On failure, a helpful error message and any checkpoints set so far are printed out (if outputLevel is not PRINT_NONE).

Examples:

import
  std / strutils

check("AKB48".toLowerAscii() == "akb48")
let teams = {'A', 'K', 'B', '4', '8'}
check:
  "AKB48".toLowerAscii() == "akb48"
  'C' notin teams
macro expect(exceptions: varargs[typed]; body: untyped): untyped
Test if body raises an exception found in the passed exceptions. The test passes if the raised exception is part of the acceptable exceptions. Otherwise, it fails.

Examples:

import
  std / [math, random, strutils]

proc defectiveRobot() =
  randomize()
  case rand(1 .. 4)
  of 1:
    raise newException(OSError, "CANNOT COMPUTE!")
  of 2:
    discard parseInt("Hello World!")
  of 3:
    raise newException(IOError, "I can\'t do that Dave.")
  else:
    assert 2 + 2 == 5

when (NimMajor, NimMinor, NimPatch) < (1, 4, 0):
  type
    AssertionDefect = AssertionError
expect IOError, OSError, ValueError, AssertionDefect:
  defectiveRobot()

Templates

template suite(name, body) {.dirty.}

Declare a test suite identified by name with optional setup and/or teardown section.

A test suite is a series of one or more related tests sharing a common fixture (setup, teardown). The fixture is executed for EACH test.

suite "test suite for addition":
  setup:
    let result = 4
  
  test "2 + 2 = 4":
    check(2+2 == result)
  
  test "(2 + -2) != 4":
    check(2 + -2 != result)
  
  # No teardown needed

The suite will run the individual test cases in the order in which they were listed. With default global settings the above code prints:

[Suite] test suite for addition
  [OK] 2 + 2 = 4
  [OK] (2 + -2) != 4
template test(name: string; body: untyped)
Define a single test case identified by name.
test "roses are red":
  let roses = "red"
  check(roses == "red")

The above code outputs:

[OK] roses are red
template fail()
Print out the checkpoints encountered so far and quit if abortOnError is true. Otherwise, erase the checkpoints and indicate the test has failed (change exit code and test status). This template is useful for debugging, but is otherwise mostly used internally. Example:
checkpoint("Checkpoint A")
complicatedProcInThread()
fail()

outputs "Checkpoint A" before quitting.

template skip()
Mark the test as skipped. Use it directly when a test cannot be performed for reasons that depend on the outer environment, or on certain application logic conditions or configurations. Note that the test code is still executed.
if not isGLContextCreated():
  skip()
template require(conditions: untyped)
Same as check, except that any failed condition causes the program to quit immediately. Teardown statements are not executed and the failed-test output is not generated.
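
For example (a minimal sketch; the file name is illustrative):

import std / os

test "config must exist":
  require fileExists("config.json")  # the whole run stops here on failure
  check readFile("config.json").len > 0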