To find and run tests in your project, you can run `ward` with no arguments.
This will recursively search through the current directory
for modules with a name starting with `test_`, and execute
any tests contained in the modules it finds.
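The discovery step described above amounts to a recursive filename match. Here's a minimal sketch in plain Python of that idea (the helper name is hypothetical, and Ward's real collector does more, e.g. importing the modules and registering their tests):

```python
from pathlib import Path

def find_test_modules(root):
    # Recursively collect modules whose filename starts with "test_",
    # mirroring the default discovery behaviour described above.
    # A sketch only - not Ward's actual implementation.
    return sorted(Path(root).rglob("test_*.py"))
```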
Specifying test modules or directories
You can run tests in a specific directory or module using the `--path` option.
For example, to run all tests inside a directory called `tests`: `ward --path tests`
To run tests in the current directory, you can just type `ward`, which
is functionally equivalent to
`ward --path .`
You can also directly specify a test module, for example: `ward --path path/to/test_module.py`
You can supply multiple test directories by providing multiple `--path` options.
Ward will run all tests it finds across all given paths. If one of the specified paths is contained within another, it won't repeat the same test more than once.
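The containment rule above (nested paths are not collected twice) can be pictured as a small deduplication pass. A sketch of that behaviour in plain Python (the function name is hypothetical, not Ward's implementation):

```python
from pathlib import Path

def deduplicate_paths(paths):
    # Drop duplicates and any path nested inside another supplied
    # path, so tests are never collected more than once - a sketch
    # of the behaviour described above.
    resolved = sorted({Path(p).resolve() for p in paths})
    return [p for p in resolved
            if not any(parent in p.parents for parent in resolved)]
```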
Excluding modules or directories
You can tell Ward to ignore specific modules or directories using the `--exclude`
command line option. This option can be supplied multiple times, and supports glob patterns.
To configure excluded directories in a more permanent manner, you can use the `exclude` key under `[tool.ward]` in your `pyproject.toml`:

[tool.ward]
exclude = ["glob1", "glob2"]
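Glob-based exclusion like this can be illustrated with the standard library's `fnmatch`. A sketch only (the helper name is hypothetical; Ward's actual matching rules may differ in detail):

```python
from fnmatch import fnmatch

def is_excluded(path, patterns):
    # True if the path matches any of the exclude glob patterns,
    # illustrating the glob matching described above.
    return any(fnmatch(path, pattern) for pattern in patterns)
```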
Tag expressions: Selecting tests with `--tags`
A tag expression is an infix boolean expression that can be used to accurately select
a subset of tests you wish to execute. Tests are tagged using the `tags` keyword argument
of the `@test` decorator (e.g. `@test("eggs are green", tags=["unit"])`).
Here are some examples of tag expressions and what they mean:
| Tag expression | Meaning |
| --- | --- |
| `slow` | tests tagged with `slow` |
| `unit and integration` | tests tagged with both `unit` and `integration` |
| `unit and not slow` | tests tagged with `unit` that aren't also tagged with `slow` |
| `android or ios` | tests tagged with either `android` or `ios` |
Here's how you would run only tests that are tagged with either `android` or
`ios` in practice: `ward --tags "android or ios"`
You can use parentheses in tag expressions to change precedence rules to suit your needs.
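The grammar sketched above (`and`/`or`/`not` over tag names, with parentheses for precedence) can be evaluated with a short recursive walk. Here's an illustrative sketch that reuses Python's own expression parser; it is not how Ward evaluates tag expressions, and the function name is hypothetical:

```python
import ast

def matches(tag_expression, tags):
    # Evaluate an infix boolean tag expression such as
    # "(unit or integration) and not slow" against a test's tags.
    # Bare names become membership tests in the `tags` set.
    tree = ast.parse(tag_expression, mode="eval")

    def evaluate(node):
        if isinstance(node, ast.Expression):
            return evaluate(node.body)
        if isinstance(node, ast.BoolOp):
            values = [evaluate(v) for v in node.values]
            return all(values) if isinstance(node.op, ast.And) else any(values)
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.Not):
            return not evaluate(node.operand)
        if isinstance(node, ast.Name):
            return node.id in tags
        raise ValueError("unsupported tag expression")

    return evaluate(tree)
```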
Searching: Selecting tests with `--search`
You can choose to limit which tests are collected and run by Ward using the
`--search STRING` option. Module names, test descriptions, and test function bodies
will be searched, and those which contain `STRING` will be run. Here are
some examples:
Run all tests that call a function named `my_function`: `ward --search "my_function("`
Run all tests that check if a `ZeroDivisionError` is raised: `ward --search "ZeroDivisionError"`
Run all tests decorated with the `@xfail` decorator: `ward --search "@xfail"`
Run a test described as `"my_function should return False"`: `ward --search "my_function should return False"`
Running tests inside a module:
The search takes place on the fully qualified name, so you can run a single
module (e.g. `my_module`) using the following command: `ward --search my_module.`
Of course, if a test name or body contains the string `"my_module."`, that test
will also be selected and run.
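The selection rule above boils down to a substring check over three sources of text. A minimal sketch (the function name and arguments are hypothetical, not Ward's API):

```python
def matches_search(query, module_name, description, body_source):
    # A test is selected when the query string appears in the module
    # name, the test description, or the test function's source -
    # a sketch of the --search behaviour described above.
    return any(query in text
               for text in (module_name, description, body_source))
```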
This approach is useful for quickly querying tests and running those that match a simple query, which makes it handy during development.
Customising the output with `--test-output-style`
As your project grows, it may become impractical to print
each test result on its own line. Ward provides alternative
test output styles that can be configured using the `--test-output-style` option.
The default test output of Ward looks like this (`--test-output-style test-per-line`):
If you run Ward with `--test-output-style dots-module`, each
module will be printed on its own line, and a single character
will be used to represent the outcome of each test in that
module.
If that is still too verbose, you may wish to represent every
test outcome with a single character, without grouping them by
module: use `--test-output-style dots-global`.
By default, Ward captures everything that is written to standard output and standard error as your tests run. If a test fails, everything that was printed during the time it was running will be printed as part of the failure output.
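The capturing behaviour described above can be pictured with the standard library's output-redirection helpers. A sketch only (the function name is hypothetical; this is not Ward's internals):

```python
import io
from contextlib import redirect_stdout, redirect_stderr

def run_captured(test_fn):
    # Run a test while capturing stdout and stderr, so the output
    # can be replayed only if the test fails - a sketch of the
    # capturing behaviour described above.
    out, err = io.StringIO(), io.StringIO()
    with redirect_stdout(out), redirect_stderr(err):
        test_fn()
    return out.getvalue(), err.getvalue()
```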
With output capturing enabled, if you run a debugger such as `pdb` during test execution, everything
it writes to stdout will be captured by Ward too.
Disabling output capturing
If you wish to disable output capturing, you can do so using the
`--no-capture-output` flag on the command line.
You can also disable output capturing using the `capture-output` key under `[tool.ward]` in your `pyproject.toml`:

[tool.ward]
capture-output = false
Randomise test execution order
Running tests in a random order can help identify tests that have hidden dependencies on each other. Tests should pass regardless of the order they run in, and they should pass if run in isolation.
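The idea above can be sketched as a seeded shuffle of the collected tests; a fixed seed keeps a failing order reproducible so you can debug it. This is an illustrative sketch, not Ward's implementation, and the function name is hypothetical:

```python
import random

def randomise_order(tests, seed=None):
    # Shuffle the collected tests to surface hidden inter-test
    # dependencies. Passing a seed makes a given order repeatable.
    rng = random.Random(seed)
    shuffled = list(tests)
    rng.shuffle(shuffled)
    return shuffled
```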
Cancelling after a number of failures with `--fail-limit`
If you wish for Ward to cancel a run immediately after a specific number of failing tests,
you can use the
`--fail-limit` option. To have a run end immediately after 5 tests fail: `ward --fail-limit 5`
Finding slow running tests
Run Ward with `--show-slowest 10` to print the 10 slowest tests after the test run completes.
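Under the hood, this kind of report amounts to timing each test and sorting by duration. A sketch using the standard library's monotonic clock (the function name is hypothetical, not Ward's API):

```python
import time

def slowest_tests(tests, n):
    # Time each (name, function) pair and return the n slowest
    # names - a sketch of what a --show-slowest report conveys.
    timings = []
    for name, test_fn in tests:
        start = time.perf_counter()
        test_fn()
        timings.append((time.perf_counter() - start, name))
    timings.sort(reverse=True)
    return [name for _, name in timings[:n]]
```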
Performing a dry run
Use `--dry-run` to simulate a test run. Ward will collect the tests and output as normal, but the
tests themselves won't actually run, nor will any fixtures your tests depend on. When using `--dry-run`,
tests will return with an outcome of `DRYRUN`.
This is useful for determining which tests Ward would run if invoked normally.
Outputting to a file (Proposal)
Use `--output-json` to write test results to a JSON file after the test run completes.
If the file doesn't exist, Ward will create it. If the file does exist, Ward will overwrite it.
Tracking proposal on GitHub: https://github.com/darrenburns/ward/issues/123
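Writing results out as proposed could look something like the following sketch. The function name and the result schema here are invented purely for illustration; the real format is still under discussion in the linked issue:

```python
import json

def write_results_json(results, path):
    # Create the JSON results file, or overwrite it if it already
    # exists, as the proposal describes. The schema of `results`
    # is a placeholder, not the proposal's final format.
    with open(path, "w") as f:
        json.dump(results, f, indent=2)
```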