Finding unused fixtures in your pytest tests
At Xelix our main (Django) monolith has about 10,500 tests[^1] and about 4,100 fixtures[^2]. I've been going through that entire test suite, trying to improve it in various ways. While refactoring a test, I found a fixture that wasn't actually used. I removed it, and then quickly realised that after 4 years of agile development, changing requirements and big refactors, there were likely quite a few similar fixtures. But obviously, I was not going to go through thousands of fixtures manually[^3].
If you're looking for just the instruction and aren't interested in the journey, go to Usage.
Surely, someone has already solved this
As per usual, I turned first to our neighbourhood friend Google, thinking that someone had already solved this problem and written a package for it.
And yes! There is a package for this already, called pytest-deadfixtures.
I ran it, started going through the output for a subsection of our tests, removed a couple of unused fixtures - and then came across a false positive. And then another one. So I started digging into how the package works, and unfortunately came to the conclusion that it would not be able to find unused fixtures in our repo[^4].
pytest-deadfixtures works by running static analysis. It looks at all available fixtures, looks at tests, and then compares the sets - but doesn't actually run the tests.
So far so good, that's actually a good way to go about it, it's obviously quite fast, and in theory should be enough, right?
Issues with static analysis
Dynamically requested fixtures
Pytest allows you to dynamically request fixtures, using the `request.getfixturevalue` method, which just takes the name of a fixture.
This can be any string, and pytest will try to retrieve the fixture by that name, raising an error if it can't find it.
Best practice is to only use full fixture names, but there is actually nothing stopping you from just generating the name on the fly.
Why would you do this? Well, we mostly use it for parametrizing[^5] tests with fixtures - running the same test with different setups.
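As a minimal sketch of that pattern (the fixture and test names here are invented for illustration):

```python
import pytest


@pytest.fixture
def small_payload():
    return {"rows": 10}


@pytest.fixture
def large_payload():
    return {"rows": 10_000}


# The fixtures are referenced only as strings, so a static analyser
# never sees small_payload or large_payload being "used".
@pytest.mark.parametrize("payload_fixture", ["small_payload", "large_payload"])
def test_processing(request, payload_fixture):
    payload = request.getfixturevalue(payload_fixture)
    assert payload["rows"] > 0
```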
Lately, we've actually been mostly using the great pytest-lazy-fixtures package, which wraps around this mechanism - you don't have to call the method on `request` yourself, it gets handled for you, and it's safe to use in parametrization.
So, with static analysis, you would have to go quite deep into code analysis to find all of these dynamically used fixtures.
Tests which don't actually run
If a test never runs (or usually doesn't), and it uses a fixture, is that fixture used? I guess that's up for debate, but for the purpose of finding dead code, I would consider it unused, because it can point to dead code or dead tests.
The most common scenario for tests not running is skipping, which can be easier or harder to analyse.
Skipping in pytest can be done dynamically, based on conditions, or by raising from within the test itself (which is what `pytest.skip()` does).
Therefore, skipping gets evaluated at runtime - if you run `pytest --collect-only`, it won't tell you how many of the collected tests will be skipped.
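Both flavours of skipping look like this (a contrived sketch - the conditions and test names are invented):

```python
import sys

import pytest


# Condition-based skip: the marker is declared statically, but whether
# the test runs is only known once pytest evaluates it at runtime.
@pytest.mark.skipif(sys.platform == "win32", reason="relies on POSIX paths")
def test_posix_only(tmp_path):
    assert tmp_path.is_absolute()


# Fully dynamic skip, decided inside the test body:
def test_optional_feature(tmp_path):
    if not (tmp_path / "optional.cfg").exists():
        pytest.skip("optional config not present")  # raises Skipped
```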
Runtime analysis
Since we determined that static analysis won't do, we need to evaluate at runtime. Pytest is an extremely pluggable framework, so it obviously provides most of the information we need, exposed in hooks. There are basically two pieces of information you need to figure out which fixtures are unused - which fixtures are available, and which are actually used.
Finding out which fixtures are available is reasonably straightforward, as pytest itself keeps track of them (this is what powers `pytest --fixtures`).
So after test collection, I just collect a set of all available fixtures, together with information about where they are defined.
For evaluating which fixtures are used, we need to run the tests and implement the `pytest_fixture_setup` hook, which runs every time a fixture is requested by a test.
This thankfully works even in dynamic cases, i.e. when using `getfixturevalue`.
Finally, we just subtract the two sets of fixtures from each other and bam - we have a list of unused fixtures.
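The whole idea fits in a short `conftest.py` sketch. This is a simplified illustration, not the plugin's actual code - in particular, it reads pytest's private `_fixturemanager._arg2fixturedefs` mapping, which the real implementation also has to be more careful about:

```python
# conftest.py - simplified sketch of the unused-fixture bookkeeping
AVAILABLE = {}  # fixture name -> where it is defined
USED = set()


def pytest_collection_finish(session):
    # After collection, record every fixture pytest knows about.
    # _arg2fixturedefs is a private pytest internal (assumption: stable
    # enough for illustration purposes).
    for name, defs in session._fixturemanager._arg2fixturedefs.items():
        AVAILABLE[name] = f"{defs[-1].func.__module__}"


def pytest_fixture_setup(fixturedef, request):
    # Called each time a fixture is actually set up for a test,
    # including fixtures requested via request.getfixturevalue().
    USED.add(fixturedef.argname)


def pytest_terminal_summary(terminalreporter):
    unused = set(AVAILABLE) - USED
    for name in sorted(unused):
        terminalreporter.write_line(f"{name} -- {AVAILABLE[name]}")
```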
Running tests in parallel
Since the solution relies on running the tests, we also need to handle running the tests in parallel, as with the most popular solution, pytest-xdist.
xdist has a main node & worker nodes, where the main node controls the orchestration and reporting back to the CLI, and the workers run the individual tests.
So while we can run the test collection to figure out available fixtures on the main node, the information about which fixtures were used lives on the worker nodes, as that's where the tests get executed.
For this purpose, xdist provides a communication channel between the main node and the workers.
After tests finish running in a worker, we just serialize which fixtures were used, and the main node then reads that output from each worker.
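Sketched out, the handoff looks roughly like this. It relies on xdist's `workeroutput` mechanism (a dict on the worker's config that gets shipped back to the main node); the hook bodies here are a simplified illustration, not the plugin's real code:

```python
# Simplified sketch of moving the used-fixture set from xdist
# workers to the main node.
USED = set()


def pytest_sessionfinish(session):
    # On a worker, config has a `workeroutput` dict; anything put in it
    # is serialized and sent back to the main node.
    workeroutput = getattr(session.config, "workeroutput", None)
    if workeroutput is not None:
        workeroutput["used_fixtures"] = sorted(USED)


def pytest_testnodedown(node, error):
    # On the main node, this xdist hook fires as each worker shuts
    # down; read back what the worker reported.
    if error is None:
        USED.update(node.workeroutput.get("used_fixtures", []))
```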
Usage
To install the plugin, just install it with pip (or a package manager of your choice).
Pytest will automatically load the plugin.
```
pip install pytest-unused-fixtures
```
Basic usage
By default, the plugin doesn't run (as it adds some overhead). To enable it, one needs to use the `--unused-fixtures` flag.
The plugin automatically detects whether pytest-xdist is installed and enabled, so no extra configuration is needed when using it.
Example output:
With the following file:
```python
import pytest

@pytest.fixture
def fixture_a():
    return None

@pytest.fixture
def fixture_b():
    return None

def test_a(fixture_a):
    pass
```
```
$ pytest test.py --unused-fixtures
========================================== test session starts ==========================================
platform linux -- Python 3.10.12, pytest-7.3.2, pluggy-1.0.0
rootdir: /home/miki/oss/pytest-unused-fixtures
plugins: xdist-3.3.1, unused-fixtures-0.1.0
collected 1 item

test.py .                                                                                         [100%]

============================================ UNUSED FIXTURES ============================================
cache -- venv/lib/python3.10/site-packages/_pytest/cacheprovider.py:509
capsysbinary -- venv/lib/python3.10/site-packages/_pytest/capture.py:1000
capfd -- venv/lib/python3.10/site-packages/_pytest/capture.py:1028
capfdbinary -- venv/lib/python3.10/site-packages/_pytest/capture.py:1056
capsys -- venv/lib/python3.10/site-packages/_pytest/capture.py:972
doctest_namespace [session scope] -- venv/lib/python3.10/site-packages/_pytest/doctest.py:736
pytestconfig [session scope] -- venv/lib/python3.10/site-packages/_pytest/fixtures.py:1359
record_property -- venv/lib/python3.10/site-packages/_pytest/junitxml.py:281
record_xml_attribute -- venv/lib/python3.10/site-packages/_pytest/junitxml.py:304
record_testsuite_property [session scope] -- venv/lib/python3.10/site-packages/_pytest/junitxml.py:342
tmpdir_factory [session scope] -- venv/lib/python3.10/site-packages/_pytest/legacypath.py:301
tmpdir -- venv/lib/python3.10/site-packages/_pytest/legacypath.py:308
caplog -- venv/lib/python3.10/site-packages/_pytest/logging.py:497
monkeypatch -- venv/lib/python3.10/site-packages/_pytest/monkeypatch.py:29
recwarn -- venv/lib/python3.10/site-packages/_pytest/recwarn.py:29
tmp_path_factory [session scope] -- venv/lib/python3.10/site-packages/_pytest/tmpdir.py:244
tmp_path -- venv/lib/python3.10/site-packages/_pytest/tmpdir.py:259
-------------------------------------- fixtures defined from test ---------------------------------------
fixture_b -- test.py:8
=========================================== 1 passed in 0.00s ===========================================
```
Ignoring fixtures from the report
There are two ways to ignore fixtures from the unused fixtures report.
Ignore a specific fixture
The first one is the `@ignore_unused_fixture` decorator. In the following example, `fixture_b` will not be reported.
```python
import pytest
from pytest_unused_fixtures import ignore_unused_fixture

@pytest.fixture
def fixture_a():
    return None

@pytest.fixture
@ignore_unused_fixture
def fixture_b():
    return None

def test_a(fixture_a):
    pass
```
Ignoring fixtures from a path
You can ignore all fixtures from the report that are defined on a specific path (file or folder), so for example, you can ignore all fixtures defined in your virtual environment.
This is done with the `--unused-fixtures-ignore-path` parameter, which can be used multiple times. For example, the following will not report fixtures defined in the `venv` folder:

```
pytest --unused-fixtures --unused-fixtures-ignore-path venv
```
Example output, using the same file as in Basic usage:
```
$ pytest test.py --unused-fixtures --unused-fixtures-ignore-path venv
========================================== test session starts ==========================================
platform linux -- Python 3.10.12, pytest-7.3.2, pluggy-1.0.0
rootdir: /home/miki/oss/pytest-unused-fixtures
plugins: xdist-3.3.1, unused-fixtures-0.1.0
collected 1 item

test.py .                                                                                         [100%]

============================================ UNUSED FIXTURES ============================================
-------------------------------------- fixtures defined from test ---------------------------------------
fixture_b -- test.py:8
=========================================== 1 passed in 0.00s ===========================================
```
So you might be asking: how many fixtures were unused out of our roughly 4,000? I found about 50 unused fixtures, so slightly more than 1% of all fixtures in the whole codebase, totalling about 500 lines of code.
Was this whole endeavour worth it for just 50 fixtures? 🤷 I was pretty happy with the outcome: I did manage to find and delete a bunch of obsolete fixtures, I learned about pytest internals, and hopefully others will find this helpful.
Furthermore, we've enabled this plugin in our CI, so we now have ongoing monitoring of dead code.
The package is obviously open-source, you can find the source code on GitHub. There are not many tests, but there are some. Issues & pull requests are welcome. I am excited for the bug reports!
[^1]: `pytest --collect-only | grep 'tests collected'`
[^2]: `pytest --fixtures | grep ' -- ' | wc -l`
[^3]: One could obviously exclude auto-used fixtures, which in our case would limit the search to only about 2,000 fixtures.
[^4]: There were a couple which should have been detected by static analysis but weren't - I couldn't really find the reason, but it's likely some edge case in the library. Should be solvable with some digging.
[^5]: Parametrize or parametrise, that is the question - each time I type it, a different spelling comes up. As I live in the UK, I usually use the British spelling these days, but then pytest uses the American spelling, argh!