Approval Tests

Approval tests capture the output (snapshot) of a piece of code and compare it with a previously approved version of the output.

The technique is most useful where frequent changes are expected, or where the output is complex but can easily be verified by a human, aided for example by a diff tool or a visual representation of the output.

A picture’s worth a 1000 tests.

Once the output has been approved, the test passes as long as the output stays the same. The test fails if the received output is not identical to the approved version; in that case, the difference between the received and the approved output is reported to the tester. The report can take many forms, for example a diff tool comparing received and approved text or images side by side.
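For text output, such a report can be sketched with the standard library's `difflib`; the function name here is illustrative:

```python
import difflib

def report_diff(received: str, approved: str) -> str:
    """Render a unified diff between the approved and the received output."""
    return "\n".join(difflib.unified_diff(
        approved.splitlines(),
        received.splitlines(),
        fromfile="approved",
        tofile="received",
        lineterm="",
    ))

print(report_diff("line 1\nline 2\n", "line 1\nline B\n"))
```

Real reporters typically go further and hand the two files to an external diff tool so the tester can approve the change in place.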

Be aware of the Guru Checks Output antipattern.

This video gives a good introduction to approval tests.

General Resources

# open received and approved files in PyCharm Community's diff viewer
pytest \
    --approvaltests-add-reporter="pycharm-community" \
    --approvaltests-add-reporter-args="diff"

# -s: disable all output capturing so the interactive reporter can run
# -d: open received and approved files in Neovim's diff mode
pytest \
    -s \
    --approvaltests-add-reporter="nvim" \
    --approvaltests-add-reporter-args="-d"