pytest-benchmark

py.test fixture for benchmarking code

Overview

A py.test fixture for benchmarking code. It will group the tests into rounds that are calibrated to the chosen timer. See the calibration and FAQ pages of the documentation.

  • Free software: BSD license

Installation

pip install pytest-benchmark

Documentation

For the latest release: pytest-benchmark.readthedocs.org/en/stable.

For the master branch (may include documentation fixes): pytest-benchmark.readthedocs.io/en/latest.

Examples

But first, a prologue:

This plugin integrates tightly with pytest. To use it effectively you should know a thing or two about pytest first. Take a look at the introductory material or watch the talks.

A few notes:

  • This plugin benchmarks functions and only that. If you want to measure a block of code or a whole program, you will need to write a wrapper function (see the sketch right after this list).
  • In a test you can only benchmark one function. If you want to benchmark many functions, write more tests or use parametrization (http://docs.pytest.org/en/latest/parametrize.html).
  • To run the benchmarks you simply use py.test to run your "tests". The plugin will automatically do the benchmarking and generate a result table. Run py.test --help for more details.
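
A minimal sketch of such a wrapper (the test name and the summed expression are just placeholders for your own block of code):

def test_a_block_of_code(benchmark):
    def run_block():
        # Wrap the block of code you want to measure in a plain function...
        return sum(i * i for i in range(1000))

    # ...and benchmark the wrapper.
    result = benchmark(run_block)
    assert result == 332833500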

This plugin provides a benchmark fixture. This fixture is a callable object that will benchmark any function passed to it.

Example:

import time

def something(duration=0.000001):
    """
    Function that needs some serious benchmarking.
    """
    time.sleep(duration)
    # You may return anything you want, like the result of a computation
    return 123

def test_my_stuff(benchmark):
    # benchmark something
    result = benchmark(something)

    # Extra code, to verify that the run completed correctly.
    # Sometimes you may want to check the result, fast functions
    # are no good if they return incorrect results :-)
    assert result == 123

You can also pass extra arguments:

def test_my_stuff(benchmark):
    benchmark(time.sleep, 0.02)

Or even keyword arguments:

def test_my_stuff(benchmark):
    # time.sleep() does not accept keyword arguments, so reuse something() from above
    benchmark(something, duration=0.02)

Another pattern seen in the wild, which is not recommended for micro-benchmarks (very fast code) but may be convenient:

def test_my_stuff(benchmark):
    @benchmark
    def something():  # unnecessary function call
        time.sleep(0.000001)

A better way is to just benchmark the final function:

def test_my_stuff(benchmark):
    benchmark(time.sleep, 0.000001)  # way more accurate results!

If you need fine-grained control over how the benchmark is run (like a setup function, or exact control of iterations and rounds), there's a special mode - pedantic:

def my_special_setup():
    ...

def test_with_setup(benchmark):
    benchmark.pedantic(
        something,
        setup=my_special_setup,
        args=(1, 2, 3),          # positional args forwarded to the benchmarked function
        kwargs={'foo': 'bar'},   # keyword args forwarded to the benchmarked function
        iterations=10,           # calls per round
        rounds=100,              # number of measured rounds
    )
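
The setup function can also supply the arguments itself by returning an (args, kwargs) tuple instead of having them passed to pedantic. A minimal sketch, assuming a hypothetical stuff(a, b, c, foo=None) target (check the pedantic documentation for the exact rules your version enforces):

def stuff(a, b, c, foo=None):
    # hypothetical function to benchmark
    pass

def test_with_args_from_setup(benchmark):
    def setup():
        # return a tuple of (args, kwargs) for the benchmarked function
        return (1, 2, 3), {'foo': 'bar'}

    benchmark.pedantic(stuff, setup=setup, rounds=100)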

Screenshots

Normal run:

Screenshot of py.test summary

Compare mode (--benchmark-compare):

Screenshot of py.test summary in compare mode
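
A typical compare workflow is to save a baseline first and then compare later runs against it (exact flag behavior may vary between versions; see py.test --help):

py.test tests/ --benchmark-autosave
py.test tests/ --benchmark-compare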

Histogram (--benchmark-histogram):

Histogram sample
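
Histogram generation relies on optional plotting dependencies (pygal and pygaljs); assuming your version exposes them via the histogram extra, they can be installed with:

pip install "pytest-benchmark[histogram]"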

Development

To run all the tests, run:

tox
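
To run the tests against a single environment, pass it to tox, for example (the environment name here is only an example; see tox.ini for the actual list):

tox -e py39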

Credits