pytest plugin for a better developer experience when working with the PyTorch test suite

Overview

pytest-pytorch


What is it?

pytest-pytorch is a lightweight pytest plugin that enhances the developer experience when working with the PyTorch test suite, especially if you come from a pytest background.

Why do I need it?

Some test cases in the PyTorch test suite are automatically generated when a module is loaded in order to parametrize them. Collecting them by the names they are written with, e.g. pytest test_foo.py::TestFoo or pytest test_foo.py::TestFoo::test_bar, is unfortunately not possible. If you are used to this syntax or your IDE relies on it (PyCharm, VSCode), you can install pytest-pytorch to make it work.

How do I install it?

You can install pytest-pytorch with pip

$ pip install pytest-pytorch

or with conda:

$ conda install -c conda-forge pytest-pytorch

How do I use it?

With pytest-pytorch installed you can select test cases and tests as if the instantiation for different devices was performed by @pytest.mark.parametrize:

Use case                               Command
Run a test case against all devices    pytest test_foo.py::TestBar
Run a test case against one device     pytest test_foo.py::TestBar -k "$DEVICE"
Run a test against all devices         pytest test_foo.py::TestBar::test_baz
Run a test against one device          pytest test_foo.py::TestBar::test_baz -k "$DEVICE"
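
For example, with the test_foo.py module from the background section below and "cpu" as the device (both purely illustrative), selecting a single test on a single device could look like this:

$ pytest test_foo.py::TestFoo::test_bar -k "cpu"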

Can I have a little more background?

PyTorch uses its own method for generating tests that is for the most part compatible with unittest and pytest. Its custom test generation allows test templates to be written and instantiated for different device types, data types, and operators. Consider the following module test_foo.py:

from torch.testing._internal.common_utils import TestCase
from torch.testing._internal.common_device_type import instantiate_device_type_tests

class TestFoo(TestCase):
    def test_bar(self, device):
        pass
    
    def test_baz(self, device):
        pass

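# instantiate_device_type_tests generates device-specific test cases
# (e.g. TestFooCPU, TestFooCUDA) in this module's namespace, one per
# available device type.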
instantiate_device_type_tests(TestFoo, globals())

Assuming we "cpu" and "cuda" are available as devices, we can collect four tests:

  1. test_foo.py::TestFooCPU::test_bar_cpu,
  2. test_foo.py::TestFooCPU::test_baz_cpu,
  3. test_foo.py::TestFooCUDA::test_bar_cuda, and
  4. test_foo.py::TestFooCUDA::test_baz_cuda.

From a pytest perspective this is similar to decorating TestFoo with @pytest.mark.parametrize("device", ("cpu", "cuda")), which would result in the following test IDs (a rough plain-pytest sketch follows the list):

  1. test_foo.py::TestFoo::test_bar[cpu],
  2. test_foo.py::TestFoo::test_bar[cuda],
  3. test_foo.py::TestFoo::test_baz[cpu], and
  4. test_foo.py::TestFoo::test_baz[cuda].
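
For illustration only, a plain pytest module with roughly the same parametrization (a sketch, not how the PyTorch test framework actually generates these tests) could look like this:

import pytest


# Rough plain-pytest equivalent of the device parametrization above.
# The PyTorch test framework instead generates separate test case
# classes (TestFooCPU, TestFooCUDA) with suffixed test names.
@pytest.mark.parametrize("device", ("cpu", "cuda"))
class TestFoo:
    def test_bar(self, device):
        pass

    def test_baz(self, device):
        pass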

Since the PyTorch test framework renames test cases and tests, naively running pytest test_foo.py::TestFoo or pytest test_foo.py::TestFoo::test_bar fails, because nothing matches these names. Of course, you can get around this with the keyword matching (-k command line flag) that pytest offers.
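
For example, without the plugin you could fall back to keyword matching on the module above (a possible workaround rather than a required step once pytest-pytorch is installed):

$ pytest test_foo.py -k "test_bar"            # test_bar on all devices
$ pytest test_foo.py -k "test_bar and cpu"    # test_bar on the CPU device only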

pytest-pytorch performs this matching for you, so you can keep your familiar workflow and your IDE works out of the box.

How do I contribute?

First and foremost: thank you for your interest in the development of pytest-pytorch! We appreciate all contributions, be it code or something else. Check out our contribution guidelines for details.

Comments
  • Fix broken link in readme

    The blog link provided in the readme is broken. Please fix it.

    https://deploy-preview-211--quansight-labs.netlify.app/blog/2021/06/pytest-pytorch/

    I would suggest replacing it with:

    https://labs.quansight.org/blog/2021/06/pytest-pytorch/

    💡 Note: I am pushing a PR to fix this.

    opened by sugatoray 1
  • Change test workflows from PyTorch nightly to stable

    Previously, we used PyTorch nightly to have a second device available in CI. As of torch==1.9, the "meta" device is included in the stable binaries. Thus, there is no longer any need for the less reliable nightly testing.

    opened by pmeier 0
  • remove duplicate tests

    In some cases new_cmds == legacy_cmds. This made the configurations more verbose to write and additionally resulted in duplicate tests.

    After this PR, if no legacy_cmds is passed to Config, the value of new_cmds is reused. In addition, duplicate configs are filtered out.

    opened by pmeier 0
  • trim the test matrix

    We don't need to test this for every OS / Python combination. It should be sufficient to test every OS with the minimum Python requirement as well as one OS (Linux) with every supported Python version.

    This should speed up the CI runs and save some resources.

    opened by pmeier 0
  • refactor test suite to test the actual collection

    Previously, we tested the collection by giving each test a different outcome based on its parameter and then checking the aggregated pytest result against that. This has two downsides:

    1. It takes more mental effort to parse not only which tests will run, but also what the outcome of all the tests that do run should be.
    2. Since we could only check against the aggregated result for multiple tests, we can't be sure each individual selection is actually right.

    With this PR we are actually testing the collection by parsing the pytest output. Additionally, you can now add this code block

    # ======================================================================================
    # This block is necessary to autogenerate the parametrization for
    # tests/test_plugin.py::test_standard_collection.
    # It needs to be placed **after** the import of 'instantiate_device_type_tests' and
    # **before** its first usage.
    # ======================================================================================
    try:
        from _spy import Spy
    
        __spy__ = Spy()
        del Spy
        instantiate_device_type_tests = __spy__(instantiate_device_type_tests)
    except ModuleNotFoundError:
        pass
    # ======================================================================================
    

    to a test file to automatically test the selection of

    • everything in the file,
    • every test case, and
    • every test case function.

    Anything beyond that still needs to be configured manually.

    opened by pmeier 0
  • support selection of tests that use op infos

    Fixes #16.

    Currently we rely on the device identifier coming directly after the test case function name. That is no longer true when using OpInfos: the name of the instantiated test follows the scheme (template_name)_(op_name)_(device)_(dtype).

    This is a complete rewrite of the internal matching logic:

    • test case: Test cases are only parametrized by the device. Since every TestCase has a device_type attribute, we can simply strip the device identifier from the instantiated name (see the sketch after this list).
    • test case function: Since both template_name and op_name in the pattern above might contain underscores and they are also separated by a single underscore, it is impossible to extract the two parts without further knowledge. To overcome this, we can inspect the source of the function and extract the template_name (which is the function name) directly.
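
    A purely hypothetical sketch of the first point (this is not the actual plugin code; device_type is the attribute mentioned above, everything else is made up for illustration):

    def template_class_name(instantiated_cls):
        # e.g. "TestFooCPU" with device_type == "cpu" -> "TestFoo"
        name = instantiated_cls.__name__
        suffix = instantiated_cls.device_type.upper()
        return name[: -len(suffix)] if name.endswith(suffix) else name
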
    opened by pmeier 0
  • Re-add support for OpInfo's

    Consider the following test setup:

    from torch.testing._internal.common_device_type import (
        instantiate_device_type_tests,
        ops,
    )
    from torch.testing._internal.common_utils import TestCase
    from torch.testing._internal.common_methods_invocations import OpInfo
    
    BazOpInfo = OpInfo("add")
    
    
    class TestFoo(TestCase):
        @ops([BazOpInfo])
        def test_bar(self, device, dtype, op):
            pass
    
    
    instantiate_device_type_tests(TestFoo, globals(), only_for="cpu")
    

    Running pytest test_foo.py --collect-only on that results in:

    <PyTorchTestCase TestFooCPU>
      <PyTorchTestCaseFunction test_bar_add_cpu_float32>
      <PyTorchTestCaseFunction test_bar_add_cpu_float64>
    <PyTorchTestCase TestFooMETA>
      <PyTorchTestCaseFunction test_bar_add_meta_float32>
      <PyTorchTestCaseFunction test_bar_add_meta_float64>
    

    Naming schemes:

    • test cases: (template_name)(device)
    • test case functions: (template_name)_(op_name)_(device)_(dtype)

    After #12, test case functions require the device identifier to follow right after the template name. Thus, it is no longer possible to select an individual test by name, e.g. pytest test_foo.py::TestFoo::test_bar.

    opened by pmeier 0
  • make dtype testing more concise

    Instead of instantiating the tests for all devices and using the @onlyCPU decorator everywhere, we now use the only_for keyword when instantiating. With that, the meta tests are not generated in the first place and do not need to be accounted for in the number of skipped tests.
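
    For illustration, the change amounts to something like the following sketch (assuming the PyTorch helpers shown elsewhere on this page; previously each test carried an @onlyCPU decorator, now the restriction happens at instantiation time):

    from torch.testing._internal.common_device_type import instantiate_device_type_tests
    from torch.testing._internal.common_utils import TestCase


    class TestFoo(TestCase):
        # previously decorated with @onlyCPU
        def test_bar(self, device):
            pass


    # restrict instantiation to the CPU device instead of decorating each test
    instantiate_device_type_tests(TestFoo, globals(), only_for="cpu")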

    opened by pmeier 0
  • Do not require torch at installation

    Right now we have torch as dependency:

    https://github.com/Quansight/pytest-pytorch/blob/bd98f6b23214460605e4b8c0ee2bd4956e846291/setup.cfg#L32-L34

    If you have set up a development environment to work on PyTorch itself, you probably do not have torch installed as a regular package. Thus, installing pytest-pytorch would also install torch, which is usually not desired.

    opened by pmeier 0
  • Support for nested test case names

    Currently we match the test case (function) names based on this:

    https://github.com/Quansight/pytest-pytorch/blob/ff8f2d86906486a2d437b2617ef9973394f5e216/pytest_pytorch/plugin.py#L9-L10

    This works well until you have a setup similar to this:

    class TestFoo(TestCase):
        pass
    
    class TestFooBar(TestCase):
        pass
    

    If you run pytest test_foo.py::TestFoo on this, both test cases are collected instead of just TestFoo.
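
    A hypothetical illustration of the problem (the regular expressions below are made up and only mimic the name matching described above):

    import re

    # naive prefix match on the template name
    naive = re.compile("TestFoo")
    print(bool(naive.match("TestFooCPU")))      # True, intended
    print(bool(naive.match("TestFooBarCPU")))   # True, not intended

    # anchoring the device identifier right after the template name avoids this
    strict = re.compile(r"TestFoo(CPU|CUDA|META)$")
    print(bool(strict.match("TestFooCPU")))     # True
    print(bool(strict.match("TestFooBarCPU")))  # False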

    opened by pmeier 0
  • CI tests are failing

    The CI tests are failing because we need torch>=1.9, which is only available through the nightlies. Unfortunately, tox-ltt is not able to handle the nightly channel yet.

    opened by pmeier 0
Releases (v0.2.1)
  • v0.2.1 (May 25, 2021)

    This adds the --disable-pytest-pytorch command line option (#25), which makes it easier to debug incompatibilities with the vanilla pytest collection.

  • v0.2.0 (Apr 21, 2021)

    This minor release adds support for OpInfos, which are used more and more throughout the PyTorch test suite (#17).

    Furthermore, @xmnlab helped us get pytest-pytorch into conda-forge. Installation instructions can be found in the README (#18).

  • v0.1.1 (Apr 20, 2021)

    This release includes two minor improvements:

    1. Support for selecting individual test cases if their names are nested, i.e. TestFoo and TestFooBar (#12)
    2. Removal of PyTorch as an installation requirement (#14)
  • v0.1.0 (Apr 14, 2021)

Owner
Quansight
We grow talent, build technology, and discover products by helping companies grow OSS communities to organize and analyze their data.