Selects tests affected by changed files. Continuous test runner when used with pytest-watch.

Related tags

Testing, pytest-testmon
Overview

This is a pytest plug-in which automatically selects and re-executes only the tests affected by recent changes. How is this possible in a dynamic language like Python, and how reliable is it? Read more here: Determining affected tests

Quickstart

pip install pytest-testmon

# build the dependency database and save it to .testmondata
pytest --testmon

# change some of your code (with test coverage)

# only run tests affected by recent changes
pytest --testmon

To learn more about specifying multiple project directories and troubleshooting, please head to testmon.org
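
To make "affected by recent changes" concrete, here is a tiny hypothetical project (calc.py and test_calc.py are made-up names for illustration):

# calc.py
def add(a, b):
    return a + b

def mul(a, b):
    return a * b

# test_calc.py
from calc import add, mul

def test_add():
    assert add(1, 2) == 3

def test_mul():
    assert mul(2, 3) == 6

After an initial pytest --testmon run records which code each test executed, editing only the body of mul() should cause the next pytest --testmon run to re-execute test_mul and deselect test_add.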

Comments
  • pytest-xdist support

    When combining testmon and xdist, more tests are rerun than necessary. I suspect the xdist runners don't update .testmondata.

    Is testmon compatible with xdist or is this due to misconfiguration?

    enhancement 
    opened by timdiels 22
  • Rerun tests should not fail.

    There is a test (A) that fails when running pytest --testmon. After fixing test A, running pytest --testmon still shows the results from when test A was failing. If we then change any code that affects test A with a NOP and run pytest --testmon, test A passes once; but running pytest --testmon again immediately afterwards brings back what I believe is a cached result, namely the old failure.

    I believe the cached results should be updated in this case so the test doesn't keep showing up as failing.

    opened by henningphan 17
  • Refactor

    This addresses issues #53, #52, #51, #50, #32, and partially #42 (poor man's solution).

    With the limited test cases it worked and the performance was OK. If @blueyed and @boxed tell me it works for them, I'll merge and release.

    opened by tarpas 14
  • performance with large source base

    There is a lot of repetition and JSON array data in the .testmondata SQLite database; let's make it more efficient. @boxed, you mentioned you have a 20 MB per-commit DB, so this might interest you.

    opened by tarpas 10
  • Override pytest's exit code 5 for "no tests were run"

    py.test exits with code 5 when no tests were run (https://github.com/pytest-dev/pytest/issues/812, https://github.com/pytest-dev/pytest/issues/500#issuecomment-112204804).

    I can see that this is useful in general, but with pytest-testmon this should not be an error.

    Would it be possible, and would it be sensible, to override pytest's exit code to 0 in that case?

    My use case is using pytest-watch and its --onfail feature, which should not be triggered when testmon makes py.test skip all tests.

    opened by blueyed 10
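
    A minimal workaround sketch (editorial, not from the issue), assuming a project-level conftest.py: a pytest_sessionfinish hook can map pytest's "no tests collected" exit code 5 to 0, so that wrappers such as pytest-watch's --onfail are not triggered when testmon deselects everything.

    # conftest.py -- workaround sketch, not part of pytest-testmon
    def pytest_sessionfinish(session, exitstatus):
        # pytest exits with 5 when no tests were collected/run;
        # report success instead, so "testmon skipped everything" is not an error
        if exitstatus == 5:
            session.exitstatus = 0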
  • using the debugger corrupts the testmon database

    From @max-imlian (https://github.com/tarpas/pytest-testmon/issues/90#issuecomment-408984214): FYI, I'm still having real issues with testmon, where it doesn't run tests despite both changes in the code and existing failures, even when I pass --tlf.

    I love the goals of testmon, and it performs so well in 90% of cases that it has become an essential part of my workflow. As a result, it's frustrating when it ignores tests that have changed. I've found that --tlf often doesn't work, which is a shame, as it was often a 'last resort' against testmon ignoring too many tests.

    Is there any info I can supply that would help debug this? I'm happy to post anything.

    Would there be any use for a 'conservative' mode, where testmon leans towards testing too much? A Type I error is far less costly than a Type II.

    opened by tarpas 9
  • testmon_data.fail_reports might contain both failed and skipped

    testmon_data.fail_reports might contain both failed and skipped:

    (Pdb++) nodeid
    'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output'
    (Pdb++) pp self.testmon_data.fail_reports[nodeid]
    [{'duration': 0.0020258426666259766,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': None,
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'passed',
      'sections': [],
      'user_properties': [],
      'when': 'setup'},
     {'duration': 0.003198385238647461,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': 'tests/test_renderers.py:753: in test_schemajs_output\n'
                  '    output = renderer.render(\'data\', renderer_context={"request": request})\n'
                  'rest_framework/renderers.py:862: in render\n'
                  '    codec = coreapi.codecs.CoreJSONCodec()\n'
                  "E   AttributeError: 'NoneType' object has no attribute 'codecs'",
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'failed',
      'sections': [],
      'user_properties': [],
      'when': 'call'},
     {'duration': 0.008923768997192383,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': None,
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'passed',
      'sections': [],
      'user_properties': [],
      'when': 'teardown'},
     {'duration': 0.0012934207916259766,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'skipif': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 743, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': ['tests/test_renderers.py', 743, 'Skipped: coreapi is not installed'],
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'skipped',
      'sections': [],
      'user_properties': [],
      'when': 'setup'},
     {'duration': 0.026836156845092773,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'skipif': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 743, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': None,
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'passed',
      'sections': [],
      'user_properties': [],
      'when': 'teardown'}]
    (Pdb++) pp [unserialize_report('testreport', report) for report in self.testmon_data.fail_reports[nodeid]]
    [<TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='setup' outcome='passed'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='call' outcome='failed'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='teardown' outcome='passed'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='setup' outcome='skipped'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='teardown' outcome='passed'>]
    

    I might have messed up some internals while debugging #101 / #102, but I think it should be ensured that this would never happen, e.g. on the DB level.

    opened by blueyed 9
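
    One shape the suggested guarantee could take, sketched under the assumption that reports are plain dicts keyed by nodeid and phase (this is not testmon's actual code):

    # keep only the newest report per (nodeid, when) phase, so a stale
    # 'failed' call report cannot coexist with a later 'skipped' setup report
    def dedupe_fail_reports(reports):
        latest = {}
        for report in reports:  # assumed to be stored in chronological order
            latest[(report["nodeid"], report["when"])] = report
        return list(latest.values())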
  • combination of --testmon and --tlf will execute failing tests multiple times in some circumstances

    I have a situation where executing pytest --testmon --tlf module1/module2 after deleting .testmondata will yield 3 failures.

    The next run of pytest --testmon --tlf module1/module2 yields 6 failures (3 duplicates of the first 3) and each subsequent run yields 3 more duplicates.

    If I run pytest --testmon --tlf module1/module2/tests I get back to 3 tests; the duplication starts as soon as I remove the final tests component from the path (or run from the root of the repository).

    I've tried to find a simple way to reproduce this but failed so far. I've also tried looking at what changes in .testmondata but didn't spot anything.

    I'd be grateful if you could give me a hint on how I might reproduce or analyze this, as I unfortunately can't share the code at the moment.

    opened by TauPan 9
  • deleting code causes an internal exception

    I see this with pytest-testmon > 0.8.3 (downgrading to this version fixes it for me).

    How to reproduce:

    • delete .testmondata
    • run py.test --testmon once
    • Now delete some code (in my case it was marked with # pragma: no cover, not sure if that's relevant)
    • Call py.test --testmon again and look at a backtrace like the following:
    ===================================================================================== test session starts =====================================================================================
    platform linux2 -- Python 2.7.12, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
    Django settings: sis.settings_devel (from ini file)
    testmon=True, changed files: sis/lib/python/sis/rest.py, skipping collection of 529 items, run variant: default
    rootdir: /home/delgado/nobackup/git/sis/software, inifile: pytest.ini
    plugins: testmon-0.9.4, repeat-0.4.1, env-0.6.0, django-3.1.2, cov-2.4.0
    collected 122 items 
    INTERNALERROR> Traceback (most recent call last):
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 98, in wrap_session
    INTERNALERROR>     session.exitstatus = doit(config, session) or 0
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 132, in _main
    INTERNALERROR>     config.hook.pytest_collection(session=session)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
    INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
    INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
    INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
    INTERNALERROR>     res = hook_impl.function(*args)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 141, in pytest_collection
    INTERNALERROR>     return session.perform_collect()
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 602, in perform_collect
    INTERNALERROR>     config=self.config, items=items)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
    INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
    INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
    INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
    INTERNALERROR>     res = hook_impl.function(*args)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/testmon/pytest_testmon.py", line 165, in pytest_collection_modifyitems
    INTERNALERROR>     assert item.nodeid not in self.collection_ignored
    INTERNALERROR> AssertionError: assert 'sis/lib/python/sis/modules/marvin/tests.py::TestGetEventIDsForAWID::test_should_fail_for_expired_events' not in set(['da-exchanged/lib/python/exchanged/tests/test_config.py::TestConfigOptions::test_should_have_default_from_addr', ...'da-exchanged/lib/python/exchanged/tests/test_config.py::TestGetExchanges::test_should_return_sensible_defaults', ...])
    INTERNALERROR>  +  where 'sis/lib/python/sis/modules/marvin/tests.py::TestGetEventIDsForAWID::test_should_fail_for_expired_events' = <TestCaseFunction 'test_should_fail_for_expired_events'>.nodeid
    INTERNALERROR>  +  and   set(['da-exchanged/lib/python/exchanged/tests/test_config.py::TestConfigOptions::test_should_have_default_from_addr', ...'da-exchanged/lib/python/exchanged/tests/test_config.py::TestGetExchanges::test_should_return_sensible_defaults', ...]) = <testmon.pytest_testmon.TestmonDeselect object at 0x7f85679b82d0>.collection_ignored
    
    ==================================================================================== 185 tests deselected =====================================================================================
    =============================================================================== 185 deselected in 0.34 seconds ================================================================================
    
    opened by TauPan 9
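
    A reproduction sketch of the steps above with made-up file names; whether the # pragma: no cover marker matters is unclear, as the reporter notes:

    # mylib.py -- initial state
    def kept():
        return 1

    def doomed():  # pragma: no cover
        return 2

    # test_mylib.py
    from mylib import kept

    def test_kept():
        assert kept() == 1

    # steps: delete .testmondata, run `py.test --testmon` once, then delete
    # doomed() from mylib.py and run `py.test --testmon` again; on the affected
    # versions this may trigger the AssertionError shown above.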
  • Performance issue on big projects

    We have a big project with a big test suite. When starting pytest with testmon enabled, it takes something like 8 minutes just to start, even though (almost) no tests end up running. A profile dump reveals this:

    Wed Dec  7 14:37:13 2016    testmon-startup-profile
    
             353228817 function calls (349177685 primitive calls) in 648.684 seconds
    
       Ordered by: cumulative time
       List reduced from 15183 to 100 due to restriction <100>
    
       ncalls  tottime  percall  cumtime  percall filename:lineno(function)
            1    0.001    0.001  648.707  648.707 env/bin/py.test:3(<module>)
     10796/51    0.006    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:335(_hookexec)
     10796/51    0.017    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:332(<lambda>)
     11637/51    0.063    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:586(execute)
            1    0.000    0.000  648.612  648.612 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/config.py:29(main)
      10596/2    0.016    0.000  648.612  324.306 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:722(__call__)
            1    0.000    0.000  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/pytest_testmon.py:80(pytest_cmdline_main)
            1    0.000    0.000  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/pytest_testmon.py:70(init_testmon_data)
            1    0.004    0.004  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:258(read_fs)
         4310    1.385    0.000  545.292    0.127 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:224(test_should_run)
         4310    3.995    0.001  542.647    0.126 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:229(<dictcomp>)
      4331550   54.292    0.000  538.652    0.000 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/process_code.py:104(checksums)
            1    0.039    0.039  537.138  537.138 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:273(compute_unaffected)
     73396811   67.475    0.000  484.571    0.000 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/process_code.py:14(checksum)
     73396871  360.852    0.000  360.852    0.000 {method 'encode' of 'str' objects}
            1    0.000    0.000   83.370   83.370 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/main.py:118(pytest_cmdline_main)
            1    0.000    0.000   83.370   83.370 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/main.py:118(pytest_cmdline_main)
    
    

    As you can see, the last line is about 83 seconds cumulative, but the two lines above it are 360 and 484 seconds respectively.

    This hurts our use case a LOT, and since we use a reference .testmondata file that has been produced by a CI job, it seems excessive (and useless) to recalculate this on each machine when it could be calculated once up front.

    So, what do you guys think about caching this data in .testmondata?

    opened by boxed 9
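
    The profile shows most of the time going into re-encoding and checksumming source blocks once per dependent test. An in-process variant of the caching idea raised above might look like this (hypothetical helper, not testmon's actual code):

    # memoize per-block checksums so each unique source block is encoded
    # and hashed once per run instead of once for every test that depends on it
    from functools import lru_cache
    import zlib

    @lru_cache(maxsize=None)
    def block_checksum(block_source: str) -> int:
        return zlib.crc32(block_source.encode("utf-8"))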
  • Failure still reported when whole module gets re-run

    I have just seen this, after marking TestSchemaJSRenderer (which only contains test_schemajs_output) with @pytest.mark.skipif(not coreapi, reason='coreapi is not installed'):

    % .tox/venvs/py36-django20/bin/pytest --testmon
    ==================================================================================== test session starts =====================================================================================
    platform linux -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
    testmon=True, changed files: tests/test_renderers.py, skipping collection of 118 files, run variant: default
    rootdir: /home/daniel/Vcs/django-rest-framework, inifile: setup.cfg
    plugins: testmon-0.9.11, django-3.2.1, cov-2.5.1
    collected 75 items / 1219 deselected                                                                                                                                                         
    
    tests/test_renderers.py F                                                                                                                                                              [  0%]
    tests/test_filters.py F                                                                                                                                                                [  0%]
    tests/test_renderers.py ..............................................Fs                                                                                                               [100%]
    
    ========================================================================================== FAILURES ==========================================================================================
    _________________________________________________________________________ TestSchemaJSRenderer.test_schemajs_output __________________________________________________________________________
    tests/test_renderers.py:753: in test_schemajs_output
        output = renderer.render('data', renderer_context={"request": request})
    rest_framework/renderers.py:862: in render
        codec = coreapi.codecs.CoreJSONCodec()
    E   AttributeError: 'NoneType' object has no attribute 'codecs'
    _________________________________________________________________ BaseFilterTests.test_get_schema_fields_checks_for_coreapi __________________________________________________________________
    tests/test_filters.py:36: in test_get_schema_fields_checks_for_coreapi
        assert self.filter_backend.get_schema_fields({}) == []
    rest_framework/filters.py:36: in get_schema_fields
        assert coreschema is not None, 'coreschema must be installed to use `get_schema_fields()`'
    E   AssertionError: coreschema must be installed to use `get_schema_fields()`
    ________________________________________________________________ TestDocumentationRenderer.test_document_with_link_named_data ________________________________________________________________
    tests/test_renderers.py:719: in test_document_with_link_named_data
        document = coreapi.Document(
    E   AttributeError: 'NoneType' object has no attribute 'Document'
    ====================================================================================== warnings summary ======================================================================================
    None
      [pytest] section in setup.cfg files is deprecated, use [tool:pytest] instead.
    
    -- Docs: http://doc.pytest.org/en/latest/warnings.html
    ======================================================== 3 failed, 46 passed, 1 skipped, 1219 deselected, 1 warnings in 3.15 seconds =========================================================
    

    TestSchemaJSRenderer.test_schemajs_output should not show up in FAILURES.

    Note also that tests/test_renderers is listed twice, where the failure appears to come from the first entry.

    (running without --testmon shows the same number of tests (tests/test_renderers.py ..............................................Fs)).

    opened by blueyed 8
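
    For reference, the change described in the report looks roughly like this (the class and test names come from the output above; the import guard is an assumption about the project layout):

    # tests/test_renderers.py -- sketch of the change that preceded the report
    import pytest

    try:
        import coreapi
    except ImportError:
        coreapi = None

    @pytest.mark.skipif(not coreapi, reason='coreapi is not installed')
    class TestSchemaJSRenderer:
        def test_schemajs_output(self):
            ...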
  • feat: merge DB

    This is a very early draft PR to address this need.

    I had a hard time understanding how testmon works internally, and also how to test this.

    I'll happily take any feedback, so we can agree on an approach and merge this into testmon.

    It would be very helpful for us, as we have multiple CI workers that run tests, and we need to merge the testmon results so they can be reused later.

    opened by ElPicador 0
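
    A generic sketch of the technique the PR is after: merging one SQLite file into another via ATTACH. The table name node_fingerprint below is purely hypothetical and not necessarily testmon's real schema.

    # merge rows from one SQLite file into another; 'node_fingerprint' is a
    # hypothetical table name used only for illustration
    import sqlite3

    def merge_sqlite(target_path, source_path, table="node_fingerprint"):
        con = sqlite3.connect(target_path)
        try:
            con.execute("ATTACH DATABASE ? AS src", (source_path,))
            con.execute(f"INSERT OR IGNORE INTO {table} SELECT * FROM src.{table}")
            con.commit()
            con.execute("DETACH DATABASE src")
        finally:
            con.close()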
  • multiprocessing does not seem to be supported

    Suppose I have the following code and tests

    # mymodule.py
    class MyClass:
        def foo(self):
            return "bar"
    
    # test_mymodule.py
    import multiprocessing
    
    def __run_foo():
        from mymodule import MyClass
        c = MyClass()
        assert c.foo() == "bar"
    
    def test_foo():
        process = multiprocessing.Process(target=__run_foo)
        process.start()
        process.join()
        assert process.exitcode == 0
    

    After a first successful run of pytest --testmon test_mymodule.py, one can change the implementation of foo() without testmon noticing, and no new runs are triggered.

    When running pytest --cov manually, we do get a trace (coverage screenshot attached in the original issue).

    opened by janbernloehr 1
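
    A hedged workaround sketch (editorial): exercise the same code path in-process as well, so the line tracing that dependency tracking relies on can see MyClass.foo() execute; the extra test name is made up.

    # test_mymodule.py -- additional in-process test (workaround sketch)
    from mymodule import MyClass

    def test_foo_in_process():
        # runs in the main process, where execution is traced
        assert MyClass().foo() == "bar"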
  • Changes to global / class variables are ignored (if no method of their module is executed)

    Suppose you have the following files

    # mymodule.py
    foo_bar = "value"
    
    class MyClass:
        FOO = "bar"
    
    # test_mymodule.py
    from mymodule import MyClass, foo_bar
    
    def test_module():
        assert foo_bar == "value"
        assert MyClass.FOO == "bar"
    

    Now running pytest --testmon test_mymodule.py does not rerun test_module() when the value of MyClass.FOO or foo_bar is changed. Even worse, FOO can be completely removed, e.g.

    class MyClass:
        pass
    

    without triggering a re-run.

    When running pytest --cov manually, there does seem to be a trace (coverage screenshot attached in the original issue).

    opened by janbernloehr 2
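
    A hedged workaround sketch (editorial) for the limitation described above: expose the values through functions, so that a change lands inside a code block the tests actually execute; whether this covers every case is an assumption.

    # mymodule.py -- workaround sketch
    def get_foo_bar():
        return "value"

    class MyClass:
        @staticmethod
        def get_foo():
            return "bar"

    # test_mymodule.py
    from mymodule import MyClass, get_foo_bar

    def test_module():
        assert get_foo_bar() == "value"
        assert MyClass.get_foo() == "bar"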
  • Fix/improve linting of the code when using pytest-pylint

    This also includes changed source code files when pytest-pylint is enabled. By default, source code files are ignored, so pylint is not able to process them.

    opened by msemanicky 1
  • Testmon very sensitive towards library changes

    Background

    I have found pytest-testmon to be very sensitive to even the slightest changes in the packages of the environment it is installed in. This makes it very difficult to use pytest-testmon in practice when sharing a .testmondata file between our developers.

    Use Case:

    A developer is trying out a new library foo and has written a wrapper foo_bar.py for it. The developer only wants to run test_foo_bar.py for foo_bar.py; however, the entire test suite is re-run because foo was installed.

    Example Solution

    Flag for disregarding library changes

    • I am happy to contribute a solution, if it is considered feasible.
    • If the library needs funding for a solution, that is an option as well.
    opened by dgot 1
  • How does testmon handle data files?

    If I have an open('path/to/file.json').read() call in my code, would testmon be able to track that file?

    I saw there is something called file tracers in coverage, but I'm not sure whether they are meant for this use case.

    opened by uriva 1
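
    Assuming, as the question suggests, that dependency tracking is based on executed Python code rather than on files opened at runtime, the situation would look roughly like this (file and function names are made up):

    # settings.py
    import json

    def load_settings():
        # editing settings.json alone presumably does not mark tests calling
        # load_settings() as affected, because only executed Python source is
        # fingerprinted, not the data file itself (assumption)
        with open("settings.json") as f:
            return json.load(f)

    # test_settings.py
    from settings import load_settings

    def test_settings_has_debug_flag():
        assert "debug" in load_settings()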
Releases (v1.4.3b1)