Hypothesis is a powerful, flexible, and easy-to-use library for property-based testing.

Overview

Hypothesis

Hypothesis is a family of testing libraries which let you write tests parametrized by a source of examples. A Hypothesis implementation then generates simple and comprehensible examples that make your tests fail. This simplifies writing your tests and makes them more powerful at the same time, by letting software automate the boring bits and do them to a higher standard than a human would, freeing you to focus on the higher level test logic.

This sort of testing is often called "property-based testing". The most widely known implementation of the concept is the Haskell library QuickCheck, but Hypothesis differs significantly from QuickCheck and is designed to fit idiomatically and easily into the testing styles you are already used to, with no familiarity with Haskell or functional programming needed.

Hypothesis for Python is the original implementation, and the only one that is currently fully production ready and actively maintained.

Hypothesis for Other Languages

The core ideas of Hypothesis are language agnostic and in principle it is suitable for any language. We are interested in developing and supporting implementations for a wide variety of languages, but currently lack the resources to do so, so our porting efforts are mostly prototypes.

The two prototype implementations of Hypothesis for other languages are:

  • Hypothesis for Ruby is a reasonable start on a port of Hypothesis to Ruby.
  • Hypothesis for Java is a prototype written some time ago. It's far from feature complete and is not under active development, but was intended to prove the viability of the concept.

Additionally there is a port of the core engine of Hypothesis, Conjecture, to Rust. It is not feature complete but in the long run we are hoping to move much of the existing functionality to Rust and rebuild Hypothesis for Python on top of it, greatly lowering the porting effort to other languages.

Any or all of these could be turned into full-fledged implementations with relatively little effort (no more than a few months of full-time work), but as well as the initial work, this would require someone prepared to provide or fund ongoing maintenance for them to be viable.

Comments
  • Implement characters strategy

    This strategy produces single Unicode characters according to specified rules: you may allow exact characters, or characters that belong to certain Unicode categories. Additionally, you may invert a rule to specify which characters you don't want to get.

    With no arguments, the characters strategy preserves the existing OneCharStringStrategy behaviour of producing all characters except surrogates.

    A characters call with only exact characters given is implicitly converted into a sampled_from strategy.
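    The merged characters() strategy can be exercised like this; the codepoint-range keywords shown here are long-standing parts of the API, while the category/exact keywords have varied across versions:

```python
from hypothesis import given, strategies as st

# Restrict generation to the codepoint range for ASCII digits.
digits = st.characters(min_codepoint=ord("0"), max_codepoint=ord("9"))

@given(digits)
def check_is_digit(c):
    # Every drawn value is a single character in the allowed range.
    assert len(c) == 1 and c.isdigit()

check_is_digit()
```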

    Generally the tests pass, but Travis fails for unrelated reasons. Anything I can fix I'll send in additional PRs.

    Feedback is welcome (:

    opened by kxepal 44
  • Create profile loading mechanism

    Allows users to register settings profiles and load them in as needed, allowing different defaults for different environments.

    Is this kind of what you were thinking for the profile setup?

    This code should enable the following pattern

    Settings.register_profile('dev', max_examples=5, max_shrinks=10, max_iterations=100)
    Settings.register_profile('ci', max_examples=1000, max_shrinks=1000, max_iterations=2000)
    
    Settings.load_profile(os.getenv('HYPOTHESIS_PROFILE', 'default'))
    

    If you think I'm close, I'll iterate with you as needed. If we get to something you like, let me know and I'll write up the docs.
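    This mechanism was merged and survives in today's API (now lowercase settings; max_shrinks and max_iterations have since been removed, so this sketch only sets max_examples):

```python
import os

from hypothesis import settings

# Register per-environment defaults once, e.g. at import time in conftest.py.
settings.register_profile("dev", max_examples=5)
settings.register_profile("ci", max_examples=1000)

# Select a profile via the environment, falling back to the built-in default.
settings.load_profile(os.getenv("HYPOTHESIS_PROFILE", "default"))
```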

    opened by Bachmann1234 44
  • On the possibility of renaming SearchStrategy

    One of my least favourite things about Hypothesis is that it uses the word 'strategy' to describe its data generators. This was an internal name that I unthinkingly propagated into the public API, and was at best vaguely descriptive of what it did back in Hypothesis < 3.0 and is completely irrelevant to its function now if not actively misleading.

    Additionally, it's looking increasingly likely that Hypothesis for other languages is going to start being a thing. If that's the case, I will not be propagating this mistake to them, and it would be good to have Hypothesis-for-Python use the common terminology.

    This feels like a huge, scary change, and it is, but that's mostly a matter of updating the documentation, articles, and user perception. In terms of updating the code:

    1. One of the oddities around how Hypothesis is structured is that the type SearchStrategy is not actually part of the public API. There's nowhere public that users can actually import it from, and users are not allowed to subclass it - all creation of strategies goes through hypothesis.strategies. So nothing really needs to be done there.
    2. The hypothesis.strategies module can easily be renamed and have a module that just imports and re-exports everything in its __all__.
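    The re-export point can be sketched with throwaway modules (the names "newmod"/"oldmod" are stand-ins, not real Hypothesis modules): the old name becomes a shim that re-exports everything in the new module's __all__.

```python
import sys
import types

# Build a stand-in "new" module with one public name.
newmod = types.ModuleType("newmod")
newmod.integers = lambda: "integers()"
newmod.__all__ = ["integers"]
sys.modules["newmod"] = newmod

# The old name becomes a shim that just re-exports __all__.
shim = types.ModuleType("oldmod")
for name in newmod.__all__:
    setattr(shim, name, getattr(newmod, name))
shim.__all__ = list(newmod.__all__)
sys.modules["oldmod"] = shim

import oldmod  # existing user code keeps working unchanged

assert oldmod.integers() == "integers()"
```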

    Set against that relative ease is that we're renaming what is effectively the main entry point to the Hypothesis API. There's a lot of documentation to be updated and we'll be tripping over the terminology for a long time. So if we do this it comes with a significant cognitive burden and a moderate amount of user ill-will.

    So, two questions:

    1. Should we do this? My vote is yes, but I'm sufficiently nervous about the idea that I could easily be talked down.
    2. If so, what should we call it?

    Possible alternative names

    I'm not currently flush with good names for this. I'd like generator or something along those lines, but that would be horrendously confusing in a Python context. Alternatives:

    • Recipe
    • Schema
    • Kitten (we probably shouldn't go with this one)
    • Provider

    Suggestions very much welcome.

    enhancement meta docs 
    opened by DRMacIver 39
  • Implementation of hypothesis.extras.pandas

    This implements an extras module providing pandas core data types.

    It is currently a very provisional pull request. It's probably buggy, probably slow, and definitely incomplete. I'm putting it out early to solicit feedback from the interested, as I don't really use pandas myself so would like some commentary from people who do! Feedback on the API would definitely be particularly appreciated - the end result will probably look more or less like what's currently there, but it's more of a sketch than anything else.

    TODO

    • [x] ~~Merge #826~~ I think we can release without. It will make datetime data types slightly problematic initially, but that will get fixed whenever #826 is released.
    • [x] More documentation, with usage examples
    • [x] Update RELEASE.rst
    • [x] Final sign-off on the API from @sritchie
    • [x] Review and anything that comes up in it
    opened by DRMacIver 38
  • Array API extra

    What this introduces

    This implements strategies for Array API implementations inside hypothesis.extra.array_api (closes #3037). As the Array API is largely based on NumPy behaviour, I imitated hypothesis.extra.numpy where appropriate and so these strategies will hopefully feel familiar to the extra's contributors and users.

    Strategies in array_api do not import array modules, instead taking them as the xp argument. For the most part they assume that the user has indeed provided an Array API-compliant module.

    The strategies would be used by Array API implementers (e.g. @asmeurer's compliance suite) and array-consuming libraries (examples).

    Many tests are based on the test/numpy ones. There is a mock array-API implementation in xputils.py. Tests will try to import NumPy's Array API implementation (numpy/numpy#18585 was merged just today) and will fall back to the mocked one. I couldn't easily mock a "compliant-looking" array object so a particular non-compliance warning is suppressed and some assertions are skipped when necessary.

    cc @MattiP

    Specific requests for feedback

    An immediate concern I have is with how I form pretty ("reflective") reprs for xp consuming strategies. Default use of @defines_strategy (and thus LazyStrategy) is prone to produce some rather noisy repr strings, so I ended up wrapping these strategies with a custom decorator @pretty_xp_repr... it seems to be a hacky solution, especially since it gets called multiple times.

    I'm also wondering if you're happy with the get_strategies_namespace() method that Zac suggested. It could be nice to drop the top-level strategies (which require the array module to be passed) entirely and require users to go through it, which could mitigate confusion. There could also be some magic where a user could import, say, extra.array_api.pytorch and have Hypothesis auto-import a (future) Array API implementation in PyTorch.

    I see the NumPy extra (and thus this PR) violates the house API style. Let me know if for array_api I should take the opportunity to drop potentially undesirable features in arrays(), such as inferring strategies via dtype and passing kwargs via elements.

    The shape/axis/index strategies were implemented mainly to avoid importing NumPy via extra.numpy; this also allows use of Array API naming conventions and removes some NumPy-specific limitations. They near-verbatim emulate those in the NumPy extra, so a future PR could have extra.numpy wrap them to deal with the small differences.

    The Array API is not finalised yet. Some tests may need slight modification in the future if the consortium decides in data-apis/array-api#212 that they don't want polymorphic return values in xp.unique().

    new-feature interop 
    opened by honno 34
  • PyCon Australia 2018 Sprints!

    Hello to everyone at the sprints! This issue is the place to get started, comment to claim an issue (please talk to me first), and so on. Thanks for helping out!

    The "what is Hypothesis" pack: my talk if you like videos, the docs as a general overview, and these quick exercises to try it out.

    All of the following are valued contributions: reading docs or trying to use hypothesis and telling me what you found confusing; adding Hypothesis tests to other open-source projects (e.g. Pandas, dateutil, Xarray, etc - ask me!); new documentation or blog posts or artwork; and of course traditional code whether bugfixes or new features!

    General checklist:

    1. Talk to Zac about what you want to do - I can help you find the right issue (start here) or other way to contribute :smile:
    2. (optional): read CONTRIBUTING.rst and check what's in the guides/ directory for tips.
    3. Comment below so people don't work on the same issue by accident!
    4. Do the thing :snake:
    5. Open a PR :tada:
    meta 
    opened by Zac-HD 34
  • Strategies from type hints, and inference of missing arguments to builds() and @given()

    Closes #293. This pull:

    • adds a new function from_type to look up a strategy that can generate instances of the given type
    • upgrades builds() to infer missing arguments based on type hints
    • upgrades @given to infer missing arguments from type hints
    • adds a new function register_type_strategy to register custom types that can't be automatically derived (based on a known child class or type hints, using builds)

    It's been a long time coming, but I think I'm done. (again 😉)
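    As merged, the inference works on ordinary annotated classes; a minimal sketch (the Point class here is illustrative):

```python
from hypothesis import given, strategies as st

class Point:
    def __init__(self, x: int, y: int):
        self.x, self.y = x, y

# builds() fills in the missing x and y arguments from the type hints;
# st.from_type(Point) resolves the class the same way.
@given(st.builds(Point))
def check_point(p):
    assert isinstance(p.x, int) and isinstance(p.y, int)

check_point()
```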

    opened by Zac-HD 34
  • Support for generating email addresses

    The django integration currently relies on the simple email strategy in provisional.py. It would be nice to have a native email address strategy that is a bit better at hitting edge cases.

    When this issue was first opened, we relied on the fake-factory package for Django email fields, which was substantially slower and less devious than Hypothesis strategies. That has been fixed, but we'd still like to improve the email strategy before making it part of the public API.

    Anyone working on this issue should start with the provisional strategy, move it to strategies.py, and gradually expand it to generate unusual things allowed by the relevant RFCs (see below). Very obscure features - such as >255 character addresses, or using an IP address instead of a domain name - should be avoided, or at least gated behind an optional argument (eg emails(allow_obscure=False)).
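    This work eventually landed as the public st.emails() strategy; minimal usage looks like:

```python
from hypothesis import given, strategies as st

@given(st.emails())
def check_email(address):
    # Every generated address has a non-empty local part and domain.
    local, _, domain = address.rpartition("@")
    assert local and domain

check_email()
```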

    new-feature 
    opened by DRMacIver 34
  • Get check-coverage run time back under control

    The idea of tests/cover is that it's supposed to be a relatively small, fast set of tests that gets 100% coverage, with anything not required for that going in tests/nocover. This idea has been honoured more in the breach than in the observance (including by me), and as a result the coverage check is one of our slowest build jobs. See e.g. this build, where it took 13 minutes.

    It would be nice to get the coverage check under 5 minutes on Travis. I propose many small applications of the following algorithm:

    1. Run "PYTHONPATH=src python -m pytest tests/cover --durations=10"
    2. Pick the slowest test
    3. Move it into nocover.
    4. Run "make check-coverage" (or "tox -e coverage")
    5. If this causes us to have less than 100% coverage, come up with a faster test to cover that line to put in tests/cover.
    6. Open a pull request with that one test move.

    This probably won't be sufficient to get the time under control on its own, but if we get to the point where no individual test takes more than a second, we can start thinking about other measures.

    tests/build/CI 
    opened by DRMacIver 29
  • settings rationalization

    The Hypothesis settings system is currently a confusing mess of confusingness. It asks users to make decisions they can't possibly have sufficient information to make, and ties those decisions very closely to Hypothesis internals.

    I think we should give serious consideration to deprecating most of them. This ties in to #534

    Here is a rough list of what I think about specific settings:

    • buffer_size - very bad. Totally about Hypothesis internals, totally undocumented as to what it really means. Hypothesis should figure out something sensible to do here and then do it.
    • ~~database_file - not intrinsically bad, but redundant with database. May be worth considering making the database API more public and just using the database setting.~~ deprecated in #1196
    • database - good. Sensible operational decision that the user can reasonably have strong opinions about and Hypothesis shouldn't.
    • derandomize - Good though arguably redundant with seed.
    • max_examples - mostly reasonable but confusingly named (conflicts with example decorator while hilariously ignoring it entirely). I feel like having it be a max might also be a bad idea.
    • max_iterations - bad, often trips people up when they forget to set it, there's no real way people could have enough information to set it. Hypothesis should grow a better heuristic here.
    • max_shrinks - tuning heuristic. Only really useful to make testing strategies faster, but crucial there (~10x slowdown in our tests without it).
    • min_satisfying_examples - Same. See also #534 and comments on #518. This is very tied to Hypothesis's internal notion of what an example is.
    • perform_health_check - sensible operational decision in which the user deliberately decides to ignore warnings, but I feel it might be better dropped, asking people to explicitly suppress the health checks they don't want using suppress_health_check.
    • phases - this is a useful feature that I think supplants a lot of the use cases for things like max_shrinks, max_examples, etc and apparently is completely undocumented! That should be fixed.
    • stateful_step_count - totally bad. Why is this even here? Hypothesis should just do something sensible automatically and provide a function argument to override that.
    • ~~strict - good. Sensible operational decision as to how the user wants to respond to Hypothesis deprecations~~ bad - deprecated in favour of normal warning control.
    • suppress_health_check - good. Explicit request to suppress health checks. Totally reasonable to want.
    • ~~timeout - not intrinsically bad from a user point of view but should probably go anyway due to #534~~ now deprecated
    • verbosity - Good. Useful debugging tool, totally sensible thing to want to tune.

    All of this should be done with a very careful deprecation process which preserves the current default behaviour, warns when you execute a behaviour that will change in future, and gives people an option to opt in to the future behaviour.

    Pending Deprecation Plans

    The following are suggested routes for some deprecations:

    • Deprecate perform_health_check=False and suggest suppress_health_check=list(HealthCheck)
    • Deprecate max_iterations, and default to not_set - meaning five times max_examples (the longstanding factor before the default for that was reduced to speed up under coverage).
    • Deprecate stateful_step_count and set it to not_set by default. Add a max_steps property on GenericStateMachine which also defaults to not_set (or maybe None?). Determine actual max_steps in order of "Look on object, look on settings, use a default". When the deprecation plan moves to dropping it,
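    The suggested perform_health_check replacement can already be written directly; a sketch with the current API:

```python
from hypothesis import HealthCheck, given, settings, strategies as st

# Equivalent to the old perform_health_check=False: suppress every
# health check explicitly, so the opt-out is visible at the test site.
@settings(suppress_health_check=list(HealthCheck))
@given(st.integers())
def check(n):
    assert isinstance(n, int)

check()
```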
    enhancement opinions-sought legibility 
    opened by DRMacIver 29
  • Fixing several problems with hypothesis.extra.django

    I really hate it when people submit a monster pull request to my projects. So... I decided to submit a monster pull request to yours.

    This improves hypothesis.extra.django in the following ways:

    • Allows developers to override the default field mappings using add_default_field_mapping (previously impossible).
    • Performs a full_clean() on all models before saving, ensuring they're valid and allowing the model to perform its full lifecycle.
    • Provides public strategies for generating data for many django model field types. For example, you can get data for a SlugField using slug_field_values(MyModel._meta.get_field("some_field"))
    • Provides a public strategy for generating data for any registered field field_values(MyModel._meta.get_field("some_field"))
    • Provides a defines_field_strategy decorator for easily creating new field strategies.
    • Minimizes the size of generated data by biasing towards model default fields, configurable via the __default_bias argument to models(). Non-trivial models previously overflowed the data buffer frequently, triggering health checks.
    • Added built-in support for UrlField, DateField, TimeField and SlugField mapping.
    • Multi-db support for models() strategy via __db parameter.
    • Ability to use add_default_field_mapping as a decorator to a strategy factory.

    Fixes bugs:

    • Uncaught DataErrors no longer blowing up tests.
    • Fields no longer silently truncated by unicode null.
    • EmailField strategy no longer generates values that don't pass EmailValidator.

    Potential "breaking" changes:

    • The default_value strategy used to omit values from the model data dict. Now it includes the field's default value. This is now in line with what the public docs actually say, but it's still different from current behaviour. I'd class this as a bugfix.

    To do:

    • Improve performance of UrlField and EmailField implementations.
    • Update documentation.
    • Fix formatting errors.

    I'll happily tackle these todos if I get some sort of positive feedback on these changes. :)

    opened by etianen 29
  • New method: `@example(...).xfail()`

    Closes #3530, as @rsokl and I discussed a few weeks ago.

    Plus a small fix so that KeyboardInterrupt (and other exceptions not treated as test failure) interrupt immediately as they do for generated examples.

    new-feature 
    opened by Zac-HD 0
  • Add `@example(...).xfail(...)` to check inputs which are expected to fail

    A classic error when testing is to write a test function that can never fail, even on inputs that aren't allowed or manually provided. @rsokl mentioned a nice design pattern for @pytest.mark.parametrize() of including an xfailed param to check that the test can fail; and after some discussion we agreed that this would be nice to have in Hypothesis too.

    So: following #3516, I'd like to add an @example(...).xfail(condition: bool = True, *, reason: str = "", raises: type[BaseException] | tuple[...] = BaseException) - matching the interface of pytest.mark.xfail() (omitting the subset that doesn't make sense here). Implementation-wise, this will return self if condition is False, otherwise return an instance of a new XfailExample subclass which is known and handled by the execution logic in core.py. If the exception raised is not an instance of raises and our failure_exceptions_to_catch() it propagates as usual; if nothing is raised then we raise an error (using pytest.xfail() if available). Naturally we'll also support .via().xfail() and .xfail().via() 😁
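    As merged in recent Hypothesis versions, the decorator reads like this (a minimal sketch):

```python
from hypothesis import example, given, strategies as st

# n=0 is expected to fail with ZeroDivisionError; all generated
# positive inputs must pass as usual.
@example(0).xfail(raises=ZeroDivisionError)
@given(st.integers(min_value=1))
def check_reciprocal(n):
    assert 1 / n > 0

check_reciprocal()
```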

    new-feature 
    opened by Zac-HD 0
  • Improve reporting of failing examples

    It's been a while since we revisited the way that Hypothesis reports failing examples, but I think it's time:

    • We'll need to revisit reporting in order to support https://github.com/HypothesisWorks/hypothesis/issues/3411, and it'd be nice to get that ready before the engine changes both to break the work into smaller chunks, and to get improvements into users' hands as soon as possible.
    • I'd like to encourage more use of the @example() decorator, perhaps with .via() on Python 3.9+, and reporting failures in that format seems like an effective way to do so - and makes reproducing a mere matter of copy-pasting when the database doesn't make it fully automatic.

    For both of these reasons, using the __repr__ of custom objects is unsatisfying, because they often - and by default - don't consist of executable code which would return an equivalent object. Happily, our friends over at CrossHair recently solved this problem in a way I think we can imitate: represent custom objects by (recursively) representing the call that we executed in order to create them!

    You can see their implementation here; in short we can store a tree of fragments (via get_pretty_function_description() etc.) in a dict keyed off the ID of the object, and spanning the lifetime of that example. For anything not in the dict, i.e. where the function call was not executed by Hypothesis builds() or .map() (or flatmap, or recursive, or etc.), we'll fall back to our existing pretty-printing code. One notable divergence: I'll want to store the start and end span of the underlying buffer that generated each fragment, since that's a key component of #3411.
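    A toy sketch of that recording idea (names here are illustrative, not Hypothesis internals): remember the call that built each object, keyed by id(), and prefer it over __repr__.

```python
# Map id(obj) -> the source-like call that created obj.
_call_reprs = {}

def record(fn, *args):
    obj = fn(*args)
    args_repr = ", ".join(repr_of(a) for a in args)
    _call_reprs[id(obj)] = f"{fn.__name__}({args_repr})"
    return obj

def repr_of(obj):
    # Fall back to the ordinary repr for anything we didn't build.
    return _call_reprs.get(id(obj), repr(obj))

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = record(Point, 1, 2)
nested = record(Point, p, 3)
assert repr_of(p) == "Point(1, 2)"
assert repr_of(nested) == "Point(Point(1, 2), 3)"
```

    Note that entries keyed by id() are only valid while the objects are alive, which matches the "spanning the lifetime of that example" caveat above.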

    enhancement legibility 
    opened by Zac-HD 0
  • Strategy `from_type` fails in some cases when using `tuple` instead of `Tuple` as the type

    Here is a simple example:

    from dataclasses import dataclass
    from typing import Tuple
    from hypothesis import given, strategies as st
    
    @dataclass
    class ItWorks:
        a: dict[Tuple[int, int], str]
    
    @dataclass
    class ItDoesnt:
        a: dict[tuple[int, int], str]
    
    @given(st.from_type(ItWorks))
    def test_works(x: ItWorks):
        assert len(x.a) >=0
    
    
    @given(st.from_type(ItDoesnt))
    def test_doesnt_work(x: ItDoesnt):
        assert len(x.a) >=0 
    

    The difference between classes ItWorks and ItDoesnt is that the latter uses tuple instead of typing.Tuple. The first test passes, the second one produces the exception:

    _______________________________________________ test_doesnt_work ______________________________________________
    
        @given(st.from_type(ItDoesnt))
    >   def test_doesnt_work(x: ItDoesnt):
    
    tests/test_demo.py:19: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    .venv/lib/python3.10/site-packages/hypothesis/core.py:568: in process_arguments_to_given
        search_strategy.validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/strategies.py:417: in validate
        self.do_validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/collections.py:43: in do_validate
        s.validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/strategies.py:417: in validate
        self.do_validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/strategies.py:821: in do_validate
        self.mapped_strategy.validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/strategies.py:417: in validate
        self.do_validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/lazy.py:131: in do_validate
        w.validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/strategies.py:417: in validate
        self.do_validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/strategies.py:821: in do_validate
        self.mapped_strategy.validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/strategies.py:417: in validate
        self.do_validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/collections.py:43: in do_validate
        s.validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/strategies.py:417: in validate
        self.do_validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/lazy.py:131: in do_validate
        w.validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/core.py:871: in validate
        fixed_dictionaries(self.kwargs).validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/strategies.py:417: in validate
        self.do_validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/lazy.py:131: in do_validate
        w.validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/strategies.py:417: in validate
        self.do_validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/strategies.py:821: in do_validate
        self.mapped_strategy.validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/strategies.py:417: in validate
        self.do_validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/collections.py:43: in do_validate
        s.validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/strategies.py:417: in validate
        self.do_validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/lazy.py:131: in do_validate
        w.validate()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/strategies.py:418: in validate
        self.is_empty
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/strategies.py:136: in accept
        recur(self)
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/strategies.py:132: in recur
        mapping[strat] = getattr(strat, calculation)(recur)
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/deferred.py:65: in calc_is_empty
        return recur(self.wrapped_strategy)
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/deferred.py:35: in wrapped_strategy
        result = self.__definition()
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/core.py:1047: in <lambda>
        lambda thing: deferred(lambda: _from_type(thing)),
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/core.py:1167: in _from_type
        return types.from_typing_type(thing)
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/types.py:425: in from_typing_type
        if not any(
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    .0 = <list_iterator object at 0x7f017187cfd0>
    
        if not any(
    >       isinstance(T, type) and issubclass(int, T)
            for T in list(union_elems) + [elem_type]
        ):
    E   TypeError: issubclass() argument 2 cannot be a parameterized generic
    
    .venv/lib/python3.10/site-packages/hypothesis/strategies/_internal/types.py:426: TypeError
    

    Since it is now recommended to use the builtin generic tuple directly rather than the typing.Tuple type, this is a problem.

    Note: apparently this happens only when the tuple is used as a key in a dict.

    bug 
    opened by jldiaz 0
Releases: hypothesis-python-6.61.2
Owner: Hypothesis (Test faster, fix more)