pyresttest

What Is It?

  • A REST testing and API microbenchmarking tool
  • Tests are defined in basic YAML or JSON config files, no code needed
  • Minimal dependencies (pycurl, pyyaml, optionally future), making it easy to deploy on-server for smoketests/healthchecks
  • Supports generate/extract/validate mechanisms to create full test scenarios
  • Returns exit codes on failure, to slot into automated configuration management/orchestration tools (also supplies parseable logs; see the example after this list)
  • Logic is written and extensible in Python
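
For example, a deploy or health-check script could gate on that exit code (the file name here is hypothetical):

pyresttest https://api.example.com smoketest.yaml && echo "Smoketest passed"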

Status

NEW: Full Python 3 Support in Alpha - download it, 'pip install future' and give it a try!

Apache License, Version 2.0


Join the chat at https://gitter.im/svanoort/pyresttest

Changelog shows the past and present, milestones show the future roadmap.

  • The changelog will also show features/fixes merged to the master branch but not yet released to PyPI (pending installation tests across platforms).

Installation

PyRestTest works on Linux or Mac with Python 2.6, 2.7, or 3.3+ (with module 'future' installed)

First we need to install the python-pycurl package:

  • Ubuntu/Debian: (sudo) apt-get install python-pycurl
  • CentOS/RHEL: (sudo) yum install python-pycurl
  • Alpine: (sudo) apk add curl-dev
  • Mac: don't worry about it
  • Other platforms: unsupported. You may get it to work by installing pycurl & pyyaml manually, plus 'future' for Python 3 (see the command after this list), but no guarantees. The native package is needed because the pycurl dependency may fail to install via pip; in very rare cases you may also need to install python-pyyaml manually if pip cannot install it correctly.
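
For that manual fallback, the pip equivalent would look like this (only 'future' is Python 3 specific):

pip install pycurl pyyaml future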

It is easy to install the latest release by pip: (sudo) pip install pyresttest (also install 'future' if on Python 3)

If pip isn't installed, we'll need to install it first:

  • Ubuntu/Debian: (sudo) apt-get install python-pip
  • CentOS/RHEL: (sudo) yum install python-pip
  • Mac OS X with homebrew: brew install python (it's included)
  • Or with just python installed: wget https://bootstrap.pypa.io/get-pip.py && sudo python get-pip.py

Releases occur every few months; if you want to use unreleased features, it's easy to install from source (see the Change Log for feature status):

git clone https://github.com/svanoort/pyresttest.git
cd pyresttest
sudo python setup.py install

The master branch tracks the latest; it is unit tested, but less stable than the releases (the 'stable' branch tracks tested releases).

Troubleshooting Installation

Almost all installation issues are due to problems with PyCurl and PyCurl's native libcurl bindings. It is easy to check if PyCurl is installed correctly:

python -c 'import pycurl'

If this returns without error, pycurl is installed; if you see an ImportError or similar, it isn't. You may also want to verify the pyyaml installation, since that can fail to install via pip in rare circumstances.
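
The equivalent check for pyyaml (the module imports as 'yaml'):

python -c 'import yaml'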

Error installing by pip

__main__.ConfigurationError: Could not run curl-config: [Errno 2] No such file or directory

This is caused by libcurl not being installed or recognized: first install pycurl using native packages as above. Alternately, try installing just the libcurl libraries:

  • On Ubuntu/Debian: sudo apt-get install libcurl4-openssl-dev
  • On CentOS/RHEL: yum install libcurl-devel

VirtualEnv installation

PyCurl should install via pip, but sometimes has issues binding to libcurl. Manually copying in a working system pycurl installation may help:

cp /usr/lib/python2.7/dist-packages/pycurl* env/local/lib/python2.7/site-packages/

Sample Test

This example will check that the API accepts operations, and will smoketest an application:

---
- config:
    - testset: "Basic tests"
    - timeout: 100  # Increase timeout from the default 10 seconds
- test: 
    - name: "Basic get"
    - url: "/api/person/"
- test: 
    - name: "Get single person"
    - url: "/api/person/1/"
- test: 
    - name: "Delete a single person, verify that works"
    - url: "/api/person/1/"
    - method: 'DELETE'
- test: # create entity by PUT
    - name: "Create/update person"
    - url: "/api/person/1/"
    - method: "PUT"
    - body: '{"first_name": "Gaius","id": 1,"last_name": "Baltar","login": "gbaltar"}'
    - headers: {'Content-Type': 'application/json'}
    - validators:  # This is how we do more complex testing!
        - compare: {header: content-type, comparator: contains, expected: 'json'}
        - compare: {jsonpath_mini: 'login', expected: 'gbaltar'}  # JSON extraction
        - compare: {raw_body: "", comparator: contains, expected: 'Baltar'}  # Tests on raw response
- test: # create entity by POST
    - name: "Create person"
    - url: "/api/person/"
    - method: "POST"
    - body: '{"first_name": "William","last_name": "Adama","login": "theadmiral"}'
    - headers: {Content-Type: application/json}
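
Assuming this is saved as sample-test.yaml and the app under test runs locally on port 8000 (both hypothetical), you could run it with:

pyresttest http://localhost:8000 sample-test.yaml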

Examples

  • The Quickstart should be everyone's starting point
  • Here's a really good example of how to create a user and then run tests against it.
    • This shows how to use extraction from responses, templating, and different test types
  • If you're trying to do something fancy, take a look at the content-test.yaml.
    • This shows most kinds of templating & variable uses. It shows how to read a request body from a file, using a variable in the file path, and templating its content!
  • PyRestTest isn't limited to JSON; there's an example for submitting form data
  • There's a whole folder of example tests to help get started

How Do I Use It?

Running A Simple Test

Run a basic test of the github API:

pyresttest https://api.github.com examples/github_api_smoketest.yaml

Using JSON Validation

A simple set of tests showing how JSON validation can be used to check the contents of a response. The test includes both successful and unsuccessful validations using the GitHub API.

pyresttest https://api.github.com examples/github_api_test.yaml

(For help: pyresttest --help)

Interactive Mode

Same as the other test but running in interactive mode.

pyresttest https://api.github.com examples/github_api_test.yaml --interactive true --print-bodies true

Verbose Output

pyresttest https://api.github.com examples/github_api_test.yaml --log debug

Other Goodies

  • Simple templating of HTTP request bodies, URLs, and validators, with user variables
  • Generators to create dummy data for testing, with support for easily writing your own
  • Sequential tests: extract info from one test to use in the next (see the sketch after this list)
  • Import test sets in other test sets, to compose suites of tests easily
  • Easy benchmarking: convert any test to a benchmark, by changing the element type and setting output options if needed
  • Lightweight benchmarking: ~0.3 ms of overhead per request, and plans to reduce that in the future
  • Accurate benchmarking: network measurements come from native code in LibCurl, so test overhead doesn't alter them
  • Optional interactive mode for debugging and demos
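
A minimal sketch of sequential tests, combining extraction and templating (names are hypothetical, and it assumes the create call echoes back the new entity's id):

---
- test:
    - name: "Create person"
    - url: "/api/person/"
    - method: "POST"
    - body: '{"first_name": "Kara", "last_name": "Thrace", "login": "starbuck"}'
    - headers: {Content-Type: application/json}
    - extract_binds:
        - 'person_id': {jsonpath_mini: 'id'}  # capture the id from the JSON response
- test:
    - name: "Fetch the person we just created"
    - url: {template: "/api/person/$person_id/"}  # reuse the extracted variable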

Basic Test Set Syntax

As you can see, tests are defined in YAML format.

There are 5 top level test syntax elements:

  • url: a simple test, fetches given url via GET request and checks for good response code
  • test: a fully defined test (see below)
  • benchmark: a fully defined benchmark (see below)
  • config or configuration: overall test configuration (timeout is the most common option)
  • import: import another test set file so you Don't Repeat Yourself

Import example

---
# Will load the test sets from miniapp-test.yaml and run them
# Note that this will run AFTER the current test set is executed
# Also note that imported tests get a new Context: any variables defined will be lost between test sets
- import: examples/miniapp-test.yaml

Imports are intended to let you create top-level test suites that run many independent, isolated test scenarios (test sets). They may also be used to create sample data or perform cleanup as long as you don't rely on variables to store this information. For example, if one testset creates a user for a set of scenarios, tests that rely on that user's ID need to start by querying the API to get the ID.
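
For instance, a top-level suite might simply compose several isolated scenarios (file names hypothetical):

---
# Each imported test set runs with its own fresh Context
- import: examples/user-tests.yaml
- import: examples/order-tests.yaml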

Url Test With Timeout

A simple URL test is equivalent to a basic GET test with that URL. This example also shows how to use the timeout option in the testset config to decrease the default timeout from 10 seconds to 1.

---
- config:
    - testset: "Basic tests"
    - timeout: 1
- url: "/api/person/"  # This is a simple test
- test: 
    - url: "/api/person/"  # This does the same thing

Custom HTTP Options (special curl settings)

For advanced cases (example: SSL client certs), sometimes you will want to use custom Curl settings that don't have a corresponding option in PyRestTest.

This is easy to do: for each test, you can specify custom Curl arguments with 'curl_option_optionname', where 'optionname' (case-insensitive) is a Curl Easy Option name with the 'CURLOPT_' prefix removed.

For example, to follow redirects up to 5 times (CURLOPT_FOLLOWLOCATION and CURLOPT_MAXREDIRS):

---
- test: 
    - url: "/api/person/1"
    - curl_option_followlocation: True
    - curl_option_maxredirs: 5  

Note that while option names are validated, no validation is done on their values.
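
For the SSL client-certificate case mentioned above, a sketch might look like this (paths are hypothetical; CURLOPT_SSLCERT and CURLOPT_SSLKEY are the underlying Curl options):

- test:
    - url: "/api/secure/"
    - curl_option_sslcert: '/path/to/client.pem'   # client certificate (hypothetical path)
    - curl_option_sslkey: '/path/to/client.key'    # matching private key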

Syntax Limitations

  • Whenever possible, the YAML configuration handler tries to convert variable types as needed. We're all responsible adults: don't do anything crazy and it will play nicely.
  • Only a handful of elements can use dynamic variables (URLs, headers, request bodies, validators) - there are plans to change this in the next few releases.
  • The templating is quite limited (it does simple string substitution; see the sketch after this list). There are plans to improve this in the next few releases, but it isn't there yet.
  • One caveat: if you define the same element (for example, a URL) twice in the same enclosing element, the last value wins; this rule is applied consistently to preserve sanity.
  • No support for "for-each" on requests/responses natively - this can be done via custom extensions, and may be available in the distant future but it's a while out.
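
A minimal sketch of that simple substitution, binding a variable in config and templating it into a URL (the variable name is hypothetical):

---
- config:
    - variable_binds: {person_id: '1'}
- test:
    - url: {template: "/api/person/$person_id/"}  # $person_id is replaced as a plain string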

Benchmarking?

Oh, yes please! PyRestTest allows you to collect low-level network performance metrics from Curl itself.

Benchmarks are based off of tests: they extend the configuration elements in a test, allowing you to configure the REST call similarly. However, they do not perform validation on the HTTP response; instead, they collect metrics.

There are a few custom configuration options specific to benchmarks:

  • warmup_runs: (default 10 if unspecified) run the benchmark calls this many times before starting to collect data, to allow for JVM warmup, caching, etc
  • benchmark_runs: (default 100 if unspecified) run the benchmark this many times to collect data
  • output_file: (default is None) file name to write benchmark output to; it is overwritten with each run, and if none is given, output goes to the terminal only
  • output_format: (default CSV if unspecified) format to write the results in ('json' or 'csv'). More on this below.
  • metrics: which metrics to gather (explained below), MUST be specified or benchmark will do nothing

Metrics

There are two ways to collect performance metrics: raw data, and aggregated stats. Each metric may yield raw data, plus one or more aggregate values.

  • Raw Data: returns an array of values, one for each benchmark run
  • Aggregates: runs a reduction function to return a single value over the entire benchmark run (median, average, etc)

To return raw data, in the 'metrics' configuration element, simply input the metric name in a list of values. The example below will return raw data for total time and size of download (101 values each).

- benchmark: # create entity
    - name: "Basic get"
    - url: "/api/person/"
    - warmup_runs: 7
    - 'benchmark_runs': '101'
    - output_file: 'miniapp-benchmark.csv'
    - metrics:
        - total_time
        - size_download

Aggregates are pretty straightforward:

  • mean or mean_arithmetic: arithmetic mean of data (normal 'average')
  • mean_harmonic: harmonic mean of data (useful for rates)
  • median: median, the value in the middle of sorted result set
  • std_deviation: standard deviation of values, useful for measuring how consistent they are
  • total or sum: total up the values given

Currently supported metrics are listed below, and these are a subset of Curl get_info variables. These variables are explained here (with the CURLINFO_ prefix removed): curl_easy_get_info documentation

Metrics: 'appconnect_time', 'connect_time', 'namelookup_time', 'num_connects', 'pretransfer_time', 'redirect_count', 'redirect_time', 'request_size', 'size_download', 'size_upload', 'speed_download', 'speed_upload', 'starttransfer_time', 'total_time'

Benchmark report formats:

CSV is the default report format. CSV output will include:

  • Benchmark name
  • Benchmark group
  • Benchmark failure count (raw HTTP failures)
  • Raw data arrays, as a table, with headers being the metric name, sorted alphabetically
  • Aggregates: a table of results in the format of (metricname, aggregate_name, result)

In JSON, the data is structured slightly differently:

{"failures": 0,
"aggregates":
    [["metric_name", "aggregate", "aggregateValue"] ...],
"failures": failureCount,
"group": "Default",
"results": {"total_time": [value1, value2, etc], "metric2":[value1, value2, etc], ... }
}

Samples:

---
- config:
    - testset: "Benchmark tests using test app"

- benchmark: # create entity
    - name: "Basic get"
    - url: "/api/person/"
    - warmup_runs: 7
    - 'benchmark_runs': '101'
    - output_file: 'miniapp-benchmark.csv'
    - metrics:
        - total_time
        - total_time: mean
        - total_time: median
        - size_download
        - speed_download: median

- benchmark: # create entity
    - name: "Get single person"
    - url: "/api/person/1/"
    - metrics: {speed_upload: median, speed_download: median, redirect_time: mean}
    - output_format: json
    - output_file: 'miniapp-single.json'

RPM-based installation

Pure RPM-based install?

It's easy to build and install from RPM:

Building the RPM:

python setup.py bdist_rpm  # Build RPM
find -iname '*.rpm'   # Gets the RPM name

Installing from RPM

sudo yum localinstall my_rpm_name
sudo yum install PyYAML python-pycurl  # If using python3, needs 'future' too
  • You need to install PyYAML & PyCurl manually because Python distutils can't translate python dependencies to RPM packages.

Gotcha: Python distutils adds a dependency on your major Python version. This means you can't build an RPM for a system with Python 2.6 on a Python 2.7 system.

Building an RPM for RHEL 6/CentOS 6

You'll need to install rpm-build, and then it should work.

sudo yum install rpm-build

Project Policies

  • PyRestTest uses the Github flow
    • The master branch is an integration branch for mature features
    • Releases are cut periodically from master (every 3-6 months generally, or more often if breaking bugs are present) and published to PyPI
    • Feature development is done in feature branches and merged to master by PR when tested (validated by continuous integration in Jenkins)
    • The 'stable' branch tracks the last release; use this if you want to run PyRestTest from source
  • The changelog is here; it shows past releases, plus features merged to master for the next release but not yet published
  • Testing: tested on Ubuntu 14/Python 2.7 and CentOS 6/Python 2.6, plus Debian Wheezy for Python 3.4.3
  • Releases occur every few months to PyPI once a few features are ready to go
  • PyRestTest uses Semantic Versioning 2.0
  • Back-compatibility is important! PyRestTest makes a strong effort to maintain command-line and YAML format back-compatibility since 1.0.
    • Extension method signatures are maintained as much as possible.
    • However, internal python implementations are subject to change.
    • Major version releases (1.x to 2.x, etc) may introduce breaking API changes, but only with a really darned good reason, and only if there's not another way.

Feedback and Contributions

We welcome any feedback you have, including pull requests, reported issues, etc!

For new contributors there is a whole set of issues labelled help wanted, which are excellent starting points for a contribution!

For instructions on how to set up a dev environment for PyRestTest, see building.md.

For pull requests to get easily merged, please:

  • Include unit tests (and functional tests, as appropriate) and verify that run_tests.sh passes
  • Include documentation as appropriate
  • Attempt to adhere to PEP8 style guidelines and project style

Bear in mind that this is largely a one-man, outside-of-working-hours effort at the moment, so response times will vary. That said: every feature request gets heard, and even if it takes a while, all the reasonable features will get incorporated. If you fork the main repo, check back periodically... you may discover that the next release includes something to meet your needs and then some!

FAQ

Why not pure-python tests?

  • This is written for an environment where Python is not the sole or primary language
  • You totally can do pure-Python tests if you want!
    • Extensions provide a stable API for adding more complex functionality in python
    • All modules can be imported and used as libraries
    • Gotcha: the project is still young, so internal implementation may change often, much more than YAML features

Why YAML and not XML/JSON?

  • XML is extremely verbose and has many gotchas for parsing
  • You CAN use JSON for tests, since it's a subset of YAML; see miniapp-test.json for an example, and the sketch after this list.
  • YAML tends to be the most concise, natural, and easy to write of these three options
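
A minimal sketch of the JSON equivalent, mirroring the YAML list-of-maps structure (illustrative only; see miniapp-test.json for a real example):

[
    {"config": [{"testset": "Basic tests"}]},
    {"url": "/api/person/"}
]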

Does it do load tests?

  • No, this is a separate niche and there are already many excellent tools to fill it
  • Adding load testing features would greatly increase complexity
  • But some form might come eventually!

Why do you use PyCurl and not requests?

  • Maybe eventually. PyRestTest needs the low-level features of PyCurl for benchmarking and benefits from its performance. However, we may eventually abstract some of the core testing features away to allow for pure-Python execution.
Comments
  • JMESPath Extractor - in progress under marklz

    What: As a pyresttest user who needs to do more detailed JSON analysis, I would like to be able to use full jsonpath support in working with request bodies (extraction for variables and content analysis). This should be available if the library is installed and fail if not.

    How: I would like to add a jsonpath-rw extractor, set up to auto-register in the same way as the jsonschema validator.

    It will also require extending the autoload-extensions logic to iterate through a list of extensions. For now, that's sufficient, but it might be desirable to try to autoload all of the 'ext' folder content.

    This is an isolated task, easy enough for someone to execute as a PR.

    enhancement 
    opened by svanoort 24
  • pyresttest speed issue

    We're evaluating using requests instead of pyresttest because we've had issues with speed.

    My login.yaml test takes about 24 seconds to run using pyresttest, while a more extended test using the requests module, no yaml, and unittest2 takes about 6 seconds. Has anyone else noticed this too?

    I'm speculating, but the slowdown in pyresttest may be due to the initial processing prior to sending the first curl request. After stepping through the program with a breakpoint on curl.perform() on line 350, it took about 4 to 5 seconds before reaching the curl statement. Is there a way to speed this up?

    bug 
    opened by nitrocode 16
  • Add Junit output support

    I've added some code to be able to output a JUnit XML file. Thought you could find this interesting. I've added the --junit PATH_TO_FILE option to the command line args. If the option is present, an XML file is created at PATH_TO_FILE. I've used the cElementTree module, included in the Python standard library since 2.5.

    This is a little bit raw yet, but I can make it more robust if you consider adding this feature to pyresttest.

    Cheers ! Bastien.

    opened by b4nst 11
  • Support for HEAD and PATCH

    Are you planning to add support for the HEAD and PATCH HTTP methods? Currently, trying to use those methods results in an HTTP 400, i.e. Bad Request.

    bug 
    opened by ksramchandani 11
  • Using pyresttest as library and trying to execute two or more yaml files in parallel, causes issues.

    steps: try to run multiple yaml files in parallel.

    result: test runs interfere with each other and cause test runs to fail.

    bug needs more information 
    opened by akshay059 8
  • Unable to send body with a DELETE request

    Consider the following template which is used to delete an entity from DB.

    - test:
        - group: "Basic Test"
        - name: "Delete 10.24.39.202"
        - url: "/v1/switch"
        - method: 'DELETE'
        - expected_status: [200]
        - body: '{"ip_address": "10.24.39.202"}'
        - headers: {'Authorization': 'Basic ZGV2ZWw6WjNGSmVFTlliMUJvYlVSclVrOWhXRVp3ZDIwNFUzQktSekpzTjBnMGVYVT0=',
                    'Content-Type': 'application/json'}
    

    This doesn't work and I get the following error while decoding the body:

      Traceback (most recent call last):
      File "/home/stack/git/server/switch.py", line 1880, in delete
        request = json.loads(self.request.body)
      File "/usr/lib/python2.7/json/__init__.py", line 338, in loads
        return _default_decoder.decode(s)
      File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
      File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
        raise ValueError("No JSON object could be decoded")
    ValueError: No JSON object could be decoded
    

    From the changelog, pyresttest supports a body in DELETE requests:

    1.6.0 Mon Oct 12 07:30:00 2015 -0400
    BETA: support setting request body on all request types, if present
    Allows (for example) DELETE methods that set a request body
    Caveat: does not set length if 0
    
    bug 
    opened by spradeepv 8
  • [Bug?] POST using x-www-form-urlencoded causes issue with templating

    When I use POST, I saw that these 2 lines had different results. Is this a bug?

    # this works
        - body: '[email protected]&password=test'
    # this does not
        - body: '{"username": "[email protected]", "password": "test"}'
    

    I also tried templating with the working line but it did not have the correct results.

    - config:
        - variable_binds: {'user': '[email protected]', 'pass': 'test'}
    - test
        ...
        - body: {template: 'username=$user&password=$pass'}
    

    Tried using the following flags but wasn't able to do any further debugging:

    --print-bodies=true --print-headers=true --ssl-insecure --verbose --log=debug

    The only example using x-www-form-urlencoded doesn't use templating. Am I templating poorly?

    question needs more information 
    opened by nitrocode 8
  • ImportError: No module named 'past'

    Stack trace

    resttest.py http://localhost:5000 test.yaml
    Traceback (most recent call last):
      File "/sites/Flask-Scaffold/venv-3.4/bin/resttest.py", line 3, in <module>
        from pyresttest import resttest
      File "/sites/Flask-Scaffold/venv-3.4/lib/python3.4/site-packages/pyresttest/resttest.py", line 25, in <module>
        from past.builtins import basestring
    ImportError: No module named 'past'

    opened by Leo-G 7
  • Anyone help me to write yaml for following GET json response

    YAML File:

    - config:
        - testset: "Basic tests"
    - test: # create entity
        - name: "Basic get"
        - url: "/nitro/v1/config/af_config_info"
        - auth_username: "testuser"
        - auth_password: "testuser"
        - validators:
            - extract_test: {jsonpath_mini: "0.af_config_info", test: "exists"}

    ERROR:Test Failed: Basic get URL=http://localhost:80/rest/v1/config/af_config_info Group=Default HTTP Status Code: 200
    ERROR:Test Failure, failure type: Validator Failed, Reason: Extract and test validator failed on test: exists(None)
    ERROR:Validator/Error details:Extractor: Extractor Type: jsonpath_mini, Query: "0.af_config_info", Templated?: False
    ERROR:Test Failure, failure type: Validator Failed, Reason: Extract and test validator failed on test: exists(None)

    For the following JSON RESPONSE BODY:

    { "errorcode": 0, "message": "Done", "additionalInfo": { "cert_present": "false" }, "af_config_info": [ { "propkey": "NS_INSIGHT_LIC", "propvalue": "1" }, { "propkey": "CB_DEPLOYMENT", "propvalue": "FALSE" }, { "propkey": "NS_DEPLOYMENT", "propvalue": "TRUE" }, { "propkey": "CR_ENABLED", "propvalue": "0" }, { "propkey": "SLA_ENABLED", "propvalue": "1" }, { "propkey": "URL_COLLECTION_ENABLED", "propvalue": "1" }, { "propkey": "HTTP_DOMAIN_ENABLED", "propvalue": "1" }, { "propkey": "USER_AGENT_ENABLED", "propvalue": "1" }, { "propkey": "HTTP_REQ_METHOD_ENABLED", "propvalue": "1" }, { "propkey": "HTTP_RESP_STATUS_ENABLED", "propvalue": "1" }, { "propkey": "OPERATING_SYSTEM_ENABLED", "propvalue": "1" }, { "propkey": "REPORT_TIMEZONE", "propvalue": "local" }, { "propkey": "SERVER_UTC_TIME", "propvalue": "1438947611" }, { "propkey": "MEDIA_TYPE_ENABLED", "propvalue": "1" }, { "propkey": "CONTENT_TYPE_ENABLED", "propvalue": "1" } ] }

    question needs more information 
    opened by ksreddy543 7
  • Default timeout causing issues

    Trying to execute a yaml file with pyresttest which issues HTTP requests. Because of the default timeout value (10s) set in tests.py, the operation is timing out.

    opened by surekasri 6
  • Unable to run pyresttest

    Hi There,

    I've never used Python, but this project is really interesting. I am trying to test services written in express.js. I am able to access the YAML file from a browser URL, but when I try to access it using --URL and --test, somehow resttest.py is not able to find the YAML file.

    Any help on understanding how resttest.py is trying to find a file would be greatly appreciated.

    help_wanted

    invalid 
    opened by VikramBPurohit 6
  • python3 can't run pyresttest

    After installing pyresttest using pip3, I can't run the pyresttest command:

    pyresttest: command not found
    

    Running the following returns nothing (no error):

     python -c 'import pycurl'
    
    opened by tbswork1 3
  • How Can I set the character encoding

    When I get a response with Chinese text, it was gibberish:

    pyresttest http://172.19.211.172:8088 createNewOrder.yaml --print-bodies true --log debug
    DEBUG:Initial Test Result, based on expected response code: True
    DEBUG:no validators found
    {"code":500,"message":"订单有效日不能小于当天!","data":null}

    opened by WangBig 0
  • Bump django from 1.6.5 to 2.2.24

    Bumps django from 1.6.5 to 2.2.24.

    Commits
    • 2da029d [2.2.x] Bumped version for 2.2.24 release.
    • f27c38a [2.2.x] Fixed CVE-2021-33571 -- Prevented leading zeros in IPv4 addresses.
    • 053cc95 [2.2.x] Fixed CVE-2021-33203 -- Fixed potential path-traversal via admindocs'...
    • 6229d87 [2.2.x] Confirmed release date for Django 2.2.24.
    • f163ad5 [2.2.x] Added stub release notes and date for Django 2.2.24.
    • bed1755 [2.2.x] Changed IRC references to Libera.Chat.
    • 63f0d7a [2.2.x] Refs #32718 -- Fixed file_storage.test_generate_filename and model_fi...
    • 5fe4970 [2.2.x] Post-release version bump.
    • 61f814f [2.2.x] Bumped version for 2.2.23 release.
    • b8ecb06 [2.2.x] Fixed #32718 -- Relaxed file name validation in FileField.
    • Additional commits viewable in compare view

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.



    dependencies 
    opened by dependabot[bot] 0
  • Is there PATCH method compatibility?

    Hello there, I've been configuring some tests recently and had no major issues, except for when I need to test a PATCH API.

    I read the related issues, where it is mentioned that there is PATCH compatibility, but after double and triple checking the code it seems that the entity is not updated in the DB, and therefore the test returns an error.

    This error is due to validators and not the request itself. If I run the test script with the --verbose and --log=DEBUG options, I see that the method used in that specific test request is GET instead of PATCH.

    So my question is whether there is compatibility with the PATCH method. Here is the output:

    ERROR:Test Failed: Patch update customer 1. Add name and last name URL={URl_PATH} Group=Customers HTTP Status Code: 200
    ERROR:Test Failure, failure type: Validator Failed, Reason: Comparison failed, evaluating eq(None, Pyresttest_name) returned False
    ERROR:Validator/Error details:Extractor: Extractor Type: jsonpath_mini,  Query: "contact.firstName", Templated?: False
    Expected is templated, raw value: $customer_name
    ERROR:Test Failure, failure type: Validator Failed, Reason: Comparison failed, evaluating eq(None, Pyresttest_lastname) returned False
    ERROR:Validator/Error details:Extractor: Extractor Type: jsonpath_mini,  Query: "contact.lastName", Templated?: False
    
    
    opened by SebaRossi94 1
  • connection getting refused all the time

    Hi, I am getting connection refused all the time. If I start an HTTP server on 8000 it starts, but from pyresttest if I provide 8000 it says connection refused. Can anyone help me with this, please?

    ERROR:Test Failed: Create person URL=http://127.0.0.1:63858/api/person/ Group=Quickstart HTTP Status Code: None
    ERROR:Test Failure, failure type: Curl Exception, Reason: Curl Exception: (7, 'Failed to connect to 127.0.0.1 port 63858: Connection refused')
    ERROR:Validator/Error details:Traceback (most recent call last):
      File "c:\automation\aglc\selenium\aglc_venv\lib\site-packages\pyresttest\resttest.py", line 351, in run_test
        curl.perform() # Run the actual call
    pycurl.error: (7, 'Failed to connect to 127.0.0.1 port 63858: Connection refused')

    opened by vbpatel73 0