Asynchronous Python HTTP Requests for Humans using Futures

Overview

A small add-on for the Python requests HTTP library. It makes use of Python's concurrent.futures (in the standard library since Python 3.2) or the backport for prior versions of Python.

The additional API and changes are minimal and strive to avoid surprises.

The following synchronous code:

from requests import Session

session = Session()
# first request starts and blocks until finished
response_one = session.get('http://httpbin.org/get')
# second request starts once first is finished
response_two = session.get('http://httpbin.org/get?foo=bar')
# both requests are complete
print('response one status: {0}'.format(response_one.status_code))
print(response_one.content)
print('response two status: {0}'.format(response_two.status_code))
print(response_two.content)

Can be translated to make use of futures, and thus be asynchronous, by creating a FuturesSession and catching the returned Future in place of the Response. The Response can be retrieved by calling the result method on the Future:

from requests_futures.sessions import FuturesSession

session = FuturesSession()
# first request is started in background
future_one = session.get('http://httpbin.org/get')
# second request starts immediately
future_two = session.get('http://httpbin.org/get?foo=bar')
# wait for the first request to complete, if it hasn't already
response_one = future_one.result()
print('response one status: {0}'.format(response_one.status_code))
print(response_one.content)
# wait for the second request to complete, if it hasn't already
response_two = future_two.result()
print('response two status: {0}'.format(response_two.status_code))
print(response_two.content)

By default a ThreadPoolExecutor is created with 8 workers. If you would like to adjust that value or share an executor across multiple sessions, you can provide one to the FuturesSession constructor.

from concurrent.futures import ThreadPoolExecutor
from requests_futures.sessions import FuturesSession

session = FuturesSession(executor=ThreadPoolExecutor(max_workers=10))
# ...
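
Sharing one executor bounds the total number of concurrent requests across every session that uses it. A minimal sketch (the session names are illustrative):

from concurrent.futures import ThreadPoolExecutor
from requests_futures.sessions import FuturesSession

executor = ThreadPoolExecutor(max_workers=10)
# both sessions draw from the same pool of 10 worker threads
session_a = FuturesSession(executor=executor)
session_b = FuturesSession(executor=executor)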

As a shortcut, if you just want to increase the number of workers, you can pass max_workers straight to the FuturesSession constructor:

from requests_futures.sessions import FuturesSession
session = FuturesSession(max_workers=10)

FuturesSession will use an existing session object if supplied:

from requests import session
from requests_futures.sessions import FuturesSession
my_session = session()
future_session = FuturesSession(session=my_session)

That's it. The API of requests.Session is preserved without any modifications beyond returning a Future rather than a Response. As with all futures, exceptions are shifted (thrown) to the future.result() call, so try/except blocks should be moved there.
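
For example, a connection failure that happens while the request runs in the background only surfaces when result() is called. A minimal sketch (the unreachable address is illustrative):

from requests.exceptions import RequestException
from requests_futures.sessions import FuturesSession

session = FuturesSession()
future = session.get('http://localhost:1')  # nothing is listening here
try:
    # the ConnectionError raised in the background is re-raised here
    response = future.result()
except RequestException as exc:
    print('request failed: {0}'.format(exc))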

Tying extra information to the request/response

The most common piece of information needed is the URL of the request. This can be accessed without any extra steps using the request property of the response object.

from concurrent.futures import as_completed
from pprint import pprint
from requests_futures.sessions import FuturesSession

session = FuturesSession()

futures = [session.get(f'http://httpbin.org/get?{i}') for i in range(3)]

for future in as_completed(futures):
    resp = future.result()
    pprint({
        'url': resp.request.url,
        'content': resp.json(),
    })

There are situations in which you may want to tie additional information to a request/response. There are a number of ways to go about this, the simplest is to attach additional information to the future object itself.

from concurrent.futures import as_completed
from pprint import pprint
from requests_futures.sessions import FuturesSession

session = FuturesSession()

futures = []
for i in range(3):
    future = session.get('http://httpbin.org/get')
    future.i = i
    futures.append(future)

for future in as_completed(futures):
    resp = future.result()
    pprint({
        'i': future.i,
        'content': resp.json(),
    })
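
Another option, sketched here with the standard Future.add_done_callback API, is to bind the extra information into a callback; note that the callback may run on a background worker thread (or immediately, if the future has already completed):

from functools import partial
from requests_futures.sessions import FuturesSession

session = FuturesSession()

def on_done(i, future):
    # called with the future once the request has completed
    resp = future.result()
    print('request {0} -> status {1}'.format(i, resp.status_code))

for i in range(3):
    future = session.get('http://httpbin.org/get')
    future.add_done_callback(partial(on_done, i))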

Canceling queued requests (a.k.a. cleaning up after yourself)

If you know that you won't be needing any additional responses from futures that haven't yet resolved, it's a good idea to cancel those requests. You can do this by using the session as a context manager:

from requests_futures.sessions import FuturesSession
with FuturesSession(max_workers=1) as session:
    future = session.get('https://httpbin.org/get')
    future2 = session.get('https://httpbin.org/delay/10')
    future3 = session.get('https://httpbin.org/delay/10')
    response = future.result()

In this example, whichever of the second and third requests has not yet started when the with block exits will be skipped, saving time and resources that would otherwise be wasted.
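
Pending futures can also be cancelled explicitly with the standard Future.cancel() method; a minimal sketch:

from requests_futures.sessions import FuturesSession

session = FuturesSession(max_workers=1)
future_one = session.get('https://httpbin.org/get')
future_two = session.get('https://httpbin.org/delay/10')
# cancel() returns True only if the request was still queued and
# could be cancelled before it started executing
if future_two.cancel():
    print('second request never started')
print(future_one.result().status_code)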

Iterating over a list of request responses

Without preserving the request order:

from concurrent.futures import as_completed
from requests_futures.sessions import FuturesSession
with FuturesSession() as session:
    futures = [session.get('https://httpbin.org/delay/{}'.format(i % 3)) for i in range(10)]
    for future in as_completed(futures):
        resp = future.result()
        print(resp.json()['url'])
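
To preserve the request order instead, iterate over the futures list directly; result() blocks until each response in turn is ready (a minimal variation of the example above):

from requests_futures.sessions import FuturesSession

with FuturesSession() as session:
    futures = [session.get('https://httpbin.org/delay/{}'.format(i % 3)) for i in range(10)]
    for future in futures:
        # responses are printed in request order, not completion order
        resp = future.result()
        print(resp.json()['url'])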

Working in the Background

Additional processing can be done in the background using requests' hooks functionality. This can be useful for shifting work out of the foreground; for a simple example, take JSON parsing.

from pprint import pprint
from requests_futures.sessions import FuturesSession

session = FuturesSession()

def response_hook(resp, *args, **kwargs):
    # parse the json storing the result on the response object
    resp.data = resp.json()

future = session.get('http://httpbin.org/get', hooks={
    'response': response_hook,
})
# do some other stuff, send some more requests while this one works
response = future.result()
print('response status {0}'.format(response.status_code))
# data will have been attached to the response object in the background
pprint(response.data)

Hooks can also be applied to the session.

from pprint import pprint
from requests_futures.sessions import FuturesSession

def response_hook(resp, *args, **kwargs):
    # parse the json storing the result on the response object
    resp.data = resp.json()

session = FuturesSession()
session.hooks['response'] = response_hook

future = session.get('http://httpbin.org/get')
# do some other stuff, send some more requests while this one works
response = future.result()
print('response status {0}'.format(response.status_code))
# data will have been attached to the response object in the background
pprint(response.data)

Here is a more advanced example that adds an elapsed property to all requests:

from pprint import pprint
from requests_futures.sessions import FuturesSession
from time import time


class ElapsedFuturesSession(FuturesSession):

    def request(self, method, url, hooks=None, *args, **kwargs):
        start = time()
        if hooks is None:
            hooks = {}

        def timing(r, *args, **kwargs):
            r.elapsed = time() - start

        try:
            if isinstance(hooks['response'], (list, tuple)):
                # needs to be first so we don't time other hooks execution
                hooks['response'].insert(0, timing)
            else:
                hooks['response'] = [timing, hooks['response']]
        except KeyError:
            hooks['response'] = timing

        return super(ElapsedFuturesSession, self) \
            .request(method, url, hooks=hooks, *args, **kwargs)



session = ElapsedFuturesSession()
future = session.get('http://httpbin.org/get')
# do some other stuff, send some more requests while this one works
response = future.result()
print('response status {0}'.format(response.status_code))
print('response elapsed {0}'.format(response.elapsed))

Using ProcessPoolExecutor

Similarly to ThreadPoolExecutor, it is possible to use an instance of ProcessPoolExecutor. As the name suggests, the requests will be executed concurrently in separate processes rather than threads.

from concurrent.futures import ProcessPoolExecutor
from requests_futures.sessions import FuturesSession

session = FuturesSession(executor=ProcessPoolExecutor(max_workers=10))
# ... use as before

Hint

Using the ProcessPoolExecutor is useful in cases where memory usage per request is very high (large responses) and cycling the interpreter is required to release memory back to the OS.

A base requirement of using ProcessPoolExecutor is that Session.request and FuturesSession both be pickle-able.

This means that only Python 3.5 and newer are fully supported, while Python 3.4 REQUIRES an existing requests.Session instance to be passed when initializing FuturesSession. Python 2.x and versions earlier than 3.4 are currently not supported.

# Using python 3.4
from concurrent.futures import ProcessPoolExecutor
from requests import Session
from requests_futures.sessions import FuturesSession

session = FuturesSession(executor=ProcessPoolExecutor(max_workers=10),
                         session=Session())
# ... use as before

If pickling fails, an exception is raised that points to this documentation:

# Using python 2.7
from concurrent.futures import ProcessPoolExecutor
from requests import Session
from requests_futures.sessions import FuturesSession

session = FuturesSession(executor=ProcessPoolExecutor(max_workers=10),
                         session=Session())
Traceback (most recent call last):
...
RuntimeError: Cannot pickle function. Refer to documentation: https://github.com/ross/requests-futures/#using-processpoolexecutor

Important

  • Python >= 3.4 required
  • A session instance is required when using Python < 3.5
  • If sub-classing FuturesSession, the subclass must be importable (module global); see the sketch below
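
A minimal sketch of the last point (MyFuturesSession is a hypothetical name; the subclass is defined at module level so it can be pickled for use with ProcessPoolExecutor):

from concurrent.futures import ProcessPoolExecutor
from requests import Session
from requests_futures.sessions import FuturesSession

# defined at module level so it is importable, which pickling for
# ProcessPoolExecutor requires; a class defined inside a function
# would fail to pickle
class MyFuturesSession(FuturesSession):
    pass

session = MyFuturesSession(executor=ProcessPoolExecutor(max_workers=2),
                           session=Session())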

Installation

pip install requests-futures