PostgreSQL database access simplified

Overview

Queries: PostgreSQL Simplified

Queries is a BSD-licensed, opinionated wrapper around the psycopg2 library for interacting with PostgreSQL.

The popular psycopg2 package is a full-featured Python client. Unfortunately, as a developer you often find yourself repeating the same setup steps in every application that uses it. Queries aims to reduce the complexity of psycopg2 while adding features that make writing PostgreSQL client applications both fast and easy. Check out the Usage section below to see how easy it can be.

Key features include:

  • Simplified API
  • Support for Python 2.7 and 3.4+
  • PyPy support via psycopg2cffi
  • Asynchronous support for Tornado
  • Connection information provided by URI
  • Query results delivered as generator-based iterators
  • Automatically registered data-type support for UUIDs, Unicode and Unicode Arrays
  • Ability to directly access psycopg2 connection and cursor objects
  • Internal connection pooling

Documentation

Documentation is available at https://queries.readthedocs.org

Installation

Queries is available on PyPI and can be installed with easy_install or pip:

pip install queries

Usage

Queries provides a session-based API for interacting with PostgreSQL. Simply pass the URI of the PostgreSQL server to connect to when creating a session:

session = queries.Session("postgresql://[email protected]:5432/postgres")

Queries' built-in connection pooling will reuse connections when possible, lowering the overhead of connecting and reconnecting.
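The reuse idea can be sketched in a few lines (a toy illustration only; TinyPool is a hypothetical name and this is not Queries' actual pool implementation, which is keyed by connection URI):

```python
# Toy sketch of connection reuse: the pool hands back an existing idle
# "connection" instead of creating a new one each time.
class TinyPool:
    def __init__(self, connect, max_size=1):
        self.connect = connect    # factory that creates a new connection
        self.max_size = max_size
        self.idle = []

    def acquire(self):
        # Reuse an idle connection when one is available
        if self.idle:
            return self.idle.pop()
        return self.connect()

    def release(self, conn):
        # Keep the connection around for reuse, up to max_size
        if len(self.idle) < self.max_size:
            self.idle.append(conn)
```

Releasing a connection back to the pool lets the next acquire() skip the connect/authenticate handshake entirely, which is the overhead the paragraph above refers to.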

When specifying a URI, if you omit the username and database name, Queries will use the current OS username for both. You can also omit the URI entirely to connect to localhost on port 5432 as the current OS user, using a database named after that user. For example, if your username is fred and you omit the URI, the constructed URI would be postgresql://[email protected]:5432/fred.
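The defaulting rules above can be sketched with the standard library (default_uri is a hypothetical helper for illustration, not part of the Queries API):

```python
import getpass

def default_uri(host='localhost', port=5432, user=None):
    """Build the URI described above: the current OS user is used for
    both the username and the database name. Hypothetical helper."""
    user = user or getpass.getuser()
    return 'postgresql://{0}@{1}:{2}/{0}'.format(user, host, port)
```

For a user named fred, default_uri() yields postgresql://fred@localhost:5432/fred.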

If you'd rather use individual values for the connection, the queries.uri() method provides a quick and easy way to create a URI to pass into the various methods.

>>> queries.uri("server-name", 5432, "dbname", "user", "pass")
'postgresql://user:[email protected]:5432/dbname'

Environment Variables

Queries currently supports the following environment variables for tweaking configuration values:

  • QUERIES_MAX_POOL_SIZE - Modify the maximum size of the connection pool (default: 1)
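For example, to allow up to ten pooled connections (a minimal sketch; the variable must be present in the process environment before the first pool is created, so in practice it is usually exported by the shell or service manager that launches the process):

```python
import os

# Must be set before the first Session creates a pool; shown here via
# os.environ for illustration, though exporting it in the launching
# environment is the more typical approach.
os.environ['QUERIES_MAX_POOL_SIZE'] = '10'
```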

Using the queries.Session class

To execute queries or call stored procedures, you start by creating an instance of the queries.Session class. It can act as a context manager, meaning you can use it with the with keyword and it will clean up after itself. For more information on the with keyword and context managers, see PEP 343.

In addition to both the queries.Session.query and queries.Session.callproc methods that are similar to the simple API methods, the queries.Session class provides access to the psycopg2 connection and cursor objects.

Using queries.Session.query

The following example shows how a queries.Session object can be used as a context manager to query a database table:

>>> import pprint
>>> import queries
>>>
>>> with queries.Session() as session:
...     for row in session.query('SELECT * FROM names'):
...         pprint.pprint(row)
...
{'id': 1, 'name': u'Jacob'}
{'id': 2, 'name': u'Mason'}
{'id': 3, 'name': u'Ethan'}

Using queries.Session.callproc

This example uses queries.Session.callproc to execute a stored procedure and then pretty-prints the single row results as a dictionary:

>>> import pprint
>>> import queries
>>> with queries.Session() as session:
...   results = session.callproc('chr', [65])
...   pprint.pprint(results.as_dict())
...
{'chr': u'A'}

Asynchronous Queries with Tornado

In addition to providing a Pythonic, synchronous client API for PostgreSQL, Queries provides a very similar asynchronous API for use with Tornado. The only major API difference between queries.TornadoSession and queries.Session is that the TornadoSession.query and TornadoSession.callproc methods return the entire result set instead of acting as an iterator over the results. The following example uses TornadoSession.query in an asynchronous Tornado web application to send a JSON payload with the query result set.

from tornado import gen, ioloop, web
import queries

class MainHandler(web.RequestHandler):

    def initialize(self):
        self.session = queries.TornadoSession()

    @gen.coroutine
    def get(self):
        results = yield self.session.query('SELECT * FROM names')
        self.finish({'data': results.items()})
        results.free()

application = web.Application([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    application.listen(8888)
    ioloop.IOLoop.instance().start()

Inspiration

Queries is inspired by Kenneth Reitz's awesome work on requests.

History

Queries is a fork and enhancement of pgsql_wrapper, which can be found in the main GitHub repository of Queries as tags prior to version 1.2.0.

Comments
  • Password with %5D login failed

    Hi,

    I found that when my password includes %5D, the login fails, while the same password works fine when using the psycopg2 library directly.

    Is something wrong in my code, or is there anything I overlooked?

    Thanks.

    opened by lycheng 3
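    The failure above is consistent with Queries percent-decoding the connection URI, so a literal %5D in a password would be decoded to ] before authentication. Assuming that is the cause (an assumption, not confirmed from the source), a workaround is to percent-encode the raw password when building the URI (Python 3 shown):

```python
from urllib.parse import quote

raw_password = 'abc%5Ddef'              # the password really contains "%5D"
encoded = quote(raw_password, safe='')  # "%" itself becomes "%25"
uri = 'postgresql://user:{0}@localhost:5432/mydb'.format(encoded)
```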
  • python script cannot run by windows task scheduler with a simple "import queries"

    The Python script runs fine on its own but not via the Windows Task Scheduler; it exits with (0x1). Try the following script:

    import logging
    import pandas as pd
    import sqlalchemy
    import pprint
    #import queries
    from urllib.request import Request, urlopen
    import datetime

    logging.basicConfig(filename='./logs/mylog.log', filemode='a', format='%(asctime)s.%(msecs)03d %(name)s %(levelname)s %(message)s', datefmt='%Y-%m-%d,%H:%M:%S', level=logging.INFO)

    logging.info("===== process started =====")

    opened by meilao3rd 2
  • psycopg2 as a dependency

    Installing queries via pip installs psycopg2 as a dependency. However, on many platforms the psycopg2-binary package must now be used because PostgreSQL development packages are not available; for instance, AWS Lambda, Azure Functions, etc.

    Therefore I believe that either psycopg2 should be removed as a direct dependency, letting the user manually add the appropriate package, or a queries-binary variant should be offered.

    opened by louis77 2
  • Handle resultless statements when rowcount > 0

    When, for example, an insert without a RETURNING clause is executed, attempting to access the queries.Result object's rows raises a psycopg2.ProgrammingError with the message 'no results to fetch'. The existing check in queries is based on cursor.rowcount and thus fails in this example, as rowcount contains the number of rows affected, not necessarily returned.

    This commit adds handling of said exception and matches the behavior as if the rowcount were 0.

    Can be replicated by accessing the queries.Result object for an insert that inserts one or more rows without returning any results.

    Example:

    >>> result = session.query('insert into some_table (id) values (123)')
    >>> list(result)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/mnt/storage1/queries/queries/results.py", line 48, in __iter__
        for row in self.cursor:
      File "/mnt/storage1/queries/env/lib/python3.6/site-packages/psycopg2-2.7.5-py3.6-linux-x86_64.egg/psycopg2/extras.py", line 117, in __iter__
        first = next(res)
    psycopg2.ProgrammingError: no results to fetch
    

    Handling of the error should probably be abstracted.

    opened by nvllsvm 2
  • Close the connection on a connecting error

    If an error message is returned from the server during the connecting phase, the socket will remain open after the error has been raised. The socket will continue to be polled forever, causing excessive CPU use.

    opened by cknave 2
  • Add easy way to retrieve info about session state

    I am using the TornadoSession with hundreds of queries and if somehow a result isn't freed, tracking it down is a nightmare. I put this together and was wondering if it would be useful to have in the core of the application (if there is a better way please let me know)

    
    import queries
    
    
    class MySession(queries.TornadoSession):
    
        def query(self, sql, parameters=None):
            self.sql = sql
            self.parameters = parameters
    
            return super(MySession, self).query(sql=sql, parameters=parameters)
    
        def info(self):
            info = []
    
            for pid, pool in self._pool_manager._pools.items():
                i = pool.idle_connections
                b = [pool.connections[k] for k in pool.connections
                     if pool.connections[k].busy]
                u = [pool.connections[k].used_by().sql for k in
                     pool.connections if pool.connections[k].busy]
    
                info.append({
                    'pid': pid,
                    'idle': len(i),
                    'busy': len(b),
                    'running_queries': u
                })
    
            return info
    
    

    If you ever wanted info about the session, you'd just call session.info()

    enhancement 
    opened by emehrkay 2
  • Is there an executemany using the TornadoSession?

    I have seen the concurrent-queries example that wraps the SQL statements in a list, but that only works if you know beforehand how many queries you are going to send. Is there a method to executemany such as I would do below?

    cars = ((1, 'Audi', 52642),
            (2, 'Mercedes', 57127),
            (3, 'Skoda', 9000),
            (4, 'Volvo', 29000),
            (5, 'Bentley', 350000),
            (6, 'Citroen', 21000),
            (7, 'Hummer', 41400),
            (8, 'Volkswagen', 21600))

    cur.executemany(query, cars)

    Thanks in advance!

    opened by Knights22 2
  • Stacking tornado_session.Results.close calls doesn't close any connection

    Maybe I am doing something wrong, but if I were to simply wait until later in the execution stack to call close on a tornado result, it never closes.

    conn = queries.TornadoSession(pool_max_size=3)
    ....
    res1 = yield conn.result('some query')
    res2 = yield conn.result('some query')
    res3 = yield conn.result('some query')
    
    res1.free()
    res2.free()
    res3.free()
    
    res4 = yield conn.result('some query')
    # PoolFullError: Pool <queries.pool.Pool object at $ID> is at its maximum capacity
    res4.free()
    

    If I were to free a result before using the connection again, things work fine. (I even tested with a counter by inspecting the conn_pool_manager._pools dict)

    Am I missing something?

    opened by emehrkay 2
  • Allow iterating over empty query results

    Adds the ability to iterate over empty query results, plus a couple of (probably incorrect) tests for this condition.

    Previously this caused psycopg2.ProgrammingError: scroll destination out of bounds. One could work around it by checking for a truthy result before iterating, but I'm not keen on that.

    opened by den-t 2
  • Requirement psycopg2<2.8 prevents Python 3.8 support

    See https://github.com/psycopg/psycopg2/issues/854: the current requirement of psycopg2>=2.5.1,<2.8 installs psycopg2-2.7.7, but that version is incompatible with Python 3.8 due to the above issue, which was fixed in 2.8.

    opened by dparker2 1
  • Broken link in doc

    In https://queries.readthedocs.io/en/2.0.0/, the link to "Module Index" (https://queries.readthedocs.io/en/2.0.0/py-modindex.html) is broken.

    opened by BFJonk 1
  • Support connection_factory?

    I'll start with the question, as I am not entirely certain it is even possible: is there a way to create a queries.Session using a connection_factory? And if not, would it be possible to add such support?

    Or perhaps there is another way to accomplish the following?

    In cases where I wanted a cursor that supported both NamedTuple results and a LoggingCursor, I would combine the two cursor classes and also pass in a connection_factory that sets up the connection to use logging for the cursor. Is there some way of doing this in queries?

    from psycopg2.extras import NamedTupleCursor, LoggingCursor, LoggingConnection
    
    
    class MyLoggingCursor(LoggingCursor):
        def execute(self, query, vars=None):
            self.timestamp = time.time()
            return super(MyLoggingCursor, self).execute(query, vars)
    
        def callproc(self, procname, vars=None):
            self.timestamp = time.time()
            return super(MyLoggingCursor, self).callproc(procname, vars)
    
    
    class MyLoggingConnection(LoggingConnection):
        def filter(self, msg, curs):
            duration = int((time.time() - curs.timestamp) * 1000)
            output = f"{msg}  ==> {curs.rowcount} rows, {duration:d} ms"
            return output
    
        def cursor(self, *args, **kwargs):
            kwargs.setdefault('cursor_factory', MixinLoggedNamedTupleCursor)
            return LoggingConnection.cursor(self, *args, **kwargs)
    
    
    class MixinLoggedNamedTupleCursor(MyLoggingCursor, NamedTupleCursor):
        pass
    
    
    db_conn = psycopg2.connect(host=db_host, port=db_port,
                               user=db_user, password=db_pass,
                               database=db_name, 
                               connect_timeout=timeout,
                               connection_factory=MyLoggingConnection
                               )
    db_conn.initialize(logger)
    
    opened by bandophahita 0
  • Simple query fails if no rows found

    Most simple example from docs:

    for row in session.query('SELECT * FROM foo'):
        print row
    

    Fails when there are no rows in the table:

    Traceback (most recent call last):
      File "venv/lib/python3.9/site-packages/queries/results.py", line 45, in __iter__
        raise StopIteration
    StopIteration
    

    Expected result: the for loop should just pass without errors.

    opened by borislopezaraoz 0
  • TornadoSession as a context manager

    Hello ! First of all thank you for the work you put in your lib !

    We spotted strange behavior when using queries.TornadoSession as a context manager: the connection pool is not freed when exiting the with block. (This isn't the case for queries.Session.)

    It is true that the documentation encourages calling result.free() when working with asynchronous sessions, but IMHO it would be a nice feature if both Session classes behaved the same.

    Cheers !

    enhancement 
    opened by Morreski 3
Releases(2.0.0)
  • 2.0.0(Jan 29, 2018)

    • REMOVED support for Python 2.6
    • FIXED CPU Pegging bug: Cleanup IOLoop and internal stack in TornadoSession on connection error. In the case of a connection error, the failure to do this caused CPU to peg @ 100% utilization looping on a non-existent file descriptor. Thanks to cknave for his work on identifying the issue, proposing a fix, and writing a working test case.
    • Move the integration tests to use a local docker development environment
    • Added new methods queries.pool.Pool.report and queries.pool.PoolManager.report for reporting pool status.
    • Added new methods to queries.pool.Pool for returning a list of busy, closed, executing, and locked connections.
Owner
Gavin M. Roy
CTO @aweber. Technology-minded geek who enjoys Python & Erlang, music and gaming. Author of RabbitMQ in Depth.