Records: SQL for Humans™

https://travis-ci.org/kennethreitz/records.svg?branch=master

Records is a very simple, but powerful, library for making raw SQL queries to most relational databases.

https://farm1.staticflickr.com/569/33085227621_7e8da49b90_k_d.jpg

Just write SQL. No bells, no whistles. This common task can be surprisingly difficult with the standard tools available. This library strives to make this workflow as simple as possible, while providing an elegant interface to work with your query results.

Database support includes RedShift, Postgres, MySQL, SQLite, Oracle, and MS-SQL (drivers not included).


☤ The Basics

We know how to write SQL, so let's send some to our database:

import records

db = records.Database('postgres://...')
rows = db.query('select * from active_users')    # or db.query_file('sqls/active-users.sql')

Grab one row at a time:

">
>>> rows[0]
<Record {"username": "model-t", "active": true, "name": "Henry Ford", "user_email": "model-t@gmail.com", "timezone": "2016-02-06 22:28:23.894202"}>

Or iterate over them:

for r in rows:
    print(r.name, r.user_email)

Values can be accessed in many ways: row.user_email, row['user_email'], or row[3].

Fields with non-alphanumeric characters (like spaces) are also fully supported.
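For instance, a minimal sketch reusing the rows from above (the "signup date" column is hypothetical, shown only to illustrate a field name containing a space):

row = rows[0]

# Attribute, key, and integer-index access all return the same value;
# user_email is the fourth column in this example, hence index 3.
print(row.user_email)      # model-t@gmail.com
print(row['user_email'])   # model-t@gmail.com
print(row[3])              # model-t@gmail.com

# A column name with a space (hypothetical) is still reachable by key:
# print(row['signup date'])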

Or store a copy of your record collection for later reference:

>>> rows.all()
[<Record {"username": ...}>, <Record {"username": ...}>, <Record {"username": ...}>, ...]

If you're only expecting one result:

">
>>> rows.first()
<Record {"username": ...}>

Other options include rows.as_dict() and rows.as_dict(ordered=True).
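A small sketch of what those look like in practice (same result set as above):

# A list of plain dicts, one per row:
users = rows.as_dict()
print(users[0]['user_email'])

# The same rows as OrderedDicts, preserving column order:
ordered_users = rows.as_dict(ordered=True)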

☤ Features

  • Iterated rows are cached for future reference.
  • $DATABASE_URL environment variable support.
  • Convenience Database.get_table_names method.
  • Command-line records tool for exporting queries.
  • Safe parameterization: Database.query('life=:everything', everything=42).
  • Queries can be passed as strings or filenames; parameters are supported.
  • Transactions: t = Database.transaction(); t.commit().
  • Bulk actions: Database.bulk_query() & Database.bulk_query_file() (several of these are sketched together below).
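A short sketch combining a few of the features above; the table and column names are illustrative, and $DATABASE_URL is assumed to point at your database:

import records

# With $DATABASE_URL set in the environment, no URL argument is needed.
db = records.Database()

# Convenience introspection:
print(db.get_table_names())

# Safe, named parameterization (table and column names are illustrative):
db.query('update users set active = :flag where username = :name',
         flag=False, name='model-t')

# The transaction pattern from the feature list above:
t = db.transaction()
# ... run queries ...
t.commit()

# A bulk action: one statement, many parameter sets.
db.bulk_query('insert into users (username, active) values (:username, :active)',
              [{'username': 'tin-lizzie', 'active': True},
               {'username': 'quadricycle', 'active': False}])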

Records is proudly powered by SQLAlchemy and Tablib.

☤ Data Export Functionality

Records also features full Tablib integration, and allows you to export your results to CSV, XLS, JSON, HTML Tables, YAML, or Pandas DataFrames with a single line of code. Excellent for sharing data with friends, or generating reports.

>>> print(rows.dataset)
username|active|name      |user_email       |timezone
--------|------|----------|-----------------|--------------------------
model-t |True  |Henry Ford|model-t@gmail.com|2016-02-06 22:28:23.894202
...

Comma Separated Values (CSV)

>>> print(rows.export('csv'))
username,active,name,user_email,timezone
model-t,True,Henry Ford,model-t@gmail.com,2016-02-06 22:28:23.894202
...

YAML Ain't Markup Language (YAML)

>>> print(rows.export('yaml'))
- {active: true, name: Henry Ford, timezone: '2016-02-06 22:28:23.894202', user_email: model-t@gmail.com, username: model-t}
...

JavaScript Object Notation (JSON)

>>> print(rows.export('json'))
[{"username": "model-t", "active": true, "name": "Henry Ford", "user_email": "[email protected]", "timezone": "2016-02-06 22:28:23.894202"}, ...]

Microsoft Excel (xls, xlsx)

with open('report.xls', 'wb') as f:
    f.write(rows.export('xls'))

Pandas DataFrame

>>> rows.export('df')
    username  active       name        user_email                   timezone
0    model-t    True Henry Ford model-t@gmail.com 2016-02-06 22:28:23.894202

You get the point. All other features of Tablib are also available, so you can sort results, add/remove columns/rows, remove duplicates, transpose the table, add separators, slice data by column, and more.
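For instance (a brief sketch; rows.dataset is an ordinary Tablib Dataset, with column names taken from the example above):

data = rows.dataset

# Sort by a column (returns a new, sorted Dataset):
by_name = data.sort('name')

# Remove exact-duplicate rows in place:
data.remove_duplicates()

# Swap rows and columns:
flipped = data.transpose()

# Slice out a single column by header:
emails = data['user_email']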

See the Tablib Documentation for more details.

☤ Installation

Of course, the recommended installation method is pipenv:

$ pipenv install records[pandas]
✨🍰✨

☤ Command-Line Tool

As an added bonus, a records command-line tool is automatically included. Here's a screenshot of the usage information:

Screenshot of Records Command-Line Interface.
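The exact arguments may differ between versions, so treat the following as an assumption and check records --help for the authoritative usage. With $DATABASE_URL set, exporting a query to CSV looks roughly like this:

$ export DATABASE_URL=postgres://...
$ records 'select * from active_users' csv > active-users.csv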

☤ Thank You

Thanks for checking this library out! I hope you find it useful.

Of course, there's always room for improvement. Feel free to open an issue so we can make Records better, stronger, faster.
