aiosql - Simple SQL in Python

Overview


SQL is code. Write it, version control it, comment it, and run it using files. Writing your SQL code as strings inside Python programs makes it hard to reuse in SQL GUIs or CLI tools like psql. With aiosql you can organize your SQL statements in .sql files and load them into your Python application as methods to call, without losing the ability to use them as you would any other SQL file.

This project supports standard and asyncio based drivers for SQLite and PostgreSQL out of the box (sqlite3, aiosqlite, psycopg2, asyncpg). Extensions to support other database drivers can be written by you! See: Database Driver Adapters

Note

This project supports asyncio-based drivers and requires Python 3.7 or later.

Badges

https://github.com/nackjicholson/aiosql/actions/workflows/aiosql-package.yml/badge.svg?branch=master

Installation

pip install aiosql

Or if you use Poetry:

poetry add aiosql

Usage

users.sql

-- name: get-all-users
-- Get all user records
select userid,
       username,
       firstname,
       lastname
  from users;


-- name: get-user-by-username^
-- Get user with the given username field.
select userid,
       username,
       firstname,
       lastname
  from users
 where username = :username;

You can use aiosql to load the queries in this file for use in your Python application:

import aiosql
import sqlite3

conn = sqlite3.connect("myapp.db")
queries = aiosql.from_path("users.sql", "sqlite3")

users = queries.get_all_users(conn)
# >>> [(1, "nackjicholson", "William", "Vaughn"), (2, "johndoe", "John", "Doe"), ...]

users = queries.get_user_by_username(conn, username="nackjicholson")
# >>> (1, "nackjicholson", "William", "Vaughn")

Writing SQL in a file and executing it from methods in Python!
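The :username binding above is passed straight through to the driver; sqlite3, for instance, understands :name parameters natively. A stdlib-only sketch of the equivalent direct call (the in-memory table and data are made up for illustration):

```python
import sqlite3

# Hypothetical in-memory stand-in for myapp.db.
conn = sqlite3.connect(":memory:")
conn.execute(
    "create table users (userid integer primary key,"
    " username text, firstname text, lastname text)"
)
conn.execute("insert into users values (1, 'nackjicholson', 'William', 'Vaughn')")

# The same SQL aiosql loads from users.sql, bound with the driver's
# native :name parameter style.
row = conn.execute(
    "select userid, username, firstname, lastname"
    " from users where username = :username",
    {"username": "nackjicholson"},
).fetchone()
print(row)  # (1, 'nackjicholson', 'William', 'Vaughn')
```

This is why the same .sql file also works unchanged in other SQLite tools.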

Async Usage

greetings.sql

-- name: get_all_greetings
-- Get all the greetings in the database
select greeting_id, greeting from greetings;

-- name: get_user_by_username^
-- Get a user from the database
select user_id,
       username,
       name
  from users
 where username = :username;

example.py

import asyncio
import aiosql
import aiosqlite


queries = aiosql.from_path("./greetings.sql", "aiosqlite")


async def main():
    # Parallel queries!!!
    async with aiosqlite.connect("greetings.db") as conn:
        greetings, user = await asyncio.gather(
            queries.get_all_greetings(conn),
            queries.get_user_by_username(conn, username="willvaughn")
        )
        # greetings = [(1, "Hi"), (2, "Aloha"), (3, "Hola")]
        # user = (1, "willvaughn", "William")

        for _, greeting in greetings:
            print(f"{greeting}, {user[2]}!")
        # Hi, William!
        # Aloha, William!
        # Hola, William!

asyncio.run(main())

This example uses an imaginary SQLite database with greetings and users. It prints greetings in various languages to the user and showcases the basic feature of loading queries from a SQL file and calling them by name in Python code. It also happens to run two SQL queries in parallel using aiosqlite and asyncio.

Why you might want to use this

  • You think SQL is pretty good, and writing SQL is an important part of your applications.
  • You don't want to write your SQL in strings intermixed with your Python code.
  • You're not using an ORM like SQLAlchemy or Django, and you don't need to.
  • You want to be able to reuse your SQL in other contexts, such as loading it into psql or other database tools.

Why you might NOT want to use this

  • You're looking for an ORM.
  • You aren't comfortable writing SQL code.
  • You don't have anything in your application that requires complicated SQL beyond basic CRUD operations.
  • Dynamically loaded objects built at runtime really bother you.

Table of Contents

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   Getting Started
   Defining SQL Queries
   Advanced Topics
   Database Driver Adapters
   Contributing
   API

Comments
  • Single character parameters cause parsing issues


    Python 3.10, aiosql 3.4, asyncpg 0.25.0

    -- name: insert_into_symbol_arima_models*!
    INSERT INTO symbol_arima_models(symbol, "interval", p, d, q, "P", "D", "Q", s, k, n_exog) VALUES (:symbol, :interval, :p, :d, :q, :P, :D, :Q, :s, :k, :n_exog);
    
    sql = 'INSERT INTO symbol_arima_models(symbol, "interval", p, d, q, "P", "D", "Q", s, k, n_exog) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10$11og);'
    

    $10$11og ???
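A plausible mechanism (a hypothetical simplification, not the actual adapter code): converting :name to $N with plain string replacement lets a short parameter name match inside a longer one, corrupting it:

```python
import re

# Hypothetical simplification: naive str.replace lets a short name
# like :n match inside :n_exog.
sql = "VALUES (:n, :k, :n_exog)"
names = ["n", "k", "n_exog"]

naive = sql
for i, name in enumerate(names, start=1):
    naive = naive.replace(f":{name}", f"${i}")
print(naive)  # VALUES ($1, $2, $1_exog)  <- :n_exog corrupted

# Matching on a word boundary avoids the partial replacement:
fixed = sql
for i, name in enumerate(names, start=1):
    fixed = re.sub(rf":{name}\b", f"${i}", fixed)
print(fixed)  # VALUES ($1, $2, $3)
```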

    opened by singlecheeze 13
  • Could I drop support for python 3.6?


    #29 This came up, dataclasses aren't usable in this library until we support 3.7+

    Worse is the aioctxlib.py hack I wrote to get around not having contextlib.asynccontextmanager in < 3.7

    I think this library being a target for users of asyncio and modern versions of python gives me a pretty good reason to not support 3.6 if it's becoming inconvenient. Asyncio itself went through major changes from 3.6 to 3.7 and people should update asap for that.

    opened by nackjicholson 13
  • Bug with % on PyFormatAdapter and PyMySQL


    Hello, some bug with %

    aiosql==5.0
    contextlib2==21.6.0
    PyMySQL==1.0.2
    

    queries.sql

    -- name: get_now
    SELECT NOW();
    
    
    -- name: get_now_date
    SELECT DATE_FORMAT(NOW(),'%Y-%m-%d %h:%i:%s');
    
    import os
    
    import aiosql
    import pymysql.cursors
    
    queries = aiosql.from_path('queries.sql', driver_adapter='pymysql')
    
    connection = pymysql.connect(
            host=os.environ.get('DB_HOST'),
            user=os.environ.get('DB_USER'),
            password=os.environ.get('DB_PASSWORD'),
            database=os.environ.get('DB_DATABASE'),
            cursorclass=pymysql.cursors.DictCursor
        )
    
    with connection:
        print(queries.get_now(connection))
        print(queries.get_now_date(connection))
    
    [{'NOW()': datetime.datetime(2022, 7, 28, 17, 44, 58)}]
    
      File "/opt/Project/env/lib/python3.10/site-packages/aiosql/queries.py", line 63, in fn
        return self.driver_adapter.select(
      File "/opt/Project/env/lib/python3.10/site-packages/aiosql/adapters/generic.py", line 23, in select
        cur.execute(sql, parameters)
      File "/opt/Project/env/lib/python3.10/site-packages/pymysql/cursors.py", line 146, in execute
        query = self.mogrify(query, args)
      File "/opt/Project/env/lib/python3.10/site-packages/pymysql/cursors.py", line 125, in mogrify
        query = query % self._escape_args(args, conn)
    TypeError: not enough arguments for format string
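The traceback shows pymysql running the final query through Python's % operator (its pyformat paramstyle), so the literal percent signs in DATE_FORMAT collide with it. A stdlib-only sketch of the failure and the usual %% workaround:

```python
# pymysql's pyformat paramstyle runs the final SQL through Python's
# %-formatting, so literal % signs collide with it.
query = "SELECT DATE_FORMAT(NOW(),'%Y-%m-%d %h:%i:%s')"
try:
    query % ()  # roughly what pymysql's mogrify() ends up doing
    failed = False
except (TypeError, ValueError):
    failed = True
print(failed)  # True

# Doubling the percent signs is the usual workaround:
safe = "SELECT DATE_FORMAT(NOW(),'%%Y-%%m-%%d %%h:%%i:%%s')"
print(safe % ())  # SELECT DATE_FORMAT(NOW(),'%Y-%m-%d %h:%i:%s')
```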
    
    opened by romkazor 9
  • `ValueError: duplicate parameter name` when the same parameter name appears twice


    aiosql 3.4.0 cannot load SQL queries when the same parameter :name appears multiple times. It did work with aiosql 3.3.1.

    This can be reproduced with this example:

    # aiosql_test.py
    import aiosql
    
    sql_str = "-- name: get^\n" "select * from test where foo=:foo and foo=:foo"
    aiosql.from_str(sql_str, "aiosqlite")
    print("OK")
    

    Working with aiosql 3.3.1:

    $ pip install "aiosql<3.4.0"
    $ python3 aiosql_test.py
    OK
    

    Failing with aiosql 3.4.0:

    $ pip install "aiosql==3.4.0"
    $ python3 aiosql_test.py
    Traceback (most recent call last):
      File "aiosql_test.py", line 4, in <module>
        aiosql.from_str(sql_str, "aiosqlite")
      File "lib/python3.9/site-packages/aiosql/aiosql.py", line 84, in from_str
        query_data = query_loader.load_query_data_from_sql(sql)
      File "lib/python3.9/site-packages/aiosql/query_loader.py", line 117, in load_query_data_from_sql
        query_data.append(self._make_query_datum(query_sql_str, ns_parts))
      File "lib/python3.9/site-packages/aiosql/query_loader.py", line 27, in _make_query_datum
        signature = self._extract_signature(sql)
      File "lib/python3.9/site-packages/aiosql/query_loader.py", line 103, in _extract_signature
        return inspect.Signature(parameters=[self] + params) if params else None
      File "lib/python3.9/inspect.py", line 2826, in __init__
        raise ValueError(msg)
    ValueError: duplicate parameter name: 'foo'
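The error comes from inspect.Signature itself, which aiosql 3.4 uses to build a signature from the query's parameters; a minimal stdlib reproduction without aiosql:

```python
import inspect

# aiosql 3.4 builds an inspect.Signature from the query's :parameters;
# inspect itself rejects duplicate names, producing the error above.
params = [
    inspect.Parameter("foo", inspect.Parameter.KEYWORD_ONLY),
    inspect.Parameter("foo", inspect.Parameter.KEYWORD_ONLY),
]
try:
    inspect.Signature(parameters=params)
    error = None
except ValueError as e:
    error = str(e)
print(error)  # duplicate parameter name: 'foo'
```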
    
    opened by vivienm 9
  • Generate a Signature from SQL Parameters


    This change adds another step to building a QueryFn which attempts to generate a function signature from the query parameters.

    This adds helpful introspection:

    >>> import inspect
    >>> import aiosql
    ... 
    ... sql_str = (
    ...     "-- name: get^\n"
    ...     "select * from test where foo=:foo and bar=:bar"
    ... )
    ... queries = aiosql.from_str(sql_str, "aiosqlite")
    >>> inspect.signature(queries.get)
    <Signature (*, foo, bar)>
    >>> help(queries.get)
    Help on method get in module aiosql.queries:
    
    async get(*, foo, bar) method of aiosql.queries.Queries instance
    

    Interactive REPLs are able to provide hints/tooltips based on the signature as well.
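One way such a signature can be attached (a hypothetical sketch, not necessarily the PR's actual code) is via the __signature__ attribute, which inspect.signature honors:

```python
import inspect

# Hypothetical sketch: give a generated query function a keyword-only
# signature like (*, foo, bar) by setting __signature__.
def make_query_fn(param_names):
    def fn(conn, **kwargs):
        return (conn, kwargs)  # stand-in for actually running the SQL
    fn.__signature__ = inspect.Signature(
        parameters=[
            inspect.Parameter(n, inspect.Parameter.KEYWORD_ONLY)
            for n in param_names
        ]
    )
    return fn

get = make_query_fn(["foo", "bar"])
print(inspect.signature(get))  # (*, foo, bar)
```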

    opened by seandstewart 7
  • Add query filename info so that goto-definition works in editors


    Great project. The only thing that was missing for me currently was goto-definition.

    Editors can use fn.__code__.co_filename to implement goto-definition.

    With this change, goto-definition for a query function will go to that function's SQL file, in case the function was created with aiosql.from_path.
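A sketch of the mechanism (with a made-up file name): compiling generated source with the .sql path as the filename sets co_filename on the resulting function, which editors can then jump to:

```python
# Compiling source with an explicit filename stamps that filename onto
# the code object; goto-definition tooling reads it from co_filename.
source = "def get_all_users(conn):\n    pass\n"
code = compile(source, "queries/users.sql", "exec")
namespace = {}
exec(code, namespace)
print(namespace["get_all_users"].__code__.co_filename)  # queries/users.sql
```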

    opened by abo-abo 6
  • Add SQLOperationType to QueryFn for improved introspection.


    I use this library in my day-to-day work and I'm looking to automate some additional bootstrapping I do for managing my query library. Having introspection into the operation type of each function would be extremely helpful!

    opened by seandstewart 6
  • Off-by-one on AsyncPGDriverAdapter.var_replacements


    aiosql version: 3.2.0, Python version: 3.8

    Description

    First of all, thanks for maintaining such an excellent library. I've been itching for a chance to use it for a while now and finally have the opportunity.

    I've come across an issue where if there are multiple child query clients with similar parameters, the AsyncPGDriverAdapter will not properly count them when processing the SQL file.

    The issue appears to be isolated to this block:

    https://github.com/nackjicholson/aiosql/blob/f2095151d6601eed49ae29e45e37e30e0f67ca0c/aiosql/adapters/asyncpg.py#L40-L46

    If there are multiple queries with the same name which share an argument with the same name, the count will not be incremented when that var name is replaced.

    There are two issues with this:

    1. All queries with the same name must implement matching arguments in exactly the same order. (Easily worked with)
    2. Subsequent queries' additional arguments will all be off-by-one. (Breaks this use-case)

    Proposed solution

    I think the simplest solution would be to namespace the keys in var_replacements with the query source as well as name. This would prevent argument collision from occurring at all.

    Current Workaround

    My current workaround is to load each query directory individually. This works for my use-case, but is still a bit of a gotcha.
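A hypothetical simplification of the :name to $N rewriting illustrates the collision: a cache keyed only on the bare variable name reuses numbers across unrelated queries, skewing later placeholders:

```python
import re

# Hypothetical simplification of the adapter's :name -> $N rewriting.
# A cache keyed on the bare variable name reuses numbers across queries.
def rewrite(sql, cache):
    count = len(cache)
    def repl(m):
        nonlocal count
        name = m.group(1)
        if name not in cache:
            count += 1
            cache[name] = f"${count}"
        return cache[name]
    return re.sub(r":(\w+)", repl, sql)

shared = {}
q1 = rewrite("select * from a where x = :x and y = :y", shared)
q2 = rewrite("select * from b where x = :x and z = :z", shared)
print(q2)  # :z becomes $3, not $2 -- off-by-one for the second query
```

Namespacing the cache per query, as proposed above, keeps each query's numbering independent.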

    bug 
    opened by seandstewart 6
  • Templating Identifiers into SQL


    https://github.com/honza/anosql/issues/45 and https://github.com/honza/anosql/issues/47

    Are both asking for support for Identifiers in order to template other values into the SQL, like table names. I've never had need for this, and actually wasn't aware of it as a feature in psycopg2, but it's worth some thought to see if it's something we can do in this project.

    One potential problem in the way of doing this is that the other driver libraries like sqlite or asyncpg may not have safe features for doing this. I just really don't know since I've never had to do this kind of table name formatting.

    help wanted 
    opened by nackjicholson 6
  • Add support for writing from dataclasses.


    Currently, there is support for reading database rows into dataclasses which significantly reduces the amount of boilerplate required. But the inverse is not the case.

    e.g.

    @dataclass
    class Article:
      title: str
      body: str
      created: datetime
    
      def create(self, conn):
        return queries.insert_article(conn, title=self.title, body=self.body, created=self.created)
      
      def update(self, conn):
        return queries.update_article(conn, title=self.title, body=self.body, created=self.created)
    

    If the parameters supported dot notation, this could be simplified to

    -- name: update_article
    UPDATE article SET title=:article.title body=:article.body ...
    
    
    def update(self, conn):
      return queries.update_article(conn, article=self)
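A hedged sketch of how such dot-notation parameters might be flattened from a dataclass (the names here are illustrative, not an existing aiosql API):

```python
from dataclasses import dataclass, fields

# Hypothetical helper: flatten a dataclass into dot-notation parameter
# names of the kind the proposal describes (:article.title, etc.).
@dataclass
class Article:
    title: str
    body: str

def dot_params(name, obj):
    return {f"{name}.{f.name}": getattr(obj, f.name) for f in fields(obj)}

params = dot_params("article", Article(title="Hi", body="Hello world"))
print(params)  # {'article.title': 'Hi', 'article.body': 'Hello world'}
```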
    
    opened by xlevus 6
  • Taking this library to its full potential isn't something I've been able to do. Anyone else interested?


    I still think this project is great, and I'm so pleased that there are so many people who use it. However, when I began maintaining anosql, and then forked this project, I was a full-time software engineer with the time and focus to spend driving improvements to it. In the last 2-3 years my personal and professional circumstances have changed a lot. I've only been able to make sporadic commits a few times a year. I think it's time to admit to myself that for this project to grow and evolve, I can't be the person with the keys.

    @seandstewart and @zx80 you two have been extremely helpful in keeping things moving somewhat. Are either of you in a position to drive this thing? I think we could probably keep it where it is for continuity, here on github. I would have to do a little bit of permissioning to make sure you have full access to change the CI stuff and to push new versions up to pypi. That's all possible to get done.

    I think there is still lots of exciting work here, and I'm sure any new maintainer would find additional avenues for bettering aiosql.

    Let me know your interest when you can.

    help wanted 
    opened by nackjicholson 5
  • Add support for multiline comments.


    I changed the _SQL_COMMENT regex in aiosql/query_loader.py to work on single and multiline /* */ style comments. I created a new test for it in tests/test_loading.py to show that it didn't break the previous behavior.

    opened by danofsteel32 2
  • Marshalling Data


    I've recently started playing with this library in a new project, and so far have found it very enjoyable to use :tada:

    I was searching for a way to be able to get dict output from the queries which led me to a bunch of similar questions around marshalling of data.

    • https://github.com/nackjicholson/aiosql/blob/7.0/aiosql/query_loader.py#L70
    • https://github.com/nackjicholson/aiosql/issues/33
    • https://github.com/nackjicholson/aiosql/issues/12
    • https://github.com/nackjicholson/aiosql/issues/77
    • https://github.com/nackjicholson/aiosql/issues/67

    After some thinking about it, I decided to smash out a proof-of-concept for how it might work. I'm opening up as an issue first as I suspect there might be some discussion needed before moving to any PR.

    Overview of changes

    • Adds a Marshal class that can be used to marshal data received from certain query types.
      • Included are some default/example marshallers: DictMarshal, NamedTupleMarshal, DataclassMarshal.
    • Removed record_classes

    Marshals can be provided in two locations:

    1. They can be passed as default_marshal in aiosql.from_str(..., default_marshal=Marshal), aiosql.from_path(..., default_marshal=Marshal).
    2. They can be passed into specific (supporting) queries q.some_func(conn, arg1=arg1, arg2=arg2, marshal=marshal). In this case they override the default_marshal.
    • Note: under the current implementation it's not possible to remove the default_marshal, as passing None to a query indicates that the default_marshal should be used. That said, it is possible to use the Marshal base class, which just passes through what it was provided.
    • Supported query types are select, select_one, insert_returning. Note: due to instability of APIs, insert_returning does not use the default_marshal.

    Sample Code

    From the tests:

    def run_parameterized_query_dict_marshal(conn, queries):
        actual = queries.users.get_by_lastname(conn, lastname="Doe", marshal=aiosql.marshallers.DictMarshal)
        expected = [
            {
                "userid": 3,
                "username": "janedoe",
                "firstname": "Jane",
                "lastname": "Doe",
            },
            {
                "userid": 2,
                "username": "johndoe",
                "firstname": "John",
                "lastname": "Doe",
            },
        ]
        assert actual == expected
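For reference, a minimal sketch of what a DictMarshal along the lines of this proposal might look like (hypothetical code, using the marshal_one_row/marshal_many_rows names discussed in this issue):

```python
# Hypothetical sketch of the proposed DictMarshal: pair column names
# with each row tuple to build dicts.
class DictMarshal:
    @staticmethod
    def marshal_one_row(column_names, row):
        return dict(zip(column_names, row))

    @staticmethod
    def marshal_many_rows(column_names, rows):
        return [dict(zip(column_names, r)) for r in rows]

cols = ["userid", "username", "firstname", "lastname"]
rows = [(3, "janedoe", "Jane", "Doe"), (2, "johndoe", "John", "Doe")]
result = DictMarshal.marshal_many_rows(cols, rows)
print(result[0])  # {'userid': 3, 'username': 'janedoe', ...}
```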
    

    Reasoning / Considerations

    Summarising some of the discussions from the above sources.

    record_classes are undesirable from an implementation point of view, but are wanted by users.

    Allows removing of record_classes with a clear path to upgrade.

    This also prevents leaking python details into the SQL.

    This does make a breaking change, however to keep both features would be much harder to maintain.

    We want to avoid turning aiosql into an ORM, functionality should be kept simple. This allows any other tools to hook in with their own marshalling of data without needing to further modify aiosql. This "hook" style of library I think is much more powerful and useful to others than a library that needs to be wrapped.

    This is not the job of aiosql, the library should be wrapped.

    Even the various DB libraries allow some form of marshalling, mostly through the creation of DictCursors or similar. Being able to change the precise output format is a super common task.

    What about marshalling of data before calling the query?

    The Marshal classes could easily have a marshal_input(self, *args, **kwargs) -> tuple(args, kwargs) or something similar added. We'd likely want to be able to use different marshallers, either by adding separate arguments (default_input_marshaller / default_output_marshaller, etc.) or by providing a "meta" marshaller that can be given two marshallers, one for input and one for output, so that we don't have to change any function signatures. We'd have to modify aiosql.queries._make_sync_fn

    (I included samples of these in my POC).

    Why marshal_one_row and marshal_many_rows

    Having two functions allows for implementation optimisations in the marshallers, whilst preventing typos when they are called.

    I did originally have one function, but marshal_rows(column_names, [row])[0] is very gross to write for the single-row cases.

    Keeping row[s] as a part of the name allows for disambiguation in the future if we add a function for marshalling the incoming data.

    Further work

    Before a PR can be accepted, more tests will need to be added, especially for handling the insert_returning marshalling. We'd potentially want to have some tests of the marshallers without being attached to any database.

    opened by nhairs 9
Releases (latest: 7.1)
  • 7.1(Nov 11, 2022)

  • 7.0(Oct 28, 2022)

    Features:

    • add MariaDB support
    • allow to change file extension when loading directories
    • switch mysql.connector dependency to mysql-connector-python because mysql-connector is obsolete
    • improved documentation

    Tests:

    • simplify github CI tests, no service needed
    • run from Python 3.7 to 3.11
    • extensive rework of tests to ignore missing modules, use marks…
    • add tests with docker images, needed for mariadb
    • depend on pytest 7.
    Source code(tar.gz)
    Source code(zip)
  • 6.5(Oct 7, 2022)

    • improved documentation
    • add missing dependency on setuptools
    • improved Makefile
    • add untested mariadb support (will be tested and advertised in the next release)
    • some refactoring (for next release really)
    Source code(tar.gz)
    Source code(zip)
  • 6.4(Sep 6, 2022)

    Version 6.4 brings:

    • improved CI checks with rstcheck
    • dynamic query functions point to the SQL query file
    • illegal query names such as 1st are filtered out
    • documentation has been improved
    • some code refactoring…
    Source code(tar.gz)
    Source code(zip)
  • 6.3(Aug 29, 2022)

  • 6.2(Aug 8, 2022)

  • 6.1(Jul 31, 2022)

  • 6.0(Jul 29, 2022)

    The v6.0 release:

    • adds support for pygresql postgres driver.
    • works around pymysql and mysqldb driver issues.
    • adds a few more tests.
    • improves the documentation, including badges which look soooo cooool.
    • simplifies the development environment, CI configuration.
    • updates pyproject.toml.
    • does some cleanup.
    Source code(tar.gz)
    Source code(zip)
  • 5.0(Jul 23, 2022)

    New in 5.0:

    • Add Postgres adapter for pg8000 driver.
    • Add register_adapter.
    • Use more recent CI image.
    • Add flake8 to CI.
    • Update documentation.
    • Update dependencies.
    Source code(tar.gz)
    Source code(zip)
  • 4.0(Jul 10, 2022)

    This release:

    • add support for MySQL, on top of Postgres and SQLite
    • add support for more drivers : psycopg3, APSW
    • refactors the base code significantly to reduce the line count
    • CI covers Python 3.7 to 3.10
    • add coverage tests to about 100%, including error paths
    • updates dependencies and the doc accordingly
    • switch to 2-level version numbering
    Source code(tar.gz)
    Source code(zip)
  • 3.4.1(Jan 31, 2022)

  • 3.4.0(Dec 24, 2021)

  • 3.3.0(Jul 23, 2021)

    • #62 Fixes an off-by-one bug that affected variable name translation for the asyncpg driver. Thank you @seandstewart for reporting the issue!
    • Fixes documentation css styling via .nojekyll plugin for sphinx
    Source code(tar.gz)
    Source code(zip)
  • 3.2.1(Jul 19, 2021)

  • 3.2.0(Sep 27, 2020)

    • #35 Fix bug with SQL comments at top of files.
    • #45 Introduced QueryFn Protocol (by @dlax).
    • #46 execute_script returns strings. Technically a change in API, but I doubt anyone was depending on this function returning None. We shall see if it causes anyone problems.
    • #47 Add a select_value operation (by @wagnerflo).

    Source code(tar.gz)
    Source code(zip)
  • 3.1.3(Sep 26, 2020)

  • 3.1.2(Aug 11, 2020)

    Adding type checking and Protocols for database adapters. Refactor async and sync fn wrappers for better code reuse and performance.

    #28 #30 #32

    Source code(tar.gz)
    Source code(zip)
  • 3.1.1(Aug 9, 2020)

  • 3.1.0(Jul 8, 2020)

  • 2.0.2(Dec 8, 2018)

  • v2.0.1(Dec 8, 2018)

Owner
Will Vaughn
https://git.sr.ht/~willvaughn/
VP of Software Eng at Carbon Lighthouse. Reach out to join us in our mission to stop climate change.