Monty, Mongo tinified. MongoDB implemented in Python!

Overview


Inspired by TinyDB and its extension TinyMongo.

MontyDB is:

  • A tiny version of MongoDB, targeting compatibility with MongoDB 4.0.11
  • Written in pure Python, tested on Python 2.7, 3.6, 3.7, 3.8, PyPy*
  • Literally serverless.
  • Similar to mongomock, but a bit more than that.

All implemented functions and operators should behave just as they do in MongoDB, even raising errors for the same causes.

Install

pip install montydb

Optional Requirements
  • lmdb (for the LMDB "lightning" storage engine)

  • pymongo (for bson)

    bson is disabled by default even when it is installed; set the environment variable MONTY_ENABLE_BSON=1 to enable it.
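
    A minimal sketch (assuming the variable must be set before montydb is imported):

    >>> import os
    >>> os.environ["MONTY_ENABLE_BSON"] = "1"  # hedged: set before importing montydb
    >>> from montydb import MontyClient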

Example Code

>>> from montydb import MontyClient
>>> col = MontyClient(":memory:").db.test
>>> col.insert_many([{"stock": "A", "qty": 6}, {"stock": "A", "qty": 2}])

>>> cur = col.find({"stock": "A", "qty": {"$gt": 4}})
>>> next(cur)
{'_id': ObjectId('5ad34e537e8dd45d9c61a456'), 'stock': 'A', 'qty': 6}
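
Since the API mirrors pymongo's, other familiar collection methods work the same way where implemented; a short sketch (assuming update_one and count_documents behave as in pymongo):

>>> result = col.update_one({"qty": 2}, {"$set": {"qty": 3}})
>>> col.count_documents({"stock": "A"})
2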

Development

  • You may visit the project's TODO board to see what's going on.
  • You may visit this issue to see what has been implemented and what has not.

Storage Engine Configurations

Configuration is only required when creating or modifying a repository.

Currently, each repository can be assigned only one storage engine.

  • Memory

Memory storage neither needs nor has any configuration; nothing is saved to disk.

>>> from montydb import MontyClient
>>> client = MontyClient(":memory:")
  • FlatFile

FlatFile is the default on-disk storage engine.

>>> from montydb import MontyClient
>>> client = MontyClient("/db/repo")

FlatFile config:

[flatfile]
cache_modified: 0  # how many document CRUD operations are cached before flushing to disk
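
This option can also be passed to set_storage when configuring the repository, as in the flat-file issue reproduction further below (a hedged sketch):

>>> from montydb import set_storage, MontyClient
>>> set_storage("/db/repo", cache_modified=10)  # assumed keyword pass-through
>>> client = MontyClient("/db/repo")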
  • LMDB (Lightning Memory-Mapped Database)

LMDB is NOT the default on-disk storage engine; it must be configured before getting a client.

Newly implemented.

>>> from montydb import set_storage, MontyClient
>>> set_storage("/db/repo", storage="lightning")
>>> client = MontyClient("/db/repo")

LMDB config:

[lightning]
map_size: 10485760  # Maximum size database may grow to.
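
Presumably map_size can be passed the same way (an assumption mirroring the cache_modified sketch above; verify against your montydb version):

>>> from montydb import set_storage, MontyClient
>>> set_storage("/db/repo", storage="lightning", map_size=10485760)  # assumed keyword pass-through
>>> client = MontyClient("/db/repo")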
  • SQLite

SQLite is NOT the default on-disk storage engine; it must be configured before getting a client.

Pre-existing SQLite storage files saved by montydb<=1.3.0 are not readable or writable after montydb==2.0.0.

>>> from montydb import set_storage, MontyClient
>>> set_storage("/db/repo", storage="sqlite")
>>> client = MontyClient("/db/repo")

SQLite config:

[sqlite]
journal_mode: WAL

SQLite write concern:

>>> client = MontyClient("/db/repo",
>>>                      synchronous=1,
>>>                      automatic_index=False,
>>>                      busy_timeout=5000)

MontyDB URI

You can prefix the repository path with the montydb URI scheme.

  >>> client = MontyClient("montydb:///db/repo")

Utilities

pymongo's bson may be required.

  • montyimport

    Imports content from an Extended JSON file into a MontyCollection instance. The JSON file could be generated from montyexport or mongoexport.

    >>> from montydb import open_repo, utils
    >>> with open_repo("foo/bar"):
    >>>     utils.montyimport("db", "col", "/path/dump.json")
    >>>
  • montyexport

    Produces a JSON export of data stored in a MontyCollection instance. The JSON file could be loaded by montyimport or mongoimport.

    >>> from montydb import open_repo, utils
    >>> with open_repo("foo/bar"):
    >>>     utils.montyexport("db", "col", "/data/dump.json")
    >>>
  • montyrestore

    Loads a binary database dump into a MontyCollection instance. The BSON file could be generated from montydump or mongodump.

    >>> from montydb import open_repo, utils
    >>> with open_repo("foo/bar"):
    >>>     utils.montyrestore("db", "col", "/path/dump.bson")
    >>>
  • montydump

    Creates a binary export from a MontyCollection instance. The BSON file could be loaded by montyrestore or mongorestore.

    >>> from montydb import open_repo, utils
    >>> with open_repo("foo/bar"):
    >>>     utils.montydump("db", "col", "/data/dump.bson")
    >>>
  • MongoQueryRecorder

    Records MongoDB query results over a period of time. Requires access to the database profiler.

    This works by filtering the database profiling data and reproducing the queries of the find and distinct commands.

    >>> from pymongo import MongoClient
    >>> from montydb.utils import MongoQueryRecorder
    >>> client = MongoClient()
    >>> recorder = MongoQueryRecorder(client["mydb"])
    >>> recorder.start()
    >>> # Make some queries or run the App...
    >>> recorder.stop()
    >>> recorder.extract()
    {<collection_1>: [<doc_1>, <doc_2>, ...], ...}
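
    The extracted documents can then be loaded into a MontyDB repository for offline replay; a hedged sketch, assuming extract() returns plain document lists keyed by collection name:

    >>> from montydb import MontyClient
    >>> db = MontyClient(":memory:").mydb
    >>> for name, docs in recorder.extract().items():
    >>>     getattr(db, name).insert_many(docs)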
  • MontyList

    Experimental. A subclass of list that combines the common CRUD methods from Mongo's Collection and Cursor.

    >>> from montydb.utils import MontyList
    >>> mtl = MontyList([1, 2, {"a": 1}, {"a": 5}, {"a": 8}])
    >>> mtl.find({"a": {"$gt": 3}})
    MontyList([{'a': 5}, {'a': 8}])

Why did I do this?

Mainly for personal skill practice and fun. I work in the VFX industry, and some of my production needs (mostly edge cases) have to run in limited environments (e.g. outsourced render farms) that may have trouble running or connecting to a MongoDB instance. I found this project really helps.



Comments
  • uses ast.literal_eval() over eval()

    Static security checking of the codebase using Bandit revealed use of the insecure eval() function.

    >> Issue: [B307:blacklist] Use of possibly insecure function - consider using safer ast.literal_eval.
       Severity: Medium   Confidence: High
       Location: montydb/types/_nobson.py:163
       More Info: https://bandit.readthedocs.io/en/latest/blacklists/blacklist_calls.html#b307-eval
    162                     if not _encoder.key_is_keyword:
    163                         key = eval(candidate)
    164                         if not isinstance(key, cls._string_types):
    

    ast.literal_eval() has been implemented in its place.

    opened by madeinoz67 3
  • Dropping Python 3.4, 3.5 and adding 3.7 to CI

    Dropping Python 3.4, 3.5 tests

    In some test cases, for example:

    • test/test_engine/test_find.py

      • test_find_2
      • test_find_3
    • tests/test_engine/test_update/test_update.py

      • test_update_positional_filtered_near_conflict
      • test_update_positional_filtered_has_conflict_1
    • tests/test_engine/test_update/test_update_pull.py

      • test_update_pull_6
      • test_update_pull_7

    They often failed randomly due to run-time dict key ordering. Unless those test-case documents are changed to OrderedDict, we cannot ensure that the key order fed into monty and mongo is the same (which may cause different output).

    Since this is not an issue with montydb's functionality, we are dropping them for good.

    Adding Python 3.7

    Well, it's 2019.

    opened by davidlatwe 3
  • Update base.py

    MutableMapping should be imported from collections.abc, as stated in the documentation: https://docs.python.org/3.9/library/collections.abc.html#collections.abc.MutableMapping

    Mainly we need it because it fixes #65 (Python 3.10 compatibility).
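
    The usual compatibility pattern for such a change looks like this (a sketch, not necessarily the exact patch):

    try:
        from collections.abc import MutableMapping  # Python 3.3+
    except ImportError:
        from collections import MutableMapping  # removed in Python 3.10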

    opened by bobuk 2
  • Positional operator issue

    Positional operators don't work. The same update_one succeeds with pymongo. I'm not sure whether monty has this feature yet, but as far as I can see it raises an error.

    update_one({'users': {'$elemMatch': {'_id': id_}}}, {'$set': {'invoices.$.name': name}})

    montydb.errors.WriteError: The positional operator did not find the match needed from the query.

    bug 
    opened by rewiaca 2
  • bson.errors.InvalidBSON: objsize too large after update_one

    I get this error when performing any operation after editing the database, using lmdb. I guessed it came after a wrong update_one, but I'm not sure. Anyway, here is the original code that edits the db:

        record = {'free': 12313232, 'path': '/media/mnt/'}
        col = getattr(db, 'storages')
    
        record = {**models['storage'], **record}
        if col.count_documents({'path': record['path']}) > 0:
            col.update_one({'path': record['path']}, {'$set': {**record}})
        else:
            col.insert_one(record)
    

    Error text:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/user/.local/lib/python3.8/site-packages/montydb/cursor.py", line 365, in next
        if len(self._data) or self._refresh():
      File "/home/user/.local/lib/python3.8/site-packages/montydb/cursor.py", line 354, in _refresh
        self.__query()
      File "/home/user/.local/lib/python3.8/site-packages/montydb/cursor.py", line 311, in __query
        for doc in documents:
      File "/home/user/.local/lib/python3.8/site-packages/montydb/storage/lightning.py", line 253, in <genexpr>
        docs = (self._decode_doc(doc) for doc in self._conn.iter_docs())
      File "/home/user/.local/lib/python3.8/site-packages/montydb/storage/__init__.py", line 227, in _decode_doc
        return bson.document_decode(
      File "/home/user/.local/lib/python3.8/site-packages/montydb/types/_bson.py", line 64, in document_decode
        return cls.BSON(doc).decode(codec_options)
      File "/home/user/.local/lib/python3.8/site-packages/bson/__init__.py", line 1258, in decode
        return decode(self, codec_options)
      File "/home/user/.local/lib/python3.8/site-packages/bson/__init__.py", line 970, in decode
        return _bson_to_dict(data, codec_options)
    bson.errors.InvalidBSON: objsize too large

    This is how the db looks in plain text:

    $ cat db/storages.mdb @ @ ~60c9a9cf368c720edc2668a6{"path": "/mnt/hdd4", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a6"}}~60c9a9cf368c720edc2668a5{"path": "/boot/efi", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a5"}}y60c9a9cf368c720edc2668a4{"path": "/run", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a4"}}v60c9a9cf368c720edc2668a3{"path": "/", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a3"}} f*~s60c9a9cf368c720edc2668a3{"path": "/", "total": 9999, "used": 99, "free": 0, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a3"}}2~60c9a9cf368c720edc2668a6{"path": "/mnt/hdd4", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a6"}}~60c9a9cf368c720edc2668a5{"path": "/boot/efi", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a5"}}y60c9a9cf368c720edc2668a4{"path": "/run", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a4"} f*~sr60c9a9cf368c720edc2668a3{"path": "/", "total": 9999, "used": 99, "free": 0, "status": "busy", "_id": {"$oid": "60c9a9cf368c720edc2668a3"}}~60c9a9cf368c720edc2668a6{"path": "/mnt/hdd4", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a6"}}~60c9a9cf368c720edc2668a5{"path": "/boot/efi", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a5"}}y60c9a9cf368c720edc2668a4{"path": "/run", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a4"}}

    Or this:

    $ cat db/storages.mdb @ @ 0 documents 0 document 0̝φħ^sTb_id̝φħ^sTpath/media/user/ssd1totalusedfreestatusreadyr60c9a9cf368c720edc2668a3{"path": "/", "total": 9999, "used": 99, "free": 0, "status": "busy", "_id": {"$oid": "60c9a9cf368c720edc2668a3"}}~60c9a9cf368c720edc2668a6{"path": "/mnt/hdd4", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a6"}}~60c9a9cf368c720edc2668a5{"path": "/boot/efi", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a5"}}y60c9a9cf368c720edc2668a4{"path": "/run", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a4"}} 0 documents̝φħ^sTb_id̝φħ^sTpath/media/user/ssd1totalusedfreestatusreadyr60c9a9cf368c720edc2668a3{"path": "/", "total": 9999, "used": 99, "free": 0, "status": "busy", "_id": {"$oid": "60c9a9cf368c720edc2668a3"}}~60c9a9cf368c720edc2668a6{"path": "/mnt/hdd4", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a6"}}~60c9a9cf368c720edc2668a5{"path": "/boot/efi", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a5"}}y60c9a9cf368c720edc2668a4{"path": "/run", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a4"} 0 0̝φħ^sTb_id̝φħ^sTpath/media/user/ssd1totalusedfreestatusreadyr60c9a9cf368c720edc2668a3{"path": "/", "total": 9999, "used": 99, "free": 0, "status": "busy", "_id": {"$oid": "60c9a9cf368c720edc2668a3"}}~60c9a9cf368c720edc2668a6{"path": "/mnt/hdd4", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a6"}}~60c9a9cf368c720edc2668a5{"path": "/boot/efi", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a5"}}y60c9a9cf368c720edc2668a4{"path": "/run", "total": 9999, "used": 99, "free": 9900, "status": "ready", "_id": {"$oid": "60c9a9cf368c720edc2668a4"}}

    Does it store changes after updating?

    bug 
    opened by rewiaca 2
  • update_one and update_many creating extra records for flatfile

    Performing updates with the flat-file storage gives me duplicate documents. I'm running Python 3.8.9 and have tried it with both pip install montydb and pip install montydb[bson]. This problem does not occur in sqlite mode.

    Subsequent program runs after inserting a record are causing a duplicate document to be added with the same _id.

    It looks like the OrderedDict cache update at https://github.com/davidlatwe/montydb/blob/master/montydb/storage/flatfile.py#L79 is where the extra document is being added. Debugging shows that Python adds a duplicate document because the keys in the ordered dict are actually different: one is an ObjectId object and the other is the binary serialized representation of that id.

    Here is the source code to reproduce. Note that you will have to run it twice because this only occurs on subsequent runs.

    from montydb import MontyClient, set_storage

    set_storage("./db/repo", cache_modified=0)
    client = MontyClient("./db/repo")
    coll = client.petsDB.pets

    if len([x for x in coll.find({"pet": "cat"})]) == 0:
        coll.insert_one({"pet": "cat", "domestic?": True, "climate": ["polar", "equatorial", "mountain"]})

    coll.update_one({"pet": "cat"}, {"$push": {"climate": "continental"}})
    # This should only ever print 1 on subsequent runs.
    print(len([x for x in coll.find({"pet": "cat"})]))
    
    bug 
    opened by SEary342 2
  • Find by ObjectId

    Could not find by ObjectId. Code:

    from bson.objectid import ObjectId
    
    x = collection.find()
    i = list(x)[0]['_id']
    
    y = collection.find({'_id': ObjectId(i)})
    print(list(y))
    

    I see that the database in mdb format has "_id": {"$oid": "id"}, and I tried to find the id as a string with {"_id.$oid": "string id"}, but that would not work either. Any suggestions? Thanks!
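
    For comparison, round-tripping the _id from the insert result is expected to work (a hedged sketch, assuming montydb mirrors pymongo's InsertOneResult):

    result = collection.insert_one({"x": 1})
    doc = next(collection.find({"_id": result.inserted_id}))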

    bug 
    opened by rewiaca 2
  • `bytes` type unsupported

    Issue

    Cannot store bytes as values.

    Env

    Windows 10, Python 3.8.1, MontyDB 2.3.6

    Actual error

    >>> from montydb import MontyClient
    >>> col = MontyClient(":memory:").db.test
    >>> col.insert_one({'data': b'some bytes'})
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "C:\Users\user\.virtualenvs\name\lib\site-packages\montydb\collection.py", line 139, in insert_one
        result = self._storage.write_one(self, document)
      File "C:\Users\user\.virtualenvs\name\lib\site-packages\montydb\storage\__init__.py", line 45, in delegate
        return getattr(delegator, attr)(*args, **kwargs)
      File "C:\Users\user\.virtualenvs\name\lib\site-packages\montydb\storage\memory.py", line 120, in write_one
        self._col[b_id] = self._encode_doc(doc, check_keys)
      File "C:\Users\user\.virtualenvs\name\lib\site-packages\montydb\storage\__init__.py", line 183, in _encode_doc
        return bson.document_encode(
      File "C:\Users\user\.virtualenvs\name\lib\site-packages\montydb\types\_bson.py", line 236, in document_encode
        for s in _encoder.iterencode(doc):
      File "C:\Program Files\Python38\lib\json\encoder.py", line 431, in _iterencode
        yield from _iterencode_dict(o, _current_indent_level)
      File "C:\Program Files\Python38\lib\json\encoder.py", line 405, in _iterencode_dict
        yield from chunks
      File "C:\Program Files\Python38\lib\json\encoder.py", line 438, in _iterencode
        o = _default(o)
      File "C:\Users\user\.virtualenvs\name\lib\site-packages\montydb\types\_bson.py", line 222, in default
        return NoBSON.JSONEncoder.default(self, obj)
      File "C:\Program Files\Python38\lib\json\encoder.py", line 179, in default
        raise TypeError(f'Object of type {o.__class__.__name__} '
    TypeError: Object of type bytes is not JSON serializable
    

    The same operation works with PyMongo:

    >>> from pymongo import MongoClient
    >>> col = MongoClient('127.0.0.1').tests.test1
    >>> col.insert_one({'data': b'some bytes'})
    <pymongo.results.InsertOneResult object at 0x000002BBB52BC7C0>
    >>> next(col.find())
    {'_id': ObjectId('60bdaa528ff3727b58f514f7'), 'data': b'some bytes'}
    
    bug 
    opened by strayge 2
  • GitHub Actions: Add more flake8 tests

    Instead of selecting a handful of vital tests to run, let’s run all flake8 tests ignoring only a handful of tests.

    flake8 . --ignore=E302,F401,F841,W605

    opened by cclauss 2
  • Inactive project?

    Hello guys, I find your project very interesting, but there has not been any development for quite a while. Is this project no longer under development? Kind regards

    opened by flome 2
  • $elemMatch in $elemMatch find nothing

    Hi @davidlatwe, I just discovered that monty (montydb-2.3.10) can't handle a query like:

    x = col.find({"mapping": {'$elemMatch': {'$elemMatch': {'$in': ['https://accounts.google.com/o/oauth2/auth']}}}})

    for an array-in-array element:

    "mapping": [ ["https://accounts.google.com/o/oauth2/auth", "client_id", "redirect_uri", "scope", "response_type"] ]

    It just doesn't find anything, unlike pymongo.

    bug 
    opened by rewiaca 1
  • Updating a document in an array leads to an error

    Updating a document in an array leads to an error: https://docs.mongodb.com/manual/reference/operator/update/positional/#update-documents-in-an-array

      collection.update_one(
          filter={
              "order": order_number,
              "products.product_id": product_id,
          },
          update={
              "$set": {
                  "products.$.quantity": quantity
              }
          }
      )
    

    leads to an exception:

                else:
                    # Replace "$" into matched array element index
                    matched = fieldwalker.get_matched()
    >               position = matched.split(".")[0]
    E               AttributeError: 'NoneType' object has no attribute 'split'
    \field_walker.py:586: AttributeError
    
    bug 
    opened by thasler 0
  • $slice projection does not return other fields

    When using the $slice projection together with an exclusion projection, the operation should return all the other fields in the document. https://docs.mongodb.com/manual/reference/operator/projection/slice/#behavior

    MontyDB only returns the array that is sliced.

    bug 
    opened by thasler 1
  • Implement MongoDB aggregate

    NotImplementedError: 'MontyCollection.aggregate' is NOT implemented! It would be awesome to have aggregate; this project is very cool nonetheless!

    epic feature 
    opened by hedrickw 0
  • Support for Mongoengine

    Montydb with the sqlite backend provides multi-process operation, at least in my initial trials with just two processes writing to the database simultaneously. This is clearly an advantage over mongita, which also provides a file/memory clone of PyMongo but doesn't provide multi-process support. Mongita, however, does support Mongoengine, which was achieved recently.

    Has anyone been able to use Montydb with Mongoengine?

    feature 
    opened by yamsu 2