A pure Python Database Abstraction Layer

Overview

pyDAL

pyDAL is a pure Python Database Abstraction Layer.

It dynamically generates the SQL/noSQL in real time using the specified dialect for the database backend, so that you do not have to write SQL code or learn different SQL dialects (the term SQL is used generically), and your code is portable among different types of databases.

pyDAL comes from web2py's original DAL, with the aim of being compatible with any Python program. pyDAL does not require web2py and can be used in any Python context.
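
As a hedged illustration of that portability, only the connection URI changes when switching backends; the connection strings below are examples, not required values:

from pydal import DAL, Field

# Only the URI changes between backends; the model code stays the same.
db = DAL('sqlite://storage.db')
# db = DAL('postgres://user:password@localhost/mydb')
# db = DAL('mysql://user:password@localhost/mydb')

db.define_table('person', Field('name'))
db.person.insert(name='Alice')
print(db(db.person.name == 'Alice').count())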


Installation

You can install pyDAL using pip:

pip install pyDAL

Usage and Documentation

Here is a quick example:

>>> from pydal import DAL, Field
>>> db = DAL('sqlite://storage.db')
>>> db.define_table('thing', Field('name'))
>>> db.thing.insert(name='Chair')
>>> query = db.thing.name.startswith('C')
>>> rows = db(query).select()
>>> print(rows[0].name)
Chair
>>> db.commit()

The complete documentation is available at http://www.web2py.com/books/default/chapter/29/06/the-database-abstraction-layer

What's in the box?

A little taste of pyDAL features (see the sketch after this list):

  • Transactions
  • Aggregates
  • Inner Joins
  • Outer Joins
  • Nested Selects
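
A minimal sketch touching a few of these features; the table and field names are illustrative, not part of pyDAL:

from pydal import DAL, Field

db = DAL('sqlite:memory')  # in-memory database, for illustration only
db.define_table('person', Field('name'))
db.define_table('thing', Field('name'), Field('owner', 'reference person'))

alice = db.person.insert(name='Alice')
db.thing.insert(name='Chair', owner=alice)
db.commit()  # transactions: commit() or rollback()

# inner join plus an aggregate, grouped by owner name
things = db.thing.id.count()
rows = db(db.thing.owner == db.person.id).select(
    db.person.name, things, groupby=db.person.name
)
for row in rows:
    print(row.person.name, row[things])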

Which databases are supported?

pyDAL supports the following databases:

  • SQLite
  • MySQL
  • PostgreSQL
  • MSSQL
  • FireBird
  • Oracle
  • DB2
  • Ingres
  • Sybase
  • Informix
  • Teradata
  • Cubrid
  • SAPDB
  • IMAP
  • MongoDB

License

pyDAL is released under the BSD 3-Clause license. For further details, please check the LICENSE file.

Comments
  • encoding error writing unicode string to postgre

    encoding error writing unicode string to postgre

    Taking the effort to post here what was reported on web2py. I feel that those DAL bugs are never seen as a priority unless they are posted here.

    https://github.com/web2py/web2py/issues/910

    bug 
    opened by niphlod 39
  • Problem with collision check in merge_tablemaps

    Problem with collision check in merge_tablemaps

    I'm struggling with this: https://github.com/web2py/pydal/blob/master/pydal/helpers/methods.py#L83

    ...as it requires identity of aliased Table instances, and hence fails if the same aliasing happens twice, i.e.:

    table.with_alias("alias") is not table.with_alias("alias")
    

    ...even though it's the same table, and the same alias, and consequently the same SQL.

    Why does it have to be this way - wouldn't it be good enough to ensure that the DAL name of both tables is the same (i.e. that they refer to the same original table)? Do they really have to be exactly the same Python object?
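
    A minimal sketch of the behavior described above, assuming a db with a defined table as in the README example:

    t1 = db.thing.with_alias("t")
    t2 = db.thing.with_alias("t")
    # Two distinct Python objects for the same table and the same alias,
    # which is what trips the identity-based collision check in merge_tablemaps.
    assert t1 is not t2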

    opened by nursix 37
  • Joinable subselects

    Joinable subselects

    This changeset implements subselect objects that can be used in a join like a virtual table or passed to Expression.belongs(). Subselect objects can be created using Set.nested_select(), which accepts the same arguments as a regular select(). One additional argument, correlated, determines whether the subselect may reference tables from the parent select (defaults to True, which means referencing is permitted). Example usage:

    total = table1.field2.sum().with_alias('total')
    sub1 = db(table1.field2 != None).nested_select(table1.field1, total, groupby=table1.field1)
    sub1 = sub1.with_alias('tmp')
    result1 = db(table2.field1 == sub1.field1).select(table2.ALL, sub1.total)
    
    total = table1.field2.sum().with_alias('total')
    sub2 = db(table1.field2 != None).nested_select(total, groupby=table1.field1)
    result2 = db(table2.field2.belongs(sub2)).select(table2.ALL)
    

    I'm opening this pull request early for design discussion. Unit tests for Python 2.7, 3.4 & 3.5 and SQLite, MySQL and PostgreSQL are passing on my machine but there's still work to do. Comments and suggestions are appreciated.

    Design changes

    • Adapters and dialects must no longer resolve table names over and over through DAL.__getitem__(). They must pass Table/Select objects around.
    • New methods for class Table:
      • query_name() - Full table name for query FROM clause, including possible alias.
      • query_alias() - The appropriate string that identifies the table throughout a query (e.g. for building fully qualified field names).
    • Rows.__init__() now takes an additional argument: a list of Field/Expression objects corresponding to the columns.
    • BaseAdapter._select() now returns a Select object instead of a string.
    • BaseAdapter.tables() now returns a dict mapping table names to table objects instead of a list of names.
    • BaseAdapter.common_filter() now accepts table objects as well as table names.
    • Many other BaseAdapter internal methods now take or return table objects instead of table names.
    new feature 
    opened by nextghost 36
  • Refactoring

    Refactoring

    As introduced on the group (https://groups.google.com/forum/#!topic/web2py-developers/zw6-bFpe96s) this is a major refactoring of the code base.

    Feedback and comments are welcome.

    Intentions and rationale:

    • Stop using the BaseAdapter as a super-collector (sometimes taking the place of SQLite – ed. note)
    • Make better use of inheritance between classes
    • Simplify the writing of new adapters (and encourage contributions)
    • De-uglify a lot of the code, making it simpler to understand and debug
    • Allow performance optimizations to be implemented easily

    Refactoring steps:

    • [x] BaseAdapter re-structuring
    • [x] Better SQLAdapter and NoSQLAdapter
    • [x] Base dialects structure
    • [x] Base parsers structure
    • [x] Base representers structure
    • [x] Migrations separation from base adapter
    • [x] SQLite and Spatialite adapters
    • [x] Postgresql adapter
    • [x] Postgresql 2/new adapter
    • [x] MySQL adapter
    • [x] MSSQL adapter
    • [x] jdbc versions of adapters
    • [x] Review of NoSQLAdapter
    • [x] MongoDB Adapter
    • [x] Other SQL adapters
    • [x] Other NoSQL adapters
    • [x] Final code cleaning

    Changes from 16.03:

    • The Postgre adapter no longer parses datetime objects
    • MSSQL2Adapter no longer exists since it was the same as MSSQLNAdapter (but the URI is still valid)
    • Includes fix for #296
    • BaseAdapter.connection and BaseAdapter.cursor are now thread safe
    • Removed _lastsql from adapters since it was not thread safe
    • Empty values for ObjectId fields in MongoAdapter are now stored (and returned) as None instead of a fake ObjectId converted to integer 0
    opened by gi0baro 36
  • Table name bugfixes

    Table name bugfixes

    This changeset fixes update()/delete() on aliased tables and bugs caused by using the wrong table/field name in various queries (especially in the migrator).

    Changes in class Table

    • Table._ot has been renamed to Table._dalname and it is always set to the original internal table name.
    • Table._rname now defaults to quoted Table._dalname (except when the table is not associated with any database, in which case it is set to None).
    • Added Table._raw_rname, which is the unquoted version of _rname (for generating extra identifiers based on the backend table name, e.g. sequence names).
    • Table.sqlsafe renamed to sql_shortref.
    • Table.sqlsafe_alias renamed to sql_fullref.

    Example where they belong in a query: SELECT * FROM sql_fullref WHERE sql_shortref.id > 0
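
    A hedged sketch of where these renamed properties would show up, using the attribute names introduced by this changeset (the printed values are only indicative):

    t = db.thing.with_alias("t")
    print(t._dalname)      # original internal table name, e.g. 'thing'
    print(t.sql_shortref)  # short reference used to qualify fields within the query
    print(t.sql_fullref)   # full reference (with alias) used in the FROM clause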

    Changes in class Field

    • Field.sqlsafe now raises an exception when the field object is not associated with any table.
    • Deprecated Field.sqlsafe_name, use Field._rname instead.
    • Added property Field.longname, which returns the field reference string as 'tablename.fieldname' (as opposed to 'table_rname.field_rname' returned by sqlsafe).
    • Field.clone() now returns the new Field object without a link to any table and with an empty _rname if it needs to be requoted.
    • Added method Field.bind() which links the field to a table. If the field is already linked, it raises an exception. It also sets an empty Field._rname to the quoted Field.name. This method is called internally by the table/subselect constructor.
    • Added Field._raw_rname which is the unquoted version of _rname.

    Other changes

    • Removed BaseAdapter.alias(); the code was moved to Table.with_alias() where it belongs, plus bugfixes.
    • SQLDialect.update() and SQLDialect.delete() now take a Table object instead of a table name. This change was necessary because MySQLDialect.delete() and other dialects need to access both sql_shortref and sql_fullref when the table is aliased.
    • New method writing_alias() added to SQLDialect. The method simply returns table.sql_fullref. It was added so that dialects like SQLite, which don't support UPDATE/DELETE on aliased tables, can raise an exception without reimplementing the whole update()/delete() methods.
    • The IMAP adapter was updated to the new API (passing Table objects instead of table names). I forgot to do this in the subselect pull request. Beware that this change is untested.
    • Field._rname and Field._raw_rname will now be stored in pickled table metadata files.
    opened by nextghost 25
  • [WIP] Indexes support

    [WIP] Indexes support

    Don't merge this until I say it's ready.

    This implements basic indexes support as requested in #152. Intended workflow:

    # just fields
    db.mytable.create_index('index_name', db.mytable.myfield_a, db.mytable.myfield_b)
    # with expressions
    db.mytable.create_index('index_name', db.mytable.myfield_a, db.mytable.myfield_b.coalesce(None))
    # with additional parameters (where is supported only by postgres)
    db.mytable.create_index('index_name', db.mytable.myfield_a, where=(db.mytable.myfield_b == False), unique=True)
    
    db.mytable.drop_index('index_name')
    

    @niphlod @mdipierro @ilvalle @michele-comitini feedback is welcome.

    in progress 
    opened by gi0baro 20
  • sqlform.factory issue

    sqlform.factory issue

    It seems to me that SQLFORM.factory is broken. Git bisect traced it to commit @03dc3a8d4cddf3c1fd92cdab62d923d64f777776

    Traceback (most recent call last):
      File "/home/marin/web2py/gluon/restricted.py", line 227, in restricted
        exec ccode in environment
      File "/home/marin/web2py/applications/move/controllers/user.py", line 142, in <module>
      File "/home/marin/web2py/gluon/globals.py", line 412, in <lambda>
        self._caller = lambda f: f()
      File "/home/marin/web2py/applications/move/controllers/user.py", line 98, in login
        form = SQLFORM.factory(Field('email', 'string'), Field('password', 'string')).process(keepvalues=True)
      File "/home/marin/web2py/gluon/html.py", line 2301, in process
        self.validate(**kwargs)
      File "/home/marin/web2py/gluon/html.py", line 2238, in validate
        if self.accepts(**kwargs):
      File "/home/marin/web2py/gluon/sqlhtml.py", line 1688, in accepts
        self.vars.id = self.table.insert(**fields)
      File "/home/marin/web2py/gluon/packages/dal/pydal/objects.py", line 706, in insert
        ret = self._db._adapter.insert(self, self._listify(fields))
      File "/home/marin/web2py/gluon/packages/dal/pydal/adapters/base.py", line 737, in insert
        raise e
    ValueError: INSERT INTO no_table(password,email) VALUES ('password','[email protected]');
    
    opened by mpranjic 19
  • Init adapters properly even when reusing idle connection from pool

    Init adapters properly even when reusing idle connection from pool

    Some adapters are not properly initialized when they reuse an idle connection from the pool. In the case of PostgreSQL, this actually results in unnecessary migrations of json fields back and forth between the TEXT and JSON SQL types, because the adapter will not check whether it can use dialects with native JSON support.

    This pull request introduces a new after_reconnect() adapter method which is executed every time the connection is assigned to a new adapter object, no matter whether the connection was freshly created or reused from the connection pool.
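
    A minimal sketch of how an adapter might use this hook; the subclass and the statement it runs are illustrative, only the after_reconnect() name comes from this pull request, and it assumes the PostgreSQL adapter class lives at pydal.adapters.postgres.Postgre:

    from pydal.adapters.postgres import Postgre  # assumed location of the PostgreSQL adapter

    class MyPostgre(Postgre):
        def after_reconnect(self):
            # Runs whenever a connection is bound to a new adapter object,
            # whether it was freshly created or reused from the connection pool.
            self.execute("SET search_path TO public")  # illustrative per-connection setup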

    opened by nextghost 18
  • Add full Oracle testing to Travis, fix the adapter

    Add full Oracle testing to Travis, fix the adapter

    Since so many errors and fundamental issues seem to have crept into the Oracle adapter over time, I took on the task of setting up the automated Travis tests to include Oracle.

    Note: using Oracle on Travis requires a download of the free Express Edition Oracle platform during container build, since it cannot be redistributed, so to make it work (via this handy project) you need to add two environment variables to Travis directly: a username / password for a (free) oracle.com account that has accepted their license agreement for this product. I can email that username / password, and once added to the main pydal Travis account, the build will succeed. See the successful build here.

    Once I started running the tests on Oracle, a considerable number of fundamentally breaking issues cropped up everywhere, including everything from broken Python syntax to badly handled SQL generation and types, but after many adjustments and improvements to the adapter, dialect, parser, etc., the entire test suite now runs successfully (except for a handful of tests that are not appropriate for Oracle, like the Time type it doesn't support, which I toggled off for this adapter in tests/sql.py).

    The biggest challenge for Oracle was properly handling case-sensitive table names and field names (when not using entity_quoting=False), which requires that they be quoted correctly everywhere. I did need to add a few additional switches to the main migrator.py file to make those additional quotes possible in places where it didn't currently invoke dialect.quote on an rname. Those I wrapped in an if-oracle condition for now, but perhaps it would be best in the long run to automatically invoke the quote function on rnames for these additional cases for all adapters.

    Everything now works smoothly--even complex things like correct results paging for all the test instances, with far greater stability than the prior Oracle query-paging syntax--and I'm hoping that moving it into the automated tests can finally make Oracle a first-tier citizen again for pydal, given that it had drifted into severe problems over time.

    opened by jasonphillips 16
  • fix upload fields with uploadfs/pyfilesystem. Added test.

    fix upload fields with uploadfs/pyfilesystem. Added test.

    @mdipierro @gi0baro The upload code in present pyDAL is not covered by tests. I added a test and corrected a bug that shows up with uploadfs=pyfilesystem. This bug affects current web2py too.

    opened by michele-comitini 16
  • Row.__getattr__ raises AttributeError when trying to access '_extra'

    Row.__getattr__ raises AttributeError when trying to access '_extra'

    Just spotted. I have this Row returned from pyDAL:

    <Row {'users': {'id': 26470719833643260735021266671L}, '_extra': {'registration_id': '', 'last_name': 'Barillari', 'first_name': 'Giovanni', 'reset_password_key': '', 'password': '***', 'registration_key': 'b40bde36-258a-48fb-b565-ec22cabc0586', 'email': '***'}}>
    

    In v15.03 this was working as expected. In the current master the problem seems to be caused by the different behavior of Row.__getitem__ and Row.__getattr__. @ilvalle maybe a consequence of your refactoring with BasicStorage? I propose to add a:

    __getattr__ = __getitem__
    

    in Row class.

    bug 
    opened by gi0baro 16
  • non-database upload field fails to build filename

    non-database upload field fails to build filename

    Details can be found here -> https://groups.google.com/g/py4web/c/f2BGHP_1-F8

    In short, if you have an upload field that isn't based on a database field, it fails to build the filename.

    I will submit a PR.

    opened by jpsteil 0
  • pool_size and leaking connections on PostgreSQL

    pool_size and leaking connections on PostgreSQL

    The number of PostgreSQL connections from apps keeps growing, even to hundreds.

    All connections enter an idle state after a COMMIT or ROLLBACK query. It seems that the connection pool does not pick them up properly, but keeps opening new ones. I am not sure if any connections are ever closed. As soon as the MAX_CONNECTIONS limit set by PostgreSQL is reached, the db starts refusing further connections, crashing the app. The problem occurs regardless of pool_size (tried 0 and 10).

    Experienced with pydal used by py4web; not sure if it is py4web-specific.

    The problem disappears if I use pool_size=0 and the custom DAL fixture:

    class ClosingDAL(pydal.DAL, Fixture):

        reconnect_on_request = True

        def on_request(self, context):
            if self.reconnect_on_request:
                self._adapter.reconnect()
            threadsafevariable.ThreadSafeVariable.restore(ICECUBE)

        def __close_silently(self):
            # to ignore AttributeError: '_thread._local' object has no attribute '_pydal_db_instances_'
            try:
                self.close()
            except Exception as e:
                print(f"[DAL] closing error {e}")

        def on_error(self, context):
            self.rollback()
            self.__close_silently()

        def on_success(self, context):
            self.commit()
            self.__close_silently()

    For further investigation: since the DAL from core.py performs either commit() or rollback() only, which part of pydal is responsible for closing idle connections above pool_size?

    Related discussion thread: https://groups.google.com/g/py4web/c/6pyLn6MsHus
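
    For reference, a minimal sketch of the pooled setup being discussed (the URI and pool size are illustrative):

    from pydal import DAL

    db = DAL('postgres://user:password@localhost/mydb', pool_size=10)
    # ... work with db ...
    db.commit()
    db.close()  # with pooling enabled, close() is expected to recycle the connection into the pool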

    opened by zejdad 0
  • New google datastore

    New google datastore

    Tested only with this:

    from pydal import DAL, Field
    db = DAL("google:datastore")
    db.define_table("thing", Field("name"), Field("age", "integer"), Field('price','float'), Field('active', 'boolean'))
    print(db.thing.insert(name="table", age=1, price=3.14, active=True))
    rows = db(db.thing.name=="table").select()
    print(str(rows))
    print(db(db.thing.name=="table").count())
    rows = db(db.thing.age==1).select()
    print(str(rows))
    id = rows[-1].id
    row = db.thing(id)
    print(row)
    row.update_record(name="chair")
    row.delete_record()
    print(db.thing(id))
    rows = db(db.thing).select()
    print(str(rows))
    db(db.thing.name=="chair").delete()
    
    opened by mdipierro 4
  • Upgrade to python3 datastore adapter. Help

    Upgrade to python3 datastore adapter. Help

    Hi, pyDAL works well with SQL databases on Google App Engine, but google:datastore and google:datastore+ndb don't work with Python 3. I have tried to migrate GoogleDatastore(NoSQLAdapter) with no success so far.

    New guides have been published on moving away from the deprecated Google libraries:

    https://github.com/GoogleCloudPlatform/appengine-python2-3-migration/tree/main/datastore

    https://codelabs.developers.google.com/codelabs/cloud-gae-python-migrate-2-cloudndb?utm_source=codelabs&utm_medium=et&utm_campaign=CDR_wes_aap-serverless_mgrcloudndb_201021&utm_content=-#0

    https://codelabs.developers.google.com/codelabs/cloud-gae-python-migrate-3-datastore#0

    https://github.com/googlecodelabs/migrate-python2-appengine

    I'm asking for some help to upgrade this pydal feature.

    opened by jparga 0
  • Deprecation imports google.appengine.api.rdbms in _gae and google adapter

    Deprecation imports google.appengine.api.rdbms in _gae and google adapter

    This warning appears when running in Google App Engine:

    Please remove any imports of google.appengine.api.rdbms. First Generation Cloud SQL instances have been shut down, and rdbms.py will be removed in a future release. See: https://cloud.google.com/sql/docs/mysql/deprecation-notice

    The rdbms library is imported in _gae.py:

    # -*- coding: utf-8 -*-

    try:
        from new import classobj
    except ImportError:
        classobj = type

    try:
        from google.appengine.ext import db as gae
    except ImportError:
        gae = None
        Key = None
    else:
        from google.appengine.ext import ndb
        from google.appengine.api import namespace_manager, rdbms
        from google.appengine.api.datastore_types import Key  # for belongs on ID
        from google.appengine.ext.ndb.polymodel import PolyModel as NDBPolyModel

    And it is used for the google:sql connection in the google.py adapter:

    if gae:
        from .._gae import ndb, rdbms, namespace_manager, classobj, NDBPolyModel
        from ..helpers.gae import NDBDecimalProperty

    class GoogleMigratorMixin(object):
        migrator_cls = InDBMigrator

    @adapters.register_for("google:sql")
    class GoogleSQL(GoogleMigratorMixin, MySQL):
        uploads_in_blob = True
        REGEX_URI = "^(?P<instance>.*)/(?P<db>.+)$"

        def _find_work_folder(self):
            super(GoogleSQL, self)._find_work_folder()
            if os.path.isabs(self.folder) and self.folder.startswith(os.getcwd()):
                self.folder = os.path.relpath(self.folder, os.getcwd())

        def _initialize_(self):
            super(GoogleSQL, self)._initialize_()
            self.folder = self.folder or pjoin(
                "$HOME",
                THREAD_LOCAL._pydal_folder_.split(os.sep + "applications" + os.sep, 1)[1],
            )
            ruri = self.uri.split("://", 1)[1]
            m = re.match(self.REGEX_URI, ruri)
            if not m:
                raise SyntaxError("Invalid URI string in DAL")
            self.driver_args["instance"] = self.credential_decoder(m.group("instance"))
            self.dbstring = self.credential_decoder(m.group("db"))
            self.createdb = self.adapter_args.get("createdb", True)
            if not self.createdb:
                self.driver_args["database"] = self.dbstring

        def find_driver(self):
            self.driver = "google"

        def connector(self):
            return rdbms.connect(**self.driver_args)

    https://cloud.google.com/appengine/docs/standard/python/refdocs/modules/google/appengine/api/rdbms#connect

    opened by jparga 0
  • NotNullViolation on PostgreSQL migration of populated table with a notnull field

    NotNullViolation on PostgreSQL migration of populated table with a notnull field

    Hi,

    db.define_table('tablename', Field('fieldname', notnull=True)) triggers psycopg2.errors.NotNullViolation: column "fieldname__tmp" of relation "tablename" contains null values on PostgreSQL migrations when there is already some data in the table. This makes sense, because py4web creates a notnull fieldname__tmp column with null values (which is supposed to be replaced by the fieldname data in the next step).

    Setting default to something other than DEFAULT serves as a workaround.

    On PostgreSQL I would suggest setting notnull=False for temporary fields (*__tmp) even if the original field is notnull=True, unless there is a non-null default value specified.
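
    A minimal sketch of the workaround mentioned above (the table and field names are placeholders):

    db.define_table(
        'tablename',
        # A non-null default lets the migration populate the temporary
        # fieldname__tmp column instead of leaving it full of NULLs.
        Field('fieldname', notnull=True, default=''),
    )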

    David

    opened by zejdad 0
Releases (v20200321.1)
  • v20200321.1(Mar 21, 2020)

  • v18.08(Aug 6, 2018)

  • v17.11(Nov 13, 2017)

  • v17.08(Aug 28, 2017)

  • v17.07(Jul 4, 2017)

    July 2017 release

    • Various bugfixes
    • Field.set_attributes now returns the instance
    • [PostgreSQL] Added jsonb type and serialization/parsing support
    • Added unix socket support in MySQL and PostgreSQL adapters
    • [GCP] Added MySQL and PostgreSQL support
  • v17.03(Mar 9, 2017)

  • v17.01(Jan 31, 2017)

  • v16.11(Nov 10, 2016)

  • v16.09(Sep 28, 2016)

    September 2016 release

    • [MongoDB] Enabled query(field==list:reference)
    • [PostgreSQL] Several bugfixes
    • Improved portalocker behaviour on py3
  • v16.08(Aug 13, 2016)

  • v16.07(Jul 25, 2016)

  • v16.06.28(Jun 28, 2016)

  • v16.06.20(Jun 20, 2016)

  • v16.06.09(Jun 9, 2016)

    Bugfix release

    Changes since 16.06:

    • Fixed boolean parsing errors on Postgre introduced with 16.06
    • Fixed connection issues on multiprocessing environments with pre-fork
    • Added 'postgres3' adapter to use driver 'boolean' type on fields
  • v16.06(Jun 6, 2016)

    June 2016 release

    • Major refactoring of the codebase
    • Improved Postgre adapter performance
    • [MSSQL] Fixed sql generation with orderby on MSSQL3 adapters
    • Connection and cursors are now thread safe
    • [Mongo] Empty values for ObjectId fields are now stored and parsed as None instead of a fake ObjectId(0)
    • Fixed multiple calls of initialization callbacks during connection
    • [Postgre] Added more extraction helpers on fields
    • Enabled entity quoting as default behavior
    • Added indexes creation and drop support on SQL adapters
    • Several bugfixes
  • v16.03(Mar 23, 2016)

    March 2016 Release

    • Implemented faster SQLite logic in the absence of db queries
    • PEP8 improvements
    • Added support for New Relic (newrelic>=2.10.0.8)
    • Added support for outer-scoped tablenames
    • Fixed Google Cloud SQL support
    • Fixed Oracle DB support
    • Several bugfixes
  • v15.12(Dec 16, 2015)

    December 2015 Release

    • Added support for IPv6 addresses enclosed in brackets in the URI's host
    • [MongoDB] Implemented unique and notnull support for fields during insert
    • Several bugfixes
  • v15.09(Sep 28, 2015)

    2015 September release

    • [MongoDB] Implemented orderby_on_limitby
    • [MongoDB] Implemented distinct for count
    • [MongoDB] Implemented select() with having parameter
    • [MongoDB] Implemented coalesce operations
    • Virtual fields are now ordered depending on definition
    • Allow usage of custom Row classes
    • Added .where method to Set and DAL
    • Several bugfixes
  • v15.07(Jul 10, 2015)

    2015 July release

    • Added smart_query support for 'contains' on fields of type 'list:string'
    • Implemented correct escaping for 'LIKE' (see https://github.com/web2py/pydal/issues/212)
    • Added support for ondelete with fields of type 'list:reference' on MongoDBAdapter
    • Improved BasicStorage performance
    • Added arithmetic expressions support on MongoDBAdapter
    • Added aggregations support on MongoDBAdapter
    • The Table.validate_and_insert and Table.validate_and_update methods now also validate empty fields
    • Added support for expression operators on MongoDBAdapter
    • Several bugfixes
  • v15.05.29(May 29, 2015)

  • v15.05.26(May 26, 2015)

  • v15.05(May 23, 2015)

    2015 May release

    • Fixed True/False expressions in MSSQL
    • Introduced iterselect() and IterRows
    • Extended SQLCustomType to support widget & represent attributes
    • Updated MongoDBAdapter to support pymongo 3.0
    • Implemented JSON serialization for objects
    • Refactored many internal objects to improve performance
    • Added python 3.x support (experimental)
    • Several fixes and improvements to MongoDBAdapter
    • Implemented unicode handling in MSSQL (experimental) via mssql4n and mssql3n adapters
      Notes: These adapters will probably become the de-facto standard for MSSQL handling; any other adapter will continue to be supported just for legacy databases
    • Restricted table and field names to "valid" ones
      Notes: the "dotted-notation-friendly" syntax accepts names that are:
      - alphanumeric
      - not starting with an underscore or an integer
      The rname attribute is intended to be used for anything else (see the sketch below).
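
      A minimal sketch of the rname escape hatch described above (the table and column names are illustrative):

      db.define_table(
          'legacy',
          Field('amount', rname='Amount_Due'),  # backend column name differs from the DAL field name
          rname='Legacy_Table',                 # backend table name differs from the DAL table name
      )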
  • v15.03(Mar 24, 2015)

    2015 March release

    • Fixed redefine with lazy tables
    • Added tests for update_or_insert, bulk_insert, validate_and_update_or_insert
    • Enhanced connections open/close flow
    • Enhanced logging flow
    • Refactored google adapters: ndb is now used by default
    • Added default representation for reference fields
    • Fixed some caching issues when using pickle
    • Several improvements and fixes in MongoDBAdapter
  • v15.02.27(Feb 27, 2015)

  • v15.02(Feb 9, 2015)

    2015 February release

    • Updated pg8000 support in PostgreSQLAdapter
    • Fixed ilike for Field type 'list:string' in PostgreSQLAdapter
    • Added case sensitive/insensitive tests for contains
    • Fixed expression evaluation on PostgreSQLAdapter
    • Fixed common_filter issue in _enable_record_versioning
    • Removed contrib drivers
    • Fixed uuid attribute of DAL class
    • Added caching tests
  • v0.12.25(Dec 25, 2014)
