A pythonic interface to Amazon's DynamoDB

Related tags: ORM, python, aws, dynamodb
Overview

PynamoDB

A Pythonic interface for Amazon's DynamoDB.

DynamoDB is a great NoSQL service provided by Amazon, but the API is verbose. PynamoDB presents you with a simple, elegant API.

Useful links:

  • Source: https://github.com/pynamodb/PynamoDB
  • Documentation: https://pynamodb.readthedocs.io/

Installation

From PyPI:

$ pip install pynamodb

From GitHub:

$ pip install git+https://github.com/pynamodb/PynamoDB#egg=pynamodb

From conda-forge:

$ conda install -c conda-forge pynamodb

Upgrading

Warning

The behavior of 'UnicodeSetAttribute' has changed in backwards-incompatible ways as of the 1.6.0 and 3.0.1 releases of PynamoDB.

The following steps can be used to safely update PynamoDB assuming that the data stored in the item's UnicodeSetAttribute is not JSON. If JSON is being stored, these steps will not work and a custom migration plan is required. Be aware that values such as numeric strings (e.g. "123") are valid JSON.
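
For example, a bare numeric string parses as valid JSON while an arbitrary word does not, which is what makes such values fall under the custom-migration case. A quick illustration using only the standard library:

import json

print(json.loads("123"))  # 123: a numeric string is valid JSON
try:
    json.loads("hello")   # an arbitrary word is not valid JSON
except json.JSONDecodeError:
    print("not JSON")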

When upgrading services that use PynamoDB < 1.6.0 with tables that contain UnicodeSetAttributes, first deploy version 1.5.4 to prepare the read path for the new serialization format.

Once all services that read from the tables have been deployed, deploy version 2.2.0 and migrate your data using the convenience methods provided on the Model. (Note: these methods are only available in version 2.2.0.)

def get_save_kwargs(item):
    # Return any conditional kwargs needed to ensure data does not get overwritten,
    # for example if your item has a `version` attribute.
    return {'version__eq': item.version}

# Re-serialize all UnicodeSetAttributes in the table by scanning all items.
# See the documentation of fix_unicode_set_attributes for rate limiting options
# to avoid exceeding provisioned capacity.
Model.fix_unicode_set_attributes(get_save_kwargs)

# Verify the migration is complete (needs_unicode_set_fix() returns False once done)
print("Migration Complete? " + str(not Model.needs_unicode_set_fix()))

Once all data has been migrated then upgrade to a version >= 3.0.1.

Basic Usage

Create a model that describes your DynamoDB table.

from pynamodb.models import Model
from pynamodb.attributes import UnicodeAttribute

class UserModel(Model):
    """
    A DynamoDB User
    """
    class Meta:
        table_name = "dynamodb-user"
    email = UnicodeAttribute(null=True)
    first_name = UnicodeAttribute(range_key=True)
    last_name = UnicodeAttribute(hash_key=True)

PynamoDB allows you to create the table if needed (it must exist before you can use it!):

UserModel.create_table(read_capacity_units=1, write_capacity_units=1)

Create a new user:

user = UserModel("Denver", "John")  # hash key (last_name) first, then range key (first_name)
user.email = "[email protected]"
user.save()

Now, search your table for all users with a last name of 'Denver' and whose first name begins with 'J':

for user in UserModel.query("Denver", UserModel.first_name.startswith("J")):
    print(user.first_name)

An example of querying your table with a filter condition:

for user in UserModel.query("Denver", UserModel.email == "[email protected]"):
    print(user.first_name)

Retrieve an existing user:

try:
    user = UserModel.get("John", "Denver")
    print(user)
except UserModel.DoesNotExist:
    print("User does not exist")

Advanced Usage

Want to use indexes? No problem:

from pynamodb.models import Model
from pynamodb.indexes import GlobalSecondaryIndex, AllProjection
from pynamodb.attributes import NumberAttribute, UnicodeAttribute

class ViewIndex(GlobalSecondaryIndex):
    class Meta:
        read_capacity_units = 2
        write_capacity_units = 1
        projection = AllProjection()
    view = NumberAttribute(default=0, hash_key=True)

class TestModel(Model):
    class Meta:
        table_name = "TestModel"
    forum = UnicodeAttribute(hash_key=True)
    thread = UnicodeAttribute(range_key=True)
    view = NumberAttribute(default=0)
    view_index = ViewIndex()

Now query the index for all items with 0 views:

for item in TestModel.view_index.query(0):
    print("Item queried from index: {0}".format(item))

It's really that simple.

Want to use DynamoDB Local? Just add a host attribute to your model's Meta class and point it at your local server.

from pynamodb.models import Model
from pynamodb.attributes import UnicodeAttribute

class UserModel(Model):
    """
    A DynamoDB User
    """
    class Meta:
        table_name = "dynamodb-user"
        host = "http://localhost:8000"
    email = UnicodeAttribute(null=True)
    first_name = UnicodeAttribute(range_key=True)
    last_name = UnicodeAttribute(hash_key=True)

Want to enable streams on a table? Just add a stream_view_type attribute to your model's Meta class and specify the type of data you'd like to stream.

from pynamodb.models import Model
from pynamodb.attributes import UnicodeAttribute
from pynamodb.constants import STREAM_NEW_AND_OLD_IMAGE

class AnimalModel(Model):
    """
    A DynamoDB Animal
    """
    class Meta:
        table_name = "dynamodb-user"
        host = "http://localhost:8000"
        stream_view_type = STREAM_NEW_AND_OLD_IMAGE
    type = UnicodeAttribute(null=True)
    name = UnicodeAttribute(range_key=True)
    id = UnicodeAttribute(hash_key=True)

Features

  • Python >= 3.6 support
  • An ORM-like interface with query and scan filters
  • Compatible with DynamoDB Local
  • Supports the entire DynamoDB API
  • Support for Unicode, Binary, JSON, Number, Set, and UTC Datetime attributes
  • Support for Global and Local Secondary Indexes
  • Provides iterators for working with queries and scans that are automatically paginated
  • Automatic pagination for bulk operations (see the sketch after this list)
  • Complex queries
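
To illustrate the batch operations mentioned above, here is a minimal sketch reusing the UserModel from the Basic Usage section (the key values and loop bounds are arbitrary):

# Writes are buffered and sent in chunks of up to 25 items automatically.
with UserModel.batch_write() as batch:
    for i in range(100):
        batch.save(UserModel("Denver", "First{0}".format(i)))

# Reads are paginated automatically as well.
keys = [("Denver", "First{0}".format(i)) for i in range(100)]
for user in UserModel.batch_get(keys):
    print(user.first_name)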
Comments
  • Support for On Demand mode and Global Secondary Indexes

    The current code has incomplete support for the On Demand billing mode of DynamoDB.

    The notes around the launch state that the indexes inherit the On Demand mode from their tables.

    https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/

    Indexes created on a table using on-demand mode inherit the same scalability and billing model. You don’t need to specify throughput capacity settings for indexes, and you pay by their use. If you don’t have read/write traffic to a table using on-demand mode and its indexes, you only pay for the data storage.

    But the code currently throws an error AttributeError: type object 'Meta' has no attribute 'read_capacity_units' when trying to make a Global Secondary Index on a table using the On Demand mode.
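
    A minimal sketch of the intended end state, where the index simply inherits the table's on-demand billing (the model, index, and billing_mode usage here are illustrative assumptions, not the project's confirmed fix):

    from pynamodb.models import Model
    from pynamodb.indexes import GlobalSecondaryIndex, AllProjection
    from pynamodb.attributes import UnicodeAttribute

    class ByEmailIndex(GlobalSecondaryIndex):
        class Meta:
            index_name = "by-email-index"
            projection = AllProjection()
            # No read/write capacity units here: the index should simply
            # inherit the table's on-demand billing mode.
        email = UnicodeAttribute(hash_key=True)

    class OnDemandUser(Model):
        class Meta:
            table_name = "on-demand-user"
            billing_mode = "PAY_PER_REQUEST"
        user_id = UnicodeAttribute(hash_key=True)
        email = UnicodeAttribute()
        by_email_index = ByEmailIndex()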

    bug 
    opened by techdragon 25
  • Multiple attrs update

    This allows updating multiple attributes at once using a new method Model.update() (code based on the existing method Model.update_item()).

    This fixes #157 and #195
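
    A rough sketch of the resulting usage with today's actions-based API (the Thread model below is hypothetical, and the exact signature introduced by this PR differed in older releases):

    from pynamodb.models import Model
    from pynamodb.attributes import NumberAttribute, UnicodeAttribute

    class Thread(Model):  # hypothetical model for illustration only
        class Meta:
            table_name = "thread"
        forum = UnicodeAttribute(hash_key=True)
        subject = UnicodeAttribute(null=True)
        views = NumberAttribute(default=0)

    # Update several attributes of one item in a single call.
    thread = Thread.get("forum-1")
    thread.update(actions=[
        Thread.views.set(0),
        Thread.subject.set("Renamed thread"),
    ])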

    opened by yedpodtrzitko 21
  • Given key conditions were not unique

    Hello all,

    I'm trying to update_or_create a document, which works most of the time.

    Table:

    class Report(BaseEntity):
        class Meta(BaseEntity.Meta):
            table_name = 'Reports'
    
        SomeId = UnicodeAttribute(hash_key=True)
        Time = NumberAttribute(range_key=True)  # Time as UTC Unix timestamp
        IncAttr = NumberAttribute()
    
    

    Here is the update command which sometimes failed: Report(SomeId="id", Time=get_current_time_report()).update(actions=[Report.IncAttr.add(1)])

    Somehow, in the database there are two documents with the same SomeId and Time

    Is it some case of a race condition? Anyone have any idea?

    Thanks.
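
    If the intent is that the update should never create a new item, a condition expression makes that explicit; a rough sketch against the Report model above (the condition parameter on update exists in recent PynamoDB releases):

    from pynamodb.exceptions import UpdateError

    report = Report(SomeId="id", Time=get_current_time_report())
    try:
        # Fail loudly if no item with this exact key exists yet,
        # instead of silently creating one.
        report.update(
            actions=[Report.IncAttr.add(1)],
            condition=Report.SomeId.exists(),
        )
    except UpdateError:
        # The key did not exist; decide explicitly whether to create it.
        pass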

    opened by amirvaza 19
  • v5.2.0 breaking change? AttributeError: type object 'Meta' has no attribute 'read_capacity_units'

    The release of 5.2.0 broke GlobalSecondaryIndex syntax that worked on 5.1.0. Requesting that a note about it be added to the release notes for 5.2.0.

    Broken syntax in 5.2.0, but working in 5.1.0

    class UserRangeKey(GlobalSecondaryIndex):
        class Meta:
            index_name = "user-range-key-index"
            projection = AllProjection()
    
        user_range_key = UnicodeAttribute(hash_key=True)
    

    Valid syntax in 5.2.0

    class UserRangeKey(GlobalSecondaryIndex):
        class Meta:
            index_name = "user-range-key-index"
            projection = AllProjection()
            billing_mode = "PAY_PER_REQUEST"
    
        user_range_key = UnicodeAttribute(hash_key=True)
    

    The lines it breaks on are in PR #996

    opened by lehmat 18
  • _fast_parse_utc_date_string fails on pre year 1000

    Python datetime supports dates back to year 0, but _fast_parse_utc_date_string forces datetimes to have four digit years. This resulted in the following error:

    ValueError: Datetime string '539-02-20T08:36:49.000000+0000' does not match format '%Y-%m-%dT%H:%M:%S.%f%z'

    _fast_parse_utc_date_string should fall back to a library call if it fails.

    Edit:

    My bad, looks like datetime.datetime.strptime('539-02-20T08:36:49.000000+0000', '%Y-%m-%dT%H:%M:%S.%f%z') also fails, meaning isoformat() doesn't produce parseable datetimes. :/

    bug 
    opened by l0b0 17
  • Initial DescribeTable before GetItem

    While profiling my application, I realized that there is an initial DescribeTable call sent to DynamoDB before a first GetItem call. Is there a way to remove that DescribeTable call? It adds a lot of latency to the application running on AWS Lambda.

    Below is a complete example that can be run locally with DynamoDB Local. The first execution will create the table and add a first item. The second execution will call GetItem four times. Before the first GetItem call, a DescribeTable is added. Package wrapt is required.

    $ python test_describe_table.py
      0.05 ms - Calling DescribeTable
    429.72 ms - Calling DescribeTable
    440.19 ms - Calling CreateTable
    571.44 ms - Calling DescribeTable
    583.39 ms - Calling DescribeTable
    600.91 ms - Calling PutItem
    
    $ python test_describe_table.py
      0.04 ms - Calling DescribeTable
     57.30 ms - Calling GetItem
    100.75 ms - Calling GetItem
    114.38 ms - Calling GetItem
    129.07 ms - Calling GetItem
    142.16 ms - Calling PutItem
    
    from random import randint
    from time import time
    from wrapt import wrap_function_wrapper
    
    from pynamodb.models import Model
    from pynamodb.attributes import NumberAttribute, UnicodeAttribute
    from pynamodb.exceptions import TableDoesNotExist
    
    
    class TestTable(Model):
        class Meta:
            table_name = 'TestTable'
            host = 'http://localhost:8000'
    
        key = UnicodeAttribute(hash_key=True, attr_name='k')
        value = NumberAttribute(default=0, attr_name='v')
    
    
    def patch_dynamodb():
        wrap_function_wrapper(
            'pynamodb.connection.base',
            'Connection.dispatch',
            xray_traced_pynamodb,
        )
    
    
    def xray_traced_pynamodb(wrapped, instance, args, kwargs):
        print('{:>6.2f} ms - Calling {}'.format(1000 * (time() - start_time), args[0]))
        return wrapped(*args, **kwargs)
    
    
    if __name__ == '__main__':
        start_time = time()
        patch_dynamodb()
        key = 'test-key'
        try:
            item = TestTable.get(key)
            item = TestTable.get(key)
            item = TestTable.get(key)
            item = TestTable.get(key)
        except TableDoesNotExist:
            TestTable.create_table(read_capacity_units=1, write_capacity_units=1, wait=True)
            item = TestTable(key, value=randint(0, 100))
            item.save()
    
    opened by levesquejf 17
  • Using Retries using headers set in settings file rather than manual retries

    The default headers point to Envoy. This removes the earlier retries at the PynamoDB layer.

    The settings can be overridden by setting the environment variable. Default max retries is 3. Will retry on 5xx, and connect-failure.
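
    For reference, such an override file might look like the following, a sketch assuming the documented PYNAMODB_CONFIG mechanism and setting names (values are illustrative):

    # settings_overrides.py: point the PYNAMODB_CONFIG environment variable at this file.
    max_retry_attempts = 3        # retried on 5xx responses and connection failures
    base_backoff_ms = 25
    connect_timeout_seconds = 15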

    @danielhochman @mattklein123

    opened by anandswaminathan 16
  • Remove PROVISIONED_THROUGHPUT objects for GSIs on PAY_PER_REQUEST tables

    Fixes the error below and makes GSIs correctly inherit the billing mode of the parent table.

    Traceback (most recent call last):
      File "test_ddb.py", line 25, in <module>
        TestModel.create_table(wait=True)
      File "/Users/ed.holland/repos/PynamoDB/pynamodb/models.py", line 732, in create_table
        **schema
      File "/Users/ed.holland/repos/PynamoDB/pynamodb/connection/table.py", line 283, in create_table
        billing_mode=billing_mode
      File "/Users/ed.holland/repos/PynamoDB/pynamodb/connection/base.py", line 631, in create_table
        raise TableError("Failed to create table: {}".format(e), e)
    pynamodb.exceptions.TableError: Failed to create table: An error occurred (ValidationException) on request (78S7ALQ39N25306KJDNBE18LIBVV4KQNSO5AEMVJF66Q9ASUAAJG) on table (test_table) when calling the CreateTable operation: One or more parameter values were invalid: ProvisionedThroughput should not be specified for index: name_index when BillingMode is PAY_PER_REQUEST
    

    Resolves #629 and #568

    opened by edholland 13
  • Accomplishing upsert operations

    Imagining a table with UTCDateTimeAttributes named created_at and updated_at, we want an item's created_at field to be set to the datetime the item was first created, and updated_at to be updated only during any future changes to the item.

    Is it possible to create a new item if it doesn't exist, or, if the item already exists, update only the updated_at field? All of this in a single request (otherwise we might as well send two requests and handle the logic locally).

    At first glance the UpdateItem looked promising:

    The UpdateItem DynamoDB operation allows you to create or modify attributes of an item using an update expression.

    But this seems to only work for existing items. The conditional will fail if the item doesn't yet exist. Is this understanding correct, and is there a way of accomplishing item creation or updating in the same request?
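
    One workable pattern is a conditional create that falls back to an update, which does use two requests when the item already exists; a rough sketch with a hypothetical Document model:

    from datetime import datetime, timezone

    from pynamodb.models import Model
    from pynamodb.attributes import UnicodeAttribute, UTCDateTimeAttribute
    from pynamodb.exceptions import PutError

    class Document(Model):  # hypothetical model for illustration only
        class Meta:
            table_name = "documents"
        doc_id = UnicodeAttribute(hash_key=True)
        created_at = UTCDateTimeAttribute()
        updated_at = UTCDateTimeAttribute()

    def upsert(doc_id):
        now = datetime.now(timezone.utc)
        doc = Document(doc_id, created_at=now, updated_at=now)
        try:
            # Only create the item if nothing with this key exists yet.
            doc.save(condition=Document.doc_id.does_not_exist())
        except PutError:
            # The item already exists: bump only updated_at.
            doc.update(actions=[Document.updated_at.set(now)])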

    opened by mijolabs 12
  • Question about custom attribute

    I have a use case to save an attribute named "dynamic_map" whose value is {"STATE": ["live", "published"], "BRANCH": ["live"]}, where each key is a dynamic UnicodeAttribute and each value is a ListAttribute holding UnicodeAttributes. Does anyone have a clue how I can achieve this? Thank you.
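
    One possibility is a raw (unsubclassed) MapAttribute, which accepts arbitrary keys; a sketch with a hypothetical ItemModel (the DynamicMapAttribute added in 5.1.0 is another option when some keys are declared up front):

    from pynamodb.models import Model
    from pynamodb.attributes import MapAttribute, UnicodeAttribute

    class ItemModel(Model):  # hypothetical model for illustration only
        class Meta:
            table_name = "items"
        item_id = UnicodeAttribute(hash_key=True)
        # A raw MapAttribute stores arbitrary keys, including list values.
        dynamic_map = MapAttribute(default=dict)

    item = ItemModel("id-1", dynamic_map={"STATE": ["live", "published"], "BRANCH": ["live"]})
    item.save()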

    opened by eherozhao 12
  • How do I update programatically / dynamically in version >4.0 ?

    I'm trying to update a module I use to PynamoDB 4.0, but I don't understand how to convert one particular block to the new version now that update() no longer takes an attributes argument. Basically, the function takes a dict of keys/values to be updated in the DDB item and formats them for the update. Looking at the docs, I can't see a straightforward way to do that. Is there an equivalent option in PynamoDB 4?

    The code in question is this:

        alarm = Alarm.get(hash_key, range_key)
        update_attributes = {
            attribute: dict(value=value, action="PUT")
            for attribute, value in attributes.items()
            if value
        }
        try:
            data = alarm.update(attributes=update_attributes)
            LOG.info(json.dumps({"Alarm Update Response": data}))
        except UpdateError as error:
            LOG.error(error)
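
    A rough equivalent using the 4.x actions-based API, reusing the names from the snippet above (a sketch only; it assumes every key in attributes matches an attribute declared on Alarm):

        alarm = Alarm.get(hash_key, range_key)
        # Build one SET action per non-empty value; class-level attribute
        # access returns the attribute descriptor used for update actions.
        actions = [
            getattr(Alarm, attribute).set(value)
            for attribute, value in attributes.items()
            if value
        ]
        try:
            data = alarm.update(actions=actions)
            LOG.info(json.dumps({"Alarm Update Response": data}))
        except UpdateError as error:
            LOG.error(error)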
    
    opened by flyinbutrs 12
  • The test problems using localstack

    Issue

    • I am configuring test environments using PynamoDB + LocalStack
    • The model's connection fails when I configure the host parameter to point at localhost
    • I checked my Docker fixture
    • but it still works well outside of the testing environment
    • so I wonder whether there is a solution

    cf

    • there is a similar issue I've researched, but I couldn't find a solution to this specific problem

    errors

    E   pynamodb.exceptions.TableError: Failed to delete table: Connection was closed before we received a valid response from endpoint URL: "http://localhost:4569/".
    

    code

    # pynamodb table schema
    class ProductTable(Base):
        class Meta:
            table_name = "product-table"
            read_capacity_units = 5
            host = "http://localhost:4569"
            region = "ap-northeast-2"
            write_capacity_units = 5
    
        id = UnicodeAttribute(range_key=True)
        name = UnicodeAttribute(hash_key=True, null=False)
        description = UnicodeAttribute(null=False)
        created_at = NumberAttribute(null=False)
        updated_at = NumberAttribute(null=False)
    
    #conftest.py
    import uuid
    from chalice.test import Client
    import app as chalice_app
    from typing import Generator, Any
    import pytest
    import docker as libdocker
    from chalice import Chalice
    import warnings
    from chalicelib.adapters.repositories.entity.products import ProductTable
    
    @pytest.fixture(scope="session", autouse=True)
    def app() -> Chalice:
        return chalice_app
    
    @pytest.fixture(scope="session", autouse=True)
    def chalice_client(app):
        with Client(app) as client:
            yield client
    
    @pytest.fixture(scope="session")
    def docker() -> Generator[libdocker.APIClient, None, None]:
        with libdocker.APIClient(version="auto") as client:
            yield client
    
    @pytest.fixture(scope="session", autouse=True)
    def dynamo_server(docker: libdocker.APIClient) -> Generator[Any, None, None]:
        warnings.filterwarnings("ignore", category=DeprecationWarning)
        container = docker.create_container(
            image="localstack/localstack:0.11.3",
            name=f"test-localstack-{uuid.uuid4()}",
            detach=True,
            ports=[
                "5555",
                "4569"
            ],
            environment=[
                "DATA_DIR=/tmp/localstack/data",
                "DEBUG=1",
                "DEFAULT_REGION=ap-northeast-2",
                "LAMBDA_EXECUTOR=docker-reuse",
                "PORT_WEB_UI=5555",
                "HOSTNAME=localstack"
    
            ],
            volumes=["/var/run/docker.sock:/var/run/docker.sock",
                     "localstack:/tmp/localstack/data"],
            host_config=docker.create_host_config(port_bindings={"5555": "5555", "4569": "4569"}),
        )
        docker.start(container=container["Id"])
        try:
            yield container
        finally:
            docker.kill(container["Id"])
            docker.remove_container(container["Id"])
    
    @pytest.fixture(scope="session", autouse=True)
    def apply_migrations(dynamo_server: None) -> None:
        # https://github.com/pynamodb/PynamoDB/issues/569
        try:
            ProductTable.create_table(wait=True)  #<-- error occurred
        except Exception as e:
            raise e
        finally:
            ProductTable.delete_table()
    
    opened by jaeyoung0509 4
  • Inconsistent log and documentation

    https://github.com/pynamodb/PynamoDB/blob/b3330a43ee8f9c203a32834f080dde42ab4cf8c7/pynamodb/attributes.py#L886

    expected format is %Y-%m-%dT%H:%M:%S.%f+0000 (seen here https://github.com/pynamodb/PynamoDB/blob/b3330a43ee8f9c203a32834f080dde42ab4cf8c7/pynamodb/attributes.py#L882-L886 ) but logs and documentation show that required format is %Y-%m-%dT%H:%M:%S.%fZ

    bug 
    opened by vamsis-adapa 3
  • KeyError: 'attribute_name'

    Hi, I had a question regarding this commit https://github.com/pynamodb/PynamoDB/pull/1091/commits/a11576098199909c33a6cefdead4358fd1b324e7

    In our code base we were using PynamoDB, and the change in this commit broke our pipeline since we use attribute_name and now get a KeyError. Please see below for the error message. We now have a chicken-and-egg problem, since we need the commit in our pipelines but we also need a code change to get in before the commit.

    (attr_def for attr_def in tbl_prop["attribute_definitions"] if attr_def.attribute_name == index_attribute_definition["attribute_name"]), None
    KeyError: 'attribute_name'
    

    Wondering what would be the fix for this ? Thanks

    opened by sdzimiri 6
  • Does transactWrite support adding more than 25 items?

    I was wondering whether transactWrite supports more than 25 items per operation. I know that the boto3 API does not support it, but I am not sure whether PynamoDB has an underlying mechanism in transactWrite that supports this.
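
    For reference, PynamoDB's transaction wrapper sends everything registered in one context manager as a single TransactWriteItems request, so the per-transaction item cap is DynamoDB's own limit rather than something PynamoDB can work around. A sketch reusing UserModel from the Basic Usage section (the region is illustrative):

    from pynamodb.connection import Connection
    from pynamodb.transactions import TransactWrite

    connection = Connection(region="us-east-1")
    with TransactWrite(connection=connection) as transaction:
        # Every item registered here ends up in one TransactWriteItems call,
        # so DynamoDB's per-transaction item limit applies.
        transaction.save(UserModel("Denver", "John"))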

    opened by bogdanvaduva9 1
  • batch_get() returns 'NoneType' object is not iterable

    OBJECTIVE

    The batch_get() function raises a 'NoneType' object is not iterable error: https://github.com/pynamodb/PynamoDB/blob/43a303bea99033348274c3990c6ab71810b76758/pynamodb/models.py#L382

    RCA: https://github.com/pynamodb/PynamoDB/blob/43a303bea99033348274c3990c6ab71810b76758/pynamodb/models.py#L1026 returns None, which is assigned at https://github.com/pynamodb/PynamoDB/blob/43a303bea99033348274c3990c6ab71810b76758/pynamodb/models.py#L376 and ultimately fails when iterated over, raising 'NoneType' object is not iterable.

    SOLUTION

    Update https://github.com/pynamodb/PynamoDB/blob/43a303bea99033348274c3990c6ab71810b76758/pynamodb/models.py#L382 to for batch_item in page or []:

    opened by sachinsharmanykaa 0
  • Update Function Doesn't Throw DoesNotExist Error

    The documentation says that the update() function should throw a DoesNotExist error, but it always seems to throw an UpdateError. Here's a link to the method documentation: https://pynamodb.readthedocs.io/en/latest/api.html#pynamodb.models.Model.update.

    Would it be possible to get this looked into? Thanks!
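
    In the meantime, a workaround is to treat a failed condition as the "does not exist" signal; a sketch reusing UserModel from the Basic Usage section (catching UpdateError reflects the behavior described above rather than the documented DoesNotExist):

    from pynamodb.exceptions import UpdateError

    user = UserModel("Denver", "John")
    try:
        user.update(
            actions=[UserModel.email.set("new-address@example.com")],
            # Fail the update when the item is missing instead of upserting it.
            condition=UserModel.last_name.exists(),
        )
    except UpdateError:
        print("User does not exist")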

    bug 
    opened by brian-t-otts 1
Releases(5.3.4)
  • 5.3.4(Dec 8, 2022)

    What's Changed

    • Propagate null_check to maps in lists by @ikonst in https://github.com/pynamodb/PynamoDB/pull/1128

    Full Changelog: https://github.com/pynamodb/PynamoDB/compare/5.3.3...5.3.4

  • 4.4.0(Dec 7, 2022)

    What's Changed

    • Workaround for _convert_to_request_dict change (#1083) by @ikonst in https://github.com/pynamodb/PynamoDB/pull/1130

    Full Changelog: https://github.com/pynamodb/PynamoDB/compare/4.3.3...4.4.0

  • 5.3.3(Nov 27, 2022)

    What's Changed

    • Allow handling an exception raised when retrieving the first item by @ikonst in https://github.com/pynamodb/PynamoDB/pull/1121

    Full Changelog: https://github.com/pynamodb/PynamoDB/compare/5.3.2...5.3.3

  • 5.3.2(Nov 18, 2022)

    What's Changed

    • Do not package typing_tests by @musicinmybrain in https://github.com/pynamodb/PynamoDB/pull/1118

    New Contributors

    • @musicinmybrain made their first contribution in https://github.com/pynamodb/PynamoDB/pull/1118

    Full Changelog: https://github.com/pynamodb/PynamoDB/compare/5.3.1...5.3.2

  • 5.3.1(Nov 18, 2022)

    What's Changed

    • Connection: call describe_table as needed by @ikonst in https://github.com/pynamodb/PynamoDB/pull/1114
    • Upgrade to mypy 0.950 by @ikonst in https://github.com/pynamodb/PynamoDB/pull/1116

    Full Changelog: https://github.com/pynamodb/PynamoDB/compare/5.3.0...5.3.1

  • 5.3.0(Nov 3, 2022)

    What's Changed

    • Do not call DescribeTable for models by @ikonst in https://github.com/pynamodb/PynamoDB/pull/1095

    Full Changelog: https://github.com/pynamodb/PynamoDB/compare/5.2.3...5.3.0

  • 5.2.3(Oct 25, 2022)

    What's Changed

    Update for botocore 1.28 private API change (#1087) which caused the following exception:

    TypeError: Cannot mix str and non-str arguments
    

    Full Changelog: https://github.com/pynamodb/PynamoDB/compare/5.2.2...5.2.3

  • 5.2.2(Oct 25, 2022)

    Backporting #1083 update for a botocore 1.28 private API change which caused the following exception:

    TypeError: _convert_to_request_dict() missing 1 required positional argument: 'endpoint_url'
    

    Full Changelog: https://github.com/pynamodb/PynamoDB/compare/5.2.1...5.2.2

  • 5.2.1(Feb 9, 2022)

  • 5.2.0(Jan 4, 2022)

  • 5.1.0(Jun 29, 2021)

    • Introduce DynamicMapAttribute to enable partially defining attributes on a MapAttribute (#868)
    • Quality of life improvements: type annotations, better comments, more resilient tests (#934, #936, #948)
    • Fix type annotation of is_in conditional expression (#947)
    • Null errors should include full attribute path (#915)
    • Fix for serializing and deserializing dates prior to year 1000 (#949)
  • 5.0.2(Feb 12, 2021)

    It is more efficient not to serialize attributes you don't need, and it also avoids tripping null-checks on unrelated attributes when doing updates or deletes.

  • 5.0.0(Jan 27, 2021)

  • 5.0.0b4(Nov 5, 2020)

  • 5.0.0b3(Oct 21, 2020)

  • 5.0.0b2(Oct 12, 2020)

  • 5.0.0b1(Sep 14, 2020)

    This is a beta release for a major release with breaking changes. Please read the release notes carefully and report any bugs encountered.

  • 4.3.3(Aug 14, 2020)

  • 4.3.2(Apr 22, 2020)

  • 4.3.1(Jan 25, 2020)

  • 4.3.0(Jan 23, 2020)

  • 4.2.0(Oct 30, 2019)

  • 4.1.0(Oct 17, 2019)

    This is a backwards compatible, minor release.

    • In the Model's Meta, you may now provide an AWS session token, which is mostly useful for assumed roles (#700):
      sts_client = boto3.client("sts")
      role_object = sts_client.assume_role(RoleArn=role_arn, RoleSessionName="role_name", DurationSeconds=BOTO3_CLIENT_DURATION)
      role_credentials = role_object["Credentials"]
      
      class MyModel(Model):
        class Meta:
          table_name = "table_name"
          aws_access_key_id = role_credentials["AccessKeyId"]
          aws_secret_access_key = role_credentials["SecretAccessKey"]
          aws_session_token = role_credentials["SessionToken"]
      
        hash = UnicodeAttribute(hash_key=True)
        range = UnicodeAttribute(range_key=True)
      
    • Fix warning about inspect.getargspec (#701)
    • Fix provisioning GSIs when using pay-per-request billing (#690)
    • Suppress Python 3 exception chaining when "re-raising" botocore errors as PynamoDB model exceptions (#705)
  • 4.0.0(Aug 13, 2019)

  • 4.0.0b3(Jul 9, 2019)

    This is a beta release for a major release with breaking changes. Please read the release notes carefully and report any bugs encountered.

  • 4.0.0b2(Jul 3, 2019)

    This is a beta release for a major release with breaking changes. Please read the release notes carefully and report any bugs encountered.

  • 3.4.0(Jun 14, 2019)

A new ORM for Python specially for PostgreSQL

A new ORM for Python specially for PostgreSQL. Fully-typed for any query with Pydantic and auto-model generation, compatible with any sync or async driver

Yan Kurbatov 3 Apr 13, 2022
Python 3.6+ Asyncio PostgreSQL query builder and model

windyquery - A non-blocking Python PostgreSQL query builder Windyquery is a non-blocking PostgreSQL query builder with Asyncio. Installation $ pip ins

67 Sep 01, 2022
SQLModel is a library for interacting with SQL databases from Python code, with Python objects.

SQLModel is a library for interacting with SQL databases from Python code, with Python objects. It is designed to be intuitive, easy to use, highly compatible, and robust.

Sebastián Ramírez 9.1k Dec 31, 2022
A single model for shaping, creating, accessing, storing data within a Database

'db' within pydantic - A single model for shaping, creating, accessing, storing data within a Database Key Features Integrated Redis Caching Support A

Joshua Jamison 178 Dec 16, 2022
Easy-to-use data handling for SQL data stores with support for implicit table creation, bulk loading, and transactions.

dataset: databases for lazy people In short, dataset makes reading and writing data in databases as simple as reading and writing JSON files. Read the

Friedrich Lindenberg 4.2k Dec 26, 2022
Object mapper for Amazon's DynamoDB

Flywheel Build: Documentation: http://flywheel.readthedocs.org/ Downloads: http://pypi.python.org/pypi/flywheel Source: https://github.com/stevearc/fl

Steven Arcangeli 128 Dec 31, 2022
The Orator ORM provides a simple yet beautiful ActiveRecord implementation.

Orator The Orator ORM provides a simple yet beautiful ActiveRecord implementation. It is inspired by the database part of the Laravel framework, but l

Sébastien Eustace 1.4k Jan 01, 2023
MongoEngine flask extension with WTF model forms support

Flask-MongoEngine Info: MongoEngine for Flask web applications. Repository: https://github.com/MongoEngine/flask-mongoengine About Flask-MongoEngine i

MongoEngine 815 Jan 03, 2023
A simple project to explore the number of GCs when doing basic ORM work.

Question: Does Python do extremely too many GCs for ORMs? YES, OMG YES. Check this out Python Default GC Settings: SQLAlchemy - 20,000 records in one

Michael Kennedy 26 Jun 05, 2022
Python helpers for using SQLAlchemy with Tornado.

tornado-sqlalchemy Python helpers for using SQLAlchemy with Tornado. Installation $ pip install tornado-sqlalchemy In case you prefer installing from

Siddhant Goel 122 Aug 23, 2022
a small, expressive orm -- supports postgresql, mysql and sqlite

peewee Peewee is a simple and small ORM. It has few (but expressive) concepts, making it easy to learn and intuitive to use. a small, expressive ORM p

Charles Leifer 9.7k Jan 08, 2023
A Python Object-Document-Mapper for working with MongoDB

MongoEngine Info: MongoEngine is an ORM-like layer on top of PyMongo. Repository: https://github.com/MongoEngine/mongoengine Author: Harry Marr (http:

MongoEngine 3.9k Dec 30, 2022
Piccolo - A fast, user friendly ORM and query builder which supports asyncio.

A fast, user friendly ORM and query builder which supports asyncio.

919 Jan 04, 2023
Solrorm : A sort-of solr ORM for python

solrorm : A sort-of solr ORM for python solrpy - deprecated solrorm - currently in dev Usage Cores The first step to interact with solr using solrorm

Aj 1 Nov 21, 2021
Adds SQLAlchemy support to Flask

Flask-SQLAlchemy Flask-SQLAlchemy is an extension for Flask that adds support for SQLAlchemy to your application. It aims to simplify using SQLAlchemy

The Pallets Projects 3.9k Jan 09, 2023
Sqlalchemy-databricks - SQLAlchemy dialect for Databricks

sqlalchemy-databricks A SQLAlchemy Dialect for Databricks using the officially s

Flynn 19 Nov 03, 2022
ORM for Python for PostgreSQL.

New generation (or genius) ORM for Python for PostgreSQL. Fully-typed for any query with Pydantic and auto-model generation, compatible with any sync or async driver

Yan Kurbatov 3 Apr 13, 2022
Pydantic model support for Django ORM

Pydantic model support for Django ORM

Jordan Eremieff 318 Jan 03, 2023
Bringing Async Capabilities to django ORM

Bringing Async Capabilities to django ORM

Skander BM 119 Dec 01, 2022
The Python SQL Toolkit and Object Relational Mapper

SQLAlchemy The Python SQL Toolkit and Object Relational Mapper Introduction SQLAlchemy is the Python SQL toolkit and Object Relational Mapper that giv

mike bayer 3.5k Dec 29, 2022