Asyncio cache manager for redis, memcached and memory

Overview

aiocache

Asyncio cache supporting multiple backends (memory, redis and memcached).


This library aims for simplicity over specialization. All caches provide the same minimal interface, which consists of the following functions (a short sketch follows the list):

  • add: Adds a key/value only if the key does not already exist.
  • get: Retrieves the value identified by key.
  • set: Sets a key/value.
  • multi_get: Retrieves multiple key/values.
  • multi_set: Sets multiple key/values.
  • exists: Returns True if the key exists, False otherwise.
  • increment: Increments the value stored at the given key.
  • delete: Deletes a key and returns the number of deleted items.
  • clear: Clears the stored items.
  • raw: Executes the specified command using the underlying client.
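
A quick sketch of that interface against the in-memory backend (a minimal example, assuming multi_set takes an iterable of key/value pairs as in the current API):

import asyncio

from aiocache import Cache


async def demo():
    cache = Cache(Cache.MEMORY)
    await cache.add("a", 1)                        # only sets if "a" is absent
    await cache.multi_set([("b", 2), ("c", 3)])    # set several pairs at once
    print(await cache.multi_get(["a", "b", "c"]))  # [1, 2, 3]
    await cache.increment("a")                     # "a" is now 2
    print(await cache.exists("a"))                 # True
    print(await cache.delete("a"))                 # 1 (number of deleted items)
    await cache.clear()

asyncio.run(demo())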

Installing

  • pip install aiocache
  • pip install aiocache[redis]
  • pip install aiocache[memcached]
  • pip install aiocache[redis,memcached]
  • pip install aiocache[msgpack]

Usage

Using a cache is as simple as

>>> import asyncio
>>> loop = asyncio.get_event_loop()
>>> from aiocache import Cache
>>> cache = Cache(Cache.MEMORY) # Here you can also use Cache.REDIS and Cache.MEMCACHED, default is Cache.MEMORY
>>> loop.run_until_complete(cache.set('key', 'value'))
True
>>> loop.run_until_complete(cache.get('key'))
'value'
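
On Python 3.7+ the same calls can be driven with asyncio.run instead of managing the loop by hand; a minimal equivalent sketch:

import asyncio

from aiocache import Cache


async def main():
    cache = Cache(Cache.MEMORY)
    await cache.set('key', 'value')
    print(await cache.get('key'))  # 'value'

asyncio.run(main())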

Or as a decorator

import asyncio

from collections import namedtuple

from aiocache import cached, Cache
from aiocache.serializers import PickleSerializer
# With this we can store python objects in backends like Redis!

Result = namedtuple('Result', "content, status")


@cached(
    ttl=10, cache=Cache.REDIS, key="key", serializer=PickleSerializer(), port=6379, namespace="main")
async def cached_call():
    print("Sleeping for three seconds zzzz.....")
    await asyncio.sleep(3)
    return Result("content", 200)


def run():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(cached_call())
    loop.run_until_complete(cached_call())
    loop.run_until_complete(cached_call())
    cache = Cache(Cache.REDIS, endpoint="127.0.0.1", port=6379, namespace="main")
    loop.run_until_complete(cache.delete("key"))

if __name__ == "__main__":
    run()

The recommended approach to instantiate a new cache is using the Cache constructor. However, you can also instantiate one directly using aiocache.RedisCache, aiocache.SimpleMemoryCache or aiocache.MemcachedCache.
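
For example, both of the following build an equivalent Redis cache (a sketch; the connection arguments mirror the decorator example above):

from aiocache import Cache, RedisCache

# Via the Cache factory (recommended)
cache = Cache(Cache.REDIS, endpoint="127.0.0.1", port=6379, namespace="main")

# Direct instantiation of the backend class
cache = RedisCache(endpoint="127.0.0.1", port=6379, namespace="main")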

You can also set up cache aliases so it's easy to reuse configurations:

import asyncio

from aiocache import caches

# You can use either classes or strings for referencing classes
caches.set_config({
    'default': {
        'cache': "aiocache.SimpleMemoryCache",
        'serializer': {
            'class': "aiocache.serializers.StringSerializer"
        }
    },
    'redis_alt': {
        'cache': "aiocache.RedisCache",
        'endpoint': "127.0.0.1",
        'port': 6379,
        'timeout': 1,
        'serializer': {
            'class': "aiocache.serializers.PickleSerializer"
        },
        'plugins': [
            {'class': "aiocache.plugins.HitMissRatioPlugin"},
            {'class': "aiocache.plugins.TimingPlugin"}
        ]
    }
})


async def default_cache():
    cache = caches.get('default')   # This always returns the SAME instance
    await cache.set("key", "value")
    assert await cache.get("key") == "value"


async def alt_cache():
    cache = caches.create('redis_alt')   # This creates a NEW instance on every call
    await cache.set("key", "value")
    assert await cache.get("key") == "value"


def test_alias():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(default_cache())
    loop.run_until_complete(alt_cache())

    loop.run_until_complete(caches.get('redis_alt').delete("key"))


if __name__ == "__main__":
    test_alias()

How does it work

Aiocache provides 3 main entities:

  • backends: Allow you to specify which backend you want to use for your cache. Currently supported: SimpleMemoryCache, RedisCache using aioredis and MemcachedCache using aiomcache.
  • serializers: Serialize and deserialize the data between your code and the backends. This allows you to store any Python object in your cache. Currently supported: StringSerializer, PickleSerializer, JsonSerializer and MsgPackSerializer. You can also build custom ones; a sketch follows below.
  • plugins: Implement a hooks system that allows you to execute extra behavior before and after each command.

If you are missing a backend, serializer or plugin implementation that you think could be interesting for the package, do not hesitate to open a new issue.
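
A custom serializer can be a small class; a minimal sketch, assuming the dumps/loads interface and DEFAULT_ENCODING attribute used by the built-in serializers in aiocache.serializers:

import pickle
import zlib

from aiocache.serializers import BaseSerializer


class CompressedPickleSerializer(BaseSerializer):
    """Illustrative custom serializer: pickle plus zlib compression."""

    DEFAULT_ENCODING = None  # receive raw bytes back from the backend

    def dumps(self, value):
        return zlib.compress(pickle.dumps(value))

    def loads(self, value):
        if value is None:
            return None
        return pickle.loads(zlib.decompress(value))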

(Architecture diagram: docs/images/architecture.png)

These 3 entities combine during cache operations to apply the desired command (backend), data transformation (serializer) and pre/post hooks (plugins). To get a better picture of what happens, here you can check how the set function works in aiocache:

(Set operation flow: docs/images/set_operation_flow.png)
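
A plugin hooks into that flow through pre_/post_ methods; a minimal sketch, assuming the hook naming used by the built-in plugins such as TimingPlugin (the took keyword on post hooks is an assumption based on that plugin's behaviour):

from aiocache.plugins import BasePlugin


class LogSetPlugin(BasePlugin):
    """Illustrative plugin: runs around every set command."""

    async def pre_set(self, client, *args, **kwargs):
        print("about to set a value")

    async def post_set(self, client, *args, took=0, **kwargs):
        # took: elapsed seconds for the call (assumed, mirroring TimingPlugin)
        print(f"set finished in {took:.6f}s")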

Amazing examples

In the examples folder you can check different use cases.

Documentation

Comments
  • [Bugfix] Add redis-py >= 4.2.0 support

    2022.05.12: switched the backend from `aioredis` to `redis-py >= 4.2.0`.


    Update:

    • To test with aioredis 2.0.0 automatically, the Travis and tox configurations need to be updated.
    • Fixes aio-libs/aiocache#543

    What do these changes do?

    aioredis 2.0.0 was released with a breaking API change. This updates the current backend implementation in aiocache/backends/redis.py.

    Breaking aioredis changes encountered while making this PR: aio-libs/aioredis-py#1109

    Are there changes in behavior for the user?

    Nope. Lower level redis backend change. No public API affected.

    Related issue number

    Checklist

    • [x] I think the code is well written
    • [x] Unit tests for the changes exist
    • [ ] Documentation reflects the changes
    • [ ] Add a new news fragment into the CHANGES folder
      • name it <issue_id>.<type> (e.g. 588.bugfix)
      • if you don't have an issue_id change it to the pr id after creating the PR
      • ensure type is one of the following:
        • .feature: Signifying a new feature.
        • .bugfix: Signifying a bug fix.
        • .doc: Signifying a documentation improvement.
        • .removal: Signifying a deprecation or removal of public API.
        • .misc: A ticket has been closed, but it is not of interest to users.
      • Make sure to use full sentences with correct case and punctuation, for example: Fix issue with non-ascii contents in doctest text files.
    opened by laggardkernel 25
  • aiomcache concurrency issue

    I'm having a strange issue with wait_for(fut, timeout, *, loop=None) + aiocache on memcache.

    We're storing values using aiocache.MemcachedCache, and most methods of aiocache are decorated with @API.timeout, which uses await asyncio.wait_for(fn(self, *args, **kwargs), timeout) (with a default timeout of 5 seconds).

    When load testing our application, we see that under heavy load the asyncio loop clogs up and some requests to memcache raise asyncio.TimeoutError, which is perfectly acceptable. The issue is that when we stop the load and allow the loop to catch up, any new request makes all the memcache connections fail with a concurrent.futures._base.TimeoutError. In other words, once we get a TimeoutError the application cache is completely broken, and the only way to repair the application is to kill and restart it, which is unacceptable. It seems as though the whole aiocache connection pool is closed; I can't find where this happens or how to prevent it.

    I've tried the following:

    • Remove uvloop (just in case).
    • Wrapped the asyncio.wait_for() in a shield() function so it won't cancel the associated Task, no difference
    • Tried catching the following error types: asyncio.CancelledError, TimeoutError, asyncio.futures.TimeoutError, asyncio.TimeoutError or global Exception with no success, it seems my catching of the error is too late

    The only thing that helps is increasing the connection pool size (from the default of 2 to 500, for example), but even with a big pool, once we hit a TimeoutError the whole pool spins into everlasting errors. Finally, if I remove the timeout by setting it to 0 or None, the library uses a simple await fn() instead of asyncio.wait_for(), and even though we see some slowness under load, there are no TimeoutErrors and the application always works. But waiting too long for the cache is not a good idea, so I'd really like to use the timeout feature.
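
    For reference, the timeout-plus-shield combination described above boils down to this pattern (a simplified sketch, not aiocache's actual code):

    import asyncio

    async def call_with_timeout(coro, timeout=5):
        # wait_for cancels the wrapped operation when it times out;
        # shield() lets the underlying operation keep running so a
        # timeout does not tear down in-flight connection work.
        return await asyncio.wait_for(asyncio.shield(coro), timeout)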

    If anyone has any idea how to tackle this, I'd love to hear your input!

    The versions involved:

    • python 3.5.3
    • uvloop 0.8.0
    • aiohttp 2.0.7
    • aiocache 0.3.3
    • aiomcache 0.5.1

    I'm currently writing a small testcase to see if I can easily reproduce the issue. I'll post it here when it's done.

    needs investigation 
    opened by achedeuzot 18
  • make SimpleMemoryBackend store state in instance instead of class

    What do these changes do?

    Make SimpleMemoryBackend store its state in the instance instead of the class. While the current behaviour matches the other caches (same connection args == same cache; zero connection args in the memory case), the current design seems bad since it does not allow extending and customizing the memory cache in the future.

    Tests adjusted accordingly.

    Are there changes in behavior for the user?

    Yes, if a user for some reason expected different SimpleMemoryBackend objects to have shared storage. This is probably a breaking change for some weird setups.

    Related issue number

    Resolves: #531

    Resolves: #479. There is a PR aimed at fixing it: https://github.com/aio-libs/aiocache/pull/523, but using a namespace does not seem to be sufficient; an instance-local cache must be used for consistent behaviour.

    Checklist

    • [x] I think the code is well written
    • [x] Unit tests for the changes exist
    • [ ] Documentation reflects the changes
    • [ ] Add a new news fragment into the CHANGES folder
      • name it <issue_id>.<type> (e.g. 588.bugfix)
      • if you don't have an issue_id change it to the pr id after creating the PR
      • ensure type is one of the following:
        • .feature: Signifying a new feature.
        • .bugfix: Signifying a bug fix.
        • .doc: Signifying a documentation improvement.
        • .removal: Signifying a deprecation or removal of public API.
        • .misc: A ticket has been closed, but it is not of interest to users.
      • Make sure to use full sentences with correct case and punctuation, for example: Fix issue with non-ascii contents in doctest text files.
    opened by Fogapod 12
  • Better support for choice of endpoint for aioredis

    Hello,

    sorry for the delay. As discussed in https://github.com/argaen/aiocache/issues/426#issuecomment-443124618

    aiocache only seemed to allow setting the address and port, not the unix socket. I believe the issue is simple:

    aioredis offers 3 choices in its documentation:

    • a Redis URI — "redis://host:6379/0?encoding=utf-8"; "redis://:password@host:6379/0?encoding=utf-8";
    • a (host, port) tuple — ('localhost', 6379);
    • a unix domain socket path string — "/path/to/redis.sock".

    but aiocache only supports one option: self._pool = await aioredis.create_pool((self.endpoint, self.port), **kwargs)

    I have tried removing the tuple with the port, and the unix connection worked, so it should not be very difficult to fix this.

    I am happy to submit a PR with the change needed to support all 3 options. The easiest way would probably be to offer exactly the same inputs as aioredis, but that would break the scheme with the other backends. Maybe we can assume that if the port is not specified, or if the endpoint contains a slash, it is not the tuple option and pass the string only, as sketched below. Do you have a safer idea?
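
    Something like this hypothetical helper (illustrative only; build_address is not an existing aiocache function):

    def build_address(endpoint, port=None):
        # Hypothetical dispatch over the three aioredis input forms.
        if endpoint.startswith("redis://"):
            return endpoint        # Redis URI, pass through as a string
        if port is None or "/" in endpoint:
            return endpoint        # unix domain socket path
        return (endpoint, port)    # (host, port) tuple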

    Thank you!

    feature 
    opened by John-Gee 9
  • Multi-processing aiocache

    Hello,

    this very well may not be an issue but a misconfiguration on my own. I'd appreciate help if that's the case.

    I'm using aiocache and aiohttp with Redis, all on the same host. I have decorated a wrapper around aiohttp.get as such:

    @cached(ttl=604800, cache=RedisCache, serializer=PickleSerializer(),
            port=6379, timeout=0)
    async def get_page(url):
        async with session.get(url) as resp:
            dostuff()
    

    My problem is that I call this get_page function from different processes in a process pool, all with their own event loop, and either aiocache or redis seems not to like that, as I get:

    2018-11-28 20:03:44,266 aiocache.decorators ERROR Couldn't retrieve get_page('https://www.site.com/')[], unexpected error
    Traceback (most recent call last):
      File "/usr/lib/python3.7/site-packages/aiocache/decorators.py", line 124, in get_from_cache
        value = await self.cache.get(key)
      File "/usr/lib/python3.7/site-packages/aiocache/base.py", line 61, in _enabled
        return await func(*args, **kwargs)
      File "/usr/lib/python3.7/site-packages/aiocache/base.py", line 44, in _timeout
        return await func(self, *args, **kwargs)
      File "/usr/lib/python3.7/site-packages/aiocache/base.py", line 75, in _plugins
        ret = await func(self, *args, **kwargs)
      File "/usr/lib/python3.7/site-packages/aiocache/base.py", line 192, in get
        value = loads(await self._get(ns_key, encoding=self.serializer.encoding, _conn=_conn))
      File "/usr/lib/python3.7/site-packages/aiocache/backends/redis.py", line 24, in wrapper
        return await func(self, *args, _conn=_conn, **kwargs)
      File "/usr/lib/python3.7/site-packages/aiocache/backends/redis.py", line 100, in _get
        return await _conn.get(key, encoding=encoding)
    RuntimeError: Task <Task pending coro=<func() running at file.py:88>> got Future attached to a different loop.

    Here's how I set up each new loop in the subprocesses:

        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        session = aiohttp.ClientSession()
        tasks = []
        tasks.append(asyncio.ensure_future(dostuff2_that_calls_get_page(),
                                           loop=loop))
        loop.run_until_complete(asyncio.gather(*tasks, loop=loop))
        loop.run_until_complete(session.close())
    

    Thank you!

    opened by John-Gee 9
  • Resource Cleanup

    Maybe I am using it wrong, but when I tag functions with the cached decorator for a redis cache like so:

    @cached(ttl=ONE_HOUR, cache=RedisCache, key="rssfeed",
            serializer=PickleSerializer(), port=6379, namespace="main",
            endpoint="192.168.1.19")
    async def f(....):
      .....
    

    I get the following errors upon exit:

    2017-05-16 11:32:20,999 ERROR asyncio(0) | Task was destroyed but it is pending!
    task:  wait_for=()]> cb=[Future.set_result()]>
    

    It seems that the redis connections are not being properly handled on exit. Is there currently a way to deal with this?

    I am running a sanic app with python3.6 if that makes a difference.

    Thanks! Awesome project by the way.

    enhancement 
    opened by Quinny 9
  • Give option to the user to NOT await for `cache.set` in decorators

    During normal work I get timeouts messages like this:

    Traceback (most recent call last):
      File "/home/suor/projects/aiocache/aiocache/decorators.py", line 108, in set_in_cache
        await self.cache.set(key, value, ttl=self.ttl)
      File "/home/suor/projects/aiocache/aiocache/base.py", line 58, in _enabled
        return await func(*args, **kwargs)
      File "/home/suor/projects/aiocache/aiocache/base.py", line 43, in _timeout
        return await asyncio.wait_for(func(self, *args, **kwargs), timeout)
      File "/usr/lib/python3.6/asyncio/tasks.py", line 362, in wait_for
        raise futures.TimeoutError()
    concurrent.futures._base.TimeoutError
    

    This is not stopping anything like an exception should. I'm using the cache via the cached decorator:

    def filecache(basedir):
        # Import from here since these are optional dependencies
        from aiocache import cached
        from aiocache.serializers import PickleSerializer
        from aiofilecache import FileCache
    
        return cached(cache=FileCache, serializer=PickleSerializer(),
                      basedir=basedir)
    
    @filecache('some_dir')
    async def fetch(url, ...):
        # ...
        return response
    

    aiofilecache is a simple file cache; you can see it here.

    feature 
    opened by Suor 8
  • The default behaviour for in-memory cache should not include serialization

    The basic use case for aiocache is just wrapping some async method with decorator to use in-memory caching. Like that:

    @cached(ttl=600)
    async def get_smth_async(id):
      ...
    

    Obviously, the user doesn't expect any serialization in this case. It is redundant and adds a performance overhead. So the default serializer should do nothing, like DefaultSerializer in previous versions. Currently JsonSerializer is used as the default.

    For non-in-memory use cases the user should explicitly specify what type of serialization they need.

    Maybe different default serializers should be used for different cache types.
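
    For what it's worth, an explicit opt-out is already possible with the NullSerializer mentioned in the 0.7.0 release notes below; a minimal sketch:

    from aiocache import cached
    from aiocache.serializers import NullSerializer

    # Explicitly skip serialization for the in-memory cache.
    @cached(ttl=600, serializer=NullSerializer())
    async def get_smth_async(id):
        ...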

    feature discussion 
    opened by eu9ene 8
  • Feature #428/cache from url

    This PR adds the Cache.from_url method to instantiate caches from a specific resource url.

    Couple of things:

    • password is passed through query params. According to the IANA proposal, the password goes in the user-info part of the URI. However, user info doesn't make sense here because Redis authentication only uses a password.
    • maybe it would make sense for redis to have the db as a path param instead of a query param. However, it's the only one, so I think it's better to have it as a query param like all the other configurable options that get propagated to the underlying client.

    (Docs screenshot: 2019-01-05 21-53-47)

    Closes #428

    opened by argaen 7
  • Adding maxsize in the decorator

    lru_cache (and alru_cache) support a maxsize option in the decorator: @alru_cache(maxsize=32)

    Seeing that a PR was merged with the notion of max_keys per cache (https://github.com/argaen/aiocache/pull/17), how hard would it be to implement a maxsize argument for the decorator?

    I can see this being useful to have a simple way to prevent the in-memory cache from getting too big.

    feature 
    opened by Maxyme 7
  • Cancel the previous ttl timer if exists when setting a new value in the in-memory cache

    This PR is branched out from the v0.10.0 version bump commit because there are changes on master (e.g. https://github.com/argaen/aiocache/commit/634348f40ce8caa01c7c35010acf32d8c3e17ba6) that are not backward compatible with 0.10.0.

    opened by minhtule 7
  • Decorator namespace key

    What do these changes do?

    Keys with namespaces now work for decorators get/set

    • Include build_key(key, namespace) in decorators.cached:
      • get_from_cache()
      • set_in_cache()

    Are there changes in behavior for the user?

    • Keys with namespaces now work for the @cached() decorator

    Related issue number

    • https://github.com/aio-libs/aiocache/issues/569

    Checklist

    • [x] I think the code is well written
    • [x] Unit tests for the changes exist
    • [x] Documentation reflects the changes
    • [ ] Add a new news fragment into the CHANGES folder
      • name it <issue_id>.<type> (e.g. 588.bugfix)
      • if you don't have an issue_id change it to the pr id after creating the PR
      • ensure type is one of the following:
        • .feature: Signifying a new feature.
        • .bugfix: Signifying a bug fix.
        • .doc: Signifying a documentation improvement.
        • .removal: Signifying a deprecation or removal of public API.
        • .misc: A ticket has been closed, but it is not of interest to users.
      • Make sure to use full sentences with correct case and punctuation, for example: Fix issue with non-ascii contents in doctest text files.
    opened by pshafer-als 0
  • Wrap functions in class with decorator

    The decorator should be simplified to a simple function, which then returns an object wrapping the passed function. Something along the lines of:

    class Wrapper:
        def __init__(self, func):
            self.func = func
    
        def __call__(self, *args, **kwargs):
            # Get from cache...
            # Or call self.func(*args, **kwargs)
    
    def cached(func):
        return Wrapper(func)
    

    This simplifies the logic and makes it easy to add methods onto our function (related: #538). For example, calling the function while forcing the cache to update could look something like:

    @cached
    def foo(...):
        ...
    
    # Force update
    foo.refresh(...)
    
    opened by Dreamsorcerer 0
  • Lifecycle management of decorators

    Cache objects should be closed at the end of an application's lifecycle, with await cache.close() or by using async with.

    The current design of decorators creates a new cache instance per decorator and never attempts to close it, thus failing to manage this lifecycle properly.

    e.g.

    @cached(...)  # New cache is created here, but never closed.
    def foo(): ...
    

    We also want to consider using aiojobs for managing some background tasks, but that additionally requires being created within a running loop, which is unlikely to exist when a decorator is evaluated.


    One solution I can think of is to explicitly manage the caches and pass them to the decorators. This may also need a .start() method to initialise the cache later. e.g.

    cache = Cache(...)
    
    @cached(cache)
    def foo(): ...
    
    async def main():
        # Initialise application
        cache.start()
        # Run application
        ...
        # Cleanup application
        await cache.close()
    

    Or more succinctly:

    async def main():
        async with cache:
            # Run application
    
    opened by Dreamsorcerer 1
  • Make caches Generic

    We should consider making BaseCache Generic, so we can provide better type safety when relevant.

    Code could then look something like this, with mypy checking:

    cache: Cache[str] = Cache(...)
    await cache.get("foo")  # -> str
    await cache.set("foo", "bar")  # OK
    await cache.set("foo", 45)  # Error: Expected str
    

    Existing typing behaviour can be reproduced by annotating it with Cache[Any].
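
    A rough sketch of what that could look like (hypothetical typing, not the current API):

    from typing import Generic, Optional, TypeVar

    T = TypeVar("T")

    class BaseCache(Generic[T]):
        """Hypothetical generic interface; illustrative only."""

        async def get(self, key: str) -> Optional[T]:
            ...

        async def set(self, key: str, value: T) -> bool:
            ...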

    opened by Dreamsorcerer 0
Releases (latest: 0.11.1)
  • 0.11.1(Oct 14, 2020)

  • 0.11.0(Oct 14, 2020)

    • Support str for timeout and ttl #454 - Manuel Miranda
    • Add aiocache_wait_for_write decorator param #448 - Manuel Miranda
    • Extend and improve usage of Cache class #446 - Manuel Miranda
    • Add caches.add functionality #440 - Manuel Miranda
    • Use raw msgpack attribute for loads #439 - Manuel Miranda
    • Add docs regarding plugin timeouts and multicached #438 - Manuel Miranda
    • Fix typehints in lock.py #434 - Aviv
    • Use pytest_configure instead of pytest_namespace #436 - Manuel Miranda
    • Add Cache class factory #430 - Manuel Miranda
  • 0.10.1(Nov 15, 2018)

    • Cancel the previous ttl timer if exists when setting a new value in the in-memory cache #424 - Minh Tu Le

    • Add python 3.7 to CI, now its supported! #420 - Manuel Miranda

    • Add function as parameter for key_builder #417 - Manuel Miranda

    • Always use name when getting logger #412 - Mansur Mamkin

    • Format code with black #410 - Manuel Miranda

  • 0.10.0(Jun 18, 2018)

    • Cache can be disabled in decorated functions using cache_read and cache_write #404 - Josep Cugat

    • Cache constructor can receive now default ttl #405 - Josep Cugat

  • 0.9.1(Apr 26, 2018)

  • 0.9.0(Apr 24, 2018)

    • Bug #389/propagate redlock exceptions #394 - Manuel Miranda. __aexit__ was returning whether the asyncio Event was removed or not; in some cases this prevented the context manager from propagating exceptions raised inside. Now it returns nothing and always raises any exception coming from inside.

    • Fix sphinx build #392 - Manuel Miranda. Also adds an extra step to the build pipeline to avoid future errors.

    • Update alias config when config already exists #383 - Josep Cugat

    • Ensure serializers are instances #379 - Manuel Miranda

    • Add MsgPackSerializer #370 - Adam Hopkins

    • Add create_connection_timeout for redis>=1.0.0 when creating connections #368 - tmarques82

    • Fixed spelling error in serializers.py #371 - Jared Shields

  • 0.8.0(Nov 8, 2017)

    • Add pypy support in build pipeline #359 - Manuel Miranda

    • Fix multicached bug when using keys as an arg rather than kwarg #356 - Manuel Miranda

    • Reuse cache when using decorators with alias #355 - Manuel Miranda

    • Cache available from function.cache object for decorated functions #354 - Manuel Miranda

    • aioredis and aiomcache are now optional dependencies #337 - Jair Henrique

    • Generate wheel package on release #338 - Jair Henrique

    • Add key_builder param to caches to customize keys #315 - Manuel Miranda

  • 0.7.2(Jul 30, 2017)

  • 0.7.1(Jul 15, 2017)

  • 0.7.0(Jul 1, 2017)

    • Upgrade to aioredis 0.3.3. - Manuel Miranda

    • Get CMD now returns values that evaluate to False correctly #282 - Manuel Miranda

    • New locks public API exposed #279 - Manuel Miranda Users can now use aiocache.lock.RedLock and aiocache.lock.OptimisticLock

    • Memory now uses new NullSerializer #273 - Manuel Miranda. Memory is a special case and doesn't need a serializer because anything can be stored in memory. A new NullSerializer that does nothing was created, and it is now the default for SimpleMemoryCache.

    • Multi_cached can use args for key_from_attr #271 - Manuel Miranda. Before, only params defined in kwargs were working due to the behavior defined in the get_args_dict function. This has now been fixed and it behaves as expected.

    • Removed cached key_from_attr #274 - Manuel Miranda To reproduce the same behavior, use the new key_builder attr

    • Removed settings module. - Manuel Miranda

  • 0.6.0(Jun 5, 2017)

    New

    • Cached supports stampede locking #249 - Manuel Miranda

    • Memory redlock implementation #241 - Manuel Miranda

    • Memcached redlock implementation #240 - Manuel Miranda

    • Redis redlock implementation #235 - Manuel Miranda

    • Add close function to clean up resources #236 - Quinn Perfetto

      Call await cache.close() to close a pool and its connections

    • caches.create works without alias #253 - Manuel Miranda

    Changes

    • Decorators use JsonSerializer by default now #258 - Manuel Miranda

      Also renamed DefaultSerializer to StringSerializer

    • Decorators use single connection #257 - Manuel Miranda

      Decorators (except cached_stampede) now use a single connection for each function call. This means the connection doesn't go back to the pool after each cache call. Since the cache instance is the same for a decorated function, the pool size must be high if there is high expected concurrency for that given function.

    • Change close to clear for redis #239 - Manuel Miranda

      clear will free connections but will still allow the user to use the cache if needed (same behavior for aiomcache and, of course, memory)
