Simple job queues for Python

Overview

RQ (Redis Queue) is a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis and designed to have a low barrier to entry. It can be integrated into your web stack easily.

RQ requires Redis >= 3.0.0.

Full documentation can be found here.

Support RQ

If you find RQ useful, please consider supporting this project via Tidelift.

Getting started

First, run a Redis server, of course:

$ redis-server

To put jobs on queues, you don't have to do anything special; just define your typically lengthy or blocking function:

import requests

def count_words_at_url(url):
    """Just an example function that's called async."""
    resp = requests.get(url)
    return len(resp.text.split())

You do use the excellent requests package, don't you?

Then, create an RQ queue:

from redis import Redis
from rq import Queue

queue = Queue(connection=Redis())

And enqueue the function call:

from my_module import count_words_at_url
job = queue.enqueue(count_words_at_url, 'http://nvie.com')
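
enqueue() returns a Job handle immediately; the call itself runs later in a worker. A minimal sketch of polling for the outcome (job.get_status() and job.result are standard RQ APIs; the loop is just for illustration):

import time

# The job runs asynchronously in a worker; poll until it settles.
while job.get_status() not in ('finished', 'failed'):
    time.sleep(0.5)

print(job.result)  # the word count returned by count_words_at_url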

Scheduling jobs is similarly easy:

from datetime import datetime, timedelta

# Schedule job to run at 9:15, October 8th
job = queue.enqueue_at(datetime(2019, 10, 8, 9, 15), say_hello)

# Schedule job to run in 10 seconds
job = queue.enqueue_in(timedelta(seconds=10), say_hello)

Retrying failed jobs is also supported:

from rq import Retry

# Retry up to 3 times, failed job will be requeued immediately
queue.enqueue(say_hello, retry=Retry(max=3))

# Retry up to 3 times, with configurable intervals between retries
queue.enqueue(say_hello, retry=Retry(max=3, interval=[10, 30, 60]))
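
The scheduling and retry snippets above assume a trivial task function such as say_hello; any importable function works, for example:

def say_hello(name='World'):
    """A stand-in task for the examples above."""
    return f'Hello, {name}!'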

For a more complete example, refer to the docs. But this is the essence.

The worker

To start executing enqueued function calls in the background, start a worker from your project's directory:

$ rq worker --with-scheduler
*** Listening for work on default
Got count_words_at_url('http://nvie.com') from default
Job result = 818
*** Listening for work on default
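
Workers can also be started from Python rather than the CLI; a minimal sketch (work(with_scheduler=True) mirrors the --with-scheduler flag above):

from redis import Redis
from rq import Queue, Worker

redis_conn = Redis()
worker = Worker([Queue(connection=redis_conn)], connection=redis_conn)
worker.work(with_scheduler=True)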

That's about it.

Installation

Simply use the following command to install the latest released version:

pip install rq

If you want the cutting edge version (that may well be broken), use this:

pip install -e git+https://github.com/nvie/rq.git@master#egg=rq

Related Projects

Check out the repos below, which might be useful in your RQ-based project.

Project history

This project has been inspired by the good parts of Celery, Resque and this snippet, and has been created as a lightweight alternative to the heaviness of Celery or other AMQP-based queueing implementations.

Comments
  • Suggestion: multi-job dependency

    I suggest extending the current dependency feature to allow a job to depend on multiple jobs, not just one, so that it can be run only when all those jobs it depends on have succeeded.
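
    RQ later shipped multi-job dependencies (see the v1.8.0 notes below). A minimal sketch of the requested behavior, where merge_results stands in for a hypothetical aggregation task:

    from redis import Redis
    from rq import Queue
    from my_module import count_words_at_url  # as in the Getting started example

    def merge_results():
        # Hypothetical follow-up task; a worker runs it only after its dependencies succeed.
        return 'both fetches done'

    queue = Queue(connection=Redis())
    a = queue.enqueue(count_words_at_url, 'http://nvie.com')
    b = queue.enqueue(count_words_at_url, 'http://python.org')

    # Runs only when both a and b have finished successfully.
    job = queue.enqueue(merge_results, depends_on=[a, b])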

    opened by jchia 84
  • triggering shutdown by setting a redis flag

    Problem Statement: We use rq on Heroku, and their way of shutting down the worker that runs rq does not ensure a safe shutdown.

    From Heroku docs: https://devcenter.heroku.com/articles/dynos

    When a worker is stopped by ps:scale, by pushing out a new release, or by any other Heroku operation:
    The dyno manager sends the worker SIGTERM.
    
    The process then has 10 seconds to shut down gracefully.
    
    If the process is still alive, SIGKILL is then sent.
    

    So if a job takes more than 10 seconds (which many of our jobs do), we are going to be out of luck: the job will be killed at a potentially unsafe point.

    So we needed an approach where we could set a flag in redis that the rq worker can poll.

    I introduced the key 'rq:worker:pause_work', which any process can set. If the worker sees that it has been set, it hops into a pause loop until the key is deleted.

    In Worker#work, at the top of the while True: block:

        try:
            before_state = None
            notified = False
    
            # Spin here while the pause flag is present in Redis.
            while Worker.paused() and not self.stopped:
    
                if burst:
                    self.log.warn('Paused in burst mode -- exiting.')
                    self.log.warn('Note:  There could still be unperformed jobs on the queue')
                    raise StopRequested
    
                if not notified:
                    self.log.warn('Stopping on pause request REALLY.')
                    before_state = self.get_state()
                    self.set_state('paused')
                    notified = True
                time.sleep(1)
        except StopRequested:
            break
    
    opened by jtushman 53
  • The future of RQ and Sentry

    opened by untitaker 42
  • Ability to cancel running jobs?

    (This is a question rather than an issue, but I didn't see a mention of a mailing list, so I'm posting here.)

    I really want to switch from Celery to rq in Re:dash, but we need the ability to cancel an already running job -- actually kill the work horse process, not just remove the task from the registries.

    I was thinking of implementing this by having my own worker executor with another thread running alongside, waiting for cancel commands (by monitoring some Redis key). If it gets a cancel command, it will call the stop command of the worker.

    Has anyone implemented something like this already? Will something like this be interesting as a contribution?
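
    RQ later added exactly this capability: send_stop_job_command(), shipped in v1.7.0 (see the release notes below), tells the worker to kill the work horse executing a job. A minimal sketch:

    from redis import Redis
    from rq.command import send_stop_job_command

    redis_conn = Redis()
    job_id = '...'  # id of the currently running job
    # Instructs whichever worker is executing this job to kill its work horse.
    send_stop_job_command(redis_conn, job_id)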

    opened by arikfr 41
  • jobs lost on hard shutdown

    Using Heroku, I discovered jobs are getting lost if the worker is terminated through scaling or through other auto-management, such as excessive memory usage.

    An example from the Heroku logs:

    2012-11-10T06:25:12+00:00 app[worker.1]: [2012-11-10 06:25] DEBUG: worker: Registering birth of worker ec196e3c-15cc-4056-a54c-22793c11402f.2
    2012-11-10T06:25:12+00:00 app[worker.1]: [2012-11-10 06:25] INFO: worker: RQ worker started, version 0.3.2
    2012-11-10T06:25:12+00:00 app[worker.1]: [2012-11-10 06:25] INFO: worker:
    2012-11-10T06:25:12+00:00 app[worker.1]: [2012-11-10 06:25] INFO: worker: *** Listening on high, default, low...
    2012-11-10T06:25:12+00:00 app[worker.1]: [2012-11-10 06:25] INFO: worker: default: project.management.commands.test_rq.func() (a1eb90ea-1c59-49a8-ae11-a089df09c096)
    2012-11-10T06:25:25+00:00 app[worker.1]: [2012-11-10 06:25] DEBUG: worker: Got signal SIGTERM.
    2012-11-10T06:25:25+00:00 app[worker.1]: [2012-11-10 06:25] WARNING: worker: Warm shut down requested.
    2012-11-10T06:25:25+00:00 app[worker.1]: [2012-11-10 06:25] DEBUG: worker: Stopping after current horse is finished. Press Ctrl+C again for a cold shutdown.
    2012-11-10T06:25:25+00:00 app[worker.1]: [2012-11-10 06:25] INFO: worker: Stopping on request.
    2012-11-10T06:25:25+00:00 app[worker.1]: [2012-11-10 06:25] DEBUG: worker: Registering death

    Viewing the FailedQueue shows no jobs.

    Ideally, the solution to this problem would be to move the job to a 'WorkingQueue' instance for that queue (eg 'default' -> 'working:default') using BRPOPLPUSH before returning the job from Redis and then remove it from the WorkingQueue instance once the job completes or is moved to FailedQueue. Sadly BRPOPLPUSH doesn't help with the current BLPOP behaviour of listening on multiple queues.

    You could reduce the risk, however, by pushing the job onto the WorkingQueue instance immediately after BLPOP, as sketched below. In the event a worker experiences a hard shutdown mid-job, the next worker fired up would check the WorkingQueue for its queue's jobs, check whether any have expired, and move the expired ones to the failed queue.
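
    A minimal sketch of that reliable-dequeue pattern in raw redis-py (the key names are illustrative, not RQ's actual keys):

    from redis import Redis

    r = Redis()
    # Atomically move the next job id from the queue onto a backup list.
    job_id = r.brpoplpush('queue:default', 'working:default', timeout=5)
    if job_id:
        try:
            ...  # perform the job
        finally:
            # Acknowledge only once the job is done: remove it from the backup list.
            r.lrem('working:default', 1, job_id)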

    opened by bretth 34
  • Sentry error reporting doesn't quite work

    In rqworker.py the main process creates a Sentry client and passes it to register_sentry. However, when an exception is raised in the worker process, the client inherited from the parent doesn't work any more; no error is logged to Sentry.

    opened by wh5a 32
  • Succeeded jobs mysteriously moved to FailedJobRegistry

    Once in a while I get jobs that completed successfully moved to FailedJobRegistry. The job terminates correctly, as shown in the logs:

    16:44:39 default: Job OK (5a33fdd2-54bf-466a-9c6c-5aea8c37be76)
    16:44:39 Result is kept for 600 seconds
    

    But then after a while I see the job has been moved to FailedJobRegistry. Looking at the queues with rq-dashboard, I see this terse message:

    Moved to FailedJobRegistry at 2021-07-04 16:46:12.479190
    

    But nothing else (I don't even know where rq-dashboard gets that message from), and no other information in the logs. As I said, this happens only for a minority of jobs, but it does happen. If it helps, it happened with both rq 1.8.1 and rq 1.9.0. Could it be a failure in the rq<->redis communication, so that the successful termination of the job isn't properly written to redis? Looking at the rq code, I see that the move happens in StartedJobRegistry.cleanup() in rq/registry.py. From what I understand, rq thinks the job has "expired" (based on its redis score) and so moves it to the failed registry.
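
    For debugging, a minimal sketch of inspecting the registry (FailedJobRegistry, Job.fetch and job.exc_info are standard RQ APIs):

    from redis import Redis
    from rq import Queue
    from rq.job import Job
    from rq.registry import FailedJobRegistry

    redis_conn = Redis()
    queue = Queue(connection=redis_conn)

    # List the jobs RQ moved to the failed registry, with the recorded reason.
    for job_id in FailedJobRegistry(queue=queue).get_job_ids():
        job = Job.fetch(job_id, connection=redis_conn)
        print(job_id, job.exc_info)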

    opened by waldner 29
  • Unreliable under heavy loads

    Hi - while I'm very much liking RQ overall, unfortunately it seems to become unreliable under some circumstances, specifically if I stress it by spawning a large number of workers (via a Condor cluster) against a queue of short-runtime jobs.

    The symptom is that the queue is empty - q.is_empty()==True, and rqinfo and the dashboard agree. However, iterating the jobs for is_queued/started/finished/failed doesn't agree; for instance, some jobs still report is_queued==True. Since I'm testing num_queued + num_started == 0 to know when the whole grid job is complete, this is a problem.

    I'd guess this is the communication with the Redis server timing out and not being retried, thus leaving this inconsistency. We've had problems like this with Python's socket library: despite it claiming not to have a timeout, the underlying C socket API returns ETIMEDOUT and the library throws an exception rather than looping on this condition. Note that I've configured the Redis server with enough fds to honour its default of maxclients=10000.

    Would very much like to use RQ over alternatives, but unreliability is a show-stopper. Any ideas what could be causing this behaviour, and can it be fixed?

    opened by mark-99 29
  • Regression: SSLConnection `__init__() got an unexpected keyword argument 'ssl'`

    Version ==1.5.2

    With rq version ==1.5.2 I am running into __init__() got an unexpected keyword argument 'ssl' (with rq version ==1.5.1 the error does not appear).

    Traceback (most recent call last):
      File "<>/site-packages/redis/connection.py", line 1185, in get_connection
        connection = self._available_connections.pop()
    IndexError: pop from empty list
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "<>/site-packages/rq/worker.py", line 931, in perform_job
        self.prepare_job_execution(job, heartbeat_ttl)
      File "<>/site-packages/rq/worker.py", line 826, in prepare_job_execution
        pipeline.execute()
      File "<>/site-packages/redis/client.py", line 4013, in execute
        self.shard_hint)
      File "<>/site-packages/redis/connection.py", line 1187, in get_connection
        connection = self.make_connection()
      File "<>/site-packages/redis/connection.py", line 1227, in make_connection
        return self.connection_class(**self.connection_kwargs)
      File "<>/site-packages/redis/connection.py", line 828, in __init__
        super(SSLConnection, self).__init__(**kwargs)
    TypeError: __init__() got an unexpected keyword argument 'ssl'
    

    Unfortunately I am not able to provide a concise code fragment to reproduce the error. It looks like a regression from https://github.com/rq/rq/pull/1327 where the ssl keyword and re-use of SSLConnection were introduced: https://github.com/rq/rq/commit/56e756f512979b0151b539c9473ceadb01afbabf

    opened by Fabma 28
  • Add CLI `rq` to empty queues and requeue failed jobs

    Usage: rq [OPTIONS] COMMAND [ARGS]...
    
    Options:
      -u, --url TEXT  URL describing Redis connection details.
      --help          Show this message and exit.
    
    Commands:
      empty    Empty queues, default: empty failed queue $...
      requeue  Requeue all failed jobs in failed queue
    

    e.g.

    $ rq empty
    2 jobs removed from failed queue
    
    $ rq -u 'redis://localhost:6379/0' empty default high
    10 jobs removed from default queue
    2 jobs removed from high queue
    
    $ rq requeue
    Requeuing 4 failed jobs......
    Requeue over with 0 jobs requeuing failed
    
    opened by zhangliyong 28
  • Some Jobs not removed from redis

    I have a number of jobs (a minority) that, once completed, are not removed from redis. Usually I see a key such as rq:job:a48172a7-4e25-447e-9ac2-8b3c208bbf3c with TTL: -1 and Type: Hash, and the only field it has is last_heartbeat, whose value is a timestamp (e.g. 2021-04-20T00:03:44.049309Z). Since I run many jobs, these ghost jobs tend to pile up in redis and I have to clean them up manually from time to time. I can't identify anything special about these jobs; other instances of the same function are correctly removed, and this seems to happen randomly. I don't know where to start or where to look to debug this. Any help will be appreciated.
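
    A sketch of the kind of periodic cleanup described above, in plain redis-py; the heuristic (a hash whose only field is last_heartbeat) is taken from this report:

    from redis import Redis

    r = Redis()
    for key in r.scan_iter('rq:job:*'):
        # Ghost job: a leftover hash whose only field is last_heartbeat.
        if r.type(key) == b'hash' and set(r.hkeys(key)) == {b'last_heartbeat'}:
            r.delete(key)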

    opened by waldner 27
  • Exceptions Being Printed to Worker Log Stream twice

    When an exception is raised by the function passed to enqueue(), the stack trace is printed to the worker logs twice. This is a problem because we are importing these logs into our logging backend.

    For example, passing this function:

    def run_script(script):
        raise Exception("FAILURE")
    

    into this enqueue call:

    queue = Queue(queue_name.lower(), connection=redis_client) 
    queue.enqueue(hw.run_script)
    

    will produce the following log stream on the worker:

    2022-12-17 12:51:12   File "/usr/local/lib/python3.6/site-packages/rq/worker.py", line 1075, in perform_job
    2022-12-17 12:51:12     rv = job.perform()
    2022-12-17 12:51:12   File "/usr/local/lib/python3.6/site-packages/rq/job.py", line 854, in perform
    2022-12-17 12:51:12     self._result = self._execute()
    2022-12-17 12:51:12   File "/usr/local/lib/python3.6/site-packages/rq/job.py", line 877, in _execute
    2022-12-17 12:51:12     result = self.func(*self.args, **self.kwargs)
    2022-12-17 12:51:12   File "./hydra_wrapper.py", line 16, in run_script
    2022-12-17 12:51:12     raise Exception("ERROR")
    2022-12-17 12:51:12 Exception: ERROR
    2022-12-17 12:51:12 Traceback (most recent call last):
    2022-12-17 12:51:12   File "/usr/local/lib/python3.6/site-packages/rq/worker.py", line 1075, in perform_job
    2022-12-17 12:51:12     rv = job.perform()
    2022-12-17 12:51:12   File "/usr/local/lib/python3.6/site-packages/rq/job.py", line 854, in perform
    2022-12-17 12:51:12     self._result = self._execute()
    2022-12-17 12:51:12   File "/usr/local/lib/python3.6/site-packages/rq/job.py", line 877, in _execute
    2022-12-17 12:51:12     result = self.func(*self.args, **self.kwargs)
    2022-12-17 12:51:12   File "./hydra_wrapper.py", line 16, in run_script
    2022-12-17 12:51:12     raise Exception("ERROR")
    2022-12-17 12:51:12 Exception: ERROR
    

    How can I prevent this duplication of the stack trace? My first thought was that the failure triggered a retry, but I couldn't find documentation suggesting there's a default retry of one.

    opened by eswolinsky3241 0
  • Add `at_front` to enqueue_at

    Hi @selwin,

    I noticed that you have omitted the at_front argument from enqueue_at. There are cases where you may only have one worker and have 40 tasks that take a total of about 15 minutes to complete, but you also have other tasks that need to be run more frequently (e.g. every 10 minutes). In these cases, it would be helpful to have the option to specify that the task should be placed at the front of the queue when it is enqueued with enqueue_at.

    I understand that there are currently some differences and limitations between the way enqueue and enqueue_at work. However, it may be possible to implement the at_front option at a low level for enqueue_at, and this could also be ported to rq-scheduler. While this is just a suggestion for an enhancement, I think it would be really useful to have; a sketch of the existing at_front behavior follows.
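
    For reference, plain enqueue() already accepts at_front; the request is to extend the same option to enqueue_at. A sketch of the existing behavior, with say_hello as a stand-in task:

    # Existing: jump to the front of the queue on an immediate enqueue.
    job = queue.enqueue(say_hello, at_front=True)

    # Requested (not supported at the time of this issue):
    # job = queue.enqueue_at(datetime(2023, 1, 1, 9, 0), say_hello, at_front=True)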

    opened by gabriels1234 2
  • Add rq-scheduler missing functions to rq

    Hi! The --with-scheduler option is a great feature, and it has helped a lot in cases where the main scheduler fails.

    I use django-rq-scheduler (the new repo), which relies on rq-scheduler for a few main functionalities. It could easily fall back to vanilla rq if some of the rq-scheduler functions (such as cron(), contains(), etc.) were ported over to rq.

    The end goal is to avoid having a separate container (extra $$$) to run the scheduler only.

    Thanks!

    opened by gabriels1234 1
  • How to pass output of dependency to dependent task?

    Is it possible to pass a task dependency's result to its successor? If so, how? I am aware of this possibility, but I was wondering if there were plans to auto-detect dependencies based on task inputs.
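
    One workaround pattern: a dependent job can look up the job it depends on and read its still-cached result. get_current_job() and job.dependency are RQ APIs; this sketch assumes the parent's result_ttl has not yet expired:

    from redis import Redis
    from rq import Queue, get_current_job
    from my_module import count_words_at_url  # as in the Getting started example

    def summarize():
        # Fetch the job this one depends on and reuse its result.
        parent = get_current_job().dependency
        return f'parent returned: {parent.result}'

    queue = Queue(connection=Redis())
    first = queue.enqueue(count_words_at_url, 'http://nvie.com')
    second = queue.enqueue(summarize, depends_on=first)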

    opened by janvainer 2
  • Enhance worker termination logic

    • use wait4 instead of waitpid to get rusage; handlers can use this to detect OOM, for example (our use case) -- a sketch follows this list
    • allow custom handling of the work-horse-terminated situation
    • extract signal on termination (if any)
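
    A minimal sketch of the wait4 idea (standard library only, Unix-specific):

    import os

    pid = os.fork()
    if pid == 0:
        os._exit(0)  # the child (standing in for a work horse) exits immediately
    else:
        # wait4 behaves like waitpid but also returns the child's resource usage,
        # which a handler could inspect (e.g. peak RSS) to detect an OOM kill.
        _, status, rusage = os.wait4(pid, 0)
        print(rusage.ru_maxrss)  # peak resident set size (KiB on Linux)
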
    opened by ronlut 1
Releases (v1.11.1)
  • v1.11.1(Sep 25, 2022)

    • queue.enqueue_many() now supports on_success and on_failure arguments. Thanks @y4n9squared!
    • You can now pass enqueue_at_front to Dependency() objects to put dependent jobs at the front when they are enqueued. Thanks @jtfidje!
    • Fixed a bug where workers may wrongly acquire scheduler locks. Thanks @milesjwinter!
    • Jobs should not be enqueued if any one of their dependencies is canceled. Thanks @selwin!
    • Fixed a bug when handling jobs that have been stopped. Thanks @ronlut!
    • Fixed a bug in handling Redis connections that don't allow SETNAME command. Thanks @yilmaz-burak!
  • v1.11(Jul 31, 2022)

    • This will be the last RQ version that supports Python 3.5.
    • Allow jobs to be enqueued even when their dependencies fail via Dependency(allow_failure=True). Thanks @mattchan-tencent, @caffeinatedMike and @selwin!
    • When stopped jobs are deleted, they should also be removed from FailedJobRegistry. Thanks @selwin!
    • job.requeue() now supports the at_front argument. Thanks @buroa!
    • Added ssl support for sentinel connections. Thanks @nevious!
    • SimpleWorker now works better on Windows. Thanks @caffeinatedMike!
    • Added on_failure and on_success arguments to @job decorator. Thanks @nepta1998!
    • Fixed a bug in dependency handling. Thanks @th3hamm0r!
    • Minor fixes and optimizations by @xavfernandez, @olaure, @kusaku.
  • v1.10.1(Dec 7, 2021)

    • Failure callbacks are now properly called when a job is run synchronously. Thanks @ericman93!
    • Fixes a bug that could cause job keys to be left over when result_ttl=0. Thanks @selwin!
    • Allow ssl_cert_reqs argument to be passed to Redis. Thanks @mgcdanny!
    • Better compatibility with Python 3.10. Thanks @rpkak!
    • job.cancel() should also remove itself from registries. Thanks @joshcoden!
    • Pubsub threads are now launched in daemon mode. Thanks @mik3y!
  • v1.10(Sep 9, 2021)

    • You can now enqueue jobs from CLI. Docs here. Thanks @rpkak!
    • Added a new CanceledJobRegistry to keep track of canceled jobs. Thanks @selwin!
    • Added custom serializer support to various places in RQ. Thanks @joshcoden!
    • cancel_job(job_id, enqueue_dependents=True) allows you to cancel a job while enqueueing its dependents. Thanks @joshcoden!
    • Added job.get_meta() to fetch fresh meta value directly from Redis. Thanks @aparcar!
    • Fixes a race condition that could cause jobs to be incorrectly added to FailedJobRegistry. Thanks @selwin!
    • Requeueing a job now clears job.exc_info. Thanks @selwin!
    • Repo infrastructure improvements by @rpkak.
    • Other minor fixes by @cesarferradas and @bbayles.
  • v1.9.0(Jun 30, 2021)

    • Added success and failure callbacks. You can now do queue.enqueue(foo, on_success=do_this, on_failure=do_that). Thanks @selwin!
    • Added queue.enqueue_many() to enqueue many jobs in one go. Thanks @joshcoden!
    • Various improvements to CLI commands. Thanks @rpkak!
    • Minor logging improvements. Thanks @clavigne and @natbusa!
  • v1.8.1(May 17, 2021)

    • Jobs that fail due to hard shutdowns are now retried. Thanks @selwin!
    • Scheduler now works with custom serializers. Thanks @alella!
    • Added support for click 8.0. Thanks @rpkak!
    • Enqueueing static methods is now supported. Thanks @pwws!
    • Job exceptions no longer get printed twice. Thanks @petrem!
  • v1.8.0(May 17, 2021)

    • You can now declare multiple job dependencies. Thanks @skieffer and @thomasmatecki for laying the groundwork for multi dependency support in RQ.
    • Added RoundRobinWorker and RandomWorker classes to control how jobs are dequeued from multiple queues. Thanks @bielcardona!
    • Added --serializer option to rq worker CLI. Thanks @f0cker!
    • Added support for running asyncio tasks. Thanks @MyrikLD!
    • Added a new STOPPED job status so that you can differentiate between failed and manually stopped jobs. Thanks @dralley!
    • Fixed a serialization bug when used with job dependency feature. Thanks @jtfidje!
    • clean_worker_registry() now works in batches of 1,000 jobs to prevent modifying too many keys at once. Thanks @AxeOfMen and @TheSneak!
    • Workers will now wait and try to reconnect in case of Redis connection errors. Thanks @Asrst!
  • v1.7.0(Nov 29, 2020)

    • Added job.worker_name attribute that tells you which worker is executing a job. Thanks @selwin!
    • Added send_stop_job_command() that tells a worker to stop executing a job. Thanks @selwin!
    • Added JSONSerializer as an alternative to the default pickle based serializer. Thanks @JackBoreczky!
    • Fixes RQScheduler running on Redis with ssl=True. Thanks @BobReid!
  • v1.6.1(Nov 8, 2020)

  • v1.6.0(Nov 8, 2020)

    • Workers now listen to external commands via pubsub. The first two features taking advantage of this infrastructure are send_shutdown_command() and send_kill_horse_command(). Thanks @selwin!
    • Added job.last_heartbeat property that's periodically updated while a job is running. Thanks @theambient!
    • Now horses are killed by their parent group. This helps in cleanly killing all related processes if a job uses multiprocessing. Thanks @theambient!
    • Fixed scheduler usage with Redis connections that use custom parser classes. Thanks @selwin!
    • Scheduler now enqueues jobs in batches to prevent lock timeouts. Thanks @nikkonrom!
    • Scheduler now follows RQ worker's logging configuration. Thanks @christopher-dG!
  • v1.5.2(Sep 10, 2020)

    • Scheduler now uses the same connection class as the connection it's given. Thanks @pacahon!
    • Fixes a bug that puts retried jobs in FailedJobRegistry. Thanks @selwin!
    • Fixed a deprecated import. Thanks @elmaghallawy!
  • v1.5.1(Aug 21, 2020)

    • Fixes for Redis server version parsing. Thanks @selwin!
    • Retries can now be set through the @job decorator. Thanks @nerok!
    • Log messages below logging.ERROR are now sent to stdout. Thanks @selwin!
    • Better logger name for RQScheduler. Thanks @atainter!
    • Better handling of exceptions thrown by horses. Thanks @theambient!
  • v1.5.0(Jul 26, 2020)

    • Failed jobs can now be retried. Thanks @selwin!
    • Fixed scheduler on Python > 3.8.0. Thanks @selwin!
    • RQ is now aware of which version of Redis server it's running on. Thanks @aparcar!
    • RQ now uses hset() on redis-py >= 3.5.0. Thanks @aparcar!
    • Fix incorrect worker timeout calculation in SimpleWorker.execute_job(). Thanks @davidmurray!
    • Make horse handling logic more robust. Thanks @wevsty!
  • v1.4.3(Jun 28, 2020)

    • Added job.get_position() and queue.get_job_position(). Thanks @aparcar!
    • Longer TTLs for worker keys to prevent them from expiring inside the worker lifecycle. Thanks @selwin!
    • Long job args/kwargs are now truncated during logging. Thanks @JhonnyBn!
    • job.requeue() now returns the modified job. Thanks @ericatkin!
  • v1.4.2(May 26, 2020)

    • Reverted changes to the hmset command, which caused workers on Redis server < 4 to crash. Thanks @selwin!
    • Merged in more groundwork to enable jobs with multiple dependencies. Thanks @thomasmatecki!
  • v1.4.1(May 16, 2020)

    • Default serializer now uses pickle.HIGHEST_PROTOCOL for backward compatibility reasons. Thanks @bbayles!
    • Avoid deprecation warnings on redis-py >= 3.5.0. Thanks @bbayles!
  • v1.4.0(May 13, 2020)

    • Custom serializer is now supported. Thanks @solababs!
    • delay() now accepts job_id argument. Thanks @grayshirt!
    • Fixed a bug that may cause early termination of scheduled or requeued jobs. Thanks @rmartin48!
    • When a job is scheduled, always add queue name to a set containing active RQ queue names. Thanks @mdawar!
    • Added --sentry-ca-certs and --sentry-debug parameters to rq worker CLI. Thanks @kichawa!
    • Jobs cleaned up by StartedJobRegistry are given an exception info. Thanks @selwin!
  • v1.3.0(Mar 9, 2020)

    • Support for infinite job timeout. Thanks @theY4Kman!
    • Added __main__ file so you can now do python -m rq.cli. Thanks @bbayles!
    • Fixes an issue that may cause zombie processes. Thanks @wevsty!
    • job_id is now passed to logger during failed jobs. Thanks @smaccona!
    • queue.enqueue_at() and queue.enqueue_in() now support explicit args and kwargs function invocation. Thanks @selwin!
  • v1.2.2(Feb 3, 2020)

  • v1.2.1(Jan 31, 2020)

    • enqueue_at() and enqueue_in() now set the job status to scheduled. Thanks @coolhacker170597!
    • Failed job data is now automatically expired by Redis. Thanks @selwin!
    • Fixes RQScheduler logging configuration. Thanks @FlorianPerucki!
  • v1.0(Apr 6, 2019)

    Backward incompatible changes:

    • job.status has been removed. Use job.get_status() and job.set_status() instead. Thanks @selwin!

    • FailedQueue has been replaced with FailedJobRegistry:

      • get_failed_queue() function has been removed. Please use FailedJobRegistry(queue=queue) instead.
      • move_to_failed_queue() has been removed.
      • RQ now provides a mechanism to automatically clean up failed jobs. By default, failed jobs are kept for 1 year.
      • Thanks @selwin!
    • RQ's custom job exception handling mechanism has also changed slightly:

      • RQ's default exception handling mechanism (moving jobs to FailedJobRegistry) can be disabled by doing Worker(disable_default_exception_handler=True).
      • Custom exception handlers are no longer executed in reverse order.
      • Thanks @selwin!
    • Worker names are now randomized. Thanks @selwin!

    • timeout argument on queue.enqueue() has been deprecated in favor of job_timeout. Thanks @selwin!

    • Sentry integration has been reworked:

      • RQ now uses the new sentry-sdk in place of the deprecated Raven library
      • RQ will look for the more explicit RQ_SENTRY_DSN environment variable instead of SENTRY_DSN before instantiating Sentry integration
      • Thanks @selwin!
    • Fixed Worker.total_working_time accounting bug. Thanks @selwin!

Asynchronous tasks in Python with Celery + RabbitMQ + Redis

python-asynchronous-tasks Setup & Installation Create a virtual environment and install the dependencies: $ python -m venv venv $ source env/bin/activ

Valon Januzaj 40 Dec 03, 2022
Distributed Task Queue (development branch)

Version: 5.1.0b1 (singularity) Web: https://docs.celeryproject.org/en/stable/index.html Download: https://pypi.org/project/celery/ Source: https://git

Celery 20.7k Jan 01, 2023
Clearly see and debug your celery cluster in real time!

Clearly see and debug your celery cluster in real time! Do you use celery, and monitor your tasks with flower? You'll probably like Clearly! 👍 Clearl

Rogério Sampaio de Almeida 364 Jan 02, 2023
Queuing with django celery and rabbitmq

queuing-with-django-celery-and-rabbitmq Install Python 3.6 or above sudo apt-get install python3.6 Install RabbitMQ sudo apt-get install rabbitmq-ser

1 Dec 22, 2021
a little task queue for python

a lightweight alternative. huey is: a task queue (2019-04-01: version 2.0 released) written in python (2.7+, 3.4+) clean and simple API redis, sqlite,

Charles Leifer 4.3k Jan 08, 2023
Flower is a web based tool for monitoring and administrating Celery clusters.

Real-time monitor and web admin for Celery distributed task queue

Mher Movsisyan 5.5k Jan 02, 2023
SAQ (Simple Async Queue) is a simple and performant job queueing framework built on top of asyncio and redis

SAQ SAQ (Simple Async Queue) is a simple and performant job queueing framework built on top of asyncio and redis. It can be used for processing backgr

Toby Mao 117 Dec 30, 2022
Py_extract is a simple, light-weight python library to handle some extraction tasks using less lines of code

py_extract Py_extract is a simple, light-weight python library to handle some extraction tasks using less lines of code. Still in Development Stage! I

I'm Not A Bot #Left_TG 7 Nov 07, 2021
A simple app that provides django integration for RQ (Redis Queue)

Django-RQ Django integration with RQ, a Redis based Python queuing library. Django-RQ is a simple app that allows you to configure your queues in djan

RQ 1.6k Dec 28, 2022
Asynchronous serverless task queue with timed leasing of tasks

Asynchronous serverless task queue with timed leasing of tasks. Threaded implementations for SQS and local filesystem.

24 Dec 14, 2022
OpenQueue is an experimental CS: GO match system written in asyncio python.

What is OpenQueue OpenQueue is an experimental CS: GO match system written in asyncio python. Please star! This project was a lot of work & still has a

OpenQueue 10 May 13, 2022
Add your own metrics to your celery backend

Add your own metrics to your celery backend

Gandi 1 Dec 16, 2022
Redis-backed message queue implementation that can hook into a discord bot written with hikari-lightbulb.

Redis-backed FIFO message queue implementation that can hook into a discord bot written with hikari-lightbulb. This is eventually intended to be the backend communication between a bot and a web dash

thomm.o 7 Dec 05, 2022
A multiprocessing distributed task queue for Django

A multiprocessing distributed task queue for Django Features Multiprocessing worker pool Asynchronous tasks Scheduled, cron and repeated tasks Signed

Ilan Steemers 1.7k Jan 03, 2023
Full featured redis cache backend for Django.

Redis cache backend for Django This is a Jazzband project. By contributing you agree to abide by the Contributor Code of Conduct and follow the guidel

Jazzband 2.5k Jan 03, 2023
A fully-featured e-commerce application powered by Django

kobbyshop - Django Ecommerce App A fully featured e-commerce application powered by Django. Sections Project Description Features Technology Setup Scr

Kwabena Yeboah 2 Feb 15, 2022
Sync Laravel queue with Python. Provides an interface for communication between Laravel and Python.

Python Laravel Queue Queue sync between Python and Laravel using Redis driver. You can process jobs dispatched from Laravel in Python. NOTE: This pack

Sinan Bekar 3 Oct 01, 2022
Accept queue automatically on League of Legends.

Accept queue automatically on League of Legends. I was inspired by the lucassmonn code accept-queue-lol-telegram, and I modify it according to my need

2 Sep 06, 2022
Pyramid configuration with celery integration. Allows you to use pyramid .ini files to configure celery and have your pyramid configuration inside celery tasks.

Getting Started Include pyramid_celery either by setting your includes in your .ini, or by calling config.include('pyramid_celery'): pyramid.includes

John Anderson 102 Dec 02, 2022