
Rewriting the periodic task service.

Dan Knox edited this page Apr 17, 2016 · 5 revisions

Status

Implemented in the master branch, on track for 1.0.

Current state

The current solution is very complex. The current state and timestamps for the periodic tasks are stored in a database table (either SQL or MongoDB). Every celeryd worker server is responsible for triggering these tasks, so we have to carefully ensure that race conditions don’t occur. This is solved by table locking and by drifting the servers apart in time (to decrease the probability of collisions).

In addition, each of these servers polls the database table every second, which wastes resources and scales poorly as more workers are added.
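To illustrate why this is fragile, the per-worker polling loop might look roughly like the sketch below. The table name, column names, and the use of SQLite’s BEGIN IMMEDIATE as a stand-in for table locking are all assumptions for illustration, not the actual Celery code:

```python
# Hypothetical sketch of the current approach: every worker polls the
# task table and relies on a lock to avoid double-sending. Table and
# column names are illustrative; SQLite's BEGIN IMMEDIATE stands in
# for real table locking.
import sqlite3

def poll_due_tasks(conn, now):
    conn.execute("BEGIN IMMEDIATE")  # take the write lock before reading
    try:
        due = [name for (name,) in conn.execute(
            "SELECT name FROM periodic_tasks WHERE next_run <= ?", (now,))]
        conn.execute(
            "UPDATE periodic_tasks SET next_run = next_run + run_interval "
            "WHERE next_run <= ?", (now,))
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise
    return due  # names of tasks to send to the broker this tick
```

Every worker runs a loop like this once per second, so n workers mean n queries per second against the same table, plus the lock contention.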

Proposed solution

A new service is introduced: celerybeat. It can be started either by a separate command (manage.py celerybeat) or by running the
celeryd daemon with the -B option. It is a centralized service for triggering periodic tasks; no state is stored in any database. It is just a clock service that sends messages when tasks are scheduled. It saves the current state to disk so it can be resumed when the server restarts.

Positive

  • No polling
  • No need to maintain separate backend code for the different database systems, because no database is used.
  • No locking
  • Simplicity

Caveats

  • One has to make sure the service runs on only one machine.
  • Can’t dynamically register new tasks, but this feature could be added later (e.g. by sending a message to the service) if there’s a need.

Trying the code

Celerybeat is currently in the master branch (0.9.x): http://github.com/ask/celery/tree/master