
Celery workers are getting terminated automatically after some time with message "Canceling task consumer." #8880

Open

rehbarkhan opened this issue Feb 28, 2024 · 2 comments
rehbarkhan commented Feb 28, 2024

Checklist

  • I have verified that the issue exists against the main branch of Celery.
  • This has already been asked to the discussions forum first.
  • I have read the relevant section in the contribution guide on reporting bugs.
  • I have checked the issues list for similar or identical bug reports.
  • I have checked the pull requests list for existing proposed fixes.
  • I have checked the commit log to find out if the bug was already fixed in the main branch.
  • I have included all related issues and possible duplicate issues in this issue (If there are none, check this box anyway).

Mandatory Debugging Information

  • I have included the output of celery -A proj report in the issue. (If you are not able to do this, then at least specify the Celery version affected.)
  • I have verified that the issue exists against the main branch of Celery.
  • I have included the contents of pip freeze in the issue.
  • I have included all the versions of all the external dependencies required to reproduce this bug.

Optional Debugging Information

  • I have tried reproducing the issue on more than one Python version and/or implementation.
  • I have tried reproducing the issue on more than one message broker and/or result backend.
  • I have tried reproducing the issue on more than one version of the message broker and/or result backend.
  • I have tried reproducing the issue on more than one operating system.
  • I have tried reproducing the issue on more than one workers pool.
  • I have tried reproducing the issue with autoscaling, retries, ETA/Countdown & rate limits disabled.
  • I have tried reproducing the issue after downgrading and/or upgrading Celery and its dependencies.

Related Issues and Possible Duplicates

Related Issues

  • None

Possible Duplicates

  • None

Environment & Settings

Celery version: 5.3.6 (emerald-rush)

celery report Output:

software -> celery:5.3.6 (emerald-rush) kombu:5.3.5 py:3.11.5
            billiard:4.2.0 py-amqp:5.2.0
platform -> system:Linux arch:64bit, ELF
            kernel version:5.10.192-183.736.amzn2.x86_64 imp:CPython
loader   -> celery.loaders.app.AppLoader
settings -> transport:amqp results:django-db

timezone: 'Etc/UTC'
task_compression: 'gzip'
task_serializer: 'json'
task_track_started: True
task_acks_late: True
task_reject_on_worker_lost: False
task_remote_tracebacks: True
result_backend: 'django-db'
result_serializer: 'json'
result_compression: 'gzip'
result_expires: datetime.timedelta(days=1)
result_extended: True
task_queues:
    (<unbound Queue default -> <unbound Exchange default(direct)> -> default>,
 <unbound Queue etl -> <unbound Exchange etl(direct)> -> etl>,
 <unbound Queue report -> <unbound Exchange report(direct)> -> report>)
task_default_queue: 'default'
task_default_exchange: 'default'
task_default_exchange_type: 'direct'
task_default_routing_key: '********'
task_default_delivery_mode: 'transient'
worker_prefetch_multiplier: 4
worker_lost_wait: 1800
worker_send_task_events: True
task_send_sent_event: True
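
For reference, here is a rough sketch of how the settings above might look in a Django settings module, assuming the standard integration where the app is created with app.config_from_object('django.conf:settings', namespace='CELERY'). The names below simply mirror the report output; the app name and file layout are placeholders, not my exact project files:

# celery.py -- standard Django integration (app name is a placeholder)
from celery import Celery

app = Celery("proj")
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()

# settings.py -- values mirroring the celery report output above
from datetime import timedelta
from kombu import Exchange, Queue

CELERY_TIMEZONE = "Etc/UTC"
CELERY_TASK_COMPRESSION = "gzip"
CELERY_TASK_SERIALIZER = "json"
CELERY_TASK_TRACK_STARTED = True
CELERY_TASK_ACKS_LATE = True
CELERY_TASK_REJECT_ON_WORKER_LOST = False
CELERY_TASK_REMOTE_TRACEBACKS = True
CELERY_RESULT_BACKEND = "django-db"
CELERY_RESULT_SERIALIZER = "json"
CELERY_RESULT_COMPRESSION = "gzip"
CELERY_RESULT_EXPIRES = timedelta(days=1)
CELERY_RESULT_EXTENDED = True
CELERY_TASK_QUEUES = (
    Queue("default", Exchange("default", type="direct"), routing_key="default"),
    Queue("etl", Exchange("etl", type="direct"), routing_key="etl"),
    Queue("report", Exchange("report", type="direct"), routing_key="report"),
)
CELERY_WORKER_PREFETCH_MULTIPLIER = 4
CELERY_WORKER_SEND_TASK_EVENTS = True
CELERY_TASK_SEND_SENT_EVENT = True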

Steps to Reproduce

Required Dependencies

  • Minimal Python Version: N/A or Unknown
  • Minimal Celery Version: N/A or Unknown
  • Minimal Kombu Version: N/A or Unknown
  • Minimal Broker Version: N/A or Unknown
  • Minimal Result Backend Version: N/A or Unknown
  • Minimal OS and/or Kernel Version: N/A or Unknown
  • Minimal Broker Client Version: N/A or Unknown
  • Minimal Result Backend Client Version: N/A or Unknown

Python Packages

pip freeze Output:

amqp==5.2.0
celery==5.3.6
Django==4.2.9
django-celery-beat==2.5.0
django-celery-results @ file:///tmp/devtools/wheels/django_celery_results-2.5.1-py3-none-any.whl#sha256=3209e274392ff792088d31a98eedfa4ed3c37c89ed16310e0f0acd8f6d1eea93
kombu==5.3.5

Other Dependencies

N/A

Minimally Reproducible Test Case

Expected Behavior

The Celery worker should stay alive whether it is idle or processing tasks.

Actual Behavior

At first the Celery workers run as expected, but after 30 to 40 minutes the worker goes offline. Checking the logs shows that the MainProcess cancels the task consumer on its own. This happens whether or not I send any data to the queues. Meanwhile, the ForkPoolWorker processes keep working even though the worker is marked offline in Flower.

[2024-02-28 08:02:57,145: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2024-02-28 08:02:58,058: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2024-02-28 08:03:02,146: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2024-02-28 08:03:03,058: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2024-02-28 08:03:07,084: DEBUG/MainProcess] Canceling task consumer...
[2024-02-28 08:14:29,082: WARNING/ForkPoolWorker-4] <some data>
[2024-02-28 08:14:29,082: WARNING/ForkPoolWorker-3] <some data>
[2024-02-28 08:14:29,082: WARNING/ForkPoolWorker-2] <some data>
[2024-02-28 08:14:29,082: INFO/ForkPoolWorker-3] <some data>
[2024-02-28 08:14:29,082: INFO/ForkPoolWorker-4] <some data>
[2024-02-28 08:14:29,082: INFO/ForkPoolWorker-1] <some data>
[2024-02-28 08:14:29,082: INFO/ForkPoolWorker-2] <some data>

Note: I have masked the actual log output with <some data>.

I am using the same setup on my local machine, where it works as expected.

RabbitMQ Details:
RabbitMQ 3.12.10
Erlang 26.2.2
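
As a side note, a quick way to confirm the worker is really gone (and not just unreported in Flower) is to ping it over the control channel. This is my own diagnostic sketch; the import path is a placeholder for however the app instance is exposed:

# ping_worker.py -- hypothetical liveness probe, run from another shell
from proj.celery import app  # placeholder import path for the Celery app

# a live worker replies within the timeout; a dead one returns an empty list
replies = app.control.ping(timeout=2.0)
print(replies)  # e.g. [{'celery@hostname': {'ok': 'pong'}}]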

salitaba commented Feb 29, 2024

Does this comment help you? @rehbarkhan

rehbarkhan (Author) commented

> Does this comment help you? @rehbarkhan

I have tried this method, but no luck.
I upgraded my project from Django 2.2 to Django 4.2 and also upgraded celery, django-celery-results, and django-celery-beat accordingly. I started getting this error after the upgrade.
