Worker instances keep restarting after upgrade to 4.0.0-dev #28445
To address the issue of worker instances continuously restarting after upgrading to Superset 4.0.0, consider the following steps:
These steps should help identify and resolve the restarting worker instances. If the problem persists, further investigation into Superset's 4.0.0 changelog may be necessary to identify any breaking changes or additional migration steps.
It's an issue with the liveness probe:
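For context, the chart's worker liveness probe execs a Celery ping inside the pod. A minimal sketch of the relevant values, assuming the chart's default probe shape (the numeric thresholds here are illustrative):

```yaml
supersetWorker:
  livenessProbe:
    exec:
      command:
        - sh
        - -c
        # the same ping the restart loop traces back to
        - celery -A superset.tasks.celery_app:app inspect ping -d celery@$HOSTNAME
    initialDelaySeconds: 120
    periodSeconds: 60
    timeoutSeconds: 60
    failureThreshold: 3
```

When this command returns "No nodes replied within time constraint" often enough to cross the failure threshold, the kubelet marks the worker unhealthy and restarts it, which matches the restart loop described above.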
After running `celery -A superset.tasks.celery_app:app inspect ping -d celery@$HOSTNAME` manually in the worker pod, the response was `Error: No nodes replied within time constraint`.
After doing some digging, it appears to be related to a Celery version issue with Redis. See #28471.
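If the root cause is the Celery release shipped in the image, one way to test a newer version without rebuilding the image is the chart's bootstrap script, which runs before the Superset processes start. A sketch, assuming the chart's `bootstrapScript` value is available for installing extra Python packages; the pinned version is illustrative:

```yaml
bootstrapScript: |
  #!/bin/bash
  # Illustrative pin: pick whichever Celery release resolves the Redis ping issue
  pip install "celery==5.4.0"
```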
After upgrading to Celery 5.4, Flower lists the workers, but the liveness ping inside the worker still fails with "no nodes replied".
The liveness probe still fails with `Error: No nodes replied within time constraint` as the result of `celery -A superset.tasks.celery_app:app inspect ping -d celery@$HOSTNAME`.
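As a stopgap while the root cause is investigated, the probe can be relaxed through the chart values so transient ping timeouts stop triggering restarts. A sketch with illustrative numbers; note this hides the symptom rather than fixing it:

```yaml
supersetWorker:
  livenessProbe:
    exec:
      command:
        - sh
        - -c
        - celery -A superset.tasks.celery_app:app inspect ping -d celery@$HOSTNAME
    timeoutSeconds: 120    # give the broker round-trip more headroom
    failureThreshold: 6    # require several consecutive failures before a restart
```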
Bug description
After upgrading to 4.0.0, the workers keep restarting.
How to reproduce the bug
Install the Helm chart with this values.yaml
Screenshots/recordings
Logs from the worker pod
Superset version
4.0.0
Python version
3.10
Node version
16
Browser
Chrome
Additional context
No response
Checklist