Symfony version(s) affected
7.0
Description
We are using Symfony Messenger directly through a Kubernetes command, without Supervisor. Example of the configuration:
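(The configuration block itself did not survive in this report. As a purely hypothetical sketch of the setup being described, where the name, image, and transport are invented, the pod runs messenger:consume with a one-hour time limit and relies on the container restart policy to recycle the worker:)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: messenger-worker            # invented name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: messenger-worker
  template:
    metadata:
      labels:
        app: messenger-worker
    spec:
      containers:
        - name: worker
          image: example/app:latest  # invented image
          # The worker is expected to exit with code 0 after one hour; the
          # kubelet then restarts the container (restartPolicy defaults to
          # Always for Deployment pods).
          command: ["php", "bin/console", "messenger:consume", "async", "--time-limit=3600"]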
We expect that, 3600 seconds after starting, Messenger will finish with exit code 0 and the pod will gracefully restart our worker. In most cases this happens as it should, but sometimes the worker freezes without any exit code.
The last thing we see in STDOUT is:
{"message":"Worker stopped due to time limit of 3600s exceeded","context":{},"level":200,"level_name":"INFO","channel":"messenger","datetime":"2024-04-10T08:39:37.697238+00:00","extra":{}}
After that, no further output ever appears; the log line above stays the most recent one forever. Meanwhile, Kubernetes keeps treating the pod in this idle state as if it were working normally.
How to reproduce
The issue happens rarely, which is why it is hard to provide exact steps to reproduce it. To reproduce, run a pod with the provided configuration and wait until the worker goes into this "idle" state. It can take a few hours or even days; the issue is sporadic in nature.
Possible Solution
We have two possible strategies to resolve this issue (see the sketch after this list):

Run Messenger under supervisord
Handle WorkerStoppedEvent and throw an exception when the time limit is reached, to force pod recreation

Neither solution is ideal.
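A minimal sketch of the second strategy, assuming a standard Symfony setup with autoconfiguration (the class name is invented; WorkerStoppedEvent is the real Messenger event):

<?php

namespace App\EventListener;

use Symfony\Component\EventDispatcher\Attribute\AsEventListener;
use Symfony\Component\Messenger\Event\WorkerStoppedEvent;

// Hypothetical listener: once the worker has stopped (e.g. because of
// --time-limit), fail loudly instead of letting the process linger, so the
// container exits non-zero and Kubernetes recreates it.
#[AsEventListener]
final class ForceExitOnWorkerStoppedListener
{
    public function __invoke(WorkerStoppedEvent $event): void
    {
        // Throwing here propagates out of messenger:consume, which then
        // terminates with a non-zero exit code; the pod's restart policy
        // takes over from there.
        throw new \RuntimeException('Worker stopped; forcing a non-zero exit so the pod is recreated.');
    }
}

Note the trade-off: this event fires on any stop, including signal-driven shutdowns during rolling updates, so it replaces the clean exit code 0 with a guaranteed restart, which is part of why we consider neither option ideal.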
Additional Context
Is it really important to manage workers with Supervisor, or is it just a recommendation? What is the motivation behind this recommendation in the official Symfony docs?
It's just an example. You can manage your worker however you want.