Pipeline run status stuck on running with qcluster timeout or ctrl-c #594
Labels: bug, enhancement, help wanted, low priority
If a run is killed or hits the qcluster timeout, its status is left as 'running'. It really should be switched to 'error'.
It's possible to catch such events using signal handling, for example:
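A minimal sketch of what that signal handling could look like. The `run` dict is a hypothetical stand-in for the pipeline's run record (in the real code this would be the Django model instance); the handler flips the status to 'error' before letting the process exit:

```python
import signal

# Hypothetical in-memory stand-in for the pipeline's run record;
# the real code would update the Django model and save it.
run = {"status": "running"}

def mark_run_errored(signum, frame):
    # Flip the stuck status so the run is not left as 'running'
    if run["status"] == "running":
        run["status"] = "error"
    # Propagate the interruption so the process still shuts down
    raise SystemExit(1)

# Catch Ctrl-C (SIGINT) and the SIGTERM a timeout kill typically sends
signal.signal(signal.SIGINT, mark_run_errored)
signal.signal(signal.SIGTERM, mark_run_errored)
```

This only covers catchable signals; SIGKILL (e.g. an OOM kill) cannot be intercepted this way, so it would not fix every stuck run.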
However, when I tried this approach, it breaks if the signal arrives while the Dask multiprocessing is taking place: the process still crashes out. I could not find a way to either gracefully wait for the child processes to finish or kill them early. Outside of Dask it works fine, so it might be along the right track.
I'm not sure how feasible this is; it's also possible to just accept this behaviour and require admins to sort out the run manually.
Side note: Django-q is also a bit of a pain here because, as it stands, a run that times out will always be retried at least once. So you could argue that leaving the status as 'running' is beneficial in this case, since the run will just exit on the second attempt.
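For reference, the retry behaviour comes from django-q's `Q_CLUSTER` settings: if `retry` is not set comfortably above `timeout`, the broker re-queues a task it thinks was lost. A sketch with illustrative values (the `name` and the numbers are assumptions, not this project's actual config):

```python
# settings.py -- illustrative values only
Q_CLUSTER = {
    "name": "pipeline",
    "timeout": 3600,  # kill a worker task after one hour
    "retry": 7200,    # only re-queue if unacknowledged after two hours
}
```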