
Is it intended that any error from a handler makes Server.handle_stream close the comm? #5483

Open
gjoseph92 opened this issue Oct 29, 2021 · 1 comment

Comments

@gjoseph92
Collaborator

In #5480, an unexpected error (#5482) in a stream handler caused the entire stream to close. It turns out we didn't properly reconnect when the worker<->scheduler stream closed (#5481), so things got into a broken state and deadlocked.

I'm curious, though, whether it makes sense for handle_stream to close the whole stream in this case. You can see here that there are a couple of unhandled error cases that would cause the comm to close:

handler = self.stream_handlers[op]
if is_coroutine_function(handler):
    self.loop.add_callback(handler, **merge(extra, msg))
    await gen.sleep(0)
else:
    handler(**merge(extra, msg))

  • stream_handlers[op] does not exist (message requesting an invalid op).
  • handler(**merge(extra, msg)) raises an error (what happened here).
  • Interestingly, if handler is async, the stream will stay open even if the handler fails when it eventually runs. Why the inconsistency with the synchronous case? (A small standalone sketch of the asymmetry follows this list.)
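
To make the asymmetry concrete, here is a minimal standalone sketch (none of it is distributed code; the handler names and fake_stream_loop are made up for illustration). An exception from a callback scheduled via Tornado's IOLoop.add_callback is only logged by the loop, while an exception from a direct synchronous call propagates into the caller, i.e. into handle_stream's read loop:

import asyncio
import logging

from tornado.ioloop import IOLoop

logging.basicConfig(level=logging.ERROR)


def sync_handler(**msg):
    raise ValueError("bad message")


async def async_handler(**msg):
    raise ValueError("bad message")


async def fake_stream_loop():
    loop = IOLoop.current()

    # Async path: the error surfaces later, inside the scheduled callback,
    # so Tornado only logs it and this loop keeps running.
    loop.add_callback(async_handler, op="example")
    await asyncio.sleep(0.1)
    print("stream loop still alive after async handler failure")

    # Sync path: the error propagates right here; in handle_stream this is
    # what escapes the read loop and leads to the comm being closed.
    try:
        sync_handler(op="example")
    except ValueError:
        print("sync handler failure reaches the stream loop")


asyncio.run(fake_stream_loop())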

I'm not quite sure what the protocol is meant to be here. Is closing the comm the only way we have to tell the sender that something went wrong, so we're using it as a signal? Or do we believe that after any failure, we can't trust subsequent messages to be valid, so we should give up and wait to restart the connection if desired?
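
For concreteness, suppressing the error per message instead would look roughly like this. This is only a hypothetical sketch: dispatch() and its arguments are made up for illustration (the names mirror the snippet above), and the catch-all can only log, since we don't know what failed:

import logging

logger = logging.getLogger(__name__)


def dispatch(stream_handlers, msgs, extra):
    """Hypothetical per-message dispatch that logs handler errors
    instead of letting them escape and close the comm."""
    for msg in msgs:
        msg = dict(msg)
        op = msg.pop("op", None)
        handler = stream_handlers.get(op)
        if handler is None:
            # Today an unknown op raises and closes the comm; here it is only logged.
            logger.error("Received message with unknown op %r", op)
            continue
        try:
            handler(**extra, **msg)
        except Exception:
            # The stream stays open, but the sender never learns that its
            # message failed; that is exactly the signalling question above.
            logger.exception("Error handling op %r", op)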

@gjoseph92 gjoseph92 changed the title Should Server.handle_stream suppress exceptions from handlers? Is it intended that any error from a handler makes Server.handle_stream close the comm? Oct 29, 2021
@fjetter
Member

fjetter commented Nov 2, 2021

Interestingly, if handler is async, the stream will stay open even if the handler fails when it eventually runs. Why the inconsistency with the synchronous case?

Long-known problem. While trying to fix this I ran into a bazillion other smaller inconsistencies, and that escalated into an unmergeable PR; see #4734. Also very interesting in this space is #5443.

I'm not quite sure what the protocol is meant to be here. Is closing the comm the only way we have to tell the sender that something went wrong, so we're using it as a signal? Or do we believe that after any failure, we can't trust subsequent messages to be valid, so we should give up and wait to restart the connection if desired?

I believe closing the stream is a radical but safe way to deal with this. I'm not sure how else you'd like to handle that exception, since you do not know what the exception is and therefore cannot implement a clean exception handler. However, what's even more important than whether or not we close the stream is how the re-raised exception is handled, since handle_stream is always wrapped:

try:
    await self.handle_stream(
        comm, every_cycle=[self.ensure_communicating, self.ensure_computing]
    )
except Exception as e:
    logger.exception(e)
    raise
finally:
    if self.reconnect and self.status in RUNNING:
        logger.info("Connection to scheduler broken. Reconnecting...")
        self.loop.add_callback(self.heartbeat)
    else:
        await self.close(report=False)

try:
    await self.handle_stream(comm=comm, extra={"client": client})
finally:
    self.remove_client(client=client)
    logger.debug("Finished handling client %s", client)

try:
    await self.handle_stream(comm=comm, extra={"worker": worker})
finally:
    if worker in self.stream_comms:
        worker_comm.abort()
        await self.remove_worker(address=worker)

What they all have in common is that they re-trigger a removal of the involved server, which in turn may reconnect. I would argue the stream handling itself is fine. If any problems pop up, that's related to the reconnects.
