
Shield send "http.response.start" from cancellation (BaseHTTPMiddleware) #1710

Closed
acjh wants to merge 5 commits

Conversation

@acjh commented Jun 27, 2022

Fixes #1634

await recv_stream.receive() will raise anyio.EndOfStream if request is disconnected, due to:

  • task_group.cancel_scope.cancel() in StreamingResponse.__call__.<locals>.wrap and
  • cancellation check in await checkpoint() of MemoryObjectSendStream.send,

and then RuntimeError: No response returned. will be raised in BaseHTTPMiddleware.

Let's shield send "http.response.start" from cancellation, since the message is ready to be sent to the receiver.

This is an alternative implementation of #1706 in BaseHTTPMiddleware instead of StreamingResponse.
We should not force the shielding in StreamingResponse, since the cancellation check is an intended feature of MemoryObjectSendStream. BaseHTTPMiddleware, which uses both, should be responsible for the compatibility.

@adriangb (Member) commented Jun 27, 2022

I tried my patch to base.py from #1706 (comment) on this branch, and the current test (the one in this branch, not in the patch) also passes. (I can confirm the test fails on master, so it's not just that it passes on any branch.) I do think your solution is nicer in some sense, but does it fix any edge cases that the raise-an-exception solution doesn't?

Co-authored-by: Adrian Garcia Badaracco <1755071+adriangb@users.noreply.github.com>
@acjh (Author) commented Jun 27, 2022

I don't think this solution fixes any additional edge cases.

Both solutions pretty much target this specific edge case, where StreamingResponse does listen_for_disconnect and handles the disconnection in a way that is not compatible with how MemoryObjectSendStream is used in BaseHTTPMiddleware.

@adriangb (Member) commented:

If there are no pros to doing cancellation, I would prefer to just raise an exception, but that's just my opinion and I'm biased, so I'd like others to weigh in.

Otherwise, this PR looks good and I'd be happy to move forward with it, thank you for the contribution and your persistence @acjh !

@acjh (Author) commented Jun 27, 2022

> If there are no pros to doing cancellation

I believe you mean not doing cancellation. More accurately, it's temporarily shielding from cancellation.

> I would prefer to just raise an exception

I'm totally fine with closing this PR in favour of that, btw. 😄

This PR simply handles the compatibility between StreamingResponse and usage of anyio memory streams.
Raising an exception takes control of the flow by forcing exception handling, ignoring the streams' behaviour.

@adriangb (Member) commented:

> If there are no pros to doing cancellation
>
> I believe you mean not doing cancellation. More accurately, it's temporarily shielding from cancellation.

Yep sorry for being unclear

> I would prefer to just raise an exception
>
> I'm totally fine with closing this PR in favour of that, btw. 😄
>
> This PR simply handles the compatibility between StreamingResponse and usage of anyio memory streams. Raising an exception takes control of the flow by forcing exception handling, ignoring the streams' behaviour.

Let's just wait to see what others think, I'm also fine with either 😄

@acjh (Author) commented Jun 27, 2022

Sure. Thank you for your patience and prompt responses!

@scottherlihy commented:

@acjh @adriangb From my perspective, I do not think an exception should be thrown, because I don't view this as an error condition, and I'm curious what your thoughts are.

We are encountering this while using a callback-URL pattern with long-running tasks. The client posts a request that includes a callback URL where the response should be sent, so the client disconnects immediately. We have all the information we need to send the response and don't need an active client connection, yet this issue intermittently causes responses to be dropped via the error documented above.

We can certainly catch and handle a thrown error, but I don't think there should be an expectation that the client stays connected for the lifetime of the process. Does this make sense, or is there something I'm missing?

@acjh (Author) commented Jun 30, 2022

Are you using background tasks? This PR does not fix #1438.

@scottherlihy commented:

@acjh We are. I will read more about that other ticket, thank you.

@Kludex closed this in #1715 on Sep 24, 2022.
mmcfarland added a commit to microsoft/planetary-computer-apis that referenced this pull request Oct 24, 2022
The 0.21 release resolves a frequent error on our fastapi version.

See:
encode/starlette#1710
encode/starlette#1715
mmcfarland added a commit to microsoft/planetary-computer-apis that referenced this pull request Oct 25, 2022
* Temporarily use fork for starlette 0.21 release

The 0.21 release resolves a frequent error on our fastapi version.

See:
encode/starlette#1710
encode/starlette#1715

* Disable FTP as function app deploy option

Security controls

* Trace request attributes before invoking middleware

If an exception is raised in subsequent middlewares, added trace
attributes will still be logged to Azure. This allows us to find
requests that fail in the logs.

* Make config cache thread safe

cachetools cache is not thread safe and there were frequent exceptions
logged indicating that cache updates during async calls were failing
with key errors similar to those described in:

tkem/cachetools#80

Adding a lock per table instance synchronizes cache updates across
threads.

* Lint

* Changelog
Successfully merging this pull request may close these issues.

Seemingly random error RuntimeError: No Reponse returned.