anyio.BrokenResourceError when using BaseHTTPMiddleware #1284

Closed

rad-pat opened this issue Sep 9, 2021 · 7 comments

rad-pat commented Sep 9, 2021

Checklist

  • The bug is reproducible against the latest release and/or master.
  • There are no similar issues or pull requests to fix it yet.

Describe the bug

An anyio.BrokenResourceError is observed when using a very simple custom middleware based on BaseHTTPMiddleware.

To reproduce

Server

import uvicorn
from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.responses import Response
from starlette.routing import Route
from starlette.requests import Request

class MyMiddleware(BaseHTTPMiddleware):
    async def dispatch(
        self, request: Request, call_next
    ) -> Response:
        return await call_next(request)

async def ping(request):
    return Response("Some Text")

app = Starlette(
    debug=False,
    routes=[
        Route('/ping', ping, methods=['GET', 'POST']),
    ],
    middleware=[
        Middleware(MyMiddleware),
    ]
)

uvicorn.run(app, host='0.0.0.0', port=8000, access_log=True)

Client

from http.client import HTTPConnection

def ping(x):
    con = HTTPConnection('0.0.0.0', 8000)
    con.request('GET', f'/ping?id={x}')

z = [
    ping(y)
    for y in range(10)
]

Expected behavior

No exceptions are raised; responses are returned.

Actual behavior

An anyio.BrokenResourceError is raised

Debugging material

Traceback
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/plaid/py39/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 371, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/home/plaid/py39/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 59, in __call__
    return await self.app(scope, receive, send)
  File "/home/plaid/py39/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/plaid/py39/lib/python3.9/site-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc
  File "/home/plaid/py39/lib/python3.9/site-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/home/plaid/py39/lib/python3.9/site-packages/starlette/middleware/base.py", line 57, in __call__
    task_group.cancel_scope.cancel()
  File "/home/plaid/py39/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 564, in __aexit__
    raise exceptions[0]
  File "/home/plaid/py39/lib/python3.9/site-packages/starlette/middleware/base.py", line 30, in coro
    await self.app(scope, request.receive, send_stream.send)
  File "/home/plaid/py39/lib/python3.9/site-packages/starlette/exceptions.py", line 82, in __call__
    raise exc
  File "/home/plaid/py39/lib/python3.9/site-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/home/plaid/py39/lib/python3.9/site-packages/starlette/routing.py", line 656, in __call__
    await route.handle(scope, receive, send)
  File "/home/plaid/py39/lib/python3.9/site-packages/starlette/routing.py", line 259, in handle
    await self.app(scope, receive, send)
  File "/home/plaid/py39/lib/python3.9/site-packages/starlette/routing.py", line 64, in app
    await response(scope, receive, send)
  File "/home/plaid/py39/lib/python3.9/site-packages/starlette/responses.py", line 139, in __call__
    await send({"type": "http.response.body", "body": self.body})
  File "/home/plaid/py39/lib/python3.9/site-packages/starlette/exceptions.py", line 68, in sender
    await send(message)
  File "/home/plaid/py39/lib/python3.9/site-packages/anyio/streams/memory.py", line 205, in send
    raise BrokenResourceError
anyio.BrokenResourceError

Environment

  • OS: Ubuntu 18.04
  • Python version: 3.9.6
  • Starlette version: 0.16.0
@iamthen0ise

This is not a Starlette issue: the client aborts the connection immediately after sending the request, so Starlette ends up writing to a closed socket.

This should work:

from http.client import HTTPConnection

def ping(x):
    con = HTTPConnection('0.0.0.0', 8000)
    con.request('GET', f'/ping?id={x}')
    con.getresponse()
    con.close()

z = [
    ping(y)
    for y in range(10)
]

rad-pat commented Sep 10, 2021

If you remove the middleware, add another middleware such as Middleware(SessionMiddleware, secret_key='123'), or downgrade to Starlette 0.14.*, no exception is raised. Surely it should handle gracefully that a connection has been aborted and not generate a traceback?

brakhane added a commit to brakhane/starlette that referenced this issue Oct 20, 2021
ASGI specifies that send is a no-op when the connection is closed.

HTTPMiddleware uses anyio streams which will raise
an exception when the connection is closed.

So we need to ignore the exception in our send function.

Fixes encode#1284
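
For illustration, a minimal sketch of what "ignore the exception in our send function" could look like; the wrapper name is hypothetical and this is not the actual patch:

from anyio import BrokenResourceError
from anyio.streams.memory import MemoryObjectSendStream

def wrap_send(send_stream: MemoryObjectSendStream):
    # Hypothetical helper: forward ASGI messages into the middleware's memory
    # stream, but treat sending after the client has disconnected as a no-op,
    # which is what the ASGI spec asks for.
    async def send(message: dict) -> None:
        try:
            await send_stream.send(message)
        except BrokenResourceError:
            # The receive end is already closed (client went away); swallow
            # the error instead of letting it bubble up as a server error.
            pass
    return send
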
Kludex commented Nov 1, 2021

I can't reproduce it on master after #1262 was merged. Can someone confirm the issue was solved?

@Kludex Kludex mentioned this issue Nov 1, 2021
@havardthom

I was not able to reproduce it on the master branch either.

Kludex commented Nov 2, 2021

Since I can't reproduce this anymore, I'm going to close it.

xkortex commented Sep 13, 2022

I'm still seeing this in my Sentry logs. It seems to occur when the server is overloaded and the client times out or dies.
This is on anyio==3.6.1, fastapi==0.83.0, starlette==0.19.1, uvicorn==0.18.3.

I managed to reproduce it with a slightly modified server app:

app = Starlette(
    debug=True,
    routes=[
        Route('/ping', ping, methods=['GET', 'POST']),
    ],
    middleware=[
        Middleware(MyMiddleware),
        Middleware(MyMiddleware),
    ]
)

and this stressor script:

#!/usr/bin/env bash
URI="http://localhost:8000/ping"
COUNT="${1:-1}"
TIMEOUT="0.1"

do_it () {
curl -k \
 --max-time "${TIMEOUT}" \
 --compressed \
 -H "Accept-Encoding: gzip, compressed" \
 -H "Connection: close" \
 -H "Date: $(date +"%Y-%m-%dT%H:%M:%S%z")" \
 "${URI}" &
}


for i in $(seq "${COUNT}"); do
  do_it
done

Running ./broken_resource.sh 10 seems tolerable; as I increase the count, I start getting tracebacks. Locally, I see:

Traceback for local example
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/Users/mike/.virtualenvs/core38/lib/python3.8/site-packages/anyio/streams/memory.py", line 94, in receive
    return self.receive_nowait()
  File "/Users/mike/.virtualenvs/core38/lib/python3.8/site-packages/anyio/streams/memory.py", line 89, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/mike/.virtualenvs/core38/lib/python3.8/site-packages/starlette/middleware/base.py", line 43, in call_next
    message = await recv_stream.receive()
  File "/Users/mike/.virtualenvs/core38/lib/python3.8/site-packages/anyio/streams/memory.py", line 114, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/mike/.virtualenvs/core38/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/Users/mike/.virtualenvs/core38/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "/Users/mike/.virtualenvs/core38/lib/python3.8/site-packages/starlette/applications.py", line 124, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/mike/.virtualenvs/core38/lib/python3.8/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/Users/mike/.virtualenvs/core38/lib/python3.8/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/Users/mike/.virtualenvs/core38/lib/python3.8/site-packages/starlette/middleware/base.py", line 68, in __call__
    response = await self.dispatch_func(request, call_next)
  File "/Users/mike/ai/sandbox/mike/readai-sandbox/./readai_sandbox/server/repro_broken_resource.py", line 13, in dispatch
    return await call_next(request)
  File "/Users/mike/.virtualenvs/core38/lib/python3.8/site-packages/starlette/middleware/base.py", line 47, in call_next
    raise RuntimeError("No response returned.")
RuntimeError: No response returned.

but in my production application I'm still seeing the BrokenResourceError. I'm thinking that might be due to differences in middleware.

            if self._state.waiting_senders.pop(send_event, None):  # type: ignore[arg-type]
                raise BrokenResourceError

https://github.com/agronholm/anyio/blob/48efdec45e70a833cc939c1d2752f24e29d1bf0b/src/anyio/streams/memory.py#L220-L221

Traceback I see in production
WouldBlock: null
  File "anyio/streams/memory.py", line 209, in send
    self.send_nowait(item)
  File "anyio/streams/memory.py", line 202, in send_nowait
    raise WouldBlock

BrokenResourceError: null
  File "starlette/exceptions.py", line 93, in __call__
    raise exc
  File "starlette/exceptions.py", line 82, in __call__
    await self.app(scope, receive, sender)
  File "fastapi/middleware/asyncexitstack.py", line 21, in __call__
    raise e
  File "fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "starlette/routing.py", line 670, in __call__
    await route.handle(scope, receive, send)
  File "starlette/routing.py", line 266, in handle
    await self.app(scope, receive, send)
  File "starlette/routing.py", line 68, in app
    await response(scope, receive, send)
  File "starlette/responses.py", line 162, in __call__
    await send({"type": "http.response.body", "body": self.body})
  File "starlette/exceptions.py", line 79, in sender
    await send(message)
  File "anyio/streams/memory.py", line 221, in send
    raise BrokenResourceError

According to Sentry, the state of self at the time of the error is

MemoryObjectSendStream(_state=MemoryObjectStreamState(max_buffer_size=0, buffer=deque([]), open_send_channels=1, open_receive_channels=0, waiting_receivers=OrderedDict(), waiting_senders=OrderedDict()), _closed=False)

(I'm guessing this is just after .pop() is called, which is why waiting_senders is an empty dict.)
The ASGI message is {body: b'true', type: 'http.response.body'} (my endpoint in prod just does return True),
and the send_event is of type <anyio._backends._asyncio.Event object at 0x7f68dcc31be0>.

Maybe this is AnyIO's problem at this point?
Possibly: agronholm/anyio#440
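
The stream behaviour above is easy to reproduce in isolation. A minimal sketch (not from the issue, just illustrating the mechanism): once the receive end of an anyio memory object stream has been closed, any further send raises BrokenResourceError, which matches the final frames in the tracebacks above.

import anyio

async def main() -> None:
    # Zero-buffer memory object stream, matching the max_buffer_size=0 seen
    # in the Sentry state above.
    send_stream, receive_stream = anyio.create_memory_object_stream(max_buffer_size=0)
    # Simulate the consumer going away (e.g. the client disconnecting).
    await receive_stream.aclose()
    try:
        await send_stream.send({"type": "http.response.body", "body": b"true"})
    except anyio.BrokenResourceError:
        print("send failed: receive end already closed")

anyio.run(main)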

@laziest-coder

@xkortex I am seeing the same error in a production environment; have you solved the issue? I tried to reproduce the issue on my local machine with your bash script, but unfortunately that didn't work.
