httpx.ReadError (anyio.BrokenResourceError) when sending large request (target dependent) #3067
-
When I send a request with a large body (let's say something around 1 MiB), I conditionally receive `httpx.ReadError` (raised from `anyio.BrokenResourceError`). For instance, it throws for the following snippet:

```python
import asyncio
import logging

from dotenv import load_dotenv
from httpx import Limits, Timeout, AsyncClient

load_dotenv()

logging.basicConfig(level=logging.DEBUG)
logging.getLogger("httpx").setLevel(logging.DEBUG)

model_id = "google/flan-t5-xl"


async def main():
    client = AsyncClient(
        # arbitrary limits (does not affect the result)
        limits=Limits(max_connections=10, max_keepalive_connections=0),
        timeout=Timeout(
            timeout=100,
            connect=100,
            read=100,
            pool=100,
            write=100,
        ),
    )
    async with client:
        mib_in_bytes = 1024**2
        multiplier = 1  # for 0.1 the requests pass; server dependent
        input = "A" * int(mib_in_bytes * multiplier)
        await client.post(
            "https://google.com",  # TARGET SERVER
            json={"input": input},
        )


if __name__ == "__main__":
    result = asyncio.run(main())
    print("OK")
```

Output
Additional Info:
Thanks for the library, by the way! We do like it at https://github.com/IBM/ibm-generative-ai 🚀
-
Hi @Tomas2D, I think I am seeing a similar issue with mine and pulling my hair out trying to figure out what's wrong. Did you manage to get around this, please?
The first thing to do here is simplify the example.
I started out by just removing the async, and checking I was getting the same results.
Then I made the example as simple as possible.
Which is...
Or, using a different client to test the behaviour...
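(The alternative client isn't shown either. One way to cross-check the server's behaviour, assuming the `requests` library as the second client, would be something like:)

```python
import requests

# Same ~1 MiB body as the httpx reproduction.
payload = {"input": "A" * 1024**2}

try:
    # requests surfaces the same server behaviour differently:
    # either a response object (e.g. 405), or a ConnectionError
    # when the peer closes the socket while the body is uploading.
    response = requests.post("https://google.com", json=payload, timeout=100)
    print("status:", response.status_code)
except requests.exceptions.ConnectionError as exc:
    print("connection closed:", exc)
```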
The behaviour is similar in both cases: sometimes you'll get a neatly formed 405 response, other times the server outright closes the connection. The one thing we could do here is aim to improve the error messaging, and make it clearer that the server closed the connection.