📝 Update docs for ORJSONResponse
with details about improving performance
#2615
Conversation
Codecov Report
@@ Coverage Diff @@
## master #2615 +/- ##
=========================================
Coverage 100.00% 100.00%
=========================================
Files 537 537
Lines 13856 13856
=========================================
Hits 13856 13856
📝 Docs preview for commit 8b3b6c4 at: https://5ff7571877e9d83cd685c1ef--fastapi.netlify.app
📝 Docs preview for commit 2071f19 at: https://601eb84f0aa75d20b894c8cf--fastapi.netlify.app
…nel histogram bounds) See tiangolo/fastapi#2615
Awesome, thank you @falkben! ☕ I updated the text a little bit to explain the
Update docs to mention explicitly returning a `Response` when thinking about performance. Currently the docs say to use `orjson` for speed, but then demonstrate returning a dictionary, which in my experience can be very slow. The main speed improvement I have found is in returning Responses directly, which avoids the call to `jsonable_encoder`.

Below are results from some tests I did with 7 MB+ JSON data. These are real-world numbers that include round trips to the database, so seeing such a huge effect is rather amazing to me.

To be clear, the difference here is just returning an explicit `ORJSONResponse` object, vs. a `JSONResponse` object, vs. a `dict` and letting FastAPI inspect it. The data returned are exactly the same in all cases.

For more examples with code demonstrating the differences, see here:
https://github.com/falkben/fastapi_experiments/blob/master/experiments/orjson_response.py
and some tests here:
https://github.com/falkben/fastapi_experiments/blob/master/experiments/test_orjson_response.py
In those tests I observed a 6x slowdown when returning a dictionary vs. returning a Response directly.
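The overhead at issue can be illustrated with a small stdlib-only sketch. Here a hypothetical `naive_jsonable_encoder` stands in for FastAPI's `jsonable_encoder` (the real function handles far more types, e.g. Pydantic models and datetimes); the point is only that the dict path pays for a full extra pass over the data before serialization, which an explicit Response skips:

```python
import json
import time

def naive_jsonable_encoder(obj):
    """Hypothetical stand-in for fastapi.encoders.jsonable_encoder:
    recursively walks the object tree and converts it to plain
    JSON-compatible types. This extra pass is what returning an
    explicit Response avoids."""
    if isinstance(obj, dict):
        return {str(k): naive_jsonable_encoder(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [naive_jsonable_encoder(v) for v in obj]
    return obj

# Synthetic payload, a few MB of nested JSON-like data
data = {"rows": [{"id": i, "vals": list(range(20))} for i in range(5000)]}

t0 = time.perf_counter()
direct = json.dumps(data)  # what an explicit Response does: serialize as-is
t1 = time.perf_counter()
via_encoder = json.dumps(naive_jsonable_encoder(data))  # dict path: extra pass
t2 = time.perf_counter()

assert direct == via_encoder  # output is byte-for-byte identical either way
print(f"direct: {t1 - t0:.4f}s  with encoder pass: {t2 - t1:.4f}s")
```

The two results are identical; only the time spent differs, and on large payloads the encoding pass dominates.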
This was also discussed in the following issue: #360
At some point it would be good to try to optimize `jsonable_encoder`, but for now I think we should at least tell people about the performance impact of returning large dictionaries.