Concurrent requests not really parallel #1625
Since this is not the first time we've received performance-related issues lately, I will try to reproduce the issue. Will keep you posted.
Thanks! The code is from the duplexmedia/parallel-pagespeed Composer package.
Here are my tests: https://github.com/reproduce/guzzle/tree/issue1625. Using the example code above, I couldn't reproduce the issue. There are some higher load results, but my internet connection is pretty slow at the moment, so you might want to run the tests yourself as well. You could also install Blackfire on your server and check whether there are environmental factors blocking your requests, although you need the Premium version to see HTTP response times separately.
Thank you very much for the extensive testing! I will look into the results and see if I can make more sense out of them.
EDIT: Never mind, I'm stupid. Vagrant was indeed causing the issues.
Are these dedicated servers in the same network as in your first tests? I've just tried it in another network with roughly the same results. Can you please try your code in a different network? Could you also spin up my reproduction code with Blackfire? Maybe we can catch the issue there.
Ah man, results get better when running on a dedicated server, but not by that much. I'll try to spin up that benchmarking code. :) No, the first tests I made came from my local machine through the company network (which is pretty good); the results I'm posting now come from a data center somewhere in Germany. So it's a different network.
These are my results from our local network:
Notice that some results are around the 23s mark, while others are at around 6s.
The tests also show that cURL itself seems to be responsible, since
Actually, it only shows how long cURL ran, and that includes the network traffic as well. My example also includes around 25 URLs, and 23s (which is what you measured) is not that bad a result IMO, given what kind of service you're calling. Can you do some network profiling to see whether it is actually the client that is responsible for the high load times in your application (not in the reproduction code)? Also, you could try my example with your URLs to see if there is a bottleneck there.
Any news, @NeoLegends? As far as I can see, it's either a cURL issue or, for some reason, running insights for your URLs takes too long.
Yeah, it seems like a problem with cURL or the sites I've tested. We worked around the problem by issuing the requests from the browser.
All right, closing then.
We're using Guzzle to query Google's PageSpeed API concurrently, as per the docs, by creating multiple asynchronous requests with `$client->getAsync(...)` and then waiting for the results via `Promise\settle(...)`.

However, when firing lots of requests (30+), the server takes extremely long (about 5 minutes) to finish, while the same number of requests fired locally from Node takes only about 20s. 20s is what I would expect when all the requests are sent in parallel, since that is roughly the time the PageSpeed API takes for a full analysis of a reasonably large site. 5 minutes, however, is by far too much.
This is the code that creates the requests:
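(The original snippet isn't reproduced here; the following is a minimal sketch of the pattern described above, `getAsync` per URL followed by `Promise\settle`. The PageSpeed endpoint, API key, and `$urls` list are illustrative assumptions, not the actual code.)

```php
<?php
// Sketch only: one getAsync() per URL, then settle all promises together.
// Endpoint, API key, and URL list below are hypothetical placeholders.

require 'vendor/autoload.php';

use GuzzleHttp\Client;
use GuzzleHttp\Promise;

$client = new Client(['base_uri' => 'https://www.googleapis.com/pagespeedonline/v2/']);

$urls = ['https://example.com', 'https://example.org']; // hypothetical targets

$promises = [];
foreach ($urls as $url) {
    // getAsync() returns a promise immediately; with the default handler the
    // transfers are driven through a curl_multi handle once the promises are waited on.
    $promises[$url] = $client->getAsync('runPagespeed', [
        'query' => ['url' => $url, 'key' => 'YOUR_API_KEY'], // hypothetical key
    ]);
}

// Wait until every request has either succeeded or failed.
$results = Promise\settle($promises)->wait();

foreach ($results as $url => $result) {
    echo $url, ': ', $result['state'], PHP_EOL; // 'fulfilled' or 'rejected'
}
```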
My guess is that either cURL multi handles aren't being used the way I expect, or there is some hidden blocking in this code that I don't know about. Any idea why this takes so long?
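One way to check whether the requests actually overlap (rather than run one after another) is Guzzle's `on_stats` request option, which reports per-transfer timing. A rough diagnostic sketch, not taken from the issue, assuming Guzzle 6+ and the `$urls` list from the snippet above:

```php
<?php
// Diagnostic sketch: log when each request completes relative to a common start
// time and how long its transfer took. If completion times cluster around the
// individual transfer time, the requests ran in parallel; if they are staggered,
// something is serializing them.

require 'vendor/autoload.php';

use GuzzleHttp\Client;
use GuzzleHttp\Promise;
use GuzzleHttp\TransferStats;

$client = new Client();
$start  = microtime(true);

$promises = [];
foreach ($urls as $url) { // $urls as in the sketch above
    $promises[] = $client->getAsync($url, [
        'on_stats' => function (TransferStats $stats) use ($start) {
            printf(
                "%s finished %.1fs after start, transfer took %.1fs\n",
                $stats->getEffectiveUri(),
                microtime(true) - $start,
                $stats->getTransferTime()
            );
        },
    ]);
}

Promise\settle($promises)->wait();
```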