Switch to json-iterator for v1 api. #3536
Conversation
```diff
@@ -586,6 +586,7 @@ func respond(w http.ResponseWriter, data interface{}) {
 	w.Header().Set("Content-Type", "application/json")
 	w.WriteHeader(http.StatusOK)
 
+	json := jsoniter.ConfigCompatibleWithStandardLibrary
 	b, err := json.Marshal(&response{
```
One issue we had here too was serializing to a byte slice and then writing it to `w`, rather than encoding directly into `w`. With multi-megabyte responses, that's probably quite a bit of slowdown. Can we fix this while we're at it?
That doesn't appear possible with the APIs.
Hm, I just checked the godoc and am a bit puzzled. The `jsoniter.ConfigCompatibleWithStandardLibrary` variable is a `Config` object. From all I can see, this object does not have a `Marshal` method as it is being used here, so it shouldn't compile. It seems `Config.Froze` must be called to retrieve an object with the `Marshal` method. Is the vendored dependency possibly out of date?
The API object that `Froze` returns also has a `NewEncoder` method, which we can use with the response writer to do it without the extra byte slice.
Did you do any manual comparisons of how much of a speed boost this gives us?
See the PR description :)
Heh, oops, my bad. See my comment regarding streaming. Would be curious how much extra speed boost that could give.
Streaming actually seems to increase CPU usage ~10% compared to `Marshal`, which is a little surprising.
@fabxc do you have any further comments?
There are larger performance gains in prometheus/common#112.
Have you performed an end-to-end benchmark for a fair comparison?
@brian-brazil end-to-end meaning the HTTP API response? The PR already includes data serializing an entire matrix response (which is pretty much end-to-end). I have also posted a complete end-to-end number (3m -> 16s) in my other Prometheus issue (#3601 (comment)).
@brian-brazil I've also submitted a PR (#3608) which has data on full end-to-end API calls.
Force-pushed dfb9231 to 3bc08cf
I've worked a bit more on this:
json-iterator/go#234 is the change to improve json-iterator; if that gets in, we're looking at a total improvement of ~5.5x on JSON encoding time and an elimination of per-Point allocations.
@brian-brazil I can't seem to find the code for the benchmark.
It's in the PR.
It seems that the structs involved have moved a decent amount since my last benchmarks. I've created a branch on my fork with an implementation of this in easyjson for comparison. To start off, I'll show the comparison of the baseline and this jsoniter one:
Here are the compared results for baseline and easyjson:
And for simpler comparison here is the data comparing jsoniter to easyjson:
The benchmarks here show a ~21% performance improvement, ~50% fewer allocs, and ~93% fewer bytes allocated. I then pulled down your jsoniter patch and compared easyjson to jsoniter (with the float improvements):
With this we see that it is ~13% faster. The tradeoff is that it allocates significantly more memory (~5x), albeit in fewer allocations. To be fair, my easyjson implementation here was done in ~30m, so there's definitely room for some improvement; at this point I'd say the performance between the two seems about the same (which is quite impressive!). The main remaining difference between the two is that jsoniter still does a full marshal and then a write (meaning the marshal can't be canceled). IMO this can still be a large concern, as some responses take quite a while to marshal out. That being said, speeding it up dramatically like this will definitely alleviate that concern (even if it doesn't solve it outright).
web/api/v1/api.go
```go
	jsoniter.RegisterTypeEncoderFunc("promql.Point", MarshalPointJSON, MarshalPointJSONIsEmpty)
}

func MarshalPointJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
```
@brian-brazil This marshal function seems terribly out of place. The struct is defined in another package (promql), and that's also where all the tests etc. reside. Leaving this function here means that these two packages have to be kept up to date with each other (and that someone making changes in the promql package needs to know about testing over here). It seems a better place for this would be the promql package (where the struct is defined).
I didn't feel it appropriate to tie jsoniter into the promql package. This is not a struct that is going to change, as that would break all users.
@jacksontj I have discussed this with some of the other developers, and while the gains of easyjson are nice, the maintenance overhead is just too high. json-iterator gives us basically the same gains with no notable maintenance overhead.
Force-pushed 3bc08cf to 563c767
Upstream has taken my PR (and then improved upon it), so I've updated my PR to re-vendor. I think this is now ready to be taken in.
Do you think this is safe to land in 2.2? I think I would feel more comfortable with this being RCed as well. |
This makes queries ~15% faster and cuts cpu time spent on json encoding by ~40%.
Point has a non-standard marshalling, and is also where the vast majority of CPU time is spent so it is worth optimising.
LGTM
@brian-brazil do you have benchmarks from this PR? I'm doing work over on prometheus/client_golang#570, and from my testing there I'm not seeing this ~40% reduction; as a matter of fact, I'm getting on-par performance between encoding/json and jsoniter.