
enhancement: consider using sync.Pool #3329

Closed
zllovesuki opened this issue Feb 9, 2022 · 3 comments

Comments

@zllovesuki
Contributor

There were pending PRs regarding reusing buffers (#2928, #2885), but they seem to be stale at the moment.

Quick profiling on relatively idle servers:

```
> go tool pprof .\allocs
File: t-server-linux-amd64
Type: alloc_space
Time: Feb 9, 2022 at 7:41am (PST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top5
Showing nodes accounting for 612.02MB, 68.79% of 889.75MB total
Dropped 288 nodes (cum <= 4.45MB)
Showing top 5 nodes out of 122
      flat  flat%   sum%        cum   cum%
  517.51MB 58.16% 58.16%   517.51MB 58.16%  github.com/hashicorp/memberlist.(*NetTransport).udpListen
   29.50MB  3.32% 61.48%    29.50MB  3.32%  github.com/lucas-clemente/quic-go.(*packetPacker).getShortHeader
      23MB  2.59% 64.06%       23MB  2.59%  crypto/aes.newCipher
      21MB  2.36% 66.43%       21MB  2.36%  github.com/lucas-clemente/quic-go/internal/wire.(*Header).toExtendedHeader
      21MB  2.36% 68.79%    48.51MB  5.45%  github.com/lucas-clemente/quic-go.(*oobConn).ReadPacket
> go tool pprof .\allocs.2
Type: alloc_space
Time: Feb 9, 2022 at 7:44am (PST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top5
Showing nodes accounting for 587.59MB, 67.67% of 868.35MB total
Dropped 320 nodes (cum <= 4.34MB)
Showing top 5 nodes out of 148
      flat  flat%   sum%        cum   cum%
  484.50MB 55.80% 55.80%      485MB 55.85%  github.com/hashicorp/memberlist.(*NetTransport).udpListen
      36MB  4.15% 59.94%       36MB  4.15%  github.com/lucas-clemente/quic-go.(*packetPacker).getShortHeader
   25.51MB  2.94% 62.88%    25.51MB  2.94%  crypto/aes.newCipher
      22MB  2.53% 65.41%       32MB  3.69%  github.com/lucas-clemente/quic-go.(*packetPacker).composeNextPacket
   19.59MB  2.26% 67.67%    19.59MB  2.26%  bytes.makeSlice
```

Disregarding memberlist, quic-go is the second largest allocator in the program. Maybe we should consider the sync.Pool solution?
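For context, a sync.Pool for receive buffers would look roughly like the sketch below. This is only an illustration, not quic-go's actual code: the pool, the `maxPacketSize` constant and the helper names are made up here.

```go
package main

import (
	"fmt"
	"sync"
)

// maxPacketSize is an assumed upper bound for a received UDP datagram.
const maxPacketSize = 1452

// bufferPool hands out reusable packet buffers instead of allocating a
// fresh slice for every read.
var bufferPool = sync.Pool{
	New: func() interface{} {
		b := make([]byte, maxPacketSize)
		return &b
	},
}

func getPacketBuffer() *[]byte  { return bufferPool.Get().(*[]byte) }
func putPacketBuffer(b *[]byte) { bufferPool.Put(b) }

func main() {
	buf := getPacketBuffer()
	// ... read a UDP datagram into (*buf)[:maxPacketSize] and process it ...
	fmt.Println("buffer capacity:", cap(*buf))
	// Return the buffer only once nothing references it anymore.
	putPacketBuffer(buf)
}
```

The hard part is knowing when a buffer can safely be returned: putting it back while another goroutine still reads from it corrupts data.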

@marten-seemann
Member

The culprit seems to be packet headers, not frames. Or am I missing something?

You're right, in #2928 I started implementing a pool for packet headers, but it turned out to be a huge mess, so this PR never got merged.
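To illustrate where the mess tends to come from (a hypothetical sketch, not the code from #2928): a pooled header can only go back into the pool once every consumer is done with it, which forces reference counting or strict ownership rules through the parsing and processing path.

```go
package headerpool

import (
	"sync"
	"sync/atomic"
)

// header stands in for a parsed packet header; quic-go's real types and
// fields look different.
type header struct {
	destConnID []byte
	packetNum  uint64
	refs       int32 // consumers that still hold this header
}

var headerPool = sync.Pool{New: func() interface{} { return &header{} }}

func getHeader() *header {
	h := headerPool.Get().(*header)
	// Reset every field so no stale data leaks between packets.
	h.destConnID = h.destConnID[:0]
	h.packetNum = 0
	atomic.StoreInt32(&h.refs, 1)
	return h
}

// retain registers an additional consumer before the header is handed off.
func (h *header) retain() { atomic.AddInt32(&h.refs, 1) }

// release must be called by every consumer; only the last call returns the
// header to the pool. Spreading this bookkeeping across parsing, decryption
// and session handling is where the complexity comes from.
func (h *header) release() {
	if atomic.AddInt32(&h.refs, -1) == 0 {
		headerPool.Put(h)
	}
}
```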

zllovesuki changed the title from "enhancement: consider using sync.Pool for frames" to "enhancement: consider using sync.Pool" on Feb 9, 2022
@zllovesuki
Contributor Author

The title was misleading 🙃. I meant to convey that quic-go seems to be allocating a lot of buffers.

@marten-seemann
Member

We now use a sync.Pool for ACK frames (#3547).
#3526 is tracking further work to reduce allocations, including those of header structs.
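For readers landing here later, a frame pool along these lines is roughly what that looks like; the type and helper names below are illustrative stand-ins, not necessarily the internal API added in #3547.

```go
package ackpool

import "sync"

// AckRange and AckFrame are illustrative stand-ins for quic-go's internal
// wire types.
type AckRange struct{ Smallest, Largest int64 }

type AckFrame struct {
	AckRanges []AckRange
	DelayTime int64
}

var ackFramePool = sync.Pool{New: func() interface{} { return &AckFrame{} }}

// GetAckFrame returns a cleared frame from the pool.
func GetAckFrame() *AckFrame {
	f := ackFramePool.Get().(*AckFrame)
	f.AckRanges = f.AckRanges[:0] // keep the backing array, drop old ranges
	f.DelayTime = 0
	return f
}

// PutAckFrame hands the frame back once the packet it belongs to has been
// serialized and nothing else holds a reference to it.
func PutAckFrame(f *AckFrame) { ackFramePool.Put(f) }
```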
