
Tracking Issue: reduce allocations during bulk data transfers #3526

Closed · 8 tasks done

marten-seemann opened this issue Aug 28, 2022 · 8 comments
marten-seemann (Member) commented Aug 28, 2022

A 1 GB transfer using v0.29.0 currently creates about 500–600 MB of allocations on both the sender and the receiver side, as measured by pprof's allocation profile (which counts total allocations over the lifetime of the process).

This is a problem for performance, because allocating memory consumes resources, and more importantly, all of this memory has to be garbage-collected, putting a lot of pressure on the GC.

We need to drastically reduce the number of allocations. The target is a reduction of one order of magnitude.

The worst offenders at the moment seem to be:

Traces

Server: [pprof allocation graph]

Client: [pprof allocation graph]
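
For reference, allocation totals like these can be collected with Go's runtime/pprof package. Below is a minimal sketch of one way to capture such a profile; runTransfer and the output file name are placeholders for the workload being measured, not code from quic-go:

```go
package main

import (
	"os"
	"runtime/pprof"
)

func main() {
	runTransfer() // placeholder for the 1 GB transfer under test

	f, err := os.Create("alloc.pprof")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// The "allocs" profile records every allocation since program start,
	// i.e. the alloc_space view the numbers above refer to.
	if err := pprof.Lookup("allocs").WriteTo(f, 0); err != nil {
		panic(err)
	}
}

func runTransfer() { /* the transfer being measured */ }
```

The resulting file can then be inspected with `go tool pprof alloc.pprof` (for example via the `top` command in the interactive shell).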

zllovesuki (Contributor) commented:

On v0.30.0 the allocations look way better:

File: specter-linux-amd64
Type: alloc_space
Time: Oct 30, 2022 at 4:30am (PDT)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top5
Showing nodes accounting for 356.03MB, 51.02% of 697.81MB total
Dropped 309 nodes (cum <= 3.49MB)
Showing top 5 nodes out of 109
      flat  flat%   sum%        cum   cum%
  119.51MB 17.13% 17.13%   119.51MB 17.13%  github.com/lucas-clemente/quic-go.(*packetPacker).getShortHeader
   89.51MB 12.83% 29.95%    89.51MB 12.83%  golang.org/x/sys/unix.ParseSocketControlMessage
   69.50MB  9.96% 39.91%    95.51MB 13.69%  github.com/lucas-clemente/quic-go.(*packetPacker).composeNextPacket
   49.50MB  7.09% 47.01%   159.52MB 22.86%  github.com/lucas-clemente/quic-go.(*oobConn).ReadPacket
      28MB  4.01% 51.02%       28MB  4.01%  kon.nect.sh/specter/chord.(*LocalNode).fixK

marten-seemann (Member, Author) commented Dec 6, 2022

Interesting article on how to reduce allocations using a sync.Pool in combination with finalizers: https://web.archive.org/web/20220525085959/http://www.golangdevops.com/2019/12/31/autopool/
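
As an illustration of that pattern, here is a minimal sketch (the buffer type, sizes, and names are made up; this is neither quic-go nor the article's code): Get arms a finalizer that returns an otherwise-unreachable object to the sync.Pool, so call sites never need an explicit Put:

```go
package autopool

import (
	"runtime"
	"sync"
)

// buffer is an illustrative pooled object.
type buffer struct {
	data []byte
}

// Pool hands out buffers and reclaims them via finalizers.
type Pool struct {
	p sync.Pool
}

func New(size int) *Pool {
	ap := &Pool{}
	ap.p.New = func() any { return &buffer{data: make([]byte, size)} }
	return ap
}

func (ap *Pool) Get() *buffer {
	b := ap.p.Get().(*buffer)
	// Re-arm the finalizer on every Get: a finalizer only runs once, and
	// putting the object back into the pool "resurrects" it. Since the
	// finalizer is the only place that calls Put, pooled objects never
	// carry a finalizer while they sit in the pool.
	runtime.SetFinalizer(b, func(b *buffer) {
		ap.p.Put(b)
	})
	return b
}
```

The trade-off is that objects only return to the pool when the GC runs their finalizers, so this mainly helps where explicit Put calls are hard to place.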

marten-seemann (Member, Author) commented:

pprof update, running in a branch that includes the above PRs (#3644, #3646, #3648, #3655). Again, the scenario is the transfer of a single 1 GB file on a single stream.

Server: [pprof allocation graph]

We're down from 540 MB to 128 MB, a 76% reduction.

Client: [pprof allocation graph]

We're down from 514 MB to 147 MB, a 72% reduction. I believe we can get rid of the allocation in ReadPacket as well, by using a sync.Pool for the receivedPacket struct.
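
For illustration, pooling such a struct could look roughly like the sketch below; the receivedPacket fields shown here are made up and not quic-go's actual definition:

```go
package main

import "sync"

// receivedPacket is illustrative only.
type receivedPacket struct {
	data    []byte
	ecn     uint8
	rcvTime int64
}

var packetPool = sync.Pool{
	New: func() any { return &receivedPacket{} },
}

func getPacket() *receivedPacket {
	return packetPool.Get().(*receivedPacket)
}

// putPacket returns a packet once it has been fully processed.
func putPacket(p *receivedPacket) {
	*p = receivedPacket{} // zero the struct so stale references don't keep memory alive
	packetPool.Put(p)
}
```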

marten-seemann changed the title from "Tracking Issue: reduce allocations" to "Tracking Issue: reduce allocations during bulk data transfers" on Dec 30, 2022
mholt (Contributor) commented Dec 31, 2022

Impressive work, Marten!!

bt90 (Contributor) commented Jan 19, 2023

#3655 looks fixed

zllovesuki (Contributor) commented:

Do we know the allocation overhead (if any) for opening/closing streams? That would be common in the RPC use case.

marten-seemann (Member, Author) commented:

> Do we know the allocation overhead (if any) for opening/closing streams? That would be common in the RPC use case.

I just added a new benchmark test that opens and accepts streams in #3697. Looks like we're allocating almost 4 kB per stream:

goos: darwin
goarch: arm64
pkg: github.com/quic-go/quic-go/integrationtests/self
BenchmarkStreamChurn-10           561832              2662 ns/op            3696 B/op         42 allocs/op
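
For reference, a stream-churn benchmark along these lines could look roughly like the sketch below. It is not the actual test from #3697, and newConnectedPair is a hypothetical helper returning an already established client/server connection pair:

```go
package self_test

import (
	"context"
	"io"
	"testing"
)

func BenchmarkStreamChurn(b *testing.B) {
	clientConn, serverConn := newConnectedPair(b) // hypothetical setup helper

	// Server side: accept each stream, drain it, then close our direction.
	go func() {
		for {
			str, err := serverConn.AcceptStream(context.Background())
			if err != nil {
				return
			}
			go func() {
				io.Copy(io.Discard, str) // read until the client's FIN
				str.Close()
			}()
		}
	}()

	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		str, err := clientConn.OpenStreamSync(context.Background())
		if err != nil {
			b.Fatal(err)
		}
		str.Close()              // send our FIN
		io.Copy(io.Discard, str) // read until the server's FIN so the stream completes
	}
}
```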

marten-seemann (Member, Author) commented:

@zllovesuki If you have any ideas how to optimize that, I'd be happy to review a PR :)
