
Profile Guided Optimization results in high memory usage #6991

Closed
atollena opened this issue Feb 20, 2024 · 6 comments

@atollena
Collaborator

We recently started using profile-guided optimization (PGO) for our Go gRPC services, and in some cases saw a significant increase in memory usage from the optimized binaries.

The details of the investigation can be found in golang/go#65532. To summarize, PGO may inline internal/transport.(*loopyWriter).processData, which is called 3 times in internal/transport.(*loopyWriter).run. This is the goroutine that schedules writes of HTTP/2 frames on TCP connections. processData allocates a 16KiB array on the stack to construct a frame, so inlining it in loopyWriter.run results in a fixed 48KiB of memory allocated per connection, instead of 16KiB (which may even be released while loopy is blocked). When there are many connections, this can be a lot of memory. One of our production services saw a 20% memory increase after building with PGO due to this issue.
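
For illustration, the shape of the code is roughly the following (a condensed sketch, not the actual grpc-go source; http2MaxFrameLen matches the transport package's 16KiB constant, the rest is simplified):

```go
package transport // simplified sketch, not the actual grpc-go source

const http2MaxFrameLen = 16 * 1024 // 16KiB scratch buffer per data frame

type loopyWriter struct{}

// processData keeps its 16KiB scratch array in its own stack frame. As a
// separate function, that frame (and the array) is popped after every call.
func (l *loopyWriter) processData() (bool, error) {
	var buf [http2MaxFrameLen]byte
	_ = buf // ... assemble and write one HTTP/2 DATA frame using buf ...
	return false, nil
}

// run has three distinct call sites for processData. When PGO inlines
// processData at each site, every inlined copy gets its own 16KiB array in
// run's frame, so the loopy goroutine's stack grows by ~48KiB per connection
// and stays that large for as long as the connection is alive.
func (l *loopyWriter) run() {
	for {
		l.processData() // call site 1
		// ... handle control frames ...
		l.processData() // call site 2
		// ... flush ...
		l.processData() // call site 3
	}
}
```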

There are options to still use PGO (which provides otherwise interesting gains) while avoiding this undesirable side effect, but they require changes to grpc-go:

  1. As suggested in the issue, simply add a go:noinline pragma to loopyWriter.processData to avoid any memory increase.
  2. Move allocation of the local frame byte array directly inside loopyWriter.run. The downside is that when the connection is idle, the 16KiB cannot be reclaimed.
  3. A variation of option 2 where array allocation happens in a subroutine of loopyWriter.run, so that when loopy blocks because the connection is idle, the array is not allocated.

Of those, options 1 and 3 seem the most compelling to me; a rough sketch of both follows below. Would you be willing to accept a patch for this?
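
For concreteness, a minimal sketch of what options 1 and 3 could look like (processLoop and the exact signatures are made up for illustration; this is not a proposed patch):

```go
package transport // illustrative sketch of options 1 and 3, not a proposed patch

const http2MaxFrameLen = 16 * 1024

type loopyWriter struct{}

// Option 1: forbid inlining so the 16KiB array stays in processData's own
// frame (released after each call), regardless of what the PGO-driven
// inliner decides.
//
//go:noinline
func (l *loopyWriter) processData() (bool, error) {
	var buf [http2MaxFrameLen]byte
	_ = buf // ... assemble and write one HTTP/2 DATA frame ...
	return false, nil
}

// Option 3: keep the array in a helper that run only enters while there is
// work to do. When loopy blocks on an idle connection it is sitting in run,
// where the array is not live, so the stack can shrink back during GC.
func (l *loopyWriter) run() error {
	for {
		// ... block here until there are frames to write ...
		if err := l.processLoop(); err != nil {
			return err
		}
	}
}

func (l *loopyWriter) processLoop() error {
	var buf [http2MaxFrameLen]byte
	// all frame-building work happens here, sharing one array
	_ = buf
	return nil
}
```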

@atollena atollena changed the title Profile Based Optimization results in higher memory usage Profile Guided Optimization results in higher memory usage Feb 20, 2024
@atollena atollena changed the title Profile Guided Optimization results in higher memory usage Profile Guided Optimization results in high memory usage Feb 20, 2024
@atollena
Collaborator Author

Note that on ARM/Graviton processors, options 2 and 3 result in a 20%+ throughput increase when running grpc-go's benchmarks with concurrency=100 and a 1KiB payload:

unary-networkMode_none-bufConn_false-keepalive_false-benchTime_5m0s-trace_false-latency_0s-kbps_0-MTU_0-maxConcurrentCalls_100-reqSize_1024B-respSize_1024B-compressor_off-channelz_false-preloader_false-clientReadBufferSize_-1-clientWriteBufferSize_-1-serverReadBufferSize_-1-serverWriteBufferSize_-1-sleepBetweenRPCs_0s-connections_1-recvBufferPool_nil-sharedWriteBuffer_false
               Title       Before        After Percentage
            TotalOps     34150108     42452710    24.31%
            Bytes/op     18312.83     18288.20    -0.13%
           Allocs/op       170.81       169.13    -0.59%
             ReqT/op 932525615.79 1159242001.07    24.31%
            RespT/op 932525615.79 1159242001.07    24.31%
            50th-Lat    856.568µs    675.408µs   -21.15%
            90th-Lat    976.363µs    817.218µs   -16.30%
            99th-Lat   1.559802ms   1.401467ms   -10.15%
             Avg-Lat    877.845µs     705.82µs   -19.60%
           GoVersion     go1.22.0     go1.22.0
         GrpcVersion   1.63.0-dev   1.63.0-dev

The same benchmark running on Intel shows a smaller gain, but still a clear improvement:

unary-networkMode_none-bufConn_false-keepalive_false-benchTime_5m0s-trace_false-latency_0s-kbps_0-MTU_0-maxConcurrentCalls_100-reqSize_1024B-respSize_1024B-compressor_off-channelz_false-preloader_false-clientReadBufferSize_-1-clientWriteBufferSize_-1-serverReadBufferSize_-1-serverWriteBufferSize_-1-sleepBetweenRPCs_0s-connections_1-recvBufferPool_nil-sharedWriteBuffer_false
               Title       Before        After Percentage
            TotalOps     34144564     36850123     7.92%
            Bytes/op     18305.01     18282.50    -0.13%
           Allocs/op       170.25       168.91    -1.17%
             ReqT/op 932374227.63 1006254025.39     7.92%
            RespT/op 932374227.63 1006254025.39     7.92%
            50th-Lat    854.773µs     783.94µs    -8.29%
            90th-Lat    992.547µs    946.702µs    -4.62%
            99th-Lat   1.564684ms   1.582244ms     1.12%
             Avg-Lat    877.519µs    813.102µs    -7.34%
           GoVersion     go1.22.0     go1.22.0
         GrpcVersion   1.63.0-dev   1.63.0-dev

@zasweq zasweq self-assigned this Feb 21, 2024
@zasweq
Contributor

zasweq commented Feb 22, 2024

Thanks for looking into this and thinking about potential solutions. After reading both issue threads I like options 1 and 3, especially if 3 increases throughput and also doesn't allocate when the connection is idle. However, option 1 seems very simple (I don't know much historically about these pragmas in this codebase). Doug is out this week and back next, and I trust his judgement on this, so I'll defer to him on the final decision, but we'd definitely be willing to review any PRs/patches for this :).

@zasweq zasweq assigned dfawley and unassigned zasweq Feb 22, 2024
@dfawley
Member

dfawley commented Mar 15, 2024

Sorry for the delay here.

I'm fine with option (1) as a quick fix here.

What are your thoughts about this option:

  4. Allocate on the heap, and use a sync.Pool to re-use memory and prevent too much garbage from being created? IIUC, stack allocations can permanently grow the stack and would never be reduced for idle connections.
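
A minimal sketch of what that could look like, assuming a package-level sync.Pool of 16KiB buffers (names are illustrative, not an actual change):

```go
package transport // illustrative sketch of the sync.Pool idea, not an actual change

import "sync"

const http2MaxFrameLen = 16 * 1024

// framePool hands out 16KiB scratch buffers. Idle connections hold nothing,
// and buffers returned to the pool can eventually be reclaimed by the GC, so
// memory is not pinned per connection the way a grown goroutine stack is.
var framePool = sync.Pool{
	New: func() any { return new([http2MaxFrameLen]byte) },
}

type loopyWriter struct{}

func (l *loopyWriter) processData() (bool, error) {
	buf := framePool.Get().(*[http2MaxFrameLen]byte)
	defer framePool.Put(buf)
	_ = buf // ... assemble and write one HTTP/2 DATA frame using buf ...
	return false, nil
}
```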

@atollena
Collaborator Author

atollena commented Mar 18, 2024

What are your thoughts about this option:
Allocate on the heap, and use a sync.Pool to re-use memory and prevent too much garbage from being created? IIUC stack allocations can permanently grow the stack and would not ever be reduced for idle connections.

Thanks for asking. I agree that option 4 sounds like the best. I had run benchmarks with option 4 (sync.Pool) and saw a visible performance degradation (with and without PGO). The change I tried is here. I didn't report my results here because I couldn't figure out why it was slower. One theory is that loopy ends up not gathering as much data before flushing, resulting in more syscalls -- if that's the case, then I'm not quite sure what we should do.

I did find out where the perf improvement on ARM comes from, though: it is from zeroing the array:

[Screenshot (2024-02-22): profile attributing the cost to zeroing the array]

Option 4 also gets rid of this zeroing in the hot path, so there's something else going on with sync.Pool that I couldn't explain. I'll try to finish that investigation and then decide on option 4 vs option 1.


This issue is labeled as requiring an update from the reporter, and no update has been received after 6 days. If no update is provided in the next 7 days, this issue will be automatically closed.

@atollena
Collaborator Author

This is likely to be addressed by the work in PapaCharlie#1, which reworks the loopy writer using option 4. This will probably decrease performance slightly on the ARM benchmark, but that might be offset by other optimizations in that branch. And I'm not sure the slowdown I'm seeing on ARM is going to materialize in real losses in production usage.

I'd suggest we try this branch when @PapaCharlie is ready with it.
