
increasing memory footprint (v4) #175

Open
DAMEK86 opened this issue Apr 25, 2022 · 1 comment

Comments


DAMEK86 commented Apr 25, 2022

I'm trying to use your lib within a data-collection HTTP endpoint by injecting the http.ResponseWriter. As a result, the endpoint serves a tar.lz4 file, which works very nicely on my arm64.

code snippet

zw := lz4.NewWriter(w)
defer zw.Close()

options := []lz4.Option{
	lz4.BlockChecksumOption(false),
	lz4.BlockSizeOption(lz4.Block1Mb),
	lz4.ChecksumOption(true),
	lz4.CompressionLevelOption(lz4.Level9),
	lz4.ConcurrencyOption(5),
}
if err := zw.Apply(options...); err != nil {
	// handle error
}

tw := tar.NewWriter(zw)
defer tw.Close()

...
filepath.Walk() with io.Copy(tw, data)
...

So far so good, but looking closer I see heavy memory usage, and it looks like Close() doesn't release the buffers held in the lz4block pool.
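That behavior would be consistent with a sync.Pool: buffers returned to the pool on Close are not freed immediately, only dropped once the garbage collector runs (and since Go 1.13, only after two GC cycles, because of the victim cache). A stdlib-only sketch, under the assumption that lz4block's pool behaves like a plain sync.Pool of block-sized buffers:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// demo hands out 1 MiB buffers through a sync.Pool and reports how many
// fresh allocations happened before and after forcing garbage collection.
func demo() (afterReuse, afterGC int) {
	allocs := 0
	pool := sync.Pool{New: func() any {
		allocs++
		return make([]byte, 1<<20) // 1 MiB, like lz4.Block1Mb
	}}

	b := pool.Get().([]byte) // first Get allocates
	pool.Put(b)
	_ = pool.Get().([]byte) // reuses the pooled buffer: no new allocation
	afterReuse = allocs

	pool.Put(b)
	runtime.GC() // first GC moves pool contents to the victim cache
	runtime.GC() // second GC drops the victim cache
	_ = pool.Get().([]byte) // pool is empty again: allocates
	afterGC = allocs
	return
}

func main() {
	r, g := demo()
	fmt.Println("allocations after reuse:", r, "after GC:", g)
}
```

So a profile taken while the buffers are still pooled will show them as in-use even though the writer was closed; they are reclaimable, just not yet reclaimed.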

go tool pprof -nodefraction=0 mem.pprof
File: ___go_build_go_com_test_example
Type: inuse_space
Time: Apr 25, 2022 at 10:48pm (CEST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 17420.19kB, 100% of 17420.19kB total
Showing top 10 nodes out of 30
      flat  flat%   sum%        cum   cum%
   16384kB 94.05% 94.05%    16384kB 94.05%  github.com/pierrec/lz4/v4/internal/lz4block.glob..func7
    1032kB  5.92%   100%     1032kB  5.92%  github.com/pierrec/lz4/v4/internal/lz4block.glob..func2
    4.19kB 0.024%   100%     4.19kB 0.024%  runtime.malg
         0     0%   100%     8192kB 47.03%  archive/tar.(*Writer).Flush
         0     0%   100%     8192kB 47.03%  archive/tar.(*Writer).WriteHeader
         0     0%   100%     8192kB 47.03%  github.com/emicklei/go-restful/v3.(*Container).dispatch
         0     0%   100%     8192kB 47.03%  github.com/emicklei/go-restful/v3.(*FilterChain).ProcessFilter
         0     0%   100%     8192kB 47.03%  github.com/emicklei/go-restful/v3.CrossOriginResourceSharing.Filter
         0     0%   100%     8192kB 47.03%  github.com/pierrec/lz4/v4.(*Writer).Write
         0     0%   100%     8192kB 47.03%  github.com/pierrec/lz4/v4.(*Writer).init

I tried different options but always run into the same memory growth.
Hope you can help!

[profile graph attachment]


hifi commented Dec 17, 2023

We've also seen this become a problem: under sudden, heavy concurrent compression the pool grows extremely large, and when that pressure subsides the pool is not freed, which is by design.

For our use case it would even be preferable to cap the pool size, artificially slowing concurrent throughput in exchange for a consistent memory footprint.
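A cap like that can be sketched with a buffered channel instead of a sync.Pool. This is a hypothetical standalone type, not anything the library currently exposes: memory is bounded at n buffers, and callers block on Get when all buffers are in flight, which is exactly the throughput-for-footprint trade described above.

```go
package main

import "fmt"

// boundedPool caps both the number of pooled buffers and, because Get
// blocks when the pool is exhausted, the number of buffers in flight.
type boundedPool struct {
	ch chan []byte
}

// newBoundedPool preallocates n buffers of the given size, fixing the
// pool's memory footprint at n*size bytes.
func newBoundedPool(n, size int) *boundedPool {
	p := &boundedPool{ch: make(chan []byte, n)}
	for i := 0; i < n; i++ {
		p.ch <- make([]byte, size)
	}
	return p
}

// Get blocks until a buffer is available.
func (p *boundedPool) Get() []byte { return <-p.ch }

// Put returns a buffer to the pool for reuse.
func (p *boundedPool) Put(b []byte) { p.ch <- b }

func main() {
	p := newBoundedPool(4, 1<<20) // at most 4 MiB pooled, ever
	b := p.Get()
	defer p.Put(b)
	fmt.Println("got buffer of", len(b), "bytes")
}
```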
