
docs: Update zstd information. #506

Merged
merged 1 commit on Feb 27, 2022
36 changes: 24 additions & 12 deletions zstd/README.md
@@ -78,6 +78,9 @@
of a stream. This is independent of the `WithEncoderConcurrency(n)`, but that is
in the future. So if you want to limit concurrency for future updates, specify the concurrency
you would like.

If you would like stream encoding to be done without spawning async goroutines, use `WithEncoderConcurrency(1)`,
which will compress input as each block is completed, blocking on writes until each block has completed.
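
For illustration, a minimal sketch of a fully synchronous stream compression helper (the `CompressSync` name and wrapper are not part of the package):

```Go
import (
	"io"

	"github.com/klauspost/compress/zstd"
)

// CompressSync copies src to dst as a zstd stream without spawning
// async encoder goroutines: each block is compressed as it completes.
func CompressSync(dst io.Writer, src io.Reader) error {
	enc, err := zstd.NewWriter(dst, zstd.WithEncoderConcurrency(1))
	if err != nil {
		return err
	}
	if _, err := io.Copy(enc, src); err != nil {
		enc.Close()
		return err
	}
	// Close flushes any remaining data and writes the stream trailer.
	return enc.Close()
}
```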

You can specify your desired compression level using the `WithEncoderLevel()` option. Currently only the predefined
compression settings can be specified.
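
For example, a sketch selecting one of the predefined levels; `SpeedFastest`, `SpeedDefault`, `SpeedBetterCompression` and `SpeedBestCompression` are the available presets (the helper name is illustrative):

```Go
import (
	"io"

	"github.com/klauspost/compress/zstd"
)

// newLevelWriter returns an encoder using a predefined compression level.
func newLevelWriter(w io.Writer) (*zstd.Encoder, error) {
	return zstd.NewWriter(w, zstd.WithEncoderLevel(zstd.SpeedBetterCompression))
}
```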

@@ -283,8 +286,12 @@
func Decompress(in io.Reader, out io.Writer) error {
}
```

When running with default settings, it is important to use the "Close" function when you no longer need the Reader,
so that its goroutines are stopped.

Streams are decoded concurrently in 4 asynchronous stages to give the best possible throughput.
However, if you prefer synchronous decompression, use `WithDecoderConcurrency(1)`, which will
only decompress data as it is being requested.
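
A sketch of a fully synchronous stream decode that also remembers to close the Reader (the `DecompressSync` name is illustrative):

```Go
import (
	"io"

	"github.com/klauspost/compress/zstd"
)

// DecompressSync decompresses in to out on the calling goroutine only.
func DecompressSync(in io.Reader, out io.Writer) error {
	d, err := zstd.NewReader(in, zstd.WithDecoderConcurrency(1))
	if err != nil {
		return err
	}
	// Close releases the decoder's resources (and, with default
	// settings, stops its goroutines).
	defer d.Close()
	_, err = io.Copy(out, d)
	return err
}
```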

For decoding buffers, it could look something like this:

@@ -293,7 +300,7 @@
```Go
import "github.com/klauspost/compress/zstd"

// Create a reader that caches decompressors.
// For this operation type we supply a nil Reader.
var decoder, _ = zstd.NewReader(nil, zstd.WithDecoderConcurrency(0))

// Decompress a buffer. We don't supply a destination buffer,
// so it will be allocated by the decoder.
func Decompress(src []byte) ([]byte, error) {
	return decoder.DecodeAll(src, nil)
}
```

@@ -303,9 +310,12 @@

Both of these cases should provide the functionality needed.
The decoder can be used for *concurrent* decompression of multiple buffers.
By default 4 decompressors will be created.

It will only allow a certain number of concurrent operations to run.
To tweak that yourself use the `WithDecoderConcurrency(n)` option when creating the decoder.
It is possible to use `WithDecoderConcurrency(0)` to create GOMAXPROCS decoders.
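
For illustration, a sketch of one shared decoder serving many goroutines (the `DecompressAll` helper is hypothetical and skips error handling):

```Go
import (
	"sync"

	"github.com/klauspost/compress/zstd"
)

// A single shared decoder; WithDecoderConcurrency(0) allows up to
// GOMAXPROCS concurrent DecodeAll calls.
var sharedDec, _ = zstd.NewReader(nil, zstd.WithDecoderConcurrency(0))

// DecompressAll decompresses all buffers concurrently, returning the
// results in input order. Errors are ignored here for brevity.
func DecompressAll(bufs [][]byte) [][]byte {
	out := make([][]byte, len(bufs))
	var wg sync.WaitGroup
	for i, b := range bufs {
		wg.Add(1)
		go func(i int, b []byte) {
			defer wg.Done()
			out[i], _ = sharedDec.DecodeAll(b, nil)
		}(i, b)
	}
	wg.Wait()
	return out
}
```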

### Dictionaries

@@ -357,19 +367,21 @@
In this case no unneeded allocations should be made.
The buffer decoder does everything on the same goroutine and does nothing concurrently.
It can, however, decode several buffers concurrently. Use `WithDecoderConcurrency(n)` to limit that.

The stream decoder will create goroutines that:

1) Read input and split it into blocks.
2) Decompress literals.
3) Decompress sequences.
4) Reconstruct the output stream.

So effectively this also means the decoder will "read ahead" and prepare data to always be available for output.

The concurrency level will, for streams, determine how many blocks ahead the decompression will start.

Since "blocks" are quite dependent on the output of the previous block stream decoding will only have limited concurrency.

In practice this means that concurrency is often limited to utilizing about 3 cores effectively.

### Benchmarks

These are some examples of performance compared to the [datadog cgo library](https://github.com/DataDog/zstd).