docs: Update zstd information. (#506)
klauspost committed Feb 27, 2022
1 parent 15b48b6 commit 0f14e81
Showing 1 changed file with 24 additions and 12 deletions: zstd/README.md
@@ -78,6 +78,9 @@ of a stream. This is independent of the `WithEncoderConcurrency(n)`, but that is likely to change
in the future. So if you want to limit concurrency for future updates, specify the concurrency
you would like.

If you would like stream encoding to be done without spawning async goroutines, use `WithEncoderConcurrency(1)`,
which will compress input as each block is completed, blocking on writes until each has completed.
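
A minimal sketch of this (the helper name `compressSync` is ours, not part of the package):

```Go
import (
	"io"

	"github.com/klauspost/compress/zstd"
)

// compressSync streams from r to w without async goroutines:
// WithEncoderConcurrency(1) compresses each block on the calling
// goroutine, blocking on writes as blocks complete.
func compressSync(w io.Writer, r io.Reader) error {
	enc, err := zstd.NewWriter(w, zstd.WithEncoderConcurrency(1))
	if err != nil {
		return err
	}
	if _, err := io.Copy(enc, r); err != nil {
		enc.Close()
		return err
	}
	// Close flushes buffered data and writes the stream end.
	return enc.Close()
}
```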

You can specify your desired compression level using the `WithEncoderLevel()` option. Currently only pre-defined
compression settings can be specified.
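
For example, a sketch wrapping any `io.Writer` at one of the predefined levels:

```Go
import (
	"io"

	"github.com/klauspost/compress/zstd"
)

// newLevelWriter wraps w with an encoder at a predefined level.
// Available levels: zstd.SpeedFastest, zstd.SpeedDefault,
// zstd.SpeedBetterCompression and zstd.SpeedBestCompression.
func newLevelWriter(w io.Writer) (*zstd.Encoder, error) {
	return zstd.NewWriter(w, zstd.WithEncoderLevel(zstd.SpeedBetterCompression))
}
```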

@@ -283,8 +286,12 @@ func Decompress(in io.Reader, out io.Writer) error {
}
```

When running with default settings, it is important to use the "Close" function when you no longer need
the Reader, in order to stop its running goroutines.

Streams are decoded concurrently in 4 asynchronous stages to give the best possible throughput.
However, if you prefer synchronous decompression, use `WithDecoderConcurrency(1)`, which will decompress data
only as it is requested.
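
A minimal synchronous-decode sketch (the helper name `decompressSync` is ours):

```Go
import (
	"io"

	"github.com/klauspost/compress/zstd"
)

// decompressSync keeps all decompression on the calling goroutine,
// so data is only decompressed as it is read from the Decoder.
func decompressSync(w io.Writer, r io.Reader) error {
	dec, err := zstd.NewReader(r, zstd.WithDecoderConcurrency(1))
	if err != nil {
		return err
	}
	defer dec.Close()
	_, err = io.Copy(w, dec)
	return err
}
```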

For decoding buffers, it could look something like this:

@@ -293,7 +300,7 @@

```Go
import "github.com/klauspost/compress/zstd"

// Create a reader that caches decompressors.
// For this operation type we supply a nil Reader.
var decoder, _ = zstd.NewReader(nil, zstd.WithDecoderConcurrency(0))

// Decompress a buffer. We don't supply a destination buffer,
// so it will be allocated by the decoder.
func Decompress(src []byte) ([]byte, error) {
	return decoder.DecodeAll(src, nil)
}
```

Both of these cases should provide the functionality needed.
The decoder can be used for *concurrent* decompression of multiple buffers.
By default 4 decompressors will be created.

It will only allow a certain number of concurrent operations to run.
To tweak that yourself, use the `WithDecoderConcurrency(n)` option when creating the decoder.
It is possible to use `WithDecoderConcurrency(0)` to create GOMAXPROCS decoders.
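
As a sketch, concurrent buffer decompression with the shared `decoder` from the example above (the helper `decompressAll` and the `inputs` slice are ours):

```Go
import "sync"

// decompressAll decodes many buffers in parallel on one decoder.
// The decoder itself caps how many operations run at once.
func decompressAll(inputs [][]byte) ([][]byte, []error) {
	results := make([][]byte, len(inputs))
	errs := make([]error, len(inputs))
	var wg sync.WaitGroup
	for i, in := range inputs {
		wg.Add(1)
		go func(i int, in []byte) {
			defer wg.Done()
			results[i], errs[i] = decoder.DecodeAll(in, nil)
		}(i, in)
	}
	wg.Wait()
	return results, errs
}
```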

### Dictionaries

@@ -357,19 +367,21 @@ In this case no unneeded allocations should be made.
The buffer decoder does everything on the same goroutine and does nothing concurrently.
It can, however, decode several buffers concurrently. Use `WithDecoderConcurrency(n)` to limit that.

The stream decoder will create goroutines that handle:

1) Reading input and splitting it into blocks.
2) Decompression of literals.
3) Decompression of sequences.
4) Reconstruction of the output stream.

So effectively this also means the decoder will "read ahead" and prepare data to always be available for output.
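
As a loose illustration only (not the library's actual internals), read-ahead falls out of staging work through a bounded channel:

```Go
// Illustrative sketch: the producer "decodes" blocks ahead of the
// consumer; the channel's capacity bounds how far ahead it runs,
// analogous to the decoder's concurrency setting.
func readAhead(blocks [][]byte, ahead int) [][]byte {
	decoded := make(chan []byte, ahead)
	go func() {
		defer close(decoded)
		for _, b := range blocks {
			decoded <- append([]byte(nil), b...) // stand-in for block decoding
		}
	}()
	var out [][]byte
	for d := range decoded {
		out = append(out, d)
	}
	return out
}
```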

The concurrency level will, for streams, determine how many blocks ahead the decompression will start.

Since "blocks" are quite dependent on the output of the previous block stream decoding will only have limited concurrency.

In practice this means that concurrency is often limited to utilizing about 3 cores effectively.

### Benchmarks

These are some examples of performance compared to the [datadog cgo library](https://github.com/DataDog/zstd).