
hex::encode performance #18

Open
vladignatyev opened this issue Jan 27, 2019 · 4 comments
@vladignatyev

Hi,

My code relies heavily on hex::encode from your crate. I'm not sure that the performance of hex::encode is a bottleneck in my project, but in some of my methods a single call to hex::encode takes about 1/4 of the method's overall time. That's too long for my crypto-related code, yet hex is a very nice crate.

I started investigating and added a benchmark for hex::encode in my fork.
Interestingly, the performance of hex::encode depends non-linearly on the length of the input.

Here is the output of cargo bench for reference:

running 6 tests
test tests::bench_encode_512bits ... bench:         962 ns/iter (+/- 139)
test tests::bench_encode_256bits ... bench:         518 ns/iter (+/- 61)
test tests::bench_encode_128bits ... bench:         287 ns/iter (+/- 42)
test tests::bench_encode_64bits  ... bench:         163 ns/iter (+/- 93)
test tests::bench_encode_32bits  ... bench:         123 ns/iter (+/- 15)
test tests::bench_encode_16bits  ... bench:          95 ns/iter (+/- 63)

What I'm going to do is try to improve the performance of this method.
Please vote up or comment if my investigation and pull requests would be welcome here.

Thanks!
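For context, a straightforward scalar hex encoder looks something like the following. This is a minimal sketch of the general technique, not the crate's actual implementation: one table lookup per nibble, two output characters per input byte.

```rust
/// Minimal scalar hex encoder sketch (illustrative, not the crate's code).
fn encode_hex(input: &[u8]) -> String {
    const DIGITS: &[u8; 16] = b"0123456789abcdef";
    // Reserve the exact output size up front: two hex digits per byte.
    let mut out = String::with_capacity(input.len() * 2);
    for &b in input {
        out.push(DIGITS[(b >> 4) as usize] as char);
        out.push(DIGITS[(b & 0x0f) as usize] as char);
    }
    out
}
```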

@vladignatyev (Author)

But it looks like it is impossible to do better 🐶

@LukasKalbertodt (Contributor)

Hi there! Just a few quick comments from my side.

The non-linearity you see is probably caused by two things: constant measurement overhead and, more importantly, memory allocation. hex::encode allocates a String each time you call it. This has significant overhead (a linear part that scales with the size of the allocation, but also a significant constant part). I am fairly sure that the relationship between input size and time gets pretty close to linear for larger inputs (16 KB or so).
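One way to sidestep the per-call allocation cost is to encode into a caller-provided buffer that gets reused across calls. The sketch below is illustrative only (the name encode_into is hypothetical, not the crate's API):

```rust
/// Hypothetical sketch: encode into a caller-owned buffer so the
/// allocation is amortized across many calls.
fn encode_into(input: &[u8], out: &mut Vec<u8>) {
    const DIGITS: &[u8; 16] = b"0123456789abcdef";
    out.clear();
    // Grow only if the reused buffer is too small; no-op on later calls.
    out.reserve(input.len() * 2);
    for &b in input {
        out.push(DIGITS[(b >> 4) as usize]);
        out.push(DIGITS[(b & 0x0f) as usize]);
    }
}
```

A caller in a hot loop would allocate the Vec once and pass it to every call, so the allocator is only touched when the buffer needs to grow.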

About improving performance: for larger inputs, you can improve performance a great deal by using SIMD instructions. A quick search brought up the faster-hex crate, which implements exactly that. I doubt it will be faster for tiny inputs, though.

Also, related: #14

@KokaKiwi (Owner)

KokaKiwi commented Jul 2, 2019

Hi, just saying that with the latest commit b86f391 I managed to double performance, according to the benchmarks currently implemented:

test bench::a_bench ... bench:      31,055 ns/iter (+/- 11,988) = 412 MB/s

Compared to previous:

test bench::a_bench ... bench:      52,996 ns/iter (+/- 5,083) = 224 MB/s

@taiki-e

taiki-e commented Aug 28, 2021

FYI: #62 and #64 will greatly (7-10x) improve the performance of both encoding and decoding.
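One common trick behind scalar speedups of this magnitude (a sketch of a general technique, not necessarily what these PRs actually do) is to replace the two per-nibble lookups with a single 256-entry table mapping each byte directly to its pair of ASCII hex digits, so each input byte costs one lookup and one two-byte write:

```rust
/// Build a 256-entry table mapping each byte to its two hex digits.
/// (Illustrative sketch; a real implementation would make this a
/// compile-time constant.)
fn build_table() -> [[u8; 2]; 256] {
    const DIGITS: &[u8; 16] = b"0123456789abcdef";
    let mut t = [[0u8; 2]; 256];
    let mut i = 0;
    while i < 256 {
        t[i] = [DIGITS[i >> 4], DIGITS[i & 0x0f]];
        i += 1;
    }
    t
}

/// Encode using one table lookup per input byte.
fn encode_pairs(input: &[u8]) -> String {
    let table = build_table();
    let mut out = Vec::with_capacity(input.len() * 2);
    for &b in input {
        out.extend_from_slice(&table[b as usize]);
    }
    // The table contains only ASCII hex digits, so this cannot fail.
    String::from_utf8(out).unwrap()
}
```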
