Applying SWAR Technique to AsciiString
#13522
Comments
Hello, @franz1981 @normanmaurer @chrisvest. I was investigating the …
I've just run a benchmark after roughly applying it. Environment: AWS EC2 c4.2xlarge (8 CPUs, Intel Xeon E5-2666 v3 @ 2.90 GHz), OpenJDK 17.0.8, Ubuntu 22.04.2 LTS.
@jchrys The performance gains are probably limited by the strings being so short. I think most ASCII strings are short, though. How are you doing multi-byte loads without Unsafe?
@chrisvest I didn't actually employ multi-byte loads when …
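For context on the "multi-byte loads without Unsafe" question above: eight consecutive bytes can be assembled into a single long with plain array reads. This is a minimal sketch (the class and method names are hypothetical, not code from this thread or from Netty):

```java
import java.nio.charset.StandardCharsets;

public final class SwarLoad {
    // Assemble eight consecutive bytes into one little-endian long
    // using only plain array access (no sun.misc.Unsafe).
    static long loadLong(byte[] bytes, int offset) {
        long word = 0;
        for (int i = 7; i >= 0; i--) {
            word = (word << 8) | (bytes[offset + i] & 0xFFL);
        }
        return word;
    }

    public static void main(String[] args) {
        byte[] data = "ABCDEFGH".getBytes(StandardCharsets.US_ASCII);
        // 'A' (0x41) lands in the lowest byte of a little-endian load.
        System.out.println(Long.toHexString(loadLong(data, 0))); // prints "4847464544434241"
    }
}
```

On modern JDKs the same multi-byte load is also available without Unsafe via ByteBuffer.getLong or a MethodHandles.byteArrayViewVarHandle, which the JIT compiles down to a single memory read.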
Ok, makes sense. Yeah, I think this looks worthwhile. Feel free to open a PR and ping me. 👍
Absolutely! I'll ping you once it's ready. Thanks!
Well done @jchrys, I see you have absorbed the idea of stressing the input sequence to understand whether an approach is worthwhile :P
@franz1981 Thanks a lot! I will certainly look into it. I truly appreciate your advice!
Issue description: Currently, AsciiString uses a naive iterative approach for its indexOf and "find first lower case / upper case" methods. However, we can improve the performance of these methods by implementing the SWAR technique, similar to the approach used in ByteBuf.
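To illustrate the SWAR (SIMD Within A Register) idea the issue refers to, here is a minimal sketch of both operations on a single 64-bit word: locating a byte (the indexOf case) and finding the first upper-case ASCII letter. The class and method names are hypothetical, and this is not Netty's actual implementation; the upper-case check assumes pure ASCII input (all bytes below 0x80).

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

public final class SwarSketch {
    private static final long ONES  = 0x0101010101010101L;
    private static final long HIGHS = 0x8080808080808080L;

    // Index (0-7) of the first occurrence of needle in the little-endian
    // word, or -1. Uses the classic zero-byte trick: a lane of
    // (word ^ pattern) is zero exactly where word matches the needle, and
    // (x - ONES) & ~x & HIGHS sets the 0x80 bit in each zero lane.
    static int firstIndexOf(long word, byte needle) {
        long xored = word ^ ((needle & 0xFFL) * ONES); // replicate needle into all 8 lanes
        long mask = (xored - ONES) & ~xored & HIGHS;
        return mask == 0 ? -1 : Long.numberOfTrailingZeros(mask) >>> 3;
    }

    // Index (0-7) of the first byte in 'A'..'Z', or -1. For an ASCII byte b
    // (b <= 0x7F), bit 7 of (b + 0x80 - n) is set exactly when b >= n, so
    // two biased additions implement the range check without per-byte loops.
    static int firstUpperCase(long asciiWord) {
        long ge = asciiWord + (0x80 - 'A') * ONES;       // bit 7 set where b >= 'A'
        long le = asciiWord + (0x80 - ('Z' + 1)) * ONES; // bit 7 set where b > 'Z'
        long mask = ge & ~le & HIGHS;
        return mask == 0 ? -1 : Long.numberOfTrailingZeros(mask) >>> 3;
    }

    public static void main(String[] args) {
        long word = ByteBuffer.wrap("netty io!".getBytes(StandardCharsets.US_ASCII))
                              .order(ByteOrder.LITTLE_ENDIAN)
                              .getLong(); // first 8 bytes: n e t t y ' ' i o
        System.out.println(firstIndexOf(word, (byte) 'y'));  // 4
        System.out.println(firstIndexOf(word, (byte) 'z'));  // -1

        long mixed = ByteBuffer.wrap("go Netty".getBytes(StandardCharsets.US_ASCII))
                               .order(ByteOrder.LITTLE_ENDIAN)
                               .getLong();
        System.out.println(firstUpperCase(mixed));           // 3 ('N')
    }
}
```

The full-string versions would process the input eight bytes per iteration with such word operations and fall back to a plain byte loop for the tail, which is where the bulk of the speedup over the naive per-byte scan comes from.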