
PERF: Make truncating backtraces more memory-efficient #108

Merged
merged 1 commit into discourse:master from message-cap-perf on Feb 7, 2020
Conversation

OsamaSayegh
Member

Recently we merged #103, which introduced a new config option that limits a message's total size to X bytes, where X is 10,000 by default. If a message exceeds the limit, Logster trims lines from the backtrace until the message size drops below the limit. This caused a significant performance regression because the implementation is not great performance-wise; it allocates a large number of string and array objects while reducing the message size.

This PR should bring the performance back to about what it was before merging #103.
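To illustrate the general idea (this is a rough Ruby sketch, not the code in this PR; `truncate_slow`, `truncate_fast`, and `MAX_MESSAGE_SIZE` are illustrative names for the pattern described above):

```ruby
MAX_MESSAGE_SIZE = 10_000 # bytes; mirrors the default limit introduced in #103

# Allocation-heavy approach: re-join the remaining backtrace lines on every
# pass, creating a fresh array and a fresh large string per dropped line.
def truncate_slow(env_text, backtrace)
  lines = backtrace.lines
  lines = lines[0...-1] while !lines.empty? && "#{env_text}\n#{lines.join}".bytesize > MAX_MESSAGE_SIZE
  "#{env_text}\n#{lines.join}"
end

# Leaner approach: work out the byte budget once and copy lines into a single
# buffer until the budget is exhausted, so allocations stay roughly constant
# no matter how large the backtrace is.
def truncate_fast(env_text, backtrace)
  budget = MAX_MESSAGE_SIZE - env_text.bytesize - 1
  return env_text if budget <= 0

  kept = +""
  backtrace.each_line do |line|
    break if kept.bytesize + line.bytesize > budget
    kept << line
  end
  "#{env_text}\n#{kept}"
end
```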

Some benchmark numbers:

Before this PR:
Time taken: 3649.959228515625 (ms)
==================================================
Total allocated: 1.03 GB (3780401 objects)
After this PR:
Time taken: 1736.82861328125 (ms)
==================================================
Total allocated: 274.55 MB (702720 objects)

My benchmark script reports the same message 2000 times to Logster and measures how long it takes and how much memory is allocated.
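A benchmark of that shape can be put together with the memory_profiler gem. The sketch below is not the script used for the numbers above; it times 2,000 calls against the illustrative `truncate_fast` from the earlier sketch rather than a full Logster report, so its output will not match the figures quoted here:

```ruby
require "memory_profiler"

message   = "Some error"
backtrace = "lib/app/example.rb:42:in `call'\n" * 500 # deliberately oversized

result = MemoryProfiler.report do
  started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  2000.times { truncate_fast(message, backtrace) }
  elapsed_ms = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - started) * 1000
  puts "Time taken: #{elapsed_ms} (ms)"
end

puts "=" * 50
puts "Total allocated: #{result.total_allocated_memsize} bytes " \
     "(#{result.total_allocated} objects)"
```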

SamSaffron merged commit 519e3b8 into discourse:master on Feb 7, 2020
@SamSaffron
Member

looks good to me!

OsamaSayegh deleted the message-cap-perf branch on February 7, 2020