
eth,core: re-enable LRU cache in GetLogs #25425

Conversation

ryanschneider
Contributor

NOTE: IMO this PR is not yet ready for merge; it's more of a starting point.

As noted in #25421 I'm pretty sure there's a rather severe performance regression around the eth_getLogs RPC.

To validate that, I've made this PR, which re-enables the BlockChain codepath and reintroduces the receiptsCache LRU into the mix.

On two mainnet nodes, each receiving a rather high volume of eth_getLogs RPCs (but only for the most recent 128 blocks), the pprof flamegraph is rather telling.

Note that in both cases I'm already running with a small patch that raises the size of the receiptsCache LRU from 32 to 1024, available here: ryanschneider@ae9a5ee
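For reference, that patch boils down to a one-constant change along these lines (a sketch against core/blockchain.go as it looked at the time, not the exact diff):

```go
// core/blockchain.go (sketch): the receipts LRU capacity is a package-level
// constant, so raising the cache size is a one-line change.
const (
	receiptsCacheLimit = 1024 // upstream default: 32
)
```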

Anyway, with only that patch applied, we see that a lot of the overall time is spent in unindexedLogs, and that most of that time is spent in ReadLogs:

[flamegraph: 30s CPU profile with only the cache-size patch applied; unindexedLogs dominated by ReadLogs]

Note that this flame graph is a 30s CPU profile, so at 188s of CPU time, unindexedLogs alone is keeping more than 6 CPUs at 100% utilization (188s / 30s ≈ 6.3 cores).

And now with the work-in-progress patch in this PR:

[flamegraph: the same 30s CPU profile with this PR's work-in-progress patch applied]

Of course ReadLogs no longer appears at all, since that code is no longer in the codepath, but note that the overall CPU time of unindexedLogs is now only 3.7s.

Why I consider this PR still a work in progress

There are two main reasons I'd consider this PR a WIP:

It's definitely possible to get the "best of both worlds" by adding an LRU cache to ReadLogs, but I don't currently see anything like that in rawdb, and I wouldn't want to introduce it without discussing it first. Note that doing so would also add memory usage: the LRU would be specialized to logs rather than receipts, so it could not be reused and would "compete" with the existing LRUs in BlockChain.
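As a rough sketch of what that could look like (everything here is hypothetical: the readLogsCached name, the capacity, and the exact rawdb.ReadLogs signature, which has shifted between releases; golang-lru is the cache package go-ethereum already uses):

```go
package logscache

import (
	lru "github.com/hashicorp/golang-lru"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethdb"
)

// Hypothetical cache mapping block hash -> [][]*types.Log. Because it holds
// logs rather than receipts, it cannot be shared with the existing
// receiptsCache and would compete with it for memory.
var logsCache, _ = lru.New(1024)

// readLogsCached is a made-up wrapper; nothing like it exists in rawdb.
// (rawdb.ReadLogs' exact signature has changed across releases.)
func readLogsCached(db ethdb.Reader, hash common.Hash, number uint64) [][]*types.Log {
	if cached, ok := logsCache.Get(hash); ok {
		return cached.([][]*types.Log)
	}
	logs := rawdb.ReadLogs(db, hash, number)
	if logs != nil {
		logsCache.Add(hash, logs)
	}
	return logs
}
```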

Basically, the balancing act is that the current master version has somewhat better performance for single RPCs, since it avoids some redundant decoding in the rawdb layer, but worse parallel performance, since the lack of caching means the same receipts/logs get read from the DB multiple times.

Anyway, cc @s1na, @fjl, and @holiman, since you three have reviewed (✅) all recent PRs on this subject.

@s1na
Contributor

s1na commented Jul 28, 2022

Good insight! I completely missed the cache (no pun intended) there. I'd prefer not to revert the culprit PR unless we have to, especially since there is a second one building on it: #25199.

The most straightforward solution that comes to mind is going through Blockchain as you've done and adding a cache for Logs objects there. A second option is adding a cache for encoded receipts, but that one would need a bit more restructuring. What do others think?
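A rough sketch of the first option, with all names (the logsCache field, the GetLogs shape) invented here rather than taken from any actual PR:

```go
// Sketch of a hypothetical method inside package core (core/blockchain.go).
// bc.logsCache is an invented *lru.Cache field, block hash -> [][]*types.Log.
func (bc *BlockChain) GetLogs(hash common.Hash, number uint64) [][]*types.Log {
	if logs, ok := bc.logsCache.Get(hash); ok {
		return logs.([][]*types.Log)
	}
	receipts := bc.GetReceiptsByHash(hash) // already served from receiptsCache
	if receipts == nil {
		return nil
	}
	logs := make([][]*types.Log, len(receipts))
	for i, receipt := range receipts {
		logs[i] = receipt.Logs
	}
	bc.logsCache.Add(hash, logs)
	return logs
}
```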

@ryanschneider
Contributor Author

> The most straightforward solution that comes to mind is going through Blockchain as you've done and adding a cache for Logs objects there.

I like this approach too; the only downside is that it adds another LRU cache, which increases overall memory usage.

Tangentially related: it'd be really nice to allow users to tweak these LRU sizes via command-line args. For example, a node dedicated largely to serving historical eth_getLogs queries might want to run with a very large logs cache (perhaps thousands of blocks or more).
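Something like the following (purely hypothetical; no such flag existed at the time, and the "cache.logs" name and default are made up), in the urfave/cli style geth uses for its flags:

```go
package main

import "gopkg.in/urfave/cli.v1"

// Hypothetical flag in the style of cmd/utils/flags.go; the name and
// default value are invented for illustration only.
var LogsCacheSizeFlag = cli.IntFlag{
	Name:  "cache.logs",
	Usage: "Number of recent blocks whose derived logs are kept in memory",
	Value: 32,
}
```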

@fjl
Copy link
Contributor

fjl commented Aug 2, 2022

We're going with the approach in #25459, and the cache size will be configurable via the command line as well.

fjl closed this Aug 2, 2022