
feat(pruner/light): implement light pruning #3388

Merged · merged 9 commits into main from hlib/light-prune on May 28, 2024
Conversation

@Wondertan Wondertan (Member) commented May 10, 2024

Implements light pruning before the Shwap era.

It recursively traverses the tree and deletes the NMT nodes as it goes.

Pruning the whole history takes more than 24 hours.
Pruning recent heights (every 5 mins) takes ~0.5s.
Historical pruning reduced disk usage from ~62GB to ~38GB.
RAM usage during active pruning is stable at ~70MB.
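
For illustration, here is a minimal sketch of the recursive traverse-and-delete approach described above, written against the go-ipld-format DAGService interface. The function name, error handling, and package layout are assumptions rather than the actual pruner/light or share/ipld code:

```go
package prune

import (
	"context"

	"github.com/ipfs/go-cid"
	format "github.com/ipfs/go-ipld-format"
)

// deleteSubtree is an illustrative sketch: it walks the NMT/IPLD subtree rooted
// at `root` depth-first and removes each node after its children are gone.
func deleteSubtree(ctx context.Context, dag format.DAGService, root cid.Cid) error {
	nd, err := dag.Get(ctx, root)
	if err != nil {
		if format.IsNotFound(err) {
			// already pruned or never stored (light nodes only hold sampled branches)
			return nil
		}
		return err
	}
	// recurse into children first so no dangling links are left behind
	for _, lnk := range nd.Links() {
		if err := deleteSubtree(ctx, dag, lnk.Cid); err != nil {
			return err
		}
	}
	return dag.Remove(ctx, root)
}
```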

@Wondertan Wondertan added the kind:feat Attached to feature PRs label May 10, 2024
@Wondertan Wondertan self-assigned this May 10, 2024
@Wondertan Wondertan (Member, Author)

Accidentally included unrelated changes from make fmt

ramin previously approved these changes May 10, 2024
@ramin ramin (Collaborator) left a comment


super clear and simple

@walldiss walldiss (Member) left a comment


Need to fix a few things and it should be g2g.

share/ipld/delete_test.go (resolved)
pruner/light/pruner.go (resolved)
pruner/light/pruner.go (resolved)
share/ipld/delete.go (resolved)
share/ipld/delete_test.go (resolved)
pruner/light/pruner.go (outdated, resolved)
share/ipld/delete_test.go (resolved)
@walldiss walldiss (Member) commented May 13, 2024

An extra thing to consider: the pruner service appears to have been enabled for Light nodes in previous releases. It was a no-op, but it still iterated over headers, so a node can start with a pruner checkpoint whose height != 1. If that's the case, the pruner might skip some old samples, so the pruner checkpoint needs to be reset for LNs.

@renaynay renaynay added the v0.15.0 Intended for v0.15.0 release label May 13, 2024
@Wondertan Wondertan added v0.14.0 Intended for v0.14.0 release and removed v0.15.0 Intended for v0.15.0 release labels May 13, 2024
walldiss previously approved these changes May 14, 2024
@Wondertan Wondertan (Member, Author)

Actually, the Pruner has never been released, so the above is not a problem.

@renaynay renaynay (Member) left a comment


Looks good besides the fx provide.

It would also be good to have some profiles in the PR description showing what the LN looks like during a pruning job.

pruner/light/pruner.go (outdated, resolved)
pruner/light/pruner.go (resolved)
@renaynay renaynay (Member)

The reason I ask for profiles of the pruning job is that I'm wondering whether we should adjust the config values for how often the pruner runs on LNs.

renaynay previously approved these changes May 15, 2024
pruner/light/pruner.go (resolved)
pruner/service.go (outdated, resolved)
walldiss previously approved these changes May 16, 2024
@Wondertan Wondertan (Member, Author) commented May 16, 2024

After looking at my running node again, I found it in a restart-panic loop. It reached the file descriptor limit and couldn't start. I don't know why it crashed in the first place, but it was likely the same limit issue. Increasing the limit resolved the issue but introduced another one: we are now panicking on the DeleteNode code path, particularly here. Apparently, the nmtNode read from disk does not match any of the constant sizes we define, which suggests some form of DB corruption.
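
For context, the panic described above boils down to a size check on the raw blob read from the datastore. The sketch below is hypothetical, and its constants are illustrative assumptions, not celestia-node's actual DeleteNode code or values:

```go
package prune

import (
	"crypto/sha256"
	"fmt"
)

// Illustrative constants; assumed values, not necessarily celestia-node's real ones.
const (
	namespaceSize = 29  // namespace version byte + namespace ID
	shareSize     = 512 // raw share size in bytes

	nmtHashSize   = 2*namespaceSize + sha256.Size // min/max namespace prefix + digest
	innerNodeSize = 2 * nmtHashSize               // left child hash + right child hash
	leafNodeSize  = namespaceSize + shareSize     // namespace prefix + raw share data
)

// nodeKind classifies a raw blob read from the datastore by its length;
// any other length means the on-disk data is not a well-formed NMT node,
// i.e. some form of datastore corruption.
func nodeKind(data []byte) (string, error) {
	switch len(data) {
	case innerNodeSize:
		return "inner", nil
	case leafNodeSize:
		return "leaf", nil
	default:
		return "", fmt.Errorf("nmt node has unexpected size %d", len(data))
	}
}
```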

@Wondertan Wondertan (Member, Author)

I couldn't recover logs on why the node crashed initially, so I must rerun the test and resync the node. I will give it more file descriptors this time and see how it behaves. I suspect the number of files grew due to Badger's nature: every deletion in an LSM tree is itself a write until compaction happens, which cleans up both the original write and the deletion. The question is why that compaction didn't clean things up in time.
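
As an aside, if deleted entries really are waiting on compaction, Badger only frees disk space once its value log is garbage collected. A minimal sketch of forcing that, assuming direct access to the underlying *badger.DB (celestia-node normally wraps it behind a go-datastore interface):

```go
package prune

import (
	"errors"

	badger "github.com/dgraph-io/badger/v4"
)

// reclaimSpace repeatedly runs Badger's value log GC until there is nothing
// left to rewrite. Deletions only free disk space once the value log files
// containing the deleted entries are rewritten.
func reclaimSpace(db *badger.DB) error {
	for {
		err := db.RunValueLogGC(0.5) // rewrite files that are >=50% garbage
		if errors.Is(err, badger.ErrNoRewrite) {
			return nil // nothing more to reclaim
		}
		if err != nil {
			return err
		}
	}
}
```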

@Wondertan Wondertan (Member, Author)

35748692 .celestia-light/data ~35GB

LN sampling the whole chain on mainnet

Next, I will measure the pruned one.

@Wondertan Wondertan (Member, Author)

So far, it looks good, but the resource usage is concerning. My 4 cores are constantly at 60-80% utilization, which is more or less fine. The other thing is that RAM reached almost 4GB, and I don't know what to do about that yet.

@Wondertan Wondertan (Member, Author) commented May 17, 2024

Ok, the new pruning round finished in 88.5 minutes, but for some reason the datastore size is almost the same as before pruning, just a tiny bit less. During the pruning process, it went down to 34GB once and back up to 35 again. I am confused; why is there no reduction?

35726268 .celestia-light/data ~35GB

@Wondertan Wondertan (Member, Author) commented May 18, 2024

I resynced and repruned, and the disk size does not go down even though the deletion code path is clearly executed (verified through profiling and seeing Badger's delete op taking RAM). I don't get it. The only good news is that the high RAM usage issue I mentioned before is not an issue. It's the same confusion we had with Astria, where the process grabbed too much memory yet kept it available for reclamation by the OS.

At this point, I don't know why data is not cleaned up or what to do next. Maybe it's time to look into Badger's internals again...

@Wondertan Wondertan (Member, Author) commented May 20, 2024

Okay, now I know why: I've never actually synced the chain. Changing the availability to an archival one isn't enough (cc @renaynay), or there is some bug that doesn't propagate the archival window properly.

EDIT: OK..., I found out that I had only been changing this line, but not this one. It's so confusing to have a million ways to construct the modules.

@Wondertan Wondertan (Member, Author) commented May 21, 2024

The sampling is still in progress after 24 hours with ~920000 heights sampled, and the store size is ~47GB. I want it to finish before starting pruning.

@Wondertan Wondertan (Member, Author) commented May 23, 2024

Sampling is done.
~62GB, with height 744063 failing availability (edit: a restart fixed that, smh).

@Wondertan Wondertan (Member, Author)

Since the last comment, the pruning has been running, and it still is. It's extremely slow, and I wonder if we should parallelize it. It would definitely help because pruning is not bottlenecked by any resource.

@Wondertan Wondertan (Member, Author) commented May 27, 2024

The pruning has finished! I can't find the exact time the whole process took, as I had to restart the node several times and it does not log the elapsed time on node stop. I don't want to rerun it just to know the exact time, but what's clear is that it took more than 24 hours. The small (every 5 minutes) pruning rounds take ~0.5 seconds. The pruned light node went from ~62GB down to ~38GB.

Thinking more about parallelization to speed up historical pruning. I don't think it's worth it atm, even though I have a little urge to implement it (a rough sketch of what it could look like follows the list):

  • Implementing it in light.Pruner would mean abandoning that work soon, as Shwap is around the corner.
  • Proper parallelization inside pruner.Service would take a lot of time and delay the pruning launch.
    • Ideally, we would piggy-back on DASer workers to do that, but merging the pruner service and the DASer is a different discussion.
  • In general, any implementation work may further delay things, while we could deliver a single-threaded version already and add parallelization once slow pruning becomes a real issue.
  • Pruning of recent shares is fast enough, and that is what matters most long-term. Historical pruning only matters right after the release.
  • It would take more resources from the light node. Looking at the profiles, the current pruning process takes a stable ~70MB when active, and parallelization would likely multiply this by the concurrency factor, which is not good for the LN.
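
For reference, the kind of worker-pool parallelization weighed in the list above could look roughly like the sketch below. It is purely illustrative and was not implemented in this PR; pruneHeight and the worker count are assumed placeholders:

```go
package prune

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// pruneRange fans heights out to a fixed number of workers. It illustrates the
// trade-off discussed above: throughput scales with `workers`, but so does the
// memory held by in-flight deletions.
func pruneRange(ctx context.Context, from, to uint64, workers int,
	pruneHeight func(context.Context, uint64) error, // assumed per-height pruning callback
) error {
	heights := make(chan uint64)
	g, ctx := errgroup.WithContext(ctx)

	// producer: feed heights to the workers
	g.Go(func() error {
		defer close(heights)
		for h := from; h <= to; h++ {
			select {
			case heights <- h:
			case <-ctx.Done():
				return ctx.Err()
			}
		}
		return nil
	})

	// workers: prune heights concurrently
	for i := 0; i < workers; i++ {
		g.Go(func() error {
			for h := range heights {
				if err := pruneHeight(ctx, h); err != nil {
					return err
				}
			}
			return nil
		})
	}
	return g.Wait()
}
```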

@Wondertan Wondertan enabled auto-merge (squash) May 27, 2024 13:28
@walldiss walldiss (Member)

A pruning time of ~0.5 seconds is totally fine and sufficient. The pruner will free up space much faster than new samples are created, which is the most important thing. Pruning months of data is a once-in-a-lifetime thing, and it is fine if it takes even days to prune everything for a long-running Light node.

@Wondertan Wondertan merged commit 541844d into main May 28, 2024
30 checks passed
@Wondertan Wondertan deleted the hlib/light-prune branch May 28, 2024 15:19
Labels
kind:feat Attached to feature PRs v0.14.0 Intended for v0.14.0 release