
Periodic 100% disk I/O utilization on nodes running Emerald. #4577

Open
kernelpanic9 opened this issue Mar 18, 2022 · 3 comments
@kernelpanic9

SUMMARY

Experiencing periods of 100% disk I/O utilization on nodes running the Emerald ParaTime.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

Emerald ParaTime

OASIS NODE VERSION

Oasis Core 21.3.9
Emerald 7.1.0

OS / ENVIRONMENT
STEPS TO REPRODUCE
ACTUAL RESULTS

We are seeing periods of 100% disk I/O utilization on 2 nodes running the Emerald ParaTime.
I/O utilization spikes on both nodes at the same time.
Both nodes use NVMe drives in RAID 0.

The same issue was seen on Emerald version 6.x.

Nodes running the Cipher ParaTime do not exhibit the same behavior.

See screenshots below:

NODE 1:
[Screenshot: disk I/O utilization, 2022-03-18 00:12:52]

NODE 2:
[Screenshot: disk I/O utilization, 2022-03-18 00:16:57]

Please check if this might be causing transaction processing delays.

@tjanez
Member

tjanez commented Mar 22, 2022

Thanks for reporting, @kernelpanic9 !

AFAIK, the most I/O-intensive tasks are:

  • Checkpoints
    Generating a checkpoint traverses the whole state trie.
  • Compactions in BadgerDB
    These are very I/O-intensive and also use a lot of memory (a rough way to gauge the cost offline is sketched below).
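
Both of these can be exercised outside the node if you want to time them in isolation. Below is a minimal Go sketch, assuming BadgerDB v3 (which recent oasis-core releases use); the database path and discard ratio are placeholders, and this is not the node's actual maintenance procedure. Run it only against a *copy* of the runtime database:

```go
package main

import (
	"fmt"
	"log"

	badger "github.com/dgraph-io/badger/v3"
)

func main() {
	// Placeholder path; point it at a copy of the runtime's BadgerDB
	// directory, never at a live node's database.
	opts := badger.DefaultOptions("/tmp/emerald-db-copy").
		WithNumCompactors(2) // fewer compactors -> less I/O concurrency

	db, err := badger.Open(opts)
	if err != nil {
		log.Fatalf("open: %v", err)
	}
	defer db.Close()

	// Flatten merges all LSM levels into one; this is roughly the
	// worst-case compaction work, so timing it gives an upper bound.
	if err := db.Flatten(2); err != nil {
		log.Fatalf("flatten: %v", err)
	}

	// Value-log GC rewrites log files whose discardable fraction exceeds
	// the given ratio; loop until there is nothing left to collect
	// (RunValueLogGC returns badger.ErrNoRewrite when done).
	for {
		if err := db.RunValueLogGC(0.5); err != nil {
			break
		}
	}
	fmt.Println("offline compaction pass complete")
}
```

Timing such a pass on a copy gives a rough upper bound on what a background compaction cycle costs the live node, without risking its data.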

@kernelpanic9
Author

I would guess that the extended periods (1-2 hours) of 100% I/O utilization are compaction events, which happen every day.
Do you have a network-wide graph of average Emerald block time, so we can check whether slow block production correlates with compaction events happening on multiple nodes simultaneously? They tend to happen at about the same time on both of our Emerald nodes.
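
In the meantime, the node-side half of that correlation can be captured by logging per-device busy time from /proc/diskstats and matching the timestamps against a block-time graph. A small self-contained Go sketch (the device name and sampling interval are placeholders):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// ioTicks returns the cumulative milliseconds the device has spent doing
// I/O (field 13 of /proc/diskstats).
func ioTicks(dev string) (uint64, error) {
	f, err := os.Open("/proc/diskstats")
	if err != nil {
		return 0, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 13 && fields[2] == dev {
			return strconv.ParseUint(fields[12], 10, 64)
		}
	}
	return 0, fmt.Errorf("device %q not found", dev)
}

func main() {
	const dev = "nvme0n1"            // placeholder: one of your RAID member devices
	const interval = 5 * time.Second // sampling period

	prev, err := ioTicks(dev)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for {
		time.Sleep(interval)
		cur, err := ioTicks(dev)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// %util over the interval, same idea as iostat's %util column.
		util := 100 * float64(cur-prev) / float64(interval.Milliseconds())
		fmt.Printf("%s %s util=%.1f%%\n", time.Now().Format(time.RFC3339), dev, util)
		prev = cur
	}
}
```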

@kernelpanic9
Author

Another question: is pruning enabled by default for the Emerald ParaTime DB?
We see that the duration of the 100% I/O utilization period grows considerably with the size of the DB.
Currently, our 76 GB Emerald DB takes ~2.5 hours to compact.

[Screenshot, 2022-03-24 16:08:42]
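
For reference, oasis-core has a per-runtime history pruner; a hedged sketch of what enabling it might look like in the node's YAML config is below. The keys (runtime.history.pruner.*) and values are assumptions based on oasis-core's pruner flags, so verify them against the docs for your oasis-core version, and note that this prunes runtime block history rather than the whole storage trie:

```yaml
# Hypothetical pruning stanza; verify key names against your
# oasis-core version's documentation before enabling.
runtime:
  history:
    pruner:
      strategy: keep_last # default is "none" (keep everything), AFAIK
      interval: 2m        # how often the pruner runs
      num_kept: 10000     # number of recent runtime blocks to retain
```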
