fix: relax reserve eviction during GC #3566
Conversation
👍
```go
	target = db.reserveCapacity
} else {
	reserveSizeStart, err = db.reserveSize.Get()
}
if err != nil {
```
I'd recommend moving the error check into the `else` block as well.
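Concretely, the suggested restructuring might look like the sketch below. The `counter` type, `evictTarget` function, and `useCapacity` flag are illustrative stand-ins for the surrounding localstore code, not names from this PR; the point is only the control flow, where the error check lives inside the `else` branch so the path that assigns from `reserveCapacity` never tests an error it cannot have produced:

```go
package main

import (
	"errors"
	"fmt"
)

// counter is a hypothetical stand-in for db.reserveSize.
type counter struct{ v uint64 }

func (c *counter) Get() (uint64, error) {
	if c == nil {
		return 0, errors.New("reserve size unavailable")
	}
	return c.v, nil
}

// evictTarget sketches the reviewer's suggestion: the error check is
// moved into the else block, next to the only call that can fail.
func evictTarget(useCapacity bool, reserveCapacity uint64, reserveSize *counter) (uint64, uint64, error) {
	var (
		target           uint64
		reserveSizeStart uint64
	)
	if useCapacity {
		target = reserveCapacity
	} else {
		var err error
		reserveSizeStart, err = reserveSize.Get()
		if err != nil { // error check now scoped to the else block
			return 0, 0, err
		}
	}
	return target, reserveSizeStart, nil
}

func main() {
	target, start, err := evictTarget(true, 100, &counter{v: 42})
	fmt.Println(target, start, err)
}
```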
```go
if err != nil {
	return 0, false, err
}
db.logger.Debug("gc: reserve eviction", "reserveSizeStart", reserveSizeStart, "target", target)
```
Using snake_case instead of camelCase for log keys is preferable.
Description
The localstore reserve size calculation is incorrect; for storage incentives, we manually compute the reserve size from the syncing index.
In the private testnet we observed that reserve eviction was causing inconsistencies in the generated samples. During GC, we previously evicted chunks from the reserve until it reached 90% capacity and moved them to the cache. Since cached chunks are no longer considered in the sample calculation, we have relaxed the eviction threshold to 100% capacity: we evict only when the reserve actually exceeds its capacity.
Also, since the storage radius is now dynamic, we need to calculate the reserve size using the current radius during eviction. This may require the node to store somewhat more data, but eventually all expiring batches should be cleaned up.
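The relaxed policy described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: `evictionAmount`, `reserveSizeAtRadius`, and the proximity-order slice are assumed names and shapes chosen only to show the two ideas, evicting only above 100% capacity, and sizing the reserve by the current radius:

```go
package main

import "fmt"

// evictionAmount returns how many chunks to evict. Under the relaxed
// policy nothing is evicted until the reserve exceeds its capacity,
// instead of evicting down to 90% as before.
func evictionAmount(reserveSize, reserveCapacity uint64) uint64 {
	if reserveSize <= reserveCapacity {
		return 0 // at or under capacity: no eviction
	}
	return reserveSize - reserveCapacity
}

// reserveSizeAtRadius counts only chunks whose proximity order is at or
// above the current storage radius, reflecting that the radius is now
// dynamic and the reserve size must be recomputed against it.
func reserveSizeAtRadius(chunkPO []uint8, radius uint8) uint64 {
	var n uint64
	for _, po := range chunkPO {
		if po >= radius {
			n++
		}
	}
	return n
}

func main() {
	fmt.Println(evictionAmount(95, 100))  // 0: old policy would have evicted down to 90
	fmt.Println(evictionAmount(120, 100)) // 20: only the overflow is evicted
	fmt.Println(reserveSizeAtRadius([]uint8{1, 2, 3, 4}, 3))
}
```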