
db: periodically sync the data directory during file deletions #3025

Open
jbowens opened this issue Oct 30, 2023 · 4 comments · May be fixed by #3122
Comments

@jbowens
Collaborator

jbowens commented Oct 30, 2023

We pace file deletions to avoid spikes in IO. However, the data directory is never synced after these deletions, so the IO is not necessarily scheduled yet. This could theoretically allow a buildup of unsynced file deletions, and when the data directory is finally synced, IO may spike.

Noticed during the investigation of cockroachlabs/support#2673. It's unlikely to be the root cause of that particular issue.

@jbowens jbowens added this to Incoming in Storage via automation Oct 30, 2023
@nicktrav nicktrav moved this from Incoming to Backlog in Storage Oct 31, 2023
@sidntrivedi012

Hey @jbowens, I would like to contribute and fix this. Can you share some more context related to the issue? Thanks 🙌

@jbowens
Collaborator Author

jbowens commented Nov 3, 2023

@sidntrivedi012 — Sure! File deletions are performed by a cleanupManager goroutine that paces itself to smooth deletions out over time:

pebble/cleaner.go

Lines 149 to 172 in 844f058

```go
func (cm *cleanupManager) mainLoop() {
	defer cm.waitGroup.Done()
	var tb tokenbucket.TokenBucket
	// Use a token bucket with 1 token / second refill rate and 1 token burst.
	tb.Init(1.0, 1.0)
	for job := range cm.jobsCh {
		for _, of := range job.obsoleteFiles {
			if of.fileType != fileTypeTable {
				path := base.MakeFilepath(cm.opts.FS, of.dir, of.fileType, of.fileNum)
				cm.deleteObsoleteFile(of.fileType, job.jobID, path, of.fileNum, of.fileSize)
			} else {
				cm.maybePace(&tb, of.fileType, of.fileNum, of.fileSize)
				cm.onTableDeleteFn(of.fileSize)
				cm.deleteObsoleteObject(fileTypeTable, job.jobID, of.fileNum)
			}
		}
		cm.mu.Lock()
		cm.mu.completedJobs++
		cm.mu.completedJobsCond.Broadcast()
		cm.maybeLogLocked()
		cm.mu.Unlock()
	}
}
```

Deletions are performed by calling (objstorage.Provider).Remove.

I think we want to edit the main loop to call (objstorage.Provider).Sync after removing a file whenever the total size of the files deleted since the last call to Sync exceeds some threshold, say 64 MB.

@sidntrivedi012

@jbowens Got it. Thanks for the info. Hoping to get the PR up for it asap.

darshanime added a commit to darshanime/pebble that referenced this issue Dec 4, 2023
@darshanime darshanime linked a pull request Dec 4, 2023 that will close this issue
@darshanime

@jbowens, @sidntrivedi012 I have taken the liberty of raising a PR based on the discussion above — please review! 🚀
