
Fix race condition in execOnMany #717

Closed
wants to merge 3 commits into from

Conversation

@gammazero (Contributor) commented May 19, 2021

The execOnMany function was exiting prematurely on error, leaving its child goroutines running. These goroutines would then write to a channel that findProvidersAsyncRoutine closed after execOnMany returned. Fixed the counting error so that all goroutines finish before execOnMany exits.

This fixes ipfs/kubo#8146

Review comment on fullrt/dht.go (outdated, resolved)
@aschmahmann mentioned this pull request May 25, 2021
case <-waitSuccessCh:
    if numSuccess >= numSuccessfulToWaitFor {
        if !t.Stop() {
            <-t.C
Contributor:

This will block the first time it's called: the timer is already stopped, so t.Stop() returns false, but the channel has already been emptied out, so <-t.C blocks forever.

Done this way, we need a boolean to track whether we have reset the timer for the first time. This follows roughly the same pattern as the pause-detection timer in this function: https://github.com/ipfs/go-ipfs-provider/blob/d391dae4a595473f6797eb5d5b803a529a7bbdbc/batched/system.go#L130

Contributor:

nm, looks like we're discussing this in #719

@gammazero (Contributor, Author):

This will be handled in #719

@gammazero gammazero closed this May 27, 2021
Development

Successfully merging this pull request may close these issues.

0.9.0-rc1 crashes with "panic: send on closed channel"
2 participants