Fix deadlock in Remove (linux/inotify)
Several people have reported an issue where, if you use a single goroutine to watch for fs events and you call Remove from that same goroutine, it can deadlock. The cause is that Remove was made synchronous by PR fsnotify#73, in order to ensure that the maps were no longer leaking: that PR used the IN_IGNORED event as the signal that map cleanup was complete. This worked fine when Remove() was called and the next event was IN_IGNORED, but when a different event arrived first, the goroutine that is supposed to be reading from the Events channel would be stuck waiting on the sync.Cond. The Cond would never be signaled, because the select would block waiting for someone to receive the non-IN_IGNORED event from the channel before it could process the IN_IGNORED event waiting in the queue. Deadlock :)

Simply removing the synchronization created two nasty races: Remove followed by Remove would error unnecessarily, and Remove followed by Add could result in the maps being cleaned up AFTER the Add call. In the latter case the inotify watch is active, but our maps no longer have the values, so it becomes impossible to delete the watch via the fsnotify code, since it checks its local data before calling InotifyRemove.

This code instead uses IN_DELETE_SELF to detect when a watch was deleted as part of an unlink(). That means the watch was not deleted via the fsnotify lib, and we should clean up our maps since that watch no longer exists. This allows us to clean up the maps immediately when Remove is called, since we no longer try to synchronize cleanup using IN_IGNORED as the sync point.

- Fix fsnotify#195
- Fix fsnotify#123
- Fix fsnotify#115
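The deadlock described above can be modeled with a minimal, self-contained sketch. The types and event strings here are hypothetical stand-ins, not fsnotify's real internals: a producer blocks sending a non-IN_IGNORED event on the unbuffered Events channel, while the only consumer is stuck inside a synchronous Remove waiting on a sync.Cond that only the IN_IGNORED handler would ever signal.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// watcher is a hypothetical model of the old synchronous Remove:
// Remove waits on a sync.Cond that is only signaled after an
// IN_IGNORED event has been processed.
type watcher struct {
	Events  chan string
	mu      sync.Mutex
	cond    *sync.Cond
	cleaned bool
}

func newWatcher() *watcher {
	w := &watcher{Events: make(chan string)}
	w.cond = sync.NewCond(&w.mu)
	return w
}

// run models the library's event delivery: a pending non-IN_IGNORED
// event is queued ahead of the IN_IGNORED that would unblock Remove.
func (w *watcher) run() {
	w.Events <- "WRITE" // blocks forever: the only consumer is stuck in Remove
	w.Events <- "IN_IGNORED"
	w.mu.Lock()
	w.cleaned = true
	w.cond.Broadcast()
	w.mu.Unlock()
}

// Remove models the synchronous version introduced by fsnotify#73.
func (w *watcher) Remove() {
	w.mu.Lock()
	for !w.cleaned {
		w.cond.Wait() // never signaled: IN_IGNORED can't be delivered
	}
	w.mu.Unlock()
}

// demo reproduces the single-goroutine pattern from the bug reports:
// the same goroutine that drains Events also calls Remove.
func demo() string {
	w := newWatcher()
	go w.run()

	finished := make(chan struct{})
	go func() {
		w.Remove() // blocks on the Cond, so Events is never drained
		for range w.Events {
		}
		close(finished)
	}()

	select {
	case <-finished:
		return "no deadlock"
	case <-time.After(200 * time.Millisecond):
		return "deadlock"
	}
}

func main() {
	fmt.Println(demo()) // prints "deadlock"
}
```

The fix in this commit sidesteps the Cond entirely: Remove cleans up the maps immediately, and IN_DELETE_SELF covers watches that disappear via unlink() outside the library.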