Is it possible to have a :until_executed lock with an expiration time? #524

Closed · ravicious opened this issue Jun 30, 2020 · 8 comments

ravicious commented Jun 30, 2020

As other people reported here, I have an issue where unexpectedly shutting down Sidekiq when a unique job is running results in the lock being left in Redis until I clean it manually. I'm using sidekiq-unique-jobs v6.0.22.

As the Redis docs say:

The simplest way to use Redis to lock a resource is to create a key in an instance. The key is usually created with a limited time to live, using the Redis expires feature, so that eventually it will get released (property 2 in our list). When the client needs to release the resource, it deletes the key.
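
To make that concrete, here is a minimal sketch of the pattern the Redis docs describe, using the redis-rb gem (the key name, token, and TTL are illustrative):

```ruby
require "redis"
require "securerandom"

redis = Redis.new
token = SecureRandom.hex(16)

# Acquire: create the key only if it doesn't already exist (NX), with a
# 10-minute TTL (EX) so a crashed process can't hold the lock forever.
if redis.set("lock:import:42", token, nx: true, ex: 600)
  begin
    # ... do the work while holding the lock ...
  ensure
    # Release: delete the key, but only if we still own it. (The Redis docs
    # recommend making this compare-and-delete atomic with a small Lua script.)
    redis.del("lock:import:42") if redis.get("lock:import:42") == token
  end
end
```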

And the Sidekiq wiki doubles down on that:

This means that a second job can be pushed to Redis after 10 minutes or after the first job has successfully processed. If your job retries for a while, 10 minutes can pass, thus allowing another copy of the same job to be pushed to Redis. Design your jobs so that uniqueness is considered best effort, not a 100% guarantee. A time limit is mandatory so that if a process crashes, any locks it is holding won't last forever.

So I'd like to have the lock get removed either when the worker finishes (like with :until_executed) or after a certain amount of time passes. In extreme cases, running the same job twice is acceptable for me and preferable to the system not being able to recover from a shutdown without manual intervention.

However, just from reading the README I cannot tell whether or how that's achievable with sidekiq-unique-jobs. I also read the issues mentioned in #362, but couldn't come up with anything. The Locking & Unlocking wiki page also suggests that it's impossible to achieve with sidekiq-unique-jobs.

ravicious (Author) commented

I suppose one workaround would be to use :until_expired and then delete the digest manually at the end of the perform method in my worker. However, I found that for some reason SidekiqUniqueJobs::Digests.all doesn't return digests created for workers with :until_expired (it does for :until_executed), so I have no way of knowing the digest from within the worker.
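
For illustration, here is a rough sketch of that workaround, assuming v6 option names (lock: :until_expired with lock_expiration) and a hypothetical worker; the cleanup step is exactly the part that doesn't work, because the digest isn't discoverable from inside the worker:

```ruby
require "sidekiq"
# sidekiq-unique-jobs is assumed to be loaded via Bundler.

class ImportWorker
  include Sidekiq::Worker

  # Let the lock fall off on its own after an hour, even if the process dies.
  sidekiq_options lock: :until_expired, lock_expiration: 60 * 60

  def perform(resource_id)
    # ... do the import ...
  ensure
    # Intended cleanup: find this job's digest and delete it, so a new job can
    # be enqueued immediately instead of waiting out the TTL. As noted above,
    # Digests.all doesn't include :until_expired digests in v6.0.22, so there
    # is nothing here to match against.
    SidekiqUniqueJobs::Digests.all
  end
end
```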

gchan commented Aug 12, 2020

I've also been thinking about that rare scenario where the process crashes and the lock lasts forever.

I think :until_executing would minimise the risk, as the time between when the job is picked up from the queue and when the lock is removed would be very small. If the process segfaults or crashes during execution, the lock has already been removed. The risk of locks being left in Redis due to abrupt process termination would be minimised, but not completely eliminated.

The trade-off is that :until_executing would allow a similar job to be queued while the same one is being executed, so that is something to consider if there are concerns about additional work being executed. Based on your previous comments, this option would not work for you, but I thought I would post my thoughts here so others could consider it as an option.
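
To make the trade-off concrete, a minimal sketch (the worker is hypothetical; lock: :until_executing is the sidekiq-unique-jobs lock type referred to above):

```ruby
require "sidekiq"
# sidekiq-unique-jobs is assumed to be loaded via Bundler.

class ReconcileWorker
  include Sidekiq::Worker

  # The lock is released as soon as the job is picked up off the queue, so a
  # crash mid-execution can't leave a stale lock behind...
  sidekiq_options lock: :until_executing

  def perform(account_id)
    # ...but a duplicate job can be enqueued while this one is still running.
  end
end
```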

summera commented Jan 14, 2021

@mhenrixon I am looking into utilizing this gem and am wondering the same thing. Currently I'm trying out v6.0.25 since v7 is still in pre-release. I do see that v7 has a "reaper" to deal with orphaned locks, but I also see there might be some issues with it (#559). Is there a simple way to just set a Redis TTL on the lock so that, in the worst case, it gets released after X amount of time?

mhenrixon (Owner) commented

@summera, someone did find a rare race condition there. Regardless of that, v7 is much more reliable. If you have a look at #553 (comment) and #553 (comment), and consider that Shopify uses the v7 beta, that should perhaps be a little more convincing as to why upgrading to v7 makes sense.

summera commented Jan 15, 2021

Hey @mhenrixon, thanks for the response 👍. Sounds good, I'll give v7 a try. Is there an option to set a Redis expiration on locks when using the until_executed lock in v7? I still think this could be useful, especially if you opt not to run the reaper.

mhenrixon (Owner) commented

@summera I'll revisit this topic before releasing V7.

It does seem like a good idea to be able to provide an expiration for all locks starting from when they are scheduled.

I think I already have a calculation that takes the time in the future into consideration, so I see no harm in that.

It might be an option to enforce expiration unless the reaper is running. First, though, I will have to fix that race condition somehow.

mhenrixon (Owner) commented

@summera @gchan @ravicious v7 supports expiration on all locks, meaning they will expire starting from the time the job is pushed to the queue. If they are scheduled in the future, they will expire after that timestamp.

Say the job is scheduled to be executed on the first of January 2022 at 10 am sharp: with lock_ttl: 1_000, the lock will expire 1000 seconds after that time.
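
A minimal sketch of that setup, assuming the v7 option names used in this thread (lock_ttl is in seconds; the worker and schedule are illustrative):

```ruby
require "sidekiq"
# sidekiq-unique-jobs is assumed to be loaded via Bundler.

class ReportWorker
  include Sidekiq::Worker

  # Lock until the job finishes, but never longer than 1000 seconds past the
  # (scheduled) execution time, so a crashed process can't hold it forever.
  sidekiq_options lock: :until_executed, lock_ttl: 1_000

  def perform(report_id)
    # ... generate the report ...
  end
end

# Scheduled for 10:00 on 1 January 2022: the lock expires 1000 seconds after
# that timestamp, not 1000 seconds after enqueue.
ReportWorker.perform_at(Time.new(2022, 1, 1, 10, 0, 0), 42)
```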

summera commented Jan 21, 2021

@mhenrixon thanks a lot, working on v7 👍 .
