Locks not being released after Heroku dyno restart #691

Open

schwern opened this issue Mar 12, 2022 · 1 comment

schwern commented Mar 12, 2022

Describe the bug
I have some workers that are meant to run continuously. I'm using Heroku, which restarts its dynos every 24 hours. I've noticed that after a restart there are locks for JIDs that are no longer running. The reaper eventually clears them, but I'd expect the locks to be cleared when the workers are killed by the restart.

Expected behavior
When a restart kills running workers, their locks should be released.

Current behavior
The locks remain for JIDs which no longer exist.
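
One way to observe this (a sketch using plain Redis commands; `uniquejobs` is the gem's default key prefix, and comparing against running JIDs is left out for brevity):

```ruby
# List leftover lock keys after a dyno restart; the JIDs they reference
# can be compared against Sidekiq's currently running jobs.
Sidekiq.redis do |conn|
  conn.scan_each(match: "uniquejobs:*") { |key| puts key }
end
```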

Worker class
This module is included in workers that are meant to run continuously. Their `perform` is an infinite loop that either fetches from an API and pushes the results to a Redis queue, or does blocking pops from a Redis queue.

Stoppable just provides a Redis flag to tell them to exit the loop (a hypothetical sketch appears after the module below).

```ruby
module DaemonWorker
  # Extend Concern before including other concerns so Concern can manage
  # Stoppable as a dependency for the classes that include DaemonWorker.
  extend ActiveSupport::Concern
  include Stoppable

  @classes = []

  # @return [Array] Classes which include DaemonWorker
  def self.classes
    @classes
  end

  included do
    include Sidekiq::Worker

    # Track every class that includes this concern.
    DaemonWorker.classes << self

    sidekiq_options(
      lock: :until_and_while_executing,
      on_conflict: { client: :log, server: :reject }
    )
  end
end
```
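
The issue doesn't show Stoppable itself; here is a minimal sketch of what such a concern might look like, assuming a per-class Redis flag (the key naming and the `FeedPoller` example are hypothetical):

```ruby
# Hypothetical sketch of Stoppable: a Redis-backed flag that lets a
# long-running perform loop be asked to exit cleanly.
module Stoppable
  extend ActiveSupport::Concern

  # Set the flag; the worker's loop checks it on each iteration.
  def stop!
    Sidekiq.redis { |r| r.set("stoppable:#{self.class.name}", "1") }
  end

  # True once stop! has been called for this worker class.
  def stopped?
    Sidekiq.redis { |r| r.exists?("stoppable:#{self.class.name}") }
  end
end

# A daemon worker then loops until told to stop:
class FeedPoller
  include DaemonWorker

  def perform
    until stopped?
      # fetch from the API and push results onto a Redis queue
    end
  end
end
```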

Additional context

Using Redis 6.2.6 provided by redisgreen.net.

```ruby
# config/initializers/sidekiq.rb
require "sidekiq/throttled"
Sidekiq::Throttled.setup!

SidekiqUniqueJobs.config.lock_info = true

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add(Sidekiq::Middleware::Server::RetryCount)
    chain.add(SidekiqUniqueJobs::Middleware::Server)
  end

  config.client_middleware do |chain|
    chain.add(SidekiqUniqueJobs::Middleware::Client)
  end

  # Clean up locks when a job dies.
  # Not needed in Sidekiq 7.
  # See https://github.com/mhenrixon/sidekiq-unique-jobs#3-cleanup-dead-locks
  config.death_handlers << ->(job, _ex) do
    digest = job['lock_digest']
    SidekiqUniqueJobs::Digests.new.delete_by_digest(digest) if digest
  end

  SidekiqUniqueJobs::Server.configure(config)
end

Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.add(SidekiqUniqueJobs::Middleware::Client)
  end
end
```

Relevant entries from Gemfile.lock:

```
    rails (6.1.4.7)

    sidekiq (6.4.1)
      connection_pool (>= 2.2.2)
      rack (~> 2.0)
      redis (>= 4.2.0)
    sidekiq-throttled (0.15.0)
      concurrent-ruby
      redis-prescription
      sidekiq
    sidekiq-unique-jobs (7.1.12)
      brpoplpush-redis_script (> 0.1.1, <= 2.0.0)
      concurrent-ruby (~> 1.0, >= 1.0.5)
      sidekiq (>= 5.0, < 8.0)
      thor (>= 0.20, < 3.0)

RUBY VERSION
   ruby 2.7.4p191
```
mhenrixon (Owner) commented
Unfortunately, that isn't a problem I can solve directly. That's why I added the orphan reaper to help mitigate it.

The locks should stay unless Sidekiq fails to put the job back on the queue. Otherwise, you'd end up with duplicates, rendering the gem completely useless.

The issue in this situation is the while-executing part of your lock, which could sometimes delay the processing of the job slightly, but not indefinitely. I am sure there are some edge cases that I've missed here.

There are also quite a few improvements in the later versions of the gem that might interest you, especially if you are going to upgrade to Sidekiq 7.
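
For reference, the orphan reaper mentioned above is configurable; a sketch using the v7 configuration options (the values shown are illustrative, not recommendations):

```ruby
SidekiqUniqueJobs.configure do |config|
  config.reaper          = :ruby  # or :lua; nil disables the reaper
  config.reaper_count    = 1_000  # stop reaping after this many keys per run
  config.reaper_interval = 600    # seconds between reaper runs
  config.reaper_timeout  = 10     # seconds before a reaper run is aborted
end
```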
