
Explicitly signal that we handled an exception with a retry, fixes #4138 (#4141)

Merged 2 commits on Apr 12, 2019

Commits on Apr 5, 2019

  1. Explicitly signal that we handled an exception with a retry, fixes #4138

    
    
    Under just the right conditions, we could lose a job:
    
    - Job raises an error
    - Retry subsystem catches error and tries to create a retry in Redis but this raises a "Redis down" exception
    - Processor catches Redis exception and thinks a retry was created
    - Redis comes back online just in time for the job to be acknowledged and lost
    
    That's a very specific and rare set of steps but it can happen.
    
Instead, have the Retry subsystem raise a specific error signaling that it created a retry. There will be three common cases:
    
    1. Job is successful: job is acknowledged.
    2. Job fails, retry is created, Processor rescues specific error: job is acknowledged.
    3. Sidekiq::Shutdown is raised: job is not acknowledged.
    
    Now there is another case:
    
    4. Job fails, retry fails, Processor rescues Exception: job is NOT acknowledged. Sidekiq Pro's super_fetch will rescue the orphaned job at some point in the future.
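
    The four cases above can be sketched with a minimal, hypothetical Ruby model of the pattern. The class names here (Handled, RetrySubsystem, Processor) are illustrative stand-ins, not necessarily Sidekiq's actual class names, and the Redis write is simulated with a flag:

    ```ruby
    # Signals "a retry record was successfully persisted".
    class Handled < StandardError; end

    class RetrySubsystem
      def initialize(redis_up: true)
        @redis_up = redis_up
      end

      # Run the job; on failure, persist a retry and raise Handled as an
      # explicit signal. If persisting fails (Redis down), that exception
      # escapes instead of silently implying a retry exists.
      def with_retry
        yield
      rescue => e
        persist_retry(e)  # may itself raise if Redis is down
        raise Handled     # explicit signal: retry was created
      end

      private

      def persist_retry(err)
        raise "Redis down" unless @redis_up
        # (write the retry record to Redis here)
      end
    end

    class Processor
      attr_reader :acked

      def initialize(subsystem)
        @subsystem = subsystem
        @acked = false
      end

      def process(&job)
        @subsystem.with_retry(&job)
        @acked = true   # case 1: job succeeded, acknowledge
      rescue Handled
        @acked = true   # case 2: retry persisted, safe to acknowledge
      rescue Exception
        @acked = false  # case 4: retry NOT persisted, leave job unacked
      end
    end
    ```

    The key design point is that rescuing the narrow Handled error, rather than any Exception, means the processor only acknowledges a failed job when the retry write is known to have succeeded.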
    mperham committed Apr 5, 2019 (commit 72a6461)

Commits on Apr 9, 2019

  1. changes

    mperham committed Apr 9, 2019 (commit dea5c8e)