
Handle rate limiting #1114

Open
kylefox opened this issue Aug 16, 2022 · 4 comments

Comments

@kylefox

kylefox commented Aug 16, 2022

Is your feature request related to a problem? Please describe.

It's difficult to gracefully handle Stripe's rate limits when many different places within an application make requests to the API.

This is particularly a problem for Stripe Connect applications (or Stripe Apps), where an unexpected surge in volume from a connected account can cause a flurry of API requests from the application.

Describe the solution you'd like

I'm not sure whether it's technically feasible, but it would be amazing if the Stripe Ruby client could manage rate limit errors internally and automatically.

I suspect the biggest challenge/barrier would be the delay introduced by exponentially backing off retries, which blocks the calling thread. An async approach could work, but that might make this library's API too complex.
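For illustration, the kind of blocking backoff described above could be sketched as a small wrapper around any API call. This is an assumption about how it might look, not stripe-ruby's API; `RateLimitError` here is a local stand-in for `Stripe::RateLimitError`:

```ruby
# Hypothetical sketch: retry a block with exponential backoff and jitter.
# RateLimitError stands in for Stripe::RateLimitError (assumption).
class RateLimitError < StandardError; end

def with_backoff(max_retries: 5, base_delay: 0.5, error_class: RateLimitError)
  attempts = 0
  begin
    yield
  rescue error_class
    attempts += 1
    raise if attempts > max_retries
    # Exponential backoff: base_delay, 2x, 4x, ... plus a little jitter.
    sleep(base_delay * (2**(attempts - 1)) + rand * 0.1)
    retry
  end
end
```

A caller would then write something like `with_backoff { Stripe::Customer.retrieve("cus_123") }` — and, as noted above, the `sleep` is exactly the blocking behaviour that makes this awkward to bake into the client.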

Describe alternatives you've considered

We have investigated token buckets and using Sidekiq to queue/throttle API requests, but both of those solutions are non-trivial for an application like ours (a Stripe App / Connect extension) that has a large surface area with the Stripe API.
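To make the token-bucket alternative concrete, a minimal sketch (not a production implementation) might look like this: the bucket refills `rate` tokens per second up to `capacity`, and `acquire` blocks until a token is available. Every Stripe call would first call `acquire`:

```ruby
# Minimal token bucket sketch for client-side throttling (assumption:
# all Stripe calls in the app share one bucket instance).
class TokenBucket
  def initialize(rate:, capacity:)
    @rate = rate.to_f
    @capacity = capacity.to_f
    @tokens = @capacity
    @last = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    @mutex = Mutex.new
  end

  # Blocks until a token is available, then consumes it.
  def acquire
    loop do
      @mutex.synchronize do
        refill
        if @tokens >= 1
          @tokens -= 1
          return true
        end
      end
      sleep(1.0 / @rate)
    end
  end

  private

  # Add tokens proportional to elapsed time, capped at capacity.
  def refill
    now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    @tokens = [@capacity, @tokens + (now - @last) * @rate].min
    @last = now
  end
end
```

The non-trivial part alluded to above is wiring this in front of every call site (and across processes), which is why it's a big lift for an app with a large API surface area.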

Additional context

No response

@remi-stripe
Contributor

@kylefox Thanks for the feature request, we definitely understand how painful this can sometimes be for applications. Part of the problem is that designing a reliable retry system that fits everyone's needs is not simple, and a one-size-fits-all approach is often too crude for most applications.
In a lot of cases, when you have a spike, you do want to be aware of it and handle it in a specific way, for example by blocking a majority of your reads (Retrieve/List calls) to let your writes (Create/Update/Delete calls) go through in priority. But it's not that simple, as some integrations do a read before a write.

Overall, it's definitely something we would like to improve, so we'll see if more developers chime in about this need so that we can prioritize it in the future.

@stripe stripe deleted a comment from jackote14 Feb 21, 2023
@richardm-stripe
Contributor

I think stripe-ruby should at least obey the standard HTTP Retry-After header. Stripe servers don't send this header, presently, but it would be good to have the option.
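If stripe-ruby did obey `Retry-After`, the parsing side could be as simple as the sketch below. The helper name and the plain-`Hash` headers argument are assumptions for illustration, not the library's API; this handles only the delay-in-seconds form of the header, not the HTTP-date form:

```ruby
# Sketch: compute a retry delay from a Retry-After response header.
# Falls back to `default` when the header is absent or not an integer.
def retry_delay(headers, default: 1.0)
  value = headers["Retry-After"] || headers["retry-after"]
  return default unless value
  seconds = Integer(value, exception: false)
  seconds ? seconds.to_f : default
end
```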

@BubonicPestilence

A suggestion: auto_paging_each doesn't provide any way to add a "sleep" between pages :) which can, in theory, hit the rate limit.
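In the meantime, throttling can be done by hand with a small wrapper. `throttled_each` is a hypothetical helper, not part of stripe-ruby; it only assumes `list` responds to `auto_paging_each` the way a stripe-ruby `ListObject` does (sleeping per item is cruder than sleeping per page, but the paginated list API doesn't expose page boundaries):

```ruby
# Sketch: iterate a paginated list, pausing between items to stay
# under the rate limit. `list` is assumed to behave like a stripe-ruby
# ListObject (assumption).
def throttled_each(list, delay: 0.1)
  list.auto_paging_each do |item|
    yield item
    sleep delay
  end
end
```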

@msf-caesar

Try implementing a rate limiting mechanism in your application. This can involve setting a limit on the number of API requests that can be made within a certain time frame. Additionally, you can implement exponential backoff retries to handle rate limit errors and manage the delay those retries introduce. Consider using asynchronous processing to avoid blocking operations and keep the API simple.
