
Account for rate limiting in AWS fetchers #2074

Open
Tracked by #2054
orouz opened this issue Apr 1, 2024 · 1 comment · May be fixed by #2223
Assignees
Labels
8.15 candidate aws bug Something isn't working Team:Cloud Security Cloud Security team related

Comments

@orouz
Collaborator

orouz commented Apr 1, 2024

Motivation
we need to account for rate limiting in our AWS fetchers to avoid losing resources we want to evaluate.

Definition of done

  • figure out the quotas for each AWS fetcher
  • when applicable, AWS fetchers' method usage does not exceed the default quota
  • add a retry (with backoff) mechanism for failed requests

Out of scope

  • synced cloudbeat instances consuming the same quota
@orouz orouz added aws Team:Cloud Security Cloud Security team related labels Apr 1, 2024
@tehilashn tehilashn added the bug Something isn't working label May 20, 2024
@orouz
Collaborator Author

orouz commented May 21, 2024

turns out the AWS Go SDK v2 comes with a built-in retry mechanism. verified by modifying the aws clients' config to include logging:

awsConfig.Logger = logging.NewStandardLogger(os.Stdout)
awsConfig.ClientLogMode = aws.LogRetries | aws.LogRequestWithBody | aws.LogResponseWithBody 

the important log we get to see is:

Amz-Sdk-Request: attempt=1; max=3

which corresponds to the Standard retryer's default of 3 max attempts.

regarding quotas for AWS APIs, there don't seem to be any publicly defined general limits, nor does the client try to take them into account; retrying with backoff looks like the accepted approach (for List/Describe/Get requests).

in terms of modifications to cloudbeat, i think adding the 429 HTTP status code to the Retryable Errors list would be sufficient to ensure we retry when APIs are rate limited (this is a safety-measure addition, as a similar retry condition is already present)
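the idea of treating 429 as retryable can be sketched as a status-code check like the one below. this is a stdlib-only illustration with hypothetical names; in the actual SDK this condition would be hooked into the standard retryer's list of retryables rather than implemented standalone:

```go
package main

import (
	"fmt"
	"net/http"
)

// retryableStatuses illustrates extending the retryable set with 429
// (hypothetical; the SDK's standard retryer keeps its own list).
var retryableStatuses = map[int]struct{}{
	http.StatusTooManyRequests:     {}, // 429: rate limited
	http.StatusInternalServerError: {}, // 500
	http.StatusBadGateway:          {}, // 502
	http.StatusServiceUnavailable:  {}, // 503
	http.StatusGatewayTimeout:      {}, // 504
}

// isRetryable reports whether a response status code should trigger a retry.
func isRetryable(statusCode int) bool {
	_, ok := retryableStatuses[statusCode]
	return ok
}

func main() {
	fmt.Println(isRetryable(429), isRetryable(404)) // prints "true false"
}
```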

adding the additional retry condition should also come with consolidating AWS config initialization, as we're currently doing it in 2 ways:

  1. using the config provider, which uses libbeat
  2. directly using libbeat

@orouz orouz linked a pull request May 21, 2024 that will close this issue