
Bug discovered when attempting to consume more tokens than the bandwidth allows #220

Closed
shahr opened this issue Jan 26, 2022 · 8 comments

@shahr

shahr commented Jan 26, 2022

We have discovered a potential bug in the ConsumptionProbe#getNanosToWaitForRefill method's response.

To reproduce, configure a new bucket, e.g. BucketConfiguration.builder().addLimit(Bandwidth.simple(3, Duration.ofSeconds(2))).build();. Then attempt to consume four tokens via bucket.tryConsumeAndReturnRemaining(4). The getNanosToWaitForRefill value in the returned ConsumptionProbe comes back as roughly 0.66 s instead of Long.MAX_VALUE, even though it should never be possible to consume 4 tokens from a bucket whose capacity is 3.
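For illustration, here is a minimal, self-contained model of the arithmetic that appears to produce the 0.66 s figure. The method name and formula are my own sketch, not bucket4j's actual implementation: with capacity 3 and a refill of 3 tokens per 2 s (1.5 tokens/s), a full bucket is 1 token short of a 4-token request, giving a naive wait of 1 / 1.5 s ≈ 0.67 s.

```java
// Hypothetical model of the refill-wait arithmetic (not bucket4j's code).
public class RefillWaitModel {
    // Naive wait: how long until the deficit refills, ignoring capacity.
    static long nanosToWaitNaive(long requested, long available,
                                 long refillTokens, long refillPeriodNanos) {
        long deficit = requested - available;
        if (deficit <= 0) {
            return 0; // enough tokens already present
        }
        return deficit * refillPeriodNanos / refillTokens;
    }

    public static void main(String[] args) {
        // 4 requested, 3 available, refill 3 tokens per 2 s.
        long wait = nanosToWaitNaive(4, 3, 3, 2_000_000_000L);
        System.out.println(wait); // 666666666 ns, i.e. ~0.67 s
    }
}
```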

I think I have identified where this might be going wrong (at least for our usecase): https://github.com/vladimir-bukhtoyarov/bucket4j/blob/master/bucket4j-core/src/main/java/io/github/bucket4j/BucketState64BitsInteger.java#L474

Not sure if there are other use cases, or how this interacts with features like having more tokens in the bucket than the maximum allows (which is a feature I requested some time ago :))

Let me know your thoughts.

@shahr shahr changed the title Bug discovered when attempting to consume more token that bandwidth allows Bug discovered when attempting to consume more tokens than the bandwidth allows Jan 26, 2022
@vladimir-bukhtoyarov
Collaborator

vladimir-bukhtoyarov commented Jan 26, 2022

Hello @shahr

This is an explicitly made design decision, chosen for consistency with bucket.asBlocking().consume(...), which would block the current thread for 0.66 s and then return. There is no bug, and there are unit tests that verify the response you are seeing.

@shahr
Author

shahr commented Jan 26, 2022

Perhaps I'm missing the point: when attempting to consume more tokens than the bucket's bandwidth allows, shouldn't the time to refill be "never" (in this case represented by Long.MAX_VALUE), because you can never really consume more tokens than the bucket can hold?

@vladimir-bukhtoyarov
Collaborator

vladimir-bukhtoyarov commented Jan 26, 2022

But the requested amount can be consumed via a few smaller requests.

@vladimir-bukhtoyarov
Collaborator

I can change this behavior if you provide a meaningful use case where the current behavior leads to problems in your production.

@shahr
Author

shahr commented Jan 26, 2022

Ah, I see your point regarding the re-consumption via smaller token amounts. The challenge is when you have atomic operations that cost more than the bucket allows for (which is likely to be a configuration error).

The use case we have is rate limiting an HTTP service. Some calls cost 1 token, others 5 tokens, etc., and we are using multiple bandwidths to prevent "bursty" traffic. An example is 100 tokens/sec but 1000 tokens/min. If someone then implements an endpoint declared to cost 101 tokens, clients calling that service will never be able to use that endpoint.
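To make the misconfiguration concrete, here is a toy simulation (assumed parameters and method names, not bucket4j code): a bucket capped at 100 tokens that refills 100 tokens per step can never accumulate the 101 tokens such an endpoint requests, no matter how long a client waits.

```java
// Toy demonstration: refill is always capped at capacity, so a request
// larger than capacity can never be satisfied in a single consume.
public class CapacityCapDemo {
    // Simulate `steps` refill ticks, capping the balance at `capacity`.
    static long tokensAfter(long capacity, long start,
                            long refillPerStep, int steps) {
        long tokens = start;
        for (int i = 0; i < steps; i++) {
            tokens = Math.min(capacity, tokens + refillPerStep);
        }
        return tokens;
    }

    public static void main(String[] args) {
        long tokens = tokensAfter(100, 100, 100, 10);
        // Even after 10 refill periods, the balance never exceeds 100,
        // so a 101-token request can never succeed.
        System.out.println(tokens >= 101); // false
    }
}
```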

Ideally (in my opinion, but open to other suggestions), if an attempt is made to consume more tokens than the bucket can ever provide, an exception should be thrown so that the caller can handle such cases: automatically split operations, write a test to detect misconfigurations, etc.

What do you think?

@vladimir-bukhtoyarov
Collaborator

The behavior that you described in the first message of this thread looks preferable to me for the following reasons:

  • Throwing an exception from a method whose name starts with try looks radical.
  • It would conflict with the Verbose API, which is intended for deep internal-state investigation, because throwing an exception prevents providing the internal state snapshot.

@shahr
Author

shahr commented Jan 27, 2022

That makes sense - so we amend it to return Long.MAX_VALUE.
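For illustration, a hypothetical sketch (method and class names are mine, not bucket4j internals) of a refill-wait calculation amended to return Long.MAX_VALUE whenever the requested amount exceeds the bandwidth's capacity:

```java
// Hypothetical sketch of the agreed fix, not bucket4j's actual code.
public class RefillWaitFixed {
    static long nanosToWait(long requested, long available, long capacity,
                            long refillTokens, long refillPeriodNanos) {
        if (requested > capacity) {
            // The request can never be satisfied in one consume,
            // so report an "infinite" wait instead of a finite one.
            return Long.MAX_VALUE;
        }
        long deficit = requested - available;
        if (deficit <= 0) {
            return 0; // enough tokens already present
        }
        return deficit * refillPeriodNanos / refillTokens;
    }

    public static void main(String[] args) {
        // 4 requested from a capacity-3 bucket: never satisfiable.
        System.out.println(nanosToWait(4, 3, 3, 3, 2_000_000_000L)
                == Long.MAX_VALUE); // true
    }
}
```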

vladimir-bukhtoyarov added a commit that referenced this issue Jan 28, 2022
…equested amount of tokens is greater than bandwidth capacity
@vladimir-bukhtoyarov
Collaborator

This has been released in version 7.1.0.
