Bug discovered when attempting to consume more tokens than the bandwidth allows #220
Comments
Hello @shahr. This is an explicit design decision, made to be consistent with bucket.asBlocking.consume(...), which will block the current thread for 0.66s and then return. There is no bug, and there are unit tests that check that the response should be what you see.
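For context, the 0.66s figure follows directly from the refill arithmetic. The following is a minimal self-contained sketch of that arithmetic (not bucket4j's actual implementation): with a bandwidth of 3 tokens per 2 seconds, consuming 4 tokens from a full bucket leaves a deficit of 1 token, and generating 1 token at the steady refill rate takes 2/3 of a second.

```java
// Minimal sketch of the refill-wait arithmetic (not bucket4j's actual code).
// Bandwidth: capacity tokens per refillPeriodNanos; bucket may be partially full.
public class RefillWaitSketch {
    static long nanosToWaitForRefill(long tokensToConsume, long availableTokens,
                                     long capacity, long refillPeriodNanos) {
        long deficit = tokensToConsume - availableTokens;
        if (deficit <= 0) {
            return 0; // enough tokens already available
        }
        // Time to generate the missing tokens at the steady refill rate.
        return deficit * refillPeriodNanos / capacity;
    }

    public static void main(String[] args) {
        long twoSecondsNanos = 2_000_000_000L;
        // Consuming 4 tokens when 3 are available: deficit of 1 token,
        // refilled in 2/3 s, i.e. roughly the 0.66 s the probe reports.
        long wait = nanosToWaitForRefill(4, 3, 3, twoSecondsNanos);
        System.out.println(wait); // 666666666
    }
}
```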
Perhaps I'm missing the point - when attempting to consume more tokens than the bucket's bandwidth allows, shouldn't the time to refill be "never" (in this case represented by Long.MAX_VALUE)?
But the requested amount can be consumed via a few smaller requests.
I can change this behavior if you provide a meaningful use case where the current behavior leads to problems in your production.
Ah, I see your point regarding re-consumption via smaller token amounts. The challenge is with atomic operations that cost more than the bucket allows for (which is likely to be a configuration error).

The use case we have is rate limiting an HTTP service. Some calls cost 1 token, others 5 tokens, etc., and we are using multiple bandwidths to prevent "bursty" traffic. An example is 100 tokens/sec but 1000 tokens/min. If someone then implements an endpoint declared to consume 101 tokens, this will be an issue because clients calling that service will never be able to use that endpoint.

Ideally (in my opinion - but open to hear other suggestions), if an attempt is made to consume more tokens than the bucket is ever able to provide, an exception should be thrown so that the caller can handle such cases - maybe they can automatically split operations, maybe they can write a test to detect misconfigurations, etc. What do you think?
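One way to catch this class of misconfiguration at startup is to assert that no endpoint's declared token cost exceeds the smallest configured bandwidth capacity. The helper below is hypothetical (not a bucket4j API), sketching the check for the 100 tokens/sec + 1000 tokens/min example from this thread:

```java
import java.util.Map;

// Hypothetical startup check (not part of bucket4j): fail fast if any
// endpoint's token cost exceeds the smallest bandwidth capacity, since
// such a request could never succeed no matter how long the caller waits.
public class EndpointCostValidator {
    static void validate(Map<String, Long> endpointCosts, long... bandwidthCapacities) {
        long minCapacity = Long.MAX_VALUE;
        for (long capacity : bandwidthCapacities) {
            minCapacity = Math.min(minCapacity, capacity);
        }
        for (Map.Entry<String, Long> e : endpointCosts.entrySet()) {
            if (e.getValue() > minCapacity) {
                throw new IllegalStateException("Endpoint " + e.getKey()
                        + " costs " + e.getValue() + " tokens, but the smallest"
                        + " bandwidth capacity is only " + minCapacity);
            }
        }
    }

    public static void main(String[] args) {
        // Capacities from the example: 100 tokens/sec and 1000 tokens/min.
        validate(Map.of("/search", 5L, "/report", 100L), 100, 1000); // passes
        try {
            validate(Map.of("/bulk", 101L), 100, 1000); // 101 > 100: misconfigured
        } catch (IllegalStateException expected) {
            System.out.println("caught: " + expected.getMessage());
        }
    }
}
```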
The behavior that you described in the first message of this thread looks preferable to me, for the following reasons:
That makes sense - so we amend to return Long.MAX_VALUE when the requested amount of tokens is greater than the bandwidth capacity.
This has been released with version
We have discovered a potential bug in the `ConsumptionProbe#getNanosToWaitForRefill` method's response. To reproduce, we can configure a new bucket, e.g. `BucketConfiguration.builder().addLimit(Bandwidth.simple(3, Duration.ofSeconds(2))).build();`. We then consume tokens via `bucket.tryConsumeAndReturnRemaining(4)`, and when we look at the response in the `ConsumptionProbe`, it comes back as 0.66s instead of `Long.MAX_VALUE`, as it should not be possible to consume 4 tokens.

I think I have identified where this might be going wrong (at least for our use case): https://github.com/vladimir-bukhtoyarov/bucket4j/blob/master/bucket4j-core/src/main/java/io/github/bucket4j/BucketState64BitsInteger.java#L474
Not sure if there are other use cases, or how this affects features like when there are more tokens in the bucket than the maximum allows (which is a feature I requested some time ago :)).
Let me know your thoughts.
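The amendment agreed in this thread can be illustrated with a small self-contained sketch (illustrative only; the real logic lives in `BucketState64BitsInteger`): when the requested amount exceeds the bandwidth capacity, the deficit can never be refilled, so the probe should report `Long.MAX_VALUE` rather than a finite wait.

```java
// Illustrative sketch of the amended behavior (not the actual
// BucketState64BitsInteger code): a request larger than the capacity can
// never be satisfied by waiting, so report Long.MAX_VALUE ("never").
public class AmendedRefillSketch {
    static long nanosToWaitForRefill(long tokensToConsume, long availableTokens,
                                     long capacity, long refillPeriodNanos) {
        if (tokensToConsume > capacity) {
            return Long.MAX_VALUE; // can never be refilled to this amount
        }
        long deficit = tokensToConsume - availableTokens;
        return deficit <= 0 ? 0 : deficit * refillPeriodNanos / capacity;
    }

    public static void main(String[] args) {
        // Analogue of Bandwidth.simple(3, Duration.ofSeconds(2)): capacity 3, period 2s.
        // Requesting 4 tokens exceeds the capacity of 3, so the wait is "never".
        System.out.println(nanosToWaitForRefill(4, 3, 3, 2_000_000_000L) == Long.MAX_VALUE);
    }
}
```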