Bucket4J server side distributed throttling case #277
-
Our system currently uses a distributed Bucket4j implementation to rate-limit REST API queries under a global limit. One requirement we are interested in going forward is server-side rate limiting per consumer while also maintaining the global system rate limit.

Our use case: System A uses distributed Bucket4j to query System B's REST API. The call to System B's REST API is behind a global rate limit (e.g. 1k requests/min). A consumer can query System A's REST API, which will in turn call System B, provided tokens are available. Multiple consumers may call System A at once, or at different times. Since System A is bottlenecked by System B's global rate limit, we would like to introduce per-consumer rate-limit caps for this use case (consumers calling System A, which calls System B).

Is this something that Bucket4j supports? Essentially, in Bucket4j terms, we would have a bucket for each consumer with its own rate limit, keyed by some consumer-specific attribute, but all of these buckets would need to abide by a global rate limit. Two consumers may need a dedicated share of requests while the remaining consumers share the rest of the global pool. All consumer requests combined must stay within the global rate limit.
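To make the "bucket per consumer, keyed by a consumer-specific attribute" idea concrete, here is a minimal self-contained sketch. It uses a trivial fixed-capacity bucket as a stand-in for a real (distributed) Bucket4j bucket, and the class names `SimpleBucket` and `ConsumerBuckets` are made up for illustration; in a real system the `computeIfAbsent` lookup would build a Bucket4j bucket instead.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Stand-in for a Bucket4j bucket: fixed capacity, no refill.
// Just enough to illustrate keying buckets by consumer id.
class SimpleBucket {
    private final AtomicLong tokens;

    SimpleBucket(long capacity) {
        this.tokens = new AtomicLong(capacity);
    }

    boolean tryConsume(long n) {
        long cur;
        do {
            cur = tokens.get();
            if (cur < n) {
                return false; // not enough tokens left
            }
        } while (!tokens.compareAndSet(cur, cur - n));
        return true;
    }

    long available() {
        return tokens.get();
    }
}

public class ConsumerBuckets {
    // One bucket per consumer, created lazily on first use.
    private final Map<String, SimpleBucket> perConsumer = new ConcurrentHashMap<>();
    private final long perConsumerLimit;

    ConsumerBuckets(long perConsumerLimit) {
        this.perConsumerLimit = perConsumerLimit;
    }

    SimpleBucket bucketFor(String consumerId) {
        return perConsumer.computeIfAbsent(consumerId, id -> new SimpleBucket(perConsumerLimit));
    }

    public static void main(String[] args) {
        ConsumerBuckets buckets = new ConsumerBuckets(100);
        System.out.println(buckets.bucketFor("consumer-a").tryConsume(1)); // true
        System.out.println(buckets.bucketFor("consumer-a").available());   // 99
        System.out.println(buckets.bucketFor("consumer-b").available());   // 100, independent bucket
    }
}
```

Consumers needing a dedicated share would simply get a different capacity when their bucket is first created; the shared global limit is the separate problem discussed in the reply below.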
-
Hello @jstgodard

Bucket4j does not support scenarios where multiple buckets are transactionally updated in a single request, where "transactionally" means that if tokens were consumed from one bucket then tokens must also be consumed from the other, and vice versa: if tokens were rejected by one bucket then they must be rejected by the other. So there is no way other than to solve the problem on your side, by maintaining many independent buckets and applying a compensating transaction when tokens were consumed from one bucket but rejected by another:

private Bucket globalBucket = ...;
private Bucket myBucket = ...;

public boolean tryConsume(long tokens) {
    if (!myBucket.tryConsume(tokens)) {
        return false;
    } else if (globalBucket.tryConsume(tokens)) {
        return true;
    } else {
        // need to return tokens back to myBucket in order to avoid under-consumption
        myBucket.addTokens(tokens);
        return false;
    }
}
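A runnable version of this compensation pattern, for anyone who wants to see it end to end. A trivial fixed-capacity bucket stands in for the real Bucket4j buckets (the `Bucket` class and `CompensationDemo` here are illustrative stand-ins, not the library's types), and a deliberately tiny global limit forces the refund branch:

```java
import java.util.concurrent.atomic.AtomicLong;

// Stand-in for a Bucket4j bucket: fixed capacity, no refill.
// Just enough to demonstrate the compensation pattern above.
class Bucket {
    private final AtomicLong tokens;

    Bucket(long capacity) {
        this.tokens = new AtomicLong(capacity);
    }

    boolean tryConsume(long n) {
        long cur;
        do {
            cur = tokens.get();
            if (cur < n) {
                return false; // not enough tokens left
            }
        } while (!tokens.compareAndSet(cur, cur - n));
        return true;
    }

    void addTokens(long n) {
        tokens.addAndGet(n);
    }

    long available() {
        return tokens.get();
    }
}

public class CompensationDemo {
    static boolean tryConsume(Bucket consumerBucket, Bucket globalBucket, long tokens) {
        if (!consumerBucket.tryConsume(tokens)) {
            return false;                     // consumer is over its own cap
        } else if (globalBucket.tryConsume(tokens)) {
            return true;                      // both limits allow the call
        } else {
            // compensation: refund the consumer bucket so it is not under-counted
            consumerBucket.addTokens(tokens);
            return false;
        }
    }

    public static void main(String[] args) {
        Bucket global = new Bucket(2);    // tiny global limit, to force a refusal
        Bucket consumer = new Bucket(10); // per-consumer cap

        System.out.println(tryConsume(consumer, global, 1)); // true
        System.out.println(tryConsume(consumer, global, 1)); // true
        System.out.println(tryConsume(consumer, global, 1)); // false: global exhausted
        System.out.println(consumer.available());            // 8: third attempt was refunded
    }
}
```

Note that with real distributed buckets the consume and the refund are two separate network round trips, so a crash between them can still leak tokens from the consumer bucket; the compensation only protects against the common rejected-by-global case, not against partial failure.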