
[8.x] Bring Rate Limiters to Jobs #34829

Merged (6 commits) on Oct 15, 2020

Conversation

@paras-malhotra (Contributor) commented Oct 14, 2020

We already have rate limiters available for the request throttling middleware. This PR enables the same rate limiter classes and syntax to be used for rate limiting jobs!

You may define named rate limiters in your app service provider:

use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Support\Facades\RateLimiter;

// Returning no rate limit for certain customers...
RateLimiter::for('analytics', function ($job) {
    if ($job->user->vipCustomer()) {
        return Limit::none();
    }

    return Limit::perHour(1)->by($job->user->id);
});

// Returning an array of rate limits the job must pass through...
RateLimiter::for('backups', function ($job) {
    return [
        Limit::perMinute(100),
        Limit::perMinute(3)->by($job->user->id),
    ];
});

You can attach these named rate limiters to jobs by passing their names to the RateLimitsJobs middleware:

public function middleware()
{
    return [new RateLimitsJobs('backups')];
}
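
For context, a complete job class using this middleware might look roughly like the following (the BackupDatabase class and its handle() body are hypothetical, and the namespace of RateLimitsJobs is assumed to match the Illuminate\Queue\Middleware location used by the final classes):

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\Middleware\RateLimitsJobs;

class BackupDatabase implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    // The actual work of the job; its contents are irrelevant to rate limiting.
    public function handle()
    {
        // ... run the backup ...
    }

    // Attach the 'backups' rate limiter defined in the service provider.
    public function middleware()
    {
        return [new RateLimitsJobs('backups')];
    }
}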

@paras-malhotra changed the title from "[8.x] Bring Rate Limiters to Jobs As Middleware" to "[8.x] Bring Rate Limiters to Jobs" on Oct 14, 2020
@taylorotwell (Member)

Have you seen the ThrottlesRequestsWithRedis class? We offer both ThrottleRequests and the "with Redis" version because the Redis version is more efficient and has no race conditions. We may want to offer both varieties here as well?

@paras-malhotra (Contributor, Author)

@taylorotwell, thanks for your feedback. I didn't know about the ThrottlesRequestsWithRedis class. I'll mark this PR as draft and work on adding a RateLimitJobsWithRedis class.

@paras-malhotra marked this pull request as draft on October 14, 2020 at 18:21
@paras-malhotra marked this pull request as ready for review on October 14, 2020 at 20:46
@paras-malhotra (Contributor, Author)

@taylorotwell I've added a new class for rate limiting jobs with Redis along with a test. Let me know if you'd like me to make any changes.

@paras-malhotra (Contributor, Author)

Currently, it is possible to share the same limiter for 2 or more jobs (just like in requests). All jobs sharing the limiter would count towards the rate limiting in that case. In fact, we could even use the same limiter for a job and a request (again both the job and the request would count towards the limit). Just wanted to make a note of it here.
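
For illustration, sharing the same 'analytics' limiter between a route and a job might look like this (the route, controller, and job shown here are hypothetical):

// routes/web.php -- requests to this route count against the 'analytics' limiter...
Route::middleware('throttle:analytics')->get('/reports', ReportController::class);

// app/Jobs/ProcessAnalytics.php -- ...and executions of this job count against it too.
public function middleware()
{
    return [new RateLimitsJobs('analytics')];
}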

@rodrigopedra (Contributor)

Hi @paras-malhotra, great idea bringing default job middleware to the core, congrats!

In an app I am working on, we have a queued job that syncs some information with an external API. After migrating this project to Laravel 8, I wrote a similar job middleware to rate limit it, mainly because we don't have Redis available for this particular project.

I shared my take in a gist (and on Twitter) about 3 weeks ago:

https://gist.github.com/rodrigopedra/ca36827783f1acdb065a98838014b8ce

Overall your code seems more organized, but I would like to highlight some differences from my take, as they might contribute some ideas if you find them helpful.

  • In my version I take the limiter name in the middleware's handle method, much like how it is done with parameters in most HTTP middleware.

    • This allows services the middleware depends on to be injected in its constructor
    • In my personal opinion, this brings it closer to the HTTP middleware we are more used to
    • To help create the middleware string, I have a static method
  • When the limiter resolver returns an array, I loop through all of them first to check if any has too many attempts, and then loop again to call hit on each of them if none had too many attempts (a sketch of this two-pass check follows below)

    • Let's say a resolver returned an array with 3 limiters, and checking the second one reveals there are too many attempts; then we should not have called ->hit on the first one, as we will be skipping the job.
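
A rough sketch of that two-pass check inside the middleware's handle method, written against the Illuminate\Cache\RateLimiter API (illustrative only, not the merged implementation):

// First pass: verify every limit before touching any counters.
foreach ($limits as $limit) {
    if ($this->limiter->tooManyAttempts($limit->key, $limit->maxAttempts)) {
        // Release without having incremented any of the counters.
        return $job->release($this->limiter->availableIn($limit->key));
    }
}

// Second pass: record a hit on each limit only once all of them passed.
foreach ($limits as $limit) {
    $this->limiter->hit($limit->key, $limit->decayMinutes * 60);
}

return $next($job);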

I have some other minor differences specific to my use-case and client's requirements that I don't think need to make it into core, for example:

  • I add 5 additional seconds to the value returned by the availableIn call
  • I also allow skipping the rate limiting in case the job is run on the sync connection
    • In this project some records are scheduled to be updated 3 times a day, but a user can also update a record manually through the web UI; as such, we want to skip the rate limit check to avoid failing the UI.

Hope it helps somehow, and again, congrats on the initiative of bringing these middleware into the core.

@paras-malhotra (Contributor, Author) commented Oct 15, 2020

Hi @rodrigopedra, thanks for your feedback!

I take the limiter name on the middleware's handle method, much like how it is done with parameters in most HTTP middlewares

Yes, I thought about that, but job middleware are constructed in userland (in the middleware method of the job class), unlike request middleware, where constructor dependencies are injected by the container.

When the limiter resolver returns an array, I loop through all of them first to check if any has too many attempts, and then loop again to call hit on each of them if none had too many attempts

That's a good point. I made it similar to ThrottleRequests@handleRequest. @taylorotwell would you like me to change it so that it doesn't hit any limits if it's throttled? If this change is made, then the behaviour of the Redis limiter would differ from the non-Redis one, because the Redis limiter has an atomic operation for both checking the limit and incrementing the count.

@rodrigopedra (Contributor)

Thanks for the response @paras-malhotra

Although job middleware are constructed in the job itself, they are run through the Pipeline, much like their request counterparts.

As such, if we pass an FQCN with optional additional parameters, any constructor dependencies will be provided by the pipeline's container instance.

You can see in my implementation that the middleware expects a RateLimiter instance, and that I use the static method to build its FQCN with the parameters, so there is no need to use the Container singleton there.
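
For illustration, that pattern might look roughly like this (the class name, static helper, and max-attempt value are illustrative, not the exact code from the gist or the framework):

use Illuminate\Cache\RateLimiter;

class RateLimitsJobsByName
{
    protected $limiter;

    // Resolved by the pipeline's container, so constructor injection works.
    public function __construct(RateLimiter $limiter)
    {
        $this->limiter = $limiter;
    }

    // Builds the pipe string, e.g. "App\Jobs\Middleware\RateLimitsJobsByName:backups".
    public static function with($limiterName)
    {
        return static::class.':'.$limiterName;
    }

    // The limiter name arrives as a pipeline parameter, like HTTP middleware parameters.
    public function handle($job, $next, $limiterName)
    {
        // 100 attempts per minute is an arbitrary cap used for illustration.
        if ($this->limiter->tooManyAttempts($limiterName, 100)) {
            return $job->release($this->limiter->availableIn($limiterName) + 5);
        }

        $this->limiter->hit($limiterName);

        return $next($job);
    }
}

// Usage inside a job:
//
// public function middleware()
// {
//     return [RateLimitsJobsByName::with('backups')];
// }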

Personally, I don't see a problem with using the Container singleton there; when I implemented mine, I took inspiration from other existing middleware in the corresponding project.

Just thought I'd share it in case it provides any useful ideas.

One possibility is sharing middleware between jobs and requests.

For example, instead of releasing the job right away, we could throw an exception and release the job in its failed method. That way the middleware could be used as a request middleware without major changes.

But anyway I think this is a great addition to the framework.

@paras-malhotra (Contributor, Author) commented Oct 15, 2020

Ahh okay, Pipeline@carry resolves the dependencies 😅. I'd prefer to leave it verbose rather than use the string pattern. I don't see enough scenarios where job and request middleware would be shared (and we have separate classes for job and request rate limiting middleware anyway), but I would love to hear what Taylor thinks about it.

Thanks so much for the insights @rodrigopedra!

@taylorotwell (Member)

I did add some buffer seconds like @rodrigopedra's implementation.

@taylorotwell merged commit 49c8b3d into laravel:8.x on Oct 15, 2020
@taylorotwell (Member)

Also renamed classes to RateLimited and RateLimitedWithRedis.
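
With that rename, the earlier example from the PR description reads:

use Illuminate\Queue\Middleware\RateLimited;

public function middleware()
{
    return [new RateLimited('backups')];
}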

@aachekalov

Hi @paras-malhotra,
Thank you for your work! This is a great idea!
I have a question: is it possible to delete the job instead of releasing it back to the queue? I mean something like dontRelease() in the Illuminate\Queue\Middleware\WithoutOverlapping class.

@paras-malhotra (Contributor, Author)

@aachekalov, this is not supported yet. I've just submitted a PR to allow this (#35010).
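
For reference, assuming that PR lands with the same dontRelease() naming used by WithoutOverlapping, usage would look something like:

use Illuminate\Queue\Middleware\RateLimited;

public function middleware()
{
    // Discard rate limited jobs instead of releasing them back onto the queue.
    return [(new RateLimited('backups'))->dontRelease()];
}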

@aachekalov

Thank you very much! That is just what I need!

@bolechen commented Nov 6, 2020

How can I test that a job has middleware with Bus::fake()? When using Bus::fake(), the middleware doesn't seem to be called.
