Caching of results #69

Open
AJenbo opened this issue Jun 29, 2023 · 9 comments

AJenbo (Contributor) commented Jun 29, 2023

PHPStan has a pretty good caching scheme, but from what I can tell Bladestan works in a way that doesn't really benefit from it.

Since adding Bladestan to our CI process, the total time it takes to run has almost doubled. I'm thinking it might be worth looking into implementing result caching to speed things up when little has changed between two runs.

Are there any thoughts about how caching could be implemented in an effective way for Bladestan?

TomasVotruba (Owner) commented

This would be very helpful 👍

AJenbo (Contributor, Author) commented Oct 2, 2023

@staabm I'm pinging you here since the original topic has been locked.

If you are still willing to look into the performance of Bladestan, here is a small sample app that should have just enough content that you can start to notice the performance issues:
bladestan-example-app.zip

No cache: 4.8 -> 9.6 sec
Cache (minor edit to app/Http/Controllers/ArticlesController.php): 3 -> 7 sec

These might not seem like big numbers, so for scale, here are some numbers from a couple of real-world applications I work on:
Project A no cache: 78 -> 348 sec
Project B no cache: 35 -> 345 sec

From what I can tell, the Blade compiler isn't being run in a way that makes use of its cache, but there might be other things I'm overlooking since I haven't done any real profiling.
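
For reference, here is roughly what I mean by the compiler's cache. This uses Laravel's BladeCompiler directly with a made-up cache path and template path; I don't know exactly how Bladestan drives the compiler internally, so this is only an illustration of the mechanism:

```php
<?php

use Illuminate\Filesystem\Filesystem;
use Illuminate\View\Compilers\BladeCompiler;

// Illustration of the Blade compiler's own caching: compiled templates are
// written to a cache directory and only recompiled when the source is newer.
// The cache path and template path below are made up for this example.
$compiler = new BladeCompiler(new Filesystem(), '/tmp/blade-cache');

$template = '/path/to/resources/views/articles/index.blade.php';

if ($compiler->isExpired($template)) {
    // Only pay the compilation cost when the template actually changed.
    $compiler->compile($template);
}

// The compiled plain-PHP version that a subsequent analysis could consume.
$compiledPhp = file_get_contents($compiler->getCompiledPath($template));
```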

Additionally, I think we would see some performance gains from caching the processed PHP code, although we should avoid caching the version where @included templates have been embedded in a way where edits to the included templates do not invalidate the cache. On that note, if you have some input for #85 that would be greatly appreciated as well :)
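
To make the invalidation concern concrete, a rough sketch of the kind of cache key I have in mind. The helper below is made up purely for illustration (it is not part of Bladestan and only handles one level of @include):

```php
<?php

/**
 * Naive illustration: find the templates pulled in via @include('view.name').
 * Nested includes would need recursion; this is only meant to show the idea.
 *
 * @return list<string>
 */
function findIncludedTemplates(string $templatePath, string $viewsDir): array
{
    $source = (string) file_get_contents($templatePath);
    preg_match_all("/@include\\(\\s*'([^']+)'/", $source, $matches);

    $paths = [];
    foreach ($matches[1] as $view) {
        $paths[] = $viewsDir . '/' . str_replace('.', '/', $view) . '.blade.php';
    }

    return $paths;
}

/**
 * Cache key that covers the template itself plus everything it includes,
 * so editing a partial invalidates every parent template that embeds it.
 */
function compiledTemplateCacheKey(string $templatePath, string $viewsDir): string
{
    $hashes = [(string) hash_file('sha256', $templatePath)];
    foreach (findIncludedTemplates($templatePath, $viewsDir) as $includedPath) {
        $hashes[] = (string) hash_file('sha256', $includedPath);
    }

    return hash('sha256', implode("\n", $hashes));
}
```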

Thank you for taking a look at this and lending some of your expertise.

staabm (Contributor) commented Oct 2, 2023

@AJenbo if you would consider a GitHub sponsorship, I could have a look in the next few days.

AJenbo (Contributor, Author) commented Oct 2, 2023

@staabm I already have a https://github.com/phpstan/phpstan#phpstan-pro subscription; does that work?

staabm (Contributor) commented Oct 2, 2023

I am a contributor to PHPStan in my free time and don't get any money from the PHPStan project itself.

staabm (Contributor) commented Oct 2, 2023

(I had a quick look at the reproducer. It is either not "big enough" to show a real bottleneck, or it does not contain obvious low-hanging fruit in the Blackfire profile.)

staabm (Contributor) commented Oct 2, 2023

@AJenbo https://twitter.com/markusstaab/status/1708845590413521011

staabm (Contributor) commented Oct 3, 2023

I created a summary of my understanding of how Bladestan works on the PHPStan repo, which will hopefully spark some discussion on how we could better integrate Bladestan with PHPStan.


Regarding the earlier points about the Blade compiler cache and caching the processed PHP code: my profiling led to the following results

  • Blade compilation itself seems to be fast enough at the moment (which I had assumed would be the slow part).
  • Adding a return []; as the first line of processTemplateFilepath makes the Bladestan overhead nearly completely go away -> the nested file analysis call takes almost all the time (see the sketch below).
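
Roughly what the second bullet means in code — a simplified stand-in for the rule, not the actual Bladestan source:

```php
<?php

final class TemplateRuleSketch
{
    /**
     * Simplified stand-in for processTemplateFilepath() used in the experiment.
     *
     * @return list<string>
     */
    public function processTemplateFilepath(string $templatePath): array
    {
        return []; // with this early return, the Bladestan overhead nearly vanishes

        // Everything skipped by the early return is where the time normally goes:
        // compiling the Blade template to plain PHP (cheap in the profile) and
        // running a nested analysis over the compiled PHP (the expensive part).
    }
}
```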

AJenbo (Contributor, Author) commented Oct 3, 2023

That sounds like a really good idea; there is certainly something about the integration that is less than optimal.
