[FEATURE] Detect unnecessary renders #225

Open
mdjastrzebski opened this issue Oct 19, 2022 · 9 comments
Labels: hacktoberfest, help wanted

Comments

@mdjastrzebski
Member

Is your feature request related to a problem? Please describe.
This would be an optional feature activated via the measurePerformance option at the per-test level and the configure option at the global level. The option name could be detectRedundantRenders or something similar.

When turned on, the measuring code would analyze the rendered output and notify the user if a render resulted in the same user interface being generated, i.e. that the render was redundant.

Describe the solution you'd like
When turned on, after each onRender callback from the React.Profiler component, the measuring code would run the .toJSON method from RNTL/RTL to generate a host component representation of the output. It would then compare that output with the output generated on the previous onRender callback and warn the user if the two are identical, meaning the render did not change the host component representation of the UI, i.e. the UI did not change.
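A minimal sketch of what that comparison could look like, assuming an RNTL-style screen.toJSON() is callable from inside the onRender callback; the option name, warning wording, and wiring are illustrative, not an existing API:

```tsx
import * as React from 'react';
import { render, screen } from '@testing-library/react-native';

let previousTree: string | null = null;

// React.Profiler onRender callback: serialize the host component tree after
// each commit and compare it with the tree from the previous commit.
function handleRender(id: string, phase: 'mount' | 'update' | 'nested-update') {
  const currentTree = JSON.stringify(screen.toJSON());

  if (phase !== 'mount' && previousTree !== null && currentTree === previousTree) {
    // The commit produced identical host output, so this render was redundant.
    console.warn(`Redundant render detected in "${id}": host output did not change.`);
  }

  previousTree = currentTree;
}

// Usage inside a test (illustrative):
// render(
//   <React.Profiler id="MyComponent" onRender={handleRender}>
//     <MyComponent />
//   </React.Profiler>
// );
```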

@thymikee
Member

This has the potential to increase test duration significantly. I'd also consider adding a CLI flag, so it can be done from the command line once in a while.

@mdjastrzebski
Member Author

Yes, and there could also be some technical difficulties with that, as I haven't done a POC for it. But we would still award any meaningful progress with the #hacktoberfest-accepted tag, even if not merged 🚀

@mdjastrzebski
Member Author

One idea for not affecting the test duration too much would be to perform this check only during a single run. That makes sense assuming that test scenarios are deterministic and do not have render patterns that vary from run to run. We could either do an additional non-measured run and/or use the first run, which we already discard from the results.
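Purely as a sketch of that shape: runScenario and its options are hypothetical names, not Reassure's actual internals; the point is that the already-discarded first run could double as the detection pass.

```ts
type RunOptions = { detectRedundantRenders: boolean };

// Hypothetical runner: executes one render scenario, optionally with the
// redundant-render check enabled.
declare function runScenario(scenario: () => Promise<void>, options: RunOptions): Promise<void>;

async function measureWithDetection(scenario: () => Promise<void>, runs: number) {
  // The first run is discarded from timing results anyway, so it can carry
  // the extra toJSON() comparisons without skewing measurements.
  await runScenario(scenario, { detectRedundantRenders: true });

  // Remaining runs are measured with detection disabled to keep them cheap.
  for (let i = 0; i < runs; i += 1) {
    await runScenario(scenario, { detectRedundantRenders: false });
  }
}
```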

@adhorodyski
Contributor

An additional run sounds great, actually; as you said, it doesn't have to be measured.

@guvenkaranfil

I love the idea. I was looking into optimizing one of our client apps and found there was a lot of unnecessary re-rendering. I could write a performance test to measure it and get the timing result, but as far as I can tell there is no option to check whether an item of a list is rendering unnecessarily. @mdjastrzebski I also opened up another feature issue with a repository sample.

mdjastrzebski added the help wanted label on Jan 19, 2024
@guvenkaranfil

I've started to investigate how to implement the feature. The measureRender function contains the onRender callback, but since screen is defined after the buildUiToRender function is called, I could not figure out how we would call toJSON.

You were thinking of implementing the logic inside the measureRender function, right @mdjastrzebski?

@mdjastrzebski
Member Author

@guvenkaranfil This is the part which would need a separate React.js vs RN flow (rough sketch after the list):

  • RN/RNTL - call the screen.toJSON() method on the screen object exported from RNTL
  • React.js/RTL - I am not yet sure how this could be achieved. Perhaps we should compare the HTMLElement tree starting from container / baseElement?
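A rough sketch of that platform split; the helper name snapshotUi is made up, the RN branch follows the first bullet, and the web branch (serializing baseElement) is only one possible answer to the second bullet, which the thread leaves open:

```ts
import { screen } from '@testing-library/react-native';

type TargetPlatform = 'react-native' | 'react';

// Produce a comparable string representation of the currently rendered UI.
function snapshotUi(platform: TargetPlatform, baseElement?: HTMLElement): string {
  if (platform === 'react-native') {
    // RN/RNTL: host component representation of the rendered output.
    return JSON.stringify(screen.toJSON());
  }

  // React.js/RTL: serialize the DOM tree starting from baseElement (or container).
  return baseElement?.outerHTML ?? '';
}
```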

@mdjastrzebski
Member Author

@guvenkaranfil In the v1 of the implementation you could target either RN or React.js. We could develop this feature for each platform separately.

@guvenkaranfil

I opened up a draft PR to put forward potential solutions in code; here is the PR @mdjastrzebski
