Use-case: Automated performance regression testing in Utopia #211

Open
Rheeseyb opened this issue Apr 24, 2023 · 1 comment
@Rheeseyb

Hi,

I'm one of the developers of an online IDE (Utopia), which is by its very nature quite a heavyweight webapp: it needs to handle the expected IDE behaviours and render the React application being edited, all whilst providing ways to make changes to that application via a form of canvas. One of our primary requirements is, of course, for the webapp to be highly performant, but we've always struggled to find a way to automate performance testing in a meaningful and reliable way. Our best attempt so far has been to use Puppeteer scripts to capture timed recordings of common interactions and compare those recordings over time, but even when running those tests on dedicated hardware we have found the variance so high as to make the comparison almost meaningless.
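
For reference, a minimal sketch of what one of those timed-recording scripts looks like (the URL, selectors, and interaction here are purely illustrative, not Utopia's actual ones):

```ts
import puppeteer from "puppeteer";
import { performance } from "node:perf_hooks";

// Illustrative URL; the real editor entry point differs.
const APP_URL = "https://utopia.example/project";

async function timeInteraction(): Promise<number> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(APP_URL, { waitUntil: "networkidle0" });

  // Time one "common interaction": select an element on the canvas and
  // wait for the editor UI to settle. Selectors are illustrative.
  const start = performance.now();
  await page.click("#canvas-root");
  await page.waitForSelector(".selection-outline");
  const elapsed = performance.now() - start;

  await browser.close();
  return elapsed;
}

timeInteraction().then((ms) => console.log(`interaction took ${ms.toFixed(1)} ms`));
```

Even wrapping this kind of timing in repeated runs and averaging, the run-to-run variance dominates the differences we actually care about.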

As I understand it, the "Future Goals" section of this proposal (specifically CPU utilisation, and potentially other hardware resource consumption) would give us a way to measure the impact of those common interactions, allowing us to compare measurements between an open PR and the currently deployed production version of our application far more reliably.
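
To make that concrete, here's a rough sketch of how we'd expect to collect samples inside the page, based on the PressureObserver shape as currently proposed (the exact constructor/observe options may well change as the spec evolves, utilisation rather than coarse pressure states is still a future goal, and type declarations for the experimental API are assumed to be available):

```ts
// Accumulate pressure records while the Puppeteer script drives interactions;
// the script can read `samples` back afterwards via page.evaluate.
const samples: Array<{ state: string; time: number }> = [];

const observer = new PressureObserver((records) => {
  for (const record of records) {
    // `state` is one of "nominal", "fair", "serious", "critical".
    samples.push({ state: record.state, time: record.time });
  }
});

// Sample CPU pressure roughly once per second.
await observer.observe("cpu", { sampleInterval: 1000 });
```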

Our likely approach to using this would be (upon the opening of a new PR in our repo):

  1. Load the production version of the application (via a puppeteer script)
  2. Measure the CPU load whilst the application is idle
  3. Perform a set of common interactions, measuring the CPU load at various points during those interactions to calculate their effect on it
  4. Deploy the PR branch's code to a staging environment
  5. Repeat the same measurements against that environment
  6. Compare and chart them, adding the chart to the PR (a rough sketch of this comparison step follows below)
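
Step 6 could then be as simple as the sketch below, where a "measurement" is the share of samples spent in each pressure state during an interaction; the weights and threshold are illustrative, not something we've settled on:

```ts
// Compare measurements from production and the staging (PR) deploy.
type StateShares = Record<string, number>; // e.g. { nominal: 0.8, serious: 0.2 }

function regressionDelta(prod: StateShares, staging: StateShares): number {
  // Weight higher-pressure states more heavily so a shift from "nominal"
  // towards "serious"/"critical" shows up as a positive delta.
  const weights: Record<string, number> = { nominal: 0, fair: 1, serious: 2, critical: 3 };
  const score = (s: StateShares) =>
    Object.entries(s).reduce((acc, [state, share]) => acc + (weights[state] ?? 0) * share, 0);
  return score(staging) - score(prod);
}

// Example: flag the PR if the weighted pressure score rises by more than 0.25.
const delta = regressionDelta(
  { nominal: 0.9, fair: 0.1 },
  { nominal: 0.6, fair: 0.3, serious: 0.1 }
);
if (delta > 0.25) {
  console.warn(`Possible performance regression: delta=${delta.toFixed(2)}`);
}
```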
@anssiko anssiko added the V2 label Apr 26, 2023
@anssiko
Member

anssiko commented Apr 26, 2023

Thanks @Rheeseyb for this use case description. We'll look at this use case more closely once v1 has solidified, and will reach out to you for more information as needed.

@kenchris kenchris changed the title Use Case: Automated performance regression testing in Utopia Use-case: Automated performance regression testing in Utopia Nov 9, 2023