
Suggestion to track all attributions #466

Open
psabharwal123 opened this issue Apr 22, 2024 · 4 comments

Comments


psabharwal123 commented Apr 22, 2024

Hi,
This is not a bug but a feature request: in the attribution build, track all of the attributions that contributed to a particular CLS value. The reasoning is as follows: say we have 5 layout shifts within a 5-second window, where the first has a score of 0.15 and the other four have scores between 0.1 and 0.14. Attribution is always given to the first shift (even with reportAllChanges), but it is important for us to track the attribution of each shift, since we need to fix every one of them for the CLS to be considered good.

@tunetheweb
Member

This can be measured with a simple performance observer:

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Count layout shifts without recent user input only
    if (!entry.hadRecentInput) {
      console.log("LayoutShift value:", entry.value);
    }
  }
});

observer.observe({ type: "layout-shift", buffered: true });

The main benefit of the library is that it manages all the complexities of calculating the CLS metric from those raw layout shifts. If you don't need that, the raw underlying layout-shift entries can be used directly.

Can you explain if there’s something you would expect the library to do on top of that?

@psabharwal123
Author

We would still like to use the library because, as you said, it manages complexities like CLS being the sum of layout shifts within a 5-second window with no more than 1 second between shifts; a simple performance observer would not provide that.

It would be nice if the attribution build could provide details about each attribution that makes up the score, not just the one with the most impact.
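For reference, the session-window behavior described above can be sketched as a small pure function. This is a simplified model of the CLS windowing rules (windows are capped at 5 seconds total, and a gap of more than 1 second starts a new window), not the library's actual implementation:

```javascript
// Simplified sketch of CLS session windowing (NOT the web-vitals source):
// group layout-shift values into session windows, where a window is capped
// at 5 seconds total and ends after a gap of more than 1 second.
function groupIntoSessionWindows(shifts) {
  // shifts: array of {value, startTime} (ms), sorted by startTime
  const windows = [];
  let current = null;
  for (const shift of shifts) {
    const fitsGap = current && shift.startTime - current.lastTime <= 1000;
    const fitsSpan = current && shift.startTime - current.firstTime <= 5000;
    if (fitsGap && fitsSpan) {
      current.value += shift.value;
      current.lastTime = shift.startTime;
    } else {
      current = {
        value: shift.value,
        firstTime: shift.startTime,
        lastTime: shift.startTime,
      };
      windows.push(current);
    }
  }
  return windows;
}

// CLS is the value of the largest window:
// const cls = Math.max(...groupIntoSessionWindows(shifts).map((w) => w.value));
```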

@tunetheweb
Member

Oh sorry, I misread. The metric object should contain a list of all the entries in `metric.entries`. The attribution build then just filters that down to the largest one for you, but you have them all if you need them.
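As a sketch of that, `onCLS` and `metric.entries` are part of the web-vitals API, while `summarizeShifts` below is a hypothetical helper for illustration:

```javascript
// Hypothetical helper: pull the per-shift data out of the metric object
// reported by onCLS, rather than relying only on metric.attribution.
function summarizeShifts(metric) {
  return metric.entries.map((entry) => ({
    value: entry.value,
    startTime: entry.startTime,
  }));
}

// Browser usage (assumed), with the attribution build:
// import {onCLS} from 'web-vitals/attribution';
// onCLS((metric) => {
//   console.log('CLS', metric.value, summarizeShifts(metric));
// }, {reportAllChanges: true});
```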

@psabharwal123
Author

In that case we would have to repeat all of the logic the attribution build has for each entry, e.g. getting the selector for the shifted element.
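The per-entry selector logic being referred to could be approximated with a small helper. This is a rough sketch of what such attribution logic might do (not the attribution build's actual code), walking up from a layout-shift source node and building a CSS-like path:

```javascript
// Rough sketch of selector generation (NOT the attribution build's code):
// walk up from a DOM node, building a CSS-like selector path.
function getSelector(node) {
  const parts = [];
  while (node && node.nodeType === 1) { // element nodes only
    let part = node.nodeName.toLowerCase();
    if (node.id) {
      // An id is specific enough; prepend it and stop walking up.
      parts.unshift(part + '#' + node.id);
      break;
    }
    if (typeof node.className === 'string' && node.className.trim()) {
      part += '.' + node.className.trim().split(/\s+/).join('.');
    }
    parts.unshift(part);
    node = node.parentNode;
  }
  return parts.join(' > ');
}

// Per-entry usage against raw layout-shift entries (assumed shape):
// for (const entry of metric.entries) {
//   for (const source of entry.sources || []) {
//     console.log(getSelector(source.node), entry.value);
//   }
// }
```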
