Memory Leak issue in regards to SSR #891

Open
Bananicorn opened this issue Feb 10, 2022 · 16 comments

@Bananicorn

Upon installing Frontity locally with:

  • npx frontity create my-first-frontity-project
  • choosing either of the two default themes
  • installing dependencies with npm install
  • starting it with npm run build && npm run serve

It seems as if the Node application can't garbage collect all the memory allocated during requests.
It happens naturally as the site is browsed, but it can be seen a bit more clearly when stress testing with siege, using the following command:
timeout 30s siege http://localhost:3000/
The memory spikes up, as expected for a stress test, but after each run it settles at a higher value: 279 MB -> 330 MB -> 381 MB -> 430 MB, and refuses to go down any further.

I thought it was only the case with my theme, since the Docker instance it runs in would periodically run out of memory, but after removing all prefetching for SSR it stopped, and the leak also seems to happen with the default themes.

For better local testing, I've also added the following script to package.json: "inspect": "frontity build && node --inspect ./node_modules/frontity/dist/src/cli/index.js serve". This lets me use the Node.js debugger in Chromium, but I can't make much sense of the memory dumps there, other than that they steadily increase in size.
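
For reference, the scripts section of package.json then looks roughly like this (the dev/build/serve entries are the defaults generated by frontity create; only the inspect entry is the addition described above):

{
  "scripts": {
    "dev": "frontity dev",
    "build": "frontity build",
    "serve": "frontity serve",
    "inspect": "frontity build && node --inspect ./node_modules/frontity/dist/src/cli/index.js serve"
  }
}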

Locally, I'm running:

System:

  • OS: Linux 5.4 Ubuntu 20.04.3 LTS (Focal Fossa)
  • CPU: (4) x64 Intel(R) Core(TM) i5-2540M CPU @ 2.60GHz
  • Memory: 562.04 MB / 11.48 GB
  • Shell: 5.8 - /bin/zsh

Binaries:

  • Node: 16.11.1 - ~/.nvm/versions/node/v16.11.1/bin/node
  • npm: 8.0.0 - ~/.nvm/versions/node/v16.11.1/bin/npm

Browsers:

  • Chrome: Not Found
  • Firefox: 96.0

npmPackages:

  • @frontity/core: ^1.14.3 => 1.14.3
  • @frontity/html2react: ^1.7.0 => 1.7.0
  • @frontity/tiny-router: ^1.4.4 => 1.4.4
  • @frontity/twentytwenty-theme: ./packages/twentytwenty-theme => 1.3.3
  • @frontity/wp-source: ^1.11.7 => 1.11.7
  • frontity: ^1.17.1 => 1.17.1

npmGlobalPackages:

  • frontity: Not Found
  • npx: Not Found

I've tried searching the forums, but the closest thing I've found was this:
https://community.frontity.org/t/npx-frontity-dev-aborted-core-dumped-javascript-heap-out-of-memory/1321/9
But that seems like a slightly different problem.

Thank you for your time - and thank you for Frontity, it's been a joy to develop on so far!

@luisherranz
Member

The memory spikes up, as expected for a stress test, but then settles on the following values: 279mb -> 330mb -> 381mb -> 430mb, and refuses to go down any further

Does it keep growing after ~430mb?

@Bananicorn
Author

Does it keep growing after ~430mb?

Sorry, that was a little unclear. Yes, it'll just keep growing until it runs out of memory.

Of course the siege example is a bit of an extreme one, because it makes ~1000 requests over the course of 30 seconds, but our staging environment crashes over the course of about an hour with just the health check calling the homepage every minute or so. (I don't know if I can share our client's theme here, but I think the same leak applies to the default themes.)

@DonKoko

DonKoko commented Feb 11, 2022

I can confirm I am having the same issue. This causes my local server to crash approximately every 15-60 minutes.

@luisherranz
Member

Ok. Would you mind trying the packages one by one to see which one has the leak (or whether it's the core)? Then we can trace back to the exact point in time when the problem was introduced.

@Bananicorn
Author

I'm sorry, how would I go about that?
The only thing in the packages folder at the moment is the theme itself, but that's probably not what you meant.
I can remove packages from frontity.settings.js, but mostly that just results in the page crashing.

Is there a theme which only uses the core, so I can add the others step by step and see when it starts leaking?

@luisherranz
Member

Oh, I guess you can create a minimal theme then. Something like:

import React from "react";

// Root component that renders nothing but a placeholder.
const Root = () => <div>Empty theme</div>;

// Minimal Frontity package: just a name and a root component.
const EmptyTheme = {
  name: "empty-theme",
  roots: {
    theme: Root,
  },
};

export default EmptyTheme;
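
To run it in isolation, frontity.settings.js would point at this local package next to the packages under test. A minimal sketch is below; the project name, the local "empty-theme" entry, and the use of state.source.url are assumptions for illustration, and the URL is just the test server mentioned later in this thread:

// Sketch only: the Frontity package names come from this issue; the rest is illustrative.
const settings = {
  name: "leak-repro",
  state: {
    frontity: {
      url: "https://test.frontity.org",
    },
  },
  packages: [
    // Local package created in ./packages/empty-theme with the Root above.
    "empty-theme",
    {
      name: "@frontity/wp-source",
      state: {
        source: {
          // Recent wp-source versions accept the WordPress site URL directly.
          url: "https://test.frontity.org",
        },
      },
    },
    "@frontity/tiny-router",
  ],
};

export default settings;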

@Bananicorn
Author

I've got the smallest theme on which I could still observe the leak here:
https://github.com/Bananicorn/frontity-memory-leaking-theme

I only managed to remove @frontity/html2react, because if I remove @frontity/wp-source or @frontity/tiny-router I can't reproduce it anymore.
It's leaking a bit more slowly now, about 2 MB per 500 requests.

@luisherranz
Member

if I remove @frontity/wp-source or @frontity/tiny-router I can't reproduce it anymore

So the memory leak only happens if both are present?

@Bananicorn
Author

Bananicorn commented Feb 16, 2022

So the memory leak only happens if both are present?

I can't really tell, since I didn't manage to get it to run without the router - at least not yet.

@Bananicorn
Author

Okay, I got it to run with @frontity/wp-source and @frontity/tiny-router individually, and I can only really reproduce it when both are present.

@luisherranz
Member

It'd be great if we could narrow it down to the code.

Is it just the presence of the packages? Does it only happen when using connect? Or when accessing a specific part of the state?

@Bananicorn
Author

Hello @luisherranz, it's been a while, but I've got some more specific news this time:
We've figured out our problem to an extent, but we don't have a real solution yet.
It's the menu's fault.
Since we stopped fetching our menu in the beforeSSR hook, our server has been running smoothly, without leaking.
Of course, now we have the problem that the menu isn't there on page load, which is pretty bad for SEO, and it still blinks into existence, which isn't all too pretty either.
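
The prefetch we removed followed the usual Frontity pattern of fetching the menu in the theme's beforeSSR action. A rough sketch of that pattern is below; the theme name, the "/menu/menu" route, and the handler behind it are assumptions for illustration, not the exact client code:

import React from "react";

// Rough sketch of the kind of beforeSSR prefetch we removed.
const ClientTheme = {
  name: "client-theme",
  roots: {
    theme: () => <div>Theme root</div>,
  },
  actions: {
    theme: {
      // Runs on the server before rendering, so the fetched menu is part
      // of the initial HTML instead of blinking in on the client.
      beforeSSR: async ({ actions }) => {
        await actions.source.fetch("/menu/menu");
      },
    },
  },
};

export default ClientTheme;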

I've put together a small example in the test repo https://github.com/Bananicorn/frontity-memory-leaking-theme, just taking the minimal code from the official examples to fetch a menu from WordPress and display it.

Steps to reproduce:

  • Clone the repo
  • Run npm ci
  • Set up an empty WordPress instance with the menu REST API plugin installed
  • Create a menu and assign it to the menu location "menu", preferably with a good handful of links so the leak shows up faster
  • Add a .env file containing the link to the test server, like this: FRONTITY_SOURCE_URL=https://test.frontity.org
  • Start the frontend with npm run inspect
  • Open Chrome's chrome://inspect page and choose the remote target
  • Open a bunch of tabs (around 30) and refresh them a few times
  • In the Chrome inspector window for the Node process, you should see a clear upward trend after a few cycles of refreshing and taking heap snapshots
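
For reference, the menu-fetching code in the repo follows the pattern from the official examples: a custom wp-source handler plus the beforeSSR prefetch shown earlier. A sketch of such a handler is below, assuming the REST endpoint exposed by the WP REST API Menus plugin; the exact route and endpoint in the repo may differ:

// Sketch of a custom handler in the style of the official menu example.
// Treat the "/menu/:slug" pattern and the "/menus/v1/menus/" endpoint as
// assumptions; the latter is the one exposed by the WP REST API Menus plugin.
const menuHandler = {
  name: "menus",
  priority: 10,
  pattern: "/menu/:slug",
  func: async ({ link, params, state, libraries }) => {
    const { slug } = params;
    // Ask the WordPress REST API for the menu assigned to this slug.
    const response = await libraries.source.api.get({
      endpoint: `/menus/v1/menus/${slug}`,
    });
    const menuData = await response.json();
    // Store the items in source.data so connected components can read them.
    Object.assign(state.source.data[link], {
      items: menuData.items,
      isMenu: true,
    });
  },
};

export default menuHandler;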

@erdigokce

erdigokce commented Jan 12, 2023

Hello all,

We are having the same memory leak issue after upgrading Frontity from 1.12.0 to 1.17.2. As shown below (a 12-hour period), while the Frontity application is idle, memory consumption increases linearly until, at some point, it crashes and restarts.
[screenshot: memory usage over a 12-hour period]

So far, I have taken a few heap snapshots of the production build and compared them, then investigated the positive deltas, but that did not help much because numerous meaningless objects are listed.

After a while I tried the "Allocation Timeline" tool to watch allocations and GC activity as they happen.

[screenshot: Allocation Timeline recording]

Searching for uses of the changedBeforeMounted property led me to a useMemo closure allocation in Frontity's connect library.

[screenshot: allocation stack pointing at the useMemo closure in connect]

We can clearly see that if the connected Comp changes over time, useMemo will constantly allocate memory for a new closure.

My opinion (correct me if I am wrong):
the GC does not collect these closures because the properties used inside them are also referenced from outside the closure, so the GC sees them as still actively in use.
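
As a generic illustration of that re-allocation pattern (not Frontity's actual connect source), consider a wrapper like the sketch below: whenever the memo dependencies get a new identity, useMemo builds a new closure, and if anything long-lived still references the previous results, the GC cannot reclaim them.

import React, { useMemo } from "react";

// Generic illustration only, not Frontity's connect implementation.
// `retained` stands in for any long-lived structure (cache, subscription
// registry, ...) that keeps a reference to each memoized closure.
const retained = [];

// Renders `Comp` through a memoized wrapper. If `Comp` gets a new identity
// on every render, useMemo recomputes every time and allocates a fresh
// closure; the references held in `retained` keep all the old ones alive,
// so the GC can never reclaim them.
const ConnectLike = ({ Comp, state, ...props }) => {
  const Wrapped = useMemo(() => {
    const MemoizedComp = (innerProps) => <Comp {...innerProps} state={state} />;
    retained.push(MemoizedComp); // outside reference -> not collectable
    return MemoizedComp;
  }, [Comp, state]);

  return <Wrapped {...props} />;
};

export default ConnectLike;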

@luisherranz
Member

if connected Comp changes over time

Under what circumstances will the connected Comp component change over time?

Can you trace that down to a specific component to understand what's going on?

@DonKoko

DonKoko commented Jan 12, 2023

Just wanted to confirm that for me the memory leak issue has become worse as well. My local server constantly crashes because of it. I'm not sure what I can do to provide more info about the problem to help find the cause. Let me know if I can contribute somehow.

@luisherranz
Member

luisherranz commented Jan 12, 2023

Even though I don't fully understand the problem, if those flags indeed cause a memory leak, these should fix it:

@erdigokce, @DonKoko: would you mind testing it out? You can just replace this file in your node_modules folder.
