
MaxListenersExceededWarning: Possible EventEmitter memory leak detected. #1295

Closed
tamtamchik opened this issue Sep 16, 2021 · 25 comments · Fixed by #1474 · May be fixed by #1325

Comments

@tamtamchik

tamtamchik commented Sep 16, 2021

Hi there,

I think there is a regression in version 3.0.

When I use a custom https Agent with the keepAlive: true option and then make a bunch of requests, warnings like these show up pretty soon:

(node:24083) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 data listeners added to [TLSSocket]. Use emitter.setMaxListeners() to increase limit
MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 data listeners added to [TLSSocket]. Use emitter.setMaxListeners() to increase limit
at _addListener (node:events:469:17)
at TLSSocket.addListener (node:events:491:10)
at TLSSocket.Readable.on (node:internal/streams/readable:876:35)
at ClientRequest. (.../node_modules/node-fetch/src/index.js:316:10)
at ClientRequest.emit (node:events:394:28)
at tickOnSocket (node:_http_client:755:7)
at onSocketNT (node:_http_client:815:5)
at processTicksAndRejections (node:internal/process/task_queues:84:21)

The same thing happens with http; the listeners are just added to Socket instead of TLSSocket.

Reproduction

Here is a really simple example.

import https from 'node:https'
import fetch from 'node-fetch'

const agent = new https.Agent({ keepAlive: true })

async function start () {
  for (let i = 1; i < 1000; i++) {
    await fetch('https://example.com', { agent })
  }
}

process.on('warning', e => console.warn(e.stack))

start().catch(e => console.error(e))

Steps to reproduce the behavior:

  1. Run this code with node-fetch@3 installed and you will start getting warnings.
  2. Install node-fetch@2 as a dependency, run the same code, and boom, no warnings.

Expected behavior

Version 2.6.2 does not produce any warning on the same code.
Version 3.0.0 produces warnings.

Your Environment

I have been running this on node.js 14 (LTS) and 16 (latest) with the same results.

software           version
node-fetch         3.0.0
node               v14.17.6 & v16.9.1
npm                6.14.15 & 7.21.1
Operating System   macOS 11.5.2 (20G95) & Docker with Alpine Linux
@tamtamchik tamtamchik added the bug label Sep 16, 2021
@jimmywarting
Collaborator

We might have the same problem as the undici project, where the body isn't consumed:
https://github.com/nodejs/undici#garbage-collection

What happens if you try this out?

import https from 'node:https'
import fetch from 'node-fetch'

const agent = new https.Agent({ keepAlive: true })

async function start () {
  for (let i = 1; i < 1000; i++) {
    const res = await fetch('https://example.com', { agent })
    for await (const chunk of res.body) {} // do nothing
  }
}

process.on('warning', e => console.warn(e.stack))

start().catch(e => console.error(e))

@jimmywarting
Collaborator

I might have a trick up my sleeve for handling unconsumed bodies with finalizers.

@tamtamchik
Author

tamtamchik commented Sep 16, 2021

@jimmywarting thanks for the prompt reply! Indeed, consuming the body reduced the frequency of the warnings (I only get 2 now for this example), but they are still there. In our production projects response bodies are always consumed by calling .json(), yet the warnings still appear rather often.

@kristoffer-zliide

kristoffer-zliide commented Sep 21, 2021

We just spent more than a week upgrading our code base to ESM and could finally upgrade to node-fetch 3, but were immediately hit by this issue.

@jimmywarting: while not consuming the body probably causes leaks, I don't think this issue is related to that. 51861e9#diff-bfe9874d239014961b1ae4e89875a6155667db834a410aaaa2ebe3cf89820556R310 looks fundamentally suspicious: a close listener is attached to the socket on each request and only removed when the socket is aborted, but that doesn't work well with keep-alive, right? Also, is the data listener ever removed? Our problem largely goes away if we disable keep-alive. This person seems to be hit by it too, even though they consume the body: https://stackoverflow.com/questions/67168047/node-js-memory-leak-am-i-doing-something-wrong-or-is-it-a-problem-with-a-packag
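To illustrate the mechanism being described (this is a standalone sketch with a plain EventEmitter standing in for the reused TLSSocket, not node-fetch's actual code): with keep-alive, every request runs over the same socket, so a listener attached per request and never removed accumulates until Node warns.

```javascript
import { EventEmitter } from 'node:events'

const socket = new EventEmitter() // stands in for the reused keep-alive TLSSocket

// Each "request" attaches handlers to the socket and never removes them.
function onRequest (socket) {
  socket.on('data', () => {})
  socket.on('close', () => {})
}

// Eleven requests over the same socket exceed the default limit of 10
// listeners per event, which is exactly what triggers
// MaxListenersExceededWarning.
for (let i = 0; i < 11; i++) onRequest(socket)

console.log(socket.listenerCount('data')) // 11
```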

@kristoffer-zliide

@tekwiz: is there a way to fix #1222 without introducing memory leaks?

@tekwiz
Member

tekwiz commented Sep 30, 2021

After some experimentation, it turns out that this solution is not compatible with keep-alive. Because the socket is the only place that can be used to verify that a chunked transfer has properly finished, the only option is to disable fixResponseChunkedTransferBadEnding when keep-alive is enabled. We should also advise users in the README that they need to be sure to implement timeouts with keep-alive to avoid the kinds of issues described in #1055 and #968. I can work on a change for this tomorrow.
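The gating described above might look something like this sketch (the function name `shouldFixChunkedTransfer` is hypothetical; it assumes Node's `http.Agent`/`https.Agent` exposing the `keepAlive` option it was constructed with as an instance property, which Node's Agent constructor does set):

```javascript
import https from 'node:https'

// Hypothetical sketch: only apply the chunked-transfer workaround when the
// socket will not be reused, i.e. when keep-alive is off.
function shouldFixChunkedTransfer (agent) {
  // Node's Agent constructor sets this.keepAlive from its options
  return !(agent && agent.keepAlive)
}

console.log(shouldFixChunkedTransfer(new https.Agent({ keepAlive: true }))) // false
console.log(shouldFixChunkedTransfer(new https.Agent()))                    // true
```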

tekwiz added a commit to tekwiz/node-fetch that referenced this issue Oct 1, 2021
@tekwiz
Member

tekwiz commented Oct 1, 2021

@tamtamchik @kristoffer-zliide I pushed up a change to remove the fix when keep-alive is enabled. Please give it a try when you have a moment: https://github.com/tekwiz/node-fetch/tree/fix/chunked-encoding-keepalive

@tamtamchik
Author

tamtamchik commented Oct 4, 2021

@tekwiz I made a couple of runs of this code:

import https from 'node:https'
import fetch from 'node-fetch'

const agent = new https.Agent({ keepAlive: true })

async function start () {
  for (let i = 1; i < 1000; i++) {
    const res = await fetch('https://example.com', { agent })
    for await (const chunk of res.body) {} // do nothing
  }

  console.log(`memoryUsage: `, process.memoryUsage());
}

process.on('warning', e => console.warn(e.stack))

start().catch(e => console.error(e))

Results without your patch:

$ time node fetch-https.js
(node:5553) ExperimentalWarning: stream/web is an experimental feature. This feature could change at any time
ExperimentalWarning: stream/web is an experimental feature. This feature could change at any time
    ...
(node:5553) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 data listeners added to [TLSSocket]. Use emitter.setMaxListeners() to increase limit
MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 data listeners added to [TLSSocket]. Use emitter.setMaxListeners() to increase limit
    ...
memoryUsage:  {
  rss: 103960576,      // 99.1 Mb
  heapTotal: 52555776, // 50.1 Mb
  heapUsed: 22500456,  // 21.4 Mb
  external: 17568481,
  arrayBuffers: 16409888
}
node fetch-https.js  2.90s user 0.19s system 1% cpu 3:05.48 total

Results with your patch (I'm aware I should set up an AbortController, but I sacrificed it for the sake of test simplicity; in fact, when I did use it, the results were identical because my requests had no timeouts):

$ time node fetch-https.js
(node:5666) ExperimentalWarning: stream/web is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
ExperimentalWarning: stream/web is an experimental feature. This feature could change at any time
    ...
memoryUsage:  {
  rss: 50876416,      // 48.5 Mb
  heapTotal: 8253440, // 7.9 Mb
  heapUsed: 7273968,  // 6.9 Mb
  external: 1654597,
  arrayBuffers: 241914
}
node fetch-https.js  1.81s user 0.20s system 1% cpu 3:05.54 total

@jimmywarting
Collaborator

Curious what the memoryUsage was from the start... 🤔

@kristoffer-zliide

@tekwiz: we replaced "node-fetch": "3.0.0" with "node-fetch": "https://github.com/tekwiz/node-fetch.git#1c43dd773db587a064083d15fedf9d1d65c261c7" in a couple of our services and can confirm that the warnings have gone and that they seem to be working fine in a QA environment.

We already have an abort-controller-based timeout in those services, but can you elaborate a bit on what we need to be aware of? In particular, what changed since 2.6.2 that caused this issue to happen, and how does 2.6.2 deal with these issues?

@tamtamchik
Author

@jimmywarting I slightly adjusted the script: https://gist.github.com/tamtamchik/8ad09447ae9ad8621e268fa9921d5917

without patch (made several runs, results are almost identical):

memoryUsage.rss: 35.97 Mb ⇒ 96.75 Mb (Diff: 168.99%)
memoryUsage.heapTotal: 7.11 Mb ⇒ 49.87 Mb (Diff: 601.10%)
memoryUsage.heapUsed: 5.23 Mb ⇒ 29.92 Mb (Diff: 471.66%)
memoryUsage.external: 1.04 Mb ⇒ 16.76 Mb (Diff: 1512.99%)
memoryUsage.arrayBuffers: 0.03 Mb ⇒ 15.68 Mb (Diff: 50570.52%)

with patch:

memoryUsage.rss: 35.87 Mb ⇒ 47.92 Mb (Diff: 33.61%)
memoryUsage.heapTotal: 7.11 Mb ⇒ 7.62 Mb (Diff: 7.14%)
memoryUsage.heapUsed: 5.22 Mb ⇒ 6.86 Mb (Diff: 31.51%)
memoryUsage.external: 1.04 Mb ⇒ 8.10 Mb (Diff: 679.89%)
memoryUsage.arrayBuffers: 0.03 Mb ⇒ 0.21 Mb (Diff: 592.11%)

@tekwiz
Member

tekwiz commented Oct 7, 2021

We already have abort controller-based timeout in those services, but can you elaborate a bit on it is we need to be aware of, in particular, what has changed since 2.6.2 that caused this issue to happen? How is 2.6.2 dealing with these issues?

Be aware of connection problems. With chunked transfer-encoding, Node.js has trouble figuring out that a chunked response didn't complete successfully if the connection ends prematurely (#1055). Also, and maybe more importantly, there are certain cases where chunked responses never trigger the end of the stream, and programs will hang (#968). All versions of node-fetch before 3.0.0-beta.10 are affected.
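The timeout the commenters mention can be sketched like this (the `withTimeout` wrapper name is mine, not node-fetch's; node-fetch v3 does accept a `signal` option, and `AbortController` is global in current Node versions):

```javascript
// Hedged sketch: wrap any fetch-compatible function so a hung chunked
// response aborts after `ms` milliseconds instead of stalling forever
// on a keep-alive socket.
function withTimeout (fetchFn, ms) {
  return async (url, options = {}) => {
    const controller = new AbortController()
    const timer = setTimeout(() => controller.abort(), ms)
    try {
      return await fetchFn(url, { ...options, signal: controller.signal })
    } finally {
      clearTimeout(timer) // don't leave the timer pending after a fast response
    }
  }
}

// Usage with node-fetch v3:
//   import fetch from 'node-fetch'
//   const fetchWithTimeout = withTimeout(fetch, 10_000)
//   const res = await fetchWithTimeout('https://example.com', { agent })
```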

@kristoffer-zliide

Thank you, @tekwiz.

@kristoffer-zliide

What's the hold-up here? Can we please get a release without memory leaks? Thanks.

@olegade

olegade commented Dec 2, 2021

Any updates on this issue?

@derrickrc

I am getting these errors, and after some Googling I believe they are related to node-fetch (Google Cloud Functions Node.js runtime). According to my package-lock.json, the version I'm using is 2.6.0.

@felipec
Contributor

felipec commented Jan 24, 2022

I've found the problem.

In fixResponseChunkedTransferBadEnding() two signals are added to the socket in every request, but they are never removed.

In #1474 I fix that problem by removing both signals on the socket's close() event.
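The cleanup pattern described here can be sketched as follows (a standalone illustration with a plain EventEmitter standing in for the TLSSocket, not node-fetch's actual code): every listener attached for a request is removed again when the socket closes, so listeners cannot pile up across the socket's lifetime.

```javascript
import { EventEmitter } from 'node:events'

// Hedged sketch of the remove-on-close pattern.
function attachWithCleanup (socket) {
  const onData = () => {}
  const onClose = () => {
    // Undo everything this request attached, including this handler itself.
    socket.removeListener('data', onData)
    socket.removeListener('close', onClose)
  }
  socket.on('data', onData)
  socket.on('close', onClose)
}

const socket = new EventEmitter() // stands in for a TLSSocket
attachWithCleanup(socket)
console.log(socket.listenerCount('data')) // 1 while the request is live
socket.emit('close')
console.log(socket.listenerCount('data')) // 0 after the socket closes
```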

This was referenced Jan 24, 2022
@olegade

olegade commented Feb 8, 2022

Is it possible to get an update on this issue? We eagerly await a new version without the memory leak.

@jimmywarting
Collaborator

So far I have been waiting on @tekwiz response to #1325 (review) - currently it seems as if we have two possible PRs with different approaches

@ghost

ghost commented Mar 14, 2022

@jimmywarting The PR has been inactive for 2 months. Can you choose the other fix and get out a release to unblock anyone experiencing the issue?

@AlttiRi

AlttiRi commented Apr 21, 2022

The similar issue nodejs/undici#430 was fixed by nodejs/undici#431 in the same way.

@AlttiRi

AlttiRi commented May 2, 2022

I took a look at both pull requests. Check my comments there.
#1325
#1474

I think #1474 by @felipec should be accepted. It does the right thing.

@AlttiRi

AlttiRi commented May 7, 2022

So, what now?
This is not only a memory and performance leak, it's also an annoying console warning.

Both pull requests resolve the problem.

#1474 prevents the leak by removing the previous listeners in fixResponseChunkedTransferBadEnding on each new request.

However, the fixResponseChunkedTransferBadEnding function is not suited to working with a keep-alive Agent, so #1325 disables it entirely for keep-alive Agents.

I should say that #1325 is the more correct one.


Also note that fixResponseChunkedTransferBadEnding (which was added in Aug 2021) is only needed on Node <= 15, and only for some requests. #1219


Neither pull request has real cons; just accept either. #1474 is the softer fix, but it leaves 1 useless listener; #1325 is more radical, but correct.

jimmywarting pushed a commit that referenced this issue May 7, 2022
Since 8eeeec1 it seems listeners have been leaking on keep-alive
sockets.

Apparently abort() has been deprecated since v17; using close(), the
onSocketClose listener is properly removed, and by creating a new onData
function I'm able to remove the 'data' listener too.
@davidmz

davidmz commented May 27, 2022

Could you make a release with this change?

@github-actions

github-actions bot commented Jun 1, 2022

🎉 This issue has been resolved in version 3.2.5 🎉

The release is available on:

Your semantic-release bot 📦🚀
