
MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 SIGHUP listeners added. Use emitter.setMaxListeners() to increase limit #852

Closed
jonhdom opened this issue Mar 28, 2019 · 17 comments · Fixed by #1110

@jonhdom

jonhdom commented Mar 28, 2019

When I log info, it produces this warning. Has anyone solved this?

MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 SIGHUP listeners added. Use emitter.setMaxListeners() to increase limit

@nomiddlename
Collaborator

Are you calling log4js.configure multiple times? You only need to call it once in your application.

@greenais

I experience the same issue. To ensure that I call log4js.configure only once, I made a separate configuration module (one watcher logger as an example):

//log4js_configure.js
const log4js = require('log4js');
const configureLog4js = () => {
 log4js.configure({
   appenders: {
     watcher: {
       type: 'file',
       filename: '/0.web/log/workers/watcher.log', 
       maxLogSize: 51200,
       backups: 10
     },
...
   },
   categories: {
     watcher: {
       appenders: ['watcher'],
       level: 'debug'
     },
...
   },
 })
};
module.exports = configureLog4js;

Then I create and export the logger:

//log4js.js
const log4js = require('log4js');
...
exports.watcher = log4js.getLogger('watcher');

but if I then call it in my watcher.js like this:

const configureLog = require('../config/log4js_configure.js');
configureLog();
const log = require('../config/log4js.js').watcher;

I get "MaxListenersExceededWarning" like above. My goal is to avoid duplicating const log4js = require('log4js'); in every module where I need specific logger, but require appropriate one from common module, is it achievable?

@nomiddlename
Collaborator

Do you call configureLog() in each module where you use a logger? If so, that will cause the issue you're seeing. Instead, move your log4js.configure call into your log4js.js module. Node's module caching means that configure is only called once. To be clear - instead of the two logging modules you have now, just have one that looks like this:

// logging.js
const log4js = require('log4js');
log4js.configure({ ... config from above... });
exports.watcher = log4js.getLogger('watcher');

and use it like this:

// watcher.js
const log = require('./logging').watcher;

@greenais

No, that's exactly what I did at first - I put log4js.configure in log4js.js and exported all the loggers there, as you suggest. But it brought up the 11 SIGHUP warning.
That's why I decided to move configure to a separate module and call it, surely only once, at app start (I called it in watcher.js just as an example - to show that it was done).
But it triggers the warning again...

The thing is that if I call the logger in watcher.js this way:

const configureLog = require('../config/log4js_configure.js');
configureLog();
const log4js = require('log4js');
const log = log4js.getLogger('watcher');

I get no such error. Could it somehow be connected to how exports work?
I tried this in log4js.js:

const watcher = log4js.getLogger('watcher');
module.exports = {watcher};

but got the same warning...
I'm on the latest LTS Node 10.15.3 and log4js 4.1.0, if it matters.

@klimashkin

I get the same error on 4.1.0 and I pass an object to log4js.configure, not a file path

@nomiddlename
Collaborator

I think the issue is that each file appender instance adds a SIGHUP handler, when they could all use the same handler. I'll see if I can work on a fix.
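In the meantime, the warning itself can be suppressed by raising the listener limit on process, as the message suggests. This is only a workaround for the noise, not a fix for the duplicated handlers; a minimal sketch:

// workaround only: raise the limit (or remove it entirely with 0) so the
// warning is not emitted when many file appenders register SIGHUP listeners
process.setMaxListeners(20);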

@denkerny

denkerny commented Jan 24, 2020

@nomiddlename so what's the point? I have the following code with more than 12 services, which is why it gives me this warning...

Custom logger format + export:

import { configure, getLogger } from 'log4js'; 

//log4js format
const layout = { 
    type: 'pattern', 
    pattern: '[%d] [%p] [%c] [%x{context}] - %m%n', 
    tokens: {
        context: ({ data }) => {
            const { context } = data.pop();
            return context;
        } 
    } 
} 

//log4js types + config
const types = {
    log: { type: 'console' },
    file: (appender) => ({ 
        type: 'file', 
        filename: `../logs/${appender}/${appender}.log`, 
        maxLogSize: 10485760, 
        backups: 3 
    })
};

//active services
const services = [ 
    'console', 
    'promin',  
    'errorHandler', 
    'startInfo', 
    'authpp',
    'issues',
    'timeentry',
    'kanbanboards', 
    'teams', 
    'user', 
    'calendar', 
    'activity', 
    'staff', 
    'tracker', 
    'issuecategory', 
    'salary', 
    'esc', 
    'version', 
    'issuestatus', 
    'projects', 
    'member'
];

//service interface
const appender = ({ ...args }) => ({ layout, ...args });

//add service to category
const category = ( ...appenders ) => ({ appenders, level: 'info' });

//logger interface
const logger = (appender) => getLogger(appender);

//'console.log' or type to 'file'
const type = (type) => {
    const { log, file } = types;
    return type === 'console' ? log : file(type);
}

//common interface for all currents
const config = (appenders, action, options) => 
    appenders
        .reduce((prev, curr) => 
            ({ ...prev, [curr]: action(curr) }), 
            options || {}
        )

//common appender interface
const currentAppender = (current) => appender(type(current)); 

//common category interface
const currentCategory = (current) => category(current);

//common logger interface
const currentLogger = (current) => logger(current);

//filter console service (choosen by default)
const filter = (appenders, exception) => appenders.filter(a => a !== exception);

//configure appenders + categories
configure({
    appenders: config(services, currentAppender),
    categories: config(
        filter(services, 'console'), 
        currentCategory, 
        { default: category('console') }
    )
});

//export all configured services
export default {
    ...config(services, currentLogger)
} 

Then, catching errors in my getResponse wrapper:

            .catch(e => {
                next({ 
                    context: req.baseUrl, 
                    loggerName: req.serviceName,
                    ...e 
                });
            })

Setting loggerName to every specific route:

const service = (name) => {
    return (req, res, next) => {
        req.serviceName = name;
        next();
    }
}

router.use('/issues', service('issues'), issueRouter);
router.use('/timeentry', service('timeentry'), timeEntryRouter);
router.use('/member', service('member'), memberRouter);
router.use('/projects', service('projects'), projectRouter);
router.use('/issuestatus', service('issuestatus'), issueStatusRouter);
router.use('/version', service('version'), versionRouter);
router.use('/esc', service('esc'), escRouter);
router.use('/salary', service('salary'), salaryRouter);
router.use('/issuecategory', service('issuecategory'), issueCategoryRouter);
router.use('/tracker', service('tracker'), trackerRouter);
router.use('/staff', service('staff'), staffRouter);
router.use('/activity', service('activity'), activityRouter);
router.use('/calendar', service('calendar'), calendarRouter);
router.use('/user', service('user'), userRouter);
router.use('/teams', service('teams'), teamsRouter);
router.use('/kanbanboards', service('kanbanboards'), kanbanboardsRouter);

The ErrorHandler uses the logger:

       logger[loggerName || 'errorHandler']
         .info(
           `[Login: ${username}] [${statusCode}] ${JSON.stringify(error)}`, 
           { context }
         );

How can I resolve this possible memory leak problem with my own strategy as above?

@nomiddlename
Collaborator

@kernyden have you tried using the multiFile appender?

@denkerny

denkerny commented Jan 30, 2020

@nomiddlename hm, do I just need this approach for all my appenders, i.e. the multiFile appender?

const log4js = require('log4js');

log4js.configure({
  appenders: {
    multi: { type: 'multiFile', base: 'logs/', property: 'categoryName', extension: '.log' }
  },
  categories: {
    default: { appenders: [ 'multi' ], level: 'info' }
  }
});

export default {
  cheese: log4js.getLogger('cheese'),
  potato: log4js.getLogger('potato')
}

in a galaxy far, far away...

logger.cheese.info('this will be logged in logs/cheese.log');
logger.potato.info('this will be logged in logs/potato.log');
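For reference, a minimal sketch (not the project's actual code) of how the per-service file appenders above could collapse onto a single multiFile appender keyed by category name, with console kept as the default category; the service names are reused from the earlier snippet for illustration:

const { configure, getLogger } = require('log4js');

// one multiFile appender replaces the per-service file appenders;
// each category writes to logs/<categoryName>.log
configure({
  appenders: {
    console: { type: 'console' },
    multi: { type: 'multiFile', base: 'logs/', property: 'categoryName', extension: '.log' }
  },
  categories: {
    default: { appenders: ['console'], level: 'info' },
    issues: { appenders: ['multi'], level: 'info' },
    timeentry: { appenders: ['multi'], level: 'info' }
    // ...one category per remaining service
  }
});

module.exports = {
  issues: getLogger('issues'),
  timeentry: getLogger('timeentry')
  // ...one logger per remaining service
};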

@mpiivonen

mpiivonen commented Feb 5, 2020

Using multiFile doesn't solve the problem either. At least in my case, I use multiple threads which, in some error cases, log data each time to a new file whose name is specified by a uuid. The documentation says that when running multiple threads there shouldn't be a need to use the tcp logger; anyway, I at least had to use tcp to get the child threads to log anything.

// create functions as imported from other file
if (cluster.isMaster) {
  createMasterLogger('INFO','path')
  for (let i = 0; i < cCPU; i++) {
    cluster.fork()
  }

  cluster.on('exit', (worker, _code, _signal) => {
    console.info(`worker ${worker.process.pid} died`)
    console.info('...respawning')
    cluster.fork()
  })
} else {
  createChildLogger('debug', process.pid)
  const logger = getLogger()
  logger.info('Worker %d - logging something ', process.pid)
}

// other file
import * as log4js from 'log4js'

export const createChildLogger = (level: string, pid: number): void => {
  console.info(`Creating child logger for ${pid}`)
  log4js.configure({
    appenders: {
      worker: { type: 'multiprocess', mode: 'worker' }
    },
    categories: {
      default: { appenders: ['worker'], level: level }
    }
  })

  const logger = log4js.getLogger()
  if (logger !== undefined) {
    logger.info(`Created child logger for ${pid}`)
  } else {
    console.error(`Failed to create child logger for ${pid}`)
  }
}

export const createMasterLogger = (level: string, path: string): void => {
  log4js.configure({
    appenders: {
      multiFile: { type: 'multiFile', base: `${path}/`, property: 'uuid', extension: '.log' },
      error: { type: 'file', filename: `${path}/error.log` },
      master: {
        type: 'multiprocess',
        mode: 'master',
        appender: 'multiFile'
      }
    },
    categories: {
      default: { appenders: ['multiFile'], level: level },
      error: { appenders: ['error'], level: 'error' }
    }
  })
}

// get default or named logger
export const getLogger = (logger?: string): log4js.Logger => {
  if (logger === undefined) {
    return log4js.getLogger()
  }
  return log4js.getLogger(logger)
}

When running and making multiple concurrent calls, at some point I get the same error, "MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 SIGHUP listeners added. Use emitter.setMaxListeners() to increase limit", and from that point onwards nothing gets logged.

@nomiddlename
Collaborator

@mpiivonen Do you have multiple threads, or multiple processes? Are you using node's cluster module or something else? If you're using node's cluster module then you don't need to do anything with multiprocess appenders - support for clustering is built-in.
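For reference, a minimal sketch of the built-in cluster support, assuming node's cluster module: configure log4js once with an ordinary file appender, and worker log events are forwarded to the master process, which does the actual writing (no multiprocess appender needed):

const cluster = require('cluster');
const log4js = require('log4js');

// the same configuration runs in the master and in the workers; with node's
// cluster module, workers send their log events to the master, which writes
log4js.configure({
  appenders: { app: { type: 'file', filename: 'app.log' } },
  categories: { default: { appenders: ['app'], level: 'info' } }
});

const logger = log4js.getLogger();

if (cluster.isMaster) {
  cluster.fork();
  logger.info('this is the master process');
} else {
  logger.info(`this is worker ${process.pid}`);
}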

@mpiivonen

mpiivonen commented Feb 5, 2020

@nomiddlename I have multiple threads. Previously I had an issue getting the threads to call the logger correctly. My approach was to initialize master and worker loggers, with a getter to decide which of the two I'd use, master or worker. Anyway, it seems that type of approach doesn't work: I got the correct type of logger and I assumed the configuration was correct, but I didn't get anything logged. I believe this is by design.

My next approach is simpler: configure the logger and then get the correct type after that.

let logger

log4js.configure({
    appenders: {
      out: { type: 'console' }
    },
    categories: { default: { appenders: ['out'], level: 'debug' } }
  })

  if (cluster.isMaster) {
    logger = log4js.getLogger('master')
  } else {
    logger = log4js.getLogger('worker')
  }

This seems to log correctly, but even with this I face exhaustion issues when stress testing with, say, 40-50 concurrent users: for some time everything gets logged correctly, but at some point logging just stops. Once logging has stopped, the only way to get anything logged is to restart the application.

My logger use case isn't ideal since, instead of logging simple messages, I use this as a backup to other logging, and for that reason I'm streaming quite large JSON objects and logging them once the stream has ended. Heap memory usage at any given time is between 100M - 350M (multiple streams transform JSON to strings which will be logged).

I had the same behaviour with the tcp solution as well, so it's probably just my use case, which isn't really what loggers are usually designed for, and I should start looking for another solution.

@nomiddlename
Collaborator

You might find the stdout appender gives better performance than console - I think console can end up being synchronous in certain circumstances, and in the past console has built up big buffers of objects that chew up memory.
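For example, swapping the console appender for stdout in the earlier config is a one-line change (a sketch, assuming the same category setup):

log4js.configure({
  appenders: {
    out: { type: 'stdout' } // stdout appender instead of console
  },
  categories: { default: { appenders: ['out'], level: 'debug' } }
});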

@mpiivonen

mpiivonen commented Feb 5, 2020

Right, my initial idea was to log to files but for testing I did use console.

const logStream = bfj.streamify(log)
let logChunk = '{log:'
logStream.on('data', data => {
  logChunk += data
})

// avoid content loss in case of error
logStream.on('error', () => {
  // console.info(JSON.stringify({ log: logContent }))
})

// log based on uuid
logStream.on('end', () => {
  console.info('log end')
  logChunk += '}'
  logger.addContext('uuid', uuid)
  logger.info(logChunk)
})

Just gave stdout a try and it definitely has better performance. Tried some concurrency tests:
2-5 times 3-5 users
1 time 20 users
1 time 40 users

All of the cases eventually end in the same situation where logging stops. The API keeps functioning correctly, but nothing gets logged, and none of the console.info calls I use to trace where logging should go get fired. Maybe there's something else wrong with my implementation besides the logging, but I'll continue with this tomorrow.

I also thought maybe Node.js isn't designed to handle this at all, and considered writing a proxy in some other language, e.g. Go, which would handle logging instead of Node.js.

As for memory, you might be right: in some cases the JSON can be 20-30M, and of course the more concurrent users, the more gets buffered, which is why I'm using bfj.streamify. It just occurred to me that maybe the GC doesn't clear the built strings fast enough, and that could lead to memory issues. I'll check that tomorrow.

lamweili added a commit to lamweili/log4js-node that referenced this issue Dec 14, 2021
@lamweili
Contributor

Originally posted by @nomiddlename in #852 (comment)
@kernyden have you tried using the multiFile appender?

The multiFile appender also has the same issue, as it uses the file appender internally.
The issue has to be tackled in the file appender itself.


Originally posted by @nomiddlename in #852 (comment)
I think the issue is that each file appender instance adds a SIGHUP handler, when they could all use the same handler. I'll see if I can work on a fix.

Hi @nomiddlename, I have submitted a pull request (#1110) for this based on your comments.
Please kindly review, thanks!
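For context, a rough sketch of the shared-handler idea described above (not the actual code in the PR): keep one registry of open file streams and attach a single SIGHUP listener to process that re-opens all of them, instead of one listener per appender instance. The reopen method here is a hypothetical stand-in for whatever re-open logic the underlying stream actually exposes.

// illustration only - one process-level SIGHUP listener shared by all file appenders
const openStreams = new Set();

process.on('SIGHUP', () => {
  openStreams.forEach((stream) => {
    // 'reopen' is a placeholder for the real re-open-on-rotation logic
    if (typeof stream.reopen === 'function') {
      stream.reopen();
    }
  });
});

// each file appender registers its stream once and deregisters on shutdown
function registerFileStream(stream) {
  openStreams.add(stream);
  return () => openStreams.delete(stream);
}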

@lamweili
Contributor

lamweili commented Dec 14, 2021

This issue can be reproduced with the following code in Node.js.

const log4js = require('log4js');

log4js.configure({
  appenders: {
    a1: { type: 'file', filename: 'app1.log' },
    a2: { type: 'file', filename: 'app2.log' },
    a3: { type: 'file', filename: 'app3.log' },
    a4: { type: 'file', filename: 'app4.log' },
    a5: { type: 'file', filename: 'app5.log' },
    a6: { type: 'file', filename: 'app6.log' },
    a7: { type: 'file', filename: 'app7.log' },
    a8: { type: 'file', filename: 'app8.log' },
    a9: { type: 'file', filename: 'app9.log' },
    a10: { type: 'file', filename: 'app10.log' },
    a11: { type: 'file', filename: 'app11.log' },
    a12: { type: 'file', filename: 'app12.log' }
  },
  categories: {
    default: { 
      appenders: [ 'a1', 'a2', 'a3', 'a4', 'a5', 'a6', 'a7', 'a8', 'a9', 'a10', 'a11', 'a12' ], 
      level: 'debug'
    }
  }
});
> (node:18567) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 SIGHUP listeners added to [process]. Use emitter.setMaxListeners() to increase limit
(Use `node --trace-warnings ...` to show where the warning was created)

A check on the listener count:

process.listenerCount('SIGHUP');
12

@lamweili
Contributor

After PR #1110, there is no more MaxListenersExceededWarning.

A check on the listener count:

process.listenerCount('SIGHUP');
1

lamweili added commits to lamweili/log4js-node that referenced this issue Dec 16, 2021
@lamweili lamweili added this to the 6.4.0 milestone Jan 10, 2022