MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 SIGHUP listeners added. Use emitter.setMaxListeners() to increase limit #852
Are you calling `log4js.configure` more than once?
I experience the same issue. To ensure that I call log4js.configure only once, I made a separate module (one watcher logger as an example), then I create and export the logger there. But if I then require it in my watcher.js, I get the "MaxListenersExceededWarning" above. My goal is to avoid duplicating `const log4js = require('log4js');` in every module where I need a specific logger, and instead require the appropriate one from a common module. Is that achievable?
Do you call

```js
// logging.js
const log4js = require('log4js');
log4js.configure({ ... config from above... });
exports.watcher = log4js.getLogger('watcher');
```

and use it like this:

```js
// watcher.js
const log = require('./logging').watcher;
```
No, that's exactly what I did at first: put log4js.configure in log4js.js and exported all the loggers there, as you suggest. But it still produced the 11 SIGHUP listeners warning.

I've got no such error. Could it be connected somehow to something specific in your exports?

…but I got the same warning.
I get the same error on …
I think the issue is that each file appender instance adds a SIGHUP handler, when they could all use the same handler. I'll see if I can work on a fix.
@nomiddlename so what's the point? I have the following code with more than 12 services, which is why I get this warning. Custom logger format + export:

```js
import { configure, getLogger } from 'log4js';

// log4js format
const layout = {
  type: 'pattern',
  pattern: '[%d] [%p] [%c] [%x{context}] - %m%n',
  tokens: {
    context: ({ data }) => {
      const { context } = data.pop();
      return context;
    }
  }
};

// log4js types + config
const types = {
  log: { type: 'console' },
  file: (appender) => ({
    type: 'file',
    filename: `../logs/${appender}/${appender}.log`,
    maxLogSize: 10485760,
    backups: 3
  })
};

// active services
const services = [
  'console',
  'promin',
  'errorHandler',
  'startInfo',
  'authpp',
  'issues',
  'timeentry',
  'kanbanboards',
  'teams',
  'user',
  'calendar',
  'activity',
  'staff',
  'tracker',
  'issuecategory',
  'salary',
  'esc',
  'version',
  'issuestatus',
  'projects',
  'member'
];

// service interface
const appender = ({ ...args }) => ({ layout, ...args });
// add service to category
const category = (...appenders) => ({ appenders, level: 'info' });
// logger interface
const logger = (appender) => getLogger(appender);
// 'console.log' or type to 'file'
const type = (type) => {
  const { log, file } = types;
  return type === 'console' ? log : file(type);
};
// common interface for all currents
const config = (appenders, action, options) =>
  appenders.reduce(
    (prev, curr) => ({ ...prev, [curr]: action(curr) }),
    options || {}
  );
// common appender interface
const currentAppender = (current) => appender(type(current));
// common category interface
const currentCategory = (current) => category(current);
// common logger interface
const currentLogger = (current) => logger(current);
// filter console service (chosen by default)
const filter = (appenders, exception) => appenders.filter((a) => a !== exception);

// configure appenders + categories
configure({
  appenders: config(services, currentAppender),
  categories: config(
    filter(services, 'console'),
    currentCategory,
    { default: category('console') }
  )
});

// export all configured services
export default {
  ...config(services, currentLogger)
};
```

Then in my getResponse wrapper, catching:

```js
.catch(e => {
  next({
    context: req.baseUrl,
    loggerName: req.serviceName,
    ...e
  });
})
```

Setting loggerName on every specific route:

```js
const service = (name) => {
  return (req, res, next) => {
    req.serviceName = name;
    next();
  };
};

router.use('/issues', service('issues'), issueRouter);
router.use('/timeentry', service('timeentry'), timeEntryRouter);
router.use('/member', service('member'), memberRouter);
router.use('/projects', service('projects'), projectRouter);
router.use('/issuestatus', service('issuestatus'), issueStatusRouter);
router.use('/version', service('version'), versionRouter);
router.use('/esc', service('esc'), escRouter);
router.use('/salary', service('salary'), salaryRouter);
router.use('/issuecategory', service('issuecategory'), issueCategoryRouter);
router.use('/tracker', service('tracker'), trackerRouter);
router.use('/staff', service('staff'), staffRouter);
router.use('/activity', service('activity'), activityRouter);
router.use('/calendar', service('calendar'), calendarRouter);
router.use('/user', service('user'), userRouter);
router.use('/teams', service('teams'), teamsRouter);
router.use('/kanbanboards', service('kanbanboards'), kanbanboardsRouter);
```

The error handler uses the logger:

```js
logger[loggerName || 'errorHandler']
  .info(
    `[Login: ${username}] [${statusCode}] ${JSON.stringify(error)}`,
    { context }
  );
```

How can I resolve this possible memory-leak warning while keeping this strategy?
@kernyden have you tried using the multiFile appender? |
@nomiddlename hm, do I just need this scenario for all my loggers?

```js
log4js.configure({
  appenders: {
    multi: { type: 'multiFile', base: 'logs/', property: 'categoryName', extension: '.log' }
  },
  categories: {
    default: { appenders: ['multi'], level: 'info' }
  }
});

export default {
  cheese: log4js.getLogger('cheese'),
  potato: log4js.getLogger('potato')
};
```

Then, in a galaxy far, far away:

```js
logger.cheese.info('this will be logged in logs/cheese.log');
logger.potato.info('this will be logged in logs/potato.log');
```
Using multiFile doesn't solve the problem either. At least in my case I use multiple threads which, in some error cases, log data each time to a new file whose name is specified by a uuid. The documentation says that when running multiple threads there shouldn't be a need to use the tcp logger; in any case, I had to use tcp to get the child threads to log anything.

```ts
// create functions as imported from other file
if (cluster.isMaster) {
  createMasterLogger('INFO', 'path')
  for (let i = 0; i < cCPU; i++) {
    cluster.fork()
  }
  cluster.on('exit', (worker, _code, _signal) => {
    console.info(`worker ${worker.process.pid} died`)
    console.info('...respawning')
    cluster.fork()
  })
} else {
  createChildLogger('debug', process.pid)
  const logger = getLogger()
  logger.info('Worker %d - logging something ', process.pid)
}
```

```ts
// other file
import * as log4js from 'log4js'

export const createChildLogger = (level: string, pid: number): void => {
  console.info(`Creating child logger for ${pid}`)
  log4js.configure({
    appenders: {
      worker: { type: 'multiprocess', mode: 'worker' }
    },
    categories: {
      default: { appenders: ['worker'], level: level }
    }
  })
}

export const createMasterLogger = (level: string, path: string): void => {
  log4js.configure({
    appenders: {
      multiFile: { type: 'multiFile', base: `${path}/`, property: 'uuid', extension: '.log' },
      error: { type: 'file', filename: `${path}/error.log` },
      master: {
        type: 'multiprocess',
        mode: 'master',
        appender: 'multiFile'
      }
    },
    categories: {
      default: { appenders: ['multiFile'], level: level },
      error: { appenders: ['error'], level: 'error' }
    }
  })
  const logger = log4js.getLogger()
  if (logger !== undefined) {
    logger.info('Created master logger')
  } else {
    console.error('Failed to create master logger')
  }
}

// get default or named logger
export const getLogger = (logger?: string): log4js.Logger => {
  if (logger === undefined) {
    return log4js.getLogger()
  }
  return log4js.getLogger(logger)
}
```

When running and making multiple concurrent calls, at some point I get the same error, "MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 SIGHUP listeners added. Use emitter.setMaxListeners() to increase limit", and from that point onwards nothing gets logged.
@mpiivonen Do you have multiple threads, or multiple processes? Are you using node's cluster module or something else? If you're using node's cluster module then you don't need to do anything with multiprocess appenders - support for clustering is built-in. |
@nomiddlename I have multiple threads. Previously I had an issue getting the threads to call the logger correctly. My approach was to initialize master and worker loggers, with a getter to decide which of the loggers I'd use, master or worker. Anyway, it seems that type of approach doesn't work: I got the correct type of logger, and I assumed the configuration was correct, but I didn't get anything logged. I believe this is by design. My next approach is simpler: configure the logger, then get the correct type after that.

```ts
let logger
log4js.configure({
  appenders: {
    out: { type: 'console' }
  },
  categories: { default: { appenders: ['out'], level: 'debug' } }
})
if (cluster.isMaster) {
  logger = log4js.getLogger('master')
} else {
  logger = log4js.getLogger('worker')
}
```

This seems to log correctly, but even with this I face exhausting issues when stress testing with, say, 40-50 concurrent users: for some time everything gets logged correctly, but at some point logging just stops. Once logging has stopped, the only way to get anything logged is to restart the application. My use case isn't ideal: instead of logging simple lines, I use this as a backup to other logging, so I'm streaming quite large JSON objects and logging them once the stream has ended. Heap memory usage at any given time is between 100M and 350M (multiple streams transform JSON to strings, which are then logged). I had the same issue with the tcp solution as well, so probably it's just my use case, which is not really what loggers are usually designed for, and I should start looking for another solution.
You might find the …
Right, my initial idea was to log to files, but for testing I used the console.

```ts
const logStream = bfj.streamify(log)
let logChunk = '{log:'
logStream.on('data', data => {
  logChunk += data
})
// avoid content loss in case of error
logStream.on('error', () => {
  // console.info(JSON.stringify({ log: logContent }))
})
// log based on uuid
logStream.on('end', () => {
  console.info('log end')
  logChunk += '}'
  logger.addContext('uuid', uuid)
  logger.info(logChunk)
})
```

Just gave stdout a try and it definitely has better performance. I tried some concurrency tests: all of the cases eventually end in the same situation where logging stops. The API functions correctly, but nothing gets logged, and none of the console.info calls I use to trace where logging should go get fired. Maybe there's something else wrong with my implementation than just the logging; I'll continue with this tomorrow.

I also thought maybe Node.js isn't designed to handle this at all, and I'm considering writing a proxy in some other language, e.g. Go, which would handle the logging instead of Node.js. As for memory, you might be right: in some cases the JSON can be 20-30M, and of course the more concurrent users, the more is buffered, which is why I'm using bfj.streamify. It just occurred to me that maybe the GC doesn't clear the built strings fast enough, and that could lead to memory issues. I'll check that tomorrow.
> I think the issue is that each file appender instance adds a SIGHUP handler, when they could all use the same handler. I'll see if I can work on a fix.
>
> _Originally posted by @nomiddlename in log4js-node#852 (comment)_

The multiFile appender also has the same issue, as it uses the file appender internally.

Hi @nomiddlename, I have submitted a pull request (#1110) for this based on your comments.
This issue can be reproduced with the following code in Node.js:

```js
const log4js = require('log4js');
log4js.configure({
  appenders: {
    a1: { type: 'file', filename: 'app1.log' },
    a2: { type: 'file', filename: 'app2.log' },
    a3: { type: 'file', filename: 'app3.log' },
    a4: { type: 'file', filename: 'app4.log' },
    a5: { type: 'file', filename: 'app5.log' },
    a6: { type: 'file', filename: 'app6.log' },
    a7: { type: 'file', filename: 'app7.log' },
    a8: { type: 'file', filename: 'app8.log' },
    a9: { type: 'file', filename: 'app9.log' },
    a10: { type: 'file', filename: 'app10.log' },
    a11: { type: 'file', filename: 'app11.log' },
    a12: { type: 'file', filename: 'app12.log' }
  },
  categories: {
    default: {
      appenders: [ 'a1', 'a2', 'a3', 'a4', 'a5', 'a6', 'a7', 'a8', 'a9', 'a10', 'a11', 'a12' ],
      level: 'debug'
    }
  }
});
```

which produces:

```
(node:18567) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 SIGHUP listeners added to [process]. Use emitter.setMaxListeners() to increase limit
(Use `node --trace-warnings ...` to show where the warning was created)
```

A check on the listener count:

```js
process.listenerCount('SIGHUP'); // 12
```
After PR #1110, no more warning. A check on the listener count:

```js
process.listenerCount('SIGHUP'); // 1
```
When I log info, it produces this warning. Has anyone solved this?
MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 SIGHUP listeners added. Use emitter.setMaxListeners() to increase limit
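Until a release containing the fix can be used, one stopgap is to raise the listener cap on the `process` object before configuring log4js. Note this only silences the warning; each file appender still registers its own SIGHUP handler. A minimal sketch (the cap of 20 is an arbitrary example, sized to the number of file appenders plus headroom):

```javascript
// Raise the cap so Node stops warning about more than 10 SIGHUP listeners.
// This is a workaround, not a fix: the handlers are still all registered.
process.setMaxListeners(20);
console.log(process.getMaxListeners()); // 20
```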