
increment_var does not increment when restarting/reloading app #4502

Open
fedebaier opened this issue Nov 13, 2019 · 34 comments

@fedebaier

What's going wrong?

increment_var works correctly only when the app is started for the first time. When restarting or reloading, it does not work, so all instances get the same initial value in the env variable.

How could we reproduce this issue?

  • Set a custom env variable, with initial value 1.
  • Set increment_var with the name of the custom env variable.
  • Make sure the exec_mode is cluster.
  • Log the value of the custom env variable in the application.
  • Start the application. Each instance should log a different value.
  • Restart or reload the application. Each instance will log the same initial value.
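
For reference, a minimal ecosystem file along the lines of these steps (the app name, script and variable name are illustrative, not the reporter's actual config) would be:

// ecosystem.config.js - minimal sketch of the reproduction setup
module.exports = {
  apps: [
    {
      name: 'increment-var-repro',
      script: './app.js',          // app.js only needs to log process.env.MY_COUNTER
      instances: 2,
      exec_mode: 'cluster',
      increment_var: 'MY_COUNTER', // PM2 is expected to add the instance offset to this variable
      env: {
        MY_COUNTER: 1              // initial value, as in step 1 above
      }
    }
  ]
};

With a config like this, pm2 start logs a different MY_COUNTER per instance, while pm2 restart and pm2 reload make every instance log the initial 1 again, which is the bug being reported.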

Supporting information

--- PM2 report ----------------------------------------------------------------
Date                 : Wed Nov 13 2019 12:55:41 GMT+0000 (Coordinated Universal Time)
===============================================================================
--- Daemon -------------------------------------------------
pm2d version         : 4.1.2
node version         : 12.13.0
node path            : not found
argv                 : /usr/bin/node,/usr/lib/node_modules/pm2/lib/Daemon.js
argv0                : node
user                 : stimme
uid                  : 1000
gid                  : 1000
uptime               : 4788min
===============================================================================
--- CLI ----------------------------------------------------
local pm2            : 4.1.2
node version         : 12.13.0
node path            : /usr/bin/pm2
argv                 : /usr/bin/node,/usr/bin/pm2,report
argv0                : node
user                 : stimme
uid                  : 1000
gid                  : 1000
===============================================================================
--- System info --------------------------------------------
arch                 : x64
platform             : linux
type                 : Linux
cpus                 : Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
cpus nb              : 2
freemem              : 724017152
totalmem             : 4136554496
home                 : /home/stimme
===============================================================================
--- PM2 list -----------------------------------------------
┌────┬─────────────────────────┬─────────┬─────────┬──────────┬────────┬──────┬──────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name                    │ version │ mode    │ pid      │ uptime │ ↺    │ status   │ cpu      │ mem      │ user     │ watching │
├────┼─────────────────────────┼─────────┼─────────┼──────────┼────────┼──────┼──────────┼──────────┼──────────┼──────────┼──────────┤
│ 3  │ Imaging API             │ 1.0.0   │ cluster │ 19522    │ 2h     │ 2    │ online   │ 0.5%     │ 73.5mb   │ stimme   │ disabled │
│ 4  │ Imaging API             │ 1.0.0   │ cluster │ 19538    │ 2h     │ 2    │ online   │ 0.5%     │ 74.4mb   │ stimme   │ disabled │
│ 5  │ Safir Core API          │ 1.0.0   │ cluster │ 29155    │ 21s    │ 2    │ online   │ 0.5%     │ 139.9mb  │ stimme   │ disabled │
│ 6  │ Safir Core API          │ 1.0.0   │ cluster │ 29172    │ 18s    │ 2    │ online   │ 0.5%     │ 132.8mb  │ stimme   │ disabled │
└────┴─────────────────────────┴─────────┴─────────┴──────────┴────────┴──────┴──────────┴──────────┴──────────┴──────────┴──────────┘
Module
┌────┬───────────────────────────────────────┬────────────────────┬───────┬──────────┬──────┬──────────┬──────────┬──────────┐
│ id │ module                                │ version            │ pid   │ status   │ ↺    │ cpu      │ mem      │ user     │
├────┼───────────────────────────────────────┼────────────────────┼───────┼──────────┼──────┼──────────┼──────────┼──────────┤
│ 0  │ pm2-logrotate                         │ 2.7.0              │ 1509  │ online   │ 0    │ 0%       │ 78.9mb   │ stimme   │
└────┴───────────────────────────────────────┴────────────────────┴───────┴──────────┴──────┴──────────┴──────────┴──────────┘
===============================================================================
--- Daemon logs --------------------------------------------
/home/stimme/.pm2/pm2.log last 20 lines:
PM2        | 2019-11-13T12:45:22: PM2 log: App [Safir Core API:_old_5] exited with code [0] via signal [SIGINT]
PM2        | 2019-11-13T12:45:22: PM2 log: pid=28104 msg=process killed
PM2        | 2019-11-13T12:45:22: PM2 log: App [Safir Core API:6] starting in -cluster mode-
PM2        | 2019-11-13T12:45:22: PM2 log: App [Safir Core API:6] online
PM2        | 2019-11-13T12:45:25: PM2 log: Stopping app:Safir Core API id:_old_6
PM2        | 2019-11-13T12:45:25: PM2 log: App name:Safir Core API id:_old_6 disconnected
PM2        | 2019-11-13T12:45:25: PM2 log: App [Safir Core API:_old_6] exited with code [0] via signal [SIGINT]
PM2        | 2019-11-13T12:45:25: PM2 log: pid=28111 msg=process killed
PM2        | 2019-11-13T12:55:19: PM2 log: App [Safir Core API:5] starting in -cluster mode-
PM2        | 2019-11-13T12:55:19: PM2 log: App [Safir Core API:5] online
PM2        | 2019-11-13T12:55:22: PM2 log: -reload- New worker listening
PM2        | 2019-11-13T12:55:22: PM2 log: Stopping app:Safir Core API id:_old_5
PM2        | 2019-11-13T12:55:22: PM2 log: App name:Safir Core API id:_old_5 disconnected
PM2        | 2019-11-13T12:55:22: PM2 log: App [Safir Core API:_old_5] exited with code [0] via signal [SIGINT]
PM2        | 2019-11-13T12:55:22: PM2 log: pid=28362 msg=process killed
PM2        | 2019-11-13T12:55:22: PM2 log: App [Safir Core API:6] starting in -cluster mode-
PM2        | 2019-11-13T12:55:22: PM2 log: App [Safir Core API:6] online
PM2        | 2019-11-13T12:55:25: PM2 log: Stopping app:Safir Core API id:_old_6
PM2        | 2019-11-13T12:55:25: PM2 log: App [Safir Core API:_old_6] exited with code [0] via signal [SIGINT]
PM2        | 2019-11-13T12:55:25: PM2 log: pid=28379 msg=process killed
@hmaracic

I seem to have the same issue, but when running in fork mode. My PM2 version is 4.1.2.

I'm using a file called pm2start.json to start the app; its contents are:

{
  "name" : "app-name",
  "script" : "./index.js",
  "instances" : 4,
  "increment_var" : "API_SERVER_PORT",
  "exec_mode" : "fork",
  "env" : {
    "API_SERVER_PORT" : 6060,
    "NODE_ENV" : "myEnvironment"
  }
}

The index.js has a line console.log('STARTED WITH PORT: ', process.env.API_SERVER_PORT);
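
A minimal index.js along those lines (a hypothetical sketch, not the reporter's actual file) could look like this:

// index.js - hypothetical sketch of the reporter's setup
const http = require('http');

const port = parseInt(process.env.API_SERVER_PORT, 10);
console.log('STARTED WITH PORT: ', port);

// Each fork must receive a distinct port, otherwise listen() fails with EADDRINUSE.
http.createServer((req, res) => res.end('ok')).listen(port);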
After deleting ~/.pm2 the log states:

0|app-name  | STARTED WITH PORT:  6060
1|app-name  | STARTED WITH PORT:  6061
2|app-name  | STARTED WITH PORT:  6062
3|app-name  | STARTED WITH PORT:  6063

After doing pm2 stop pm2start.json and then again pm2 start pm2start.json, the log says:

2|app-name  | STARTED WITH PORT:  6062
2|app-name  | STARTED WITH PORT:  6060
0|app-name  | STARTED WITH PORT:  6060
0|app-name  | STARTED WITH PORT:  6060
0|app-name  | STARTED WITH PORT:  6060
0|app-name  | STARTED WITH PORT:  6060
1|app-name  | STARTED WITH PORT:  6061
1|app-name  | STARTED WITH PORT:  6060
1|app-name  | STARTED WITH PORT:  6060
1|app-name  | STARTED WITH PORT:  6060
3|app-name  | STARTED WITH PORT:  6063
3|app-name  | STARTED WITH PORT:  6060
3|app-name  | STARTED WITH PORT:  6060
3|app-name  | STARTED WITH PORT:  6060

This shows that the port numbers are correct on the first start, but on both the manual restart and the subsequent automatic restarts, all instances get the same port, 6060. (The app crashes because, after logging, it tries to listen on that port, which fails when all 4 instances try to listen on the same one.)

@fedebaier
Author

Can someone help with this? It's still happening in PM2 version 4.2.0.

@fedebaier
Author

Still happening in version 4.2.1

@Jackeriss

Same issue; I'm using 4.2.1 in fork mode.

@marcobellaccini

Same issue with version 4.2.3.
My nasty workaround is deleting and starting processes every time - but, please, fix this!
...or, at least, it may be time to write about this limitation in the docs.
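
For reference, the workaround described above replaces a plain restart with a full delete and start, e.g. (using the app-name / pm2start.json names from the earlier comment):

pm2 delete app-name
pm2 start pm2start.json

Only a fresh start applies the increments; restart and reload do not, which is the limitation reported in this issue.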

@fernetmatt

> Same issue with version 4.2.3.
> My nasty workaround is deleting and starting processes every time - but, please, fix this!
> ...or, at least, it may be time to write about this limitation in the docs.

I'm also having the same issue and still using the same workaround. Very nasty!

@guizzo

guizzo commented Mar 19, 2020

> > Same issue with version 4.2.3.
> > My nasty workaround is deleting and starting processes every time - but, please, fix this!
> > ...or, at least, it may be time to write about this limitation in the docs.
>
> I'm also having the same issue and still using the same workaround. Very nasty!

Same for me :(

@SmartArray

Same here!

@leeyyl

leeyyl commented Apr 19, 2020

me too😔

@stale

stale bot commented May 23, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label May 23, 2020
@marcobellaccini

marcobellaccini commented May 23, 2020 via email

@stale stale bot removed the stale label May 23, 2020
@fedebaier
Author

😑

@stale

stale bot commented Jun 22, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Jun 22, 2020
@fedebaier
Author

😒

@stale stale bot removed the stale label Jun 22, 2020
@cook1e-monster

still the same ???

@leeyyl

leeyyl commented Jul 8, 2020

> still the same ???

Yes, now I have to run pm2 kill and then pm2 start every time.

@fernetmatt

Unfortunately, yes.

@xyanlucasx

same problem...

@stale

stale bot commented Aug 10, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Aug 10, 2020
@fedebaier
Author

Ping to prevent this issue from being closed for inactivity

@kroko

kroko commented Jan 4, 2021

Ping this issue. Happens in fork mode.

  • On start, each forked instance gets an incremented increment_var value (PORT in the example).
  • On restart, all forked instances get the same increment_var value (PORT in the example; all get 33010, which leads to all but one instance erroring with EADDRINUSE).
const path = require('path');
const os = require('os');

const packageJson = require(path.resolve(__dirname, 'package.json'));
const PORT = 33010;
const WARPAPP_PM_FORK_PORT_START = PORT;
const WARPAPP_PM_FORK_PORT_COUNT = Math.max(1, os.cpus().length - 1);

module.exports = {
  apps: [
    {
      name: packageJson.name,
      script: path.resolve(__dirname, 'app/app.js'),
      autorestart: true,
      watch: false,
      max_memory_restart: '128M',
      instance_var: 'WARPAPP_PM_FORK_ID',

      wait_ready: true,
      listen_timeout: 3000,
      kill_timeout: 3000,

      instances: WARPAPP_PM_FORK_PORT_COUNT,
      exec_mode: 'fork',
      increment_var: 'PORT',

      env: {
        DEBUG: 'warp*',
        NODE_ENV: 'development',
        WARP_FQDN_APP: 'xxx.devserver.tld',
        WARP_FQDN_CDN: 'xxx.devcdn.tld',
        WARP_VERSION_CDN_PRECOMPILED: 'vp001',
        PORT,
        WARPAPP_PM_FORK_PORT_START,
        WARPAPP_PM_FORK_PORT_COUNT
      },
      env_development: {
        DEBUG: 'warp*',
        NODE_ENV: 'development',
        WARP_FQDN_APP: 'xxx.devserver.tld',
        WARP_FQDN_CDN: 'xxx.devcdn.tld',
        WARP_VERSION_CDN_PRECOMPILED: 'vp001',
        PORT,
        WARPAPP_PM_FORK_PORT_START,
        WARPAPP_PM_FORK_PORT_COUNT
      },
      env_testing: {
        NODE_ENV: 'testing',
        WARP_FQDN_APP: 'xxx.devserver.tld',
        WARP_FQDN_CDN: 'xxx.devcdn.tld',
        WARP_VERSION_CDN_PRECOMPILED: 'vp001',
        PORT,
        WARPAPP_PM_FORK_PORT_START,
        WARPAPP_PM_FORK_PORT_COUNT
      },
      env_staging: {
        NODE_ENV: 'production',
        WARP_FQDN_APP: 'stage.finalapp.tld',
        WARP_FQDN_CDN: 'xxxxxx.cloudfront.net',
        WARP_VERSION_CDN_PRECOMPILED: 'vp001',
        PORT,
        WARPAPP_PM_FORK_PORT_START,
        WARPAPP_PM_FORK_PORT_COUNT
      },
      env_production: {
        NODE_ENV: 'production',
        WARP_FQDN_APP: 'www.finalapp.tld',
        WARP_FQDN_CDN: 'xxxxxx.cloudfront.net',
        WARP_VERSION_CDN_PRECOMPILED: 'vp001',
        PORT,
        WARPAPP_PM_FORK_PORT_START,
        WARPAPP_PM_FORK_PORT_COUNT
      },
      error_file: path.join(__dirname, '../../logs/pm2/pm2-err.log'),
      out_file: path.join(__dirname, '../../logs/pm2/pm2-out.log'),
      log_file: path.join(__dirname, '../../logs/pm2/pm2-combined.log'),
      time: true
    }
  ]
};

Note that

  • On start, each forked instance gets an incremented instance_var value (WARPAPP_PM_FORK_ID in the example).
  • On restart, each forked instance also gets an incremented instance_var value (WARPAPP_PM_FORK_ID in the example),
    which allows for a workaround in app logic, where the port is computed from the instance_var value instead of relying on increment_var, i.e.:
// do not consume port number directly from env
// const PORT = normalizeEnvNumber(process.env.PORT || 33000);
// instead ->

const WARPAPP_PM_FORK_ID = normalizeEnvNumber(process.env.WARPAPP_PM_FORK_ID || 0);
const WARPAPP_PM_FORK_PORT_START = normalizeEnvNumber(process.env.WARPAPP_PM_FORK_PORT_START || 0);
const WARPAPP_PM_FORK_PORT_COUNT = normalizeEnvNumber(process.env.WARPAPP_PM_FORK_PORT_COUNT || 0);
// set PORT from PORT_START plus FORK_ID
const PORT = WARPAPP_PM_FORK_PORT_START + WARPAPP_PM_FORK_ID;
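
normalizeEnvNumber is not defined in the snippet above; a plausible helper (an assumption, not the commenter's actual code) would be:

// Hypothetical helper: coerce an env value to an integer, falling back to 0 when it cannot be parsed
function normalizeEnvNumber (value) {
  const parsed = parseInt(value, 10);
  return Number.isNaN(parsed) ? 0 : parsed;
}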

@EXOMODE

EXOMODE commented Jan 27, 2021

I can confirm this problem. It has not been solved for more than a year. Fortunately, there is a simple logical workaround that does not require restarting the process:

  1. Remove the increment_var parameter so that the first run doesn't break;
  2. Add the instance_var: "INSTANCE_ID" setting (the name is taken as an example);
  3. In your code, at the listening start point, add a port initializer based on the instance, for example, http.listen(parseInt(process.env.PORT) + parseInt(process.env.INSTANCE_ID));.

This solution is probably the most logical one because, knowing the initial port number and the current instance id, we can always work out the current offset.
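
A minimal sketch of that workaround (the names match the examples above; both files are illustrative, not a confirmed fix from the PM2 maintainers):

// ecosystem.config.js - use instance_var instead of increment_var (sketch)
module.exports = {
  apps: [
    {
      name: 'app-name',
      script: './index.js',
      instances: 4,
      exec_mode: 'fork',
      instance_var: 'INSTANCE_ID', // each instance gets 0, 1, 2, ... in this variable
      env: {
        PORT: 6060                 // base port; the offset is added in the app code
      }
    }
  ]
};

// index.js - derive the port from the base PORT plus the per-instance id (sketch)
const http = require('http');
const port = parseInt(process.env.PORT, 10) + parseInt(process.env.INSTANCE_ID, 10);
http.createServer((req, res) => res.end('ok')).listen(port);

Since instance_var keeps being applied on restart and reload (as noted in the earlier comment), the computed port stays distinct per instance even though increment_var is not reapplied.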

@MickL

MickL commented Jun 4, 2021

I can't reproduce this in fork mode on PM2 4.5.6.

@NinoSkopac

I just updated to 5.1.1 and the issue is fixed!

@AhmadSinaSaeedi

It works after pm2 kill.

@maganap

maganap commented Apr 9, 2022

@NinoSkopac I'm using 5.2.0 and I still have this issue.
Actually, something went wrong when using NODE_APP_INSTANCE: after a restart, it started counting from 1 or 2 instead of 0.
So I moved to increment_var, and now I have this issue where all instances get the same initial value of '0' after a restart. Deleting and re-creating the app usually fixes it, but a couple of times that didn't work and I had to delete and re-create the app a second time :-/

@gopricy

gopricy commented Jul 21, 2022

This is a serious bug that needs to be fixed; it has already affected our long-running service multiple times. I just found this issue, which has been open for almost 3 years. I really regret using pm2 in the first place.

@RamDivrania

Seems like @Unitech is on lifelong vacation.

This bug is killing us.
@WillianAgostini

@sladg

sladg commented May 29, 2023

Almost 4 years later... still the same thing.
How is there an actual product built on this (http://pm2.keymetrics.io), considering the state of affairs? 🤔

@gopricy

gopricy commented Nov 1, 2023

It is really a tier-0 bug.

@NinoSkopac

NinoSkopac commented Nov 2, 2023 via email

@MagnumGoYB

The problem is still present in 5.2.2.

@WillianAgostini

WillianAgostini commented Nov 14, 2023

Solved in #5715

@WillianAgostini

@Unitech
