Describe the bug

puma: cluster worker 3: 19 [app]: ../libev/ev.c:4043: ev_run: Assertion `("libev: ev_loop recursion during release detected", loop_done != EVBREAK_RECURSE)' failed.

And then it seems to have broken the worker, and after some time the Puma server stops accepting connections.
Puma config:
workers Integer(ENV.fetch('PUMA_MAX_WORKERS', '3'))
worker_culling_strategy :oldest
wait_for_less_busy_worker(0.01)
force_shutdown_after 25
threads 1, 1 # NO Multithreading

puma_fork_worker_mode = ENV.fetch("PUMA_ENABLE_FORK_WORKER_MODE", "0") == "1"

preload_app!(!puma_fork_worker_mode)
nakayoshi_fork(true) unless ENV.fetch("PUMA_DISABLE_NAKAYOSHI_FORK", "0") == "1"

if puma_fork_worker_mode
  restart_randomness_base = ENV.fetch('PUMA_RESTART_WORKERS_AFTER_REQUESTS', 500.0).to_f
  restart_randomness_jitter = ENV.fetch('PUMA_RESTART_WORKERS_AFTER_REQUESTS_JITTER', 0.0).to_f
  restart_randomness = (restart_randomness_base + (rand * restart_randomness_jitter) - (restart_randomness_jitter / 2.0)).to_i

  puts "Will restart workers after #{restart_randomness} requests (base = #{restart_randomness_base}, jitter = #{restart_randomness_jitter})"

  # Due to the randomness of how requests are assigned, at any given time it seems we have workers with like 1k requests and
  # other workers with like 10 requests. So we'll tell puma to refork the process at some randomized interval.
  # This should help reduce memory footprint and optimize the copy-on-write memory benefits.
  fork_worker(restart_randomness)
end

rackup DefaultRackup
port ENV.fetch('PORT', '3000')

before_fork do
  # we should just need to disconnect redis and it will reconnect on use
  disconnect_redis = ->(redis) {
    if redis.kind_of?(::Redis)
      redis.close
    elsif defined?(::MockRedis) && redis.kind_of?(::MockRedis)
      redis.flushdb
    end
    redis
  }

  disconnect_redis.(::StandaloneRedis.connect) if defined?(::StandaloneRedis)
  disconnect_redis.(::Resque.redis&.redis) if defined?(::Resque)
  disconnect_redis.(::Stoplight::Light.default_data_store.instance_variable_get(:@redis)) if defined?(::Stoplight)
  disconnect_redis.(::ActionCable.server.pubsub.redis_connection_for_subscriptions) if defined?(::ActionCable) && ::ActionCable.server.pubsub.kind_of?(::ActionCable::SubscriptionAdapter::Redis)
  disconnect_redis.($redis) if defined?($redis)

  begin
    ::Rails.cache.clear
  rescue NotImplementedError
    # Ignored
  end
end
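For anyone trying to reproduce this without our app-specific before_fork hooks, here is a minimal sketch of the parts of the config that seem relevant to the crash. It assumes only the stock Puma 5.x DSL; the worker and request counts are illustrative placeholders, not our production values.

# config/puma.rb -- minimal sketch, stock Puma 5.x DSL only;
# the counts below are illustrative placeholders
workers 4
threads 1, 1 # single-threaded workers, as in the full config above

port ENV.fetch('PORT', '3000')

# fork_worker enables Puma's fork-worker mode: workers 1..n are forked from
# worker 0 instead of the master process, and workers are re-forked after
# roughly this many requests
fork_worker 300

Note that the full config also passes preload_app!(false) when fork worker mode is on; Puma's docs state that fork_worker is not compatible with preload_app!, so the sketch omits preloading as well.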
To Reproduce
This just starts happening after the application server has been running for some time.
In the above config, it happens with:
PUMA_ENABLE_FORK_WORKER_MODE=1
PUMA_DISABLE_NAKAYOSHI_FORK=0
PUMA_MAX_WORKERS=22
PUMA_RESTART_WORKERS_AFTER_REQUESTS=300
PUMA_RESTART_WORKERS_AFTER_REQUESTS_JITTER=50
Specifically, if you change to PUMA_ENABLE_FORK_WORKER_MODE=0, the error ceases.
This worker count fits on a Heroku Private-L dyno (14 GB RAM) for our somewhat bloated app.
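To make those numbers concrete, here is the refork threshold that the jitter formula in the config above produces with these reproduction values (the variable names are mine, for illustration):

# Worked example of the randomized refork threshold; names are illustrative
base      = 300.0 # PUMA_RESTART_WORKERS_AFTER_REQUESTS
jitter    = 50.0  # PUMA_RESTART_WORKERS_AFTER_REQUESTS_JITTER
threshold = (base + (rand * jitter) - (jitter / 2.0)).to_i
# rand returns a float in [0, 1), so threshold falls in roughly 275..324;
# the value is computed once at config load, so each server boot picks a
# slightly different fork_worker request count
puts threshold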
Expected behavior
I expect this error not to be raised.
Desktop (please complete the following information):
OS: Ubuntu 18 (heroku-18)
Puma Version: 5.6.4
This failed CI run (job MRI: macos-13 2.7) logged: Assertion failed: (("libev: kqueue found invalid fd", 0)), function kqueue_poll, file ev_kqueue.c, line 133.
Uploaded the logs from that run, as they will eventually disappear: MRI macos-13 2.7.zip