Booting puma shows "WARNING: Detected 21 Thread(s) started in app boot" #25259
I've followed up a bit on puma/puma#991. A summary:
I'm beginning to suspect the answer is not to bother with clustered mode in development, and I've updated my configuration accordingly. 😄
r? @matthewd. The threads are coming from the new file watcher; related: #24990 (comment)
Feel free to close this as a dupe of #24990. 😄
@mathie - not really a dupe. The large number of threads is because on OSX Listen uses one thread per watched directory. The only reason for this is the current implementation, and the only reason that in turn hasn't been improved is that I don't use OSX, so I don't even have Xcode or a way to test things on OSX. If someone can extend
We need one file checker booted per process as talked about in rails#24990. Before we do a check to see if any updates have been registered by the listener we first check to make sure that the current process has booted a listener. We are intentionally not starting a listener when the checker is created. This way we can avoid rails#25259 in which puma warns of multiple threads created before fork. As written the listener for each process will be invoked by the `ActionDispatch::Executor` middleware when the `updated?` method is called. This is the first middleware on the stack and will be invoked before application code is read into memory. The downside of this approach is that the API is a little less obvious. I.e. that you have to call `updated?` to get the listener to start is not intuitive. We could make `boot!` not private if we want to make the API a little nicer. Alternatively we could boot when the checker is initialized however this reintroduces the puma threads warning, and also means that in cases of `rails server` or when using `preload!` that we have extra threads notifying of changes on a process that we don't care about. [close rails#24990] [close rails#25259]
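The lazy-boot scheme described above can be sketched in plain Ruby. This is a hedged illustration, not the actual Rails implementation; `LazyFileChecker`, `start_listener`, and `mark_updated` are all names invented here. The key idea is that nothing starts a watcher thread until the first `updated?` call, and a forked worker sees a different `Process.pid`, so it boots its own listener rather than inheriting dead watcher threads.

```ruby
# Hedged sketch (not the actual Rails code): a checker that boots its
# listener lazily, at most once per process, the first time `updated?`
# is asked.
class LazyFileChecker
  def initialize(&start_listener)
    @start_listener = start_listener # receives a callable that signals a change
    @pid     = nil
    @mutex   = Mutex.new
    @updated = false
  end

  # Returns true once per batch of changes, booting the listener on
  # first use so no watcher threads exist before fork.
  def updated?
    boot!
    @mutex.synchronize do
      was = @updated
      @updated = false
      was
    end
  end

  private

  # Boot at most once per process; a forked child has a new pid and
  # therefore boots its own listener on its first `updated?` call.
  def boot!
    return if @pid == Process.pid
    @mutex.synchronize do
      next if @pid == Process.pid
      @start_listener.call(method(:mark_updated))
      @pid = Process.pid
    end
  end

  # Called from the listener's thread; a plain boolean write keeps
  # this sketch free of lock-ordering concerns.
  def mark_updated(*)
    @updated = true
  end
end
```

Making `boot!` public, as the commit message contemplates, would only change which method callers use to pay the startup cost.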
@e2 thanks for your work and your comments. For our case we don't really need the names of the files or directories changed; we only set one essentially global flag. Regarding the socket comment, I guess that approach is kind of what puma/unicorn do; essentially they self-pipe for communication between master and workers. Might be worth looking more into that approach.
There's really no overhead which can be avoided, because all APIs pass events (files/dirs) in callbacks.
To be honest I wouldn't worry about the thread count. It's one thread per watched top-level directory, not one per subdirectory. Most are waiting on a read/select anyway, so they're not busy at all. They are actually somewhat needed on OSX, since MD5 sum checking is needed to detect whether directories actually changed. (HFS has 1-second time resolution, so scanning directories alone is insufficient, and while the rb-fsevent driver can handle file-level changes, the number of events sent per change used to be too much to handle efficiently.) Just saying that Listen is pretty much just a large stack of OSX workarounds. Even Windows support is ironically simpler (for a change). So given the two approaches:
Overall, I think running Puma cluster mode in development is overkill, and that's the real root cause here.
I took a stab at using an IO.pipe but got stuck: https://gist.github.com/schneems/1e50a78b64a7920c503b319d6112166b. Feels like there might be a way forward there.
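For readers who don't follow the gist, the IO.pipe direction can be sketched roughly like this. Everything here is illustrative, not Rails or Puma API: one process owns the only listener and writes a byte to a pipe when files change, and forked workers drain their end of the pipe non-blockingly instead of running watcher threads of their own.

```ruby
# Hedged sketch of the self-pipe idea: signal "something changed"
# across a fork boundary with a pipe instead of per-process watchers.
reader, writer = IO.pipe

# Listener callback in the watching process: just signal via the pipe.
on_file_change = lambda do
  begin
    writer.write_nonblock("x")
  rescue IO::WaitWritable
    # Pipe buffer full: an unread byte already signals a pending change.
  end
end

# Worker side: drain the pipe without blocking; any bytes read mean
# something changed since the last check.
def updates_pending?(reader)
  reader.read_nonblock(4096)
  true
rescue IO::WaitReadable
  false
end

on_file_change.call
updates_pending?(reader) # => true; the pipe is drained as a side effect
updates_pending?(reader) # => false until the next change
```

In a real clustered setup each worker would need its own pipe (or a socket pair), since multiple readers on one pipe would race for the same bytes, which is presumably where the gist got stuck.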
This would make the problem much simpler, but I do think there are valid reasons for running clusters locally. While I'm not using the evented file monitor in production yet, if I happen to be using other code that breaks when used with forks, I would rather find that in development than when it breaks in production. For example, what if we started keeping a whitelist of positive
I'll gladly check it out when I'm back. (I'm gone for a few days). I'll be happy to extend Listen to support something like this out of the box too. Listen could even maybe detect forking internally and switch to a pipe (or optionally socket) implementation automatically (if allowed by an option).
I agree. Some people do use Listen e.g. for tracking uploaded files, although that's best handled by a single process, because you don't want multiple processes handling the same added files. Making Listen forkable would be beneficial. Especially since pros like you would save lots of time ;)
That PR didn't actually solve this problem, but this is more of a "not a bug" type of bug: yes, you'll still get a ton of thread warnings, but you should be able to safely ignore them.
@schneems @e2 @mathie I was running into this and it stopped happening once I added https://github.com/Shopify/bootsnap
@abinoda But you don't use
@olivierlacan I do use
I have the same issue, and it's connected to the connection reaper, the trace leads me to:
do we know if |
At least, the aforementioned Heroku article recommends this, so I assume "yes".
I use bootsnap (1.4.6) and still have this issue. Maybe reopen the issue, as it's still relevant? @schneems
Rails 6.0.2.2
Upgrading concurrent-ruby to 1.1.7 solves it for some people: ruby-concurrency/concurrent-ruby#868. But not for me... I still have this warning on Ruby 2.6.6 and Rails 5.2.3.4 :(
I can confirm that I too have this issue on
For those seeing the "reaper sleep" warning, I think this is harmless. My Puma configuration includes
and I get the reaper-sleep warning in the staging (but not the development) environment (not sure why). I modify the
Then
yields
so it seems that the reaper for the bottom-level process (19924) persists and is checking for stale connections in what is now an inactive pool, while the workers each have their own reaper doing the same for their active pools. I guess it would be tidier to have some mechanism for terminating the bottom-level reaper thread, but that would seem low priority.
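For anyone wanting to avoid the parent process keeping database machinery alive for a pool its workers never use, the conventional pattern is to disconnect in Puma's `before_fork` hook and reconnect in `on_worker_boot`. Both hooks are real Puma configuration DSL; whether this actually silences the reaper warning depends on the Rails/Active Record version, so treat this as a sketch rather than a confirmed fix.

```ruby
# config/puma.rb (sketch): drop the parent's connections before forking
# so its pool (and reaper) has nothing live to manage, then give each
# worker a fresh pool of its own.
before_fork do
  ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord)
end

on_worker_boot do
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end
```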
Steps to reproduce
I'm just spinning up a brand new Rails 5 project to kick the tyres and find out what's new. It comes with puma by default (which is awesome, and saves me installing it later!). Straight out of the box, it works fine. Then I follow the Heroku guidelines for configuring it, which essentially boils down to uncommenting the following lines in `config/puma.rb`:
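The exact snippet didn't survive in this report, but a typical clustered Puma configuration of the kind the Heroku guide suggests looks roughly like this (reconstructed for context; the environment variable names and defaults are the conventional ones, not a quote from the original):

```ruby
# config/puma.rb (typical Heroku-style clustered setup, reconstructed)
workers Integer(ENV["WEB_CONCURRENCY"] || 2)

threads_count = Integer(ENV["RAILS_MAX_THREADS"] || 5)
threads threads_count, threads_count

# Load the application in the master process before forking workers.
# This is the step that makes any threads started at boot visible to
# Puma's pre-fork warning.
preload_app!
```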
Now when I boot the server with `bundle exec puma -C config/puma.rb`, I get:

The bit that bothers me is the warning about multiple threads, in case that wasn't clear.
If I were to hazard a guess, I'd say whatever's new in Rails 5 that has introduced the dependency on `listen` is responsible for this, but I don't know what that is. I also don't know if it needs its threads restarted in `on_worker_boot`, or whether Rails already handles that correctly, automatically. It would be ace if, between you and the Puma team, you could figure out how to elide the warning. I don't like ignoring warnings!

Having written this up, I suspect I should be submitting an issue to the Rails repo instead, but since it's now such little effort to do so, I'm going to flag it up here, too. 😄
Expected behavior
I expect a brand new Rails app, configured to run in a popular deployment scenario, to boot with zero warnings.
Actual behavior
In reality, with what I believe is a fairly standard deployment configuration, the Rails app boots with several warnings, as shown above, which is a suboptimal new user experience.
System configuration
Rails version: 5.0.0.rc1
Ruby version: 2.3.1