
Address already in use 127.0.0.1 port 9293 error while trying to start two instances of Puma on localhost #3142

Closed
jiggneshhgohel opened this issue May 2, 2023 · 4 comments · Fixed by #3204


@jiggneshhgohel

Puma version: 6.2.2 (ruby 3.1.2-p20)

I tried to start two Puma server instances (one for an API and one for a web app, both Hanami-based applications) on my development machine, but I was unable to start the web app's server. The API server started normally, but the web app's server failed with the error `Address already in use - bind(2) for "127.0.0.1" port 9293`.

Please find below the server startup logs.

API App Logs

    jignesh@jignesh-Latitude-7290:~/hanami_projects/my_api_app$ bundle exec hanami server --port=2400
    13:59:36 - INFO - Using Guardfile at /......./my_api_app/Guardfile.
    13:59:36 - INFO - Puma starting on port 2400 in development environment.
    13:59:36 - INFO - Guard is now watching at '/......./my_api_app'
    [15170] Puma starting in cluster mode...
    [15170] * Puma version: 6.2.1 (ruby 3.1.2-p20) ("Speaking of Now")
    [15170] *  Min threads: 5
    [15170] *  Max threads: 5
    [15170] *  Environment: development
    [15170] *   Master PID: 15170
    [15170] *      Workers: 2
    [15170] *     Restarts: (✔) hot (✖) phased
    [15170] * Preloading application
    [15170] * Listening on http://0.0.0.0:2400
    [15170] Use Ctrl-C to stop
    [15170] * Starting control server on http://127.0.0.1:9293
    [15170] * Starting control server on http://[::1]:9293
    [15170] - Worker 0 (PID: 15177) booted in 0.0s, phase: 0
    [15170] - Worker 1 (PID: 15179) booted in 0.0s, phase: 0
    [15170] ! Terminating timed out worker (worker failed to check in within 60 seconds): 15177
    [15170] - Worker 0 (PID: 25100) booted in 0.0s, phase: 0

WebApp Logs

    jignesh@jignesh-Latitude-7290:~/hanami_projects/my_web_app$ bundle exec hanami server
    15:32:31 - INFO - Using Guardfile at /......./my_web_app/Guardfile.
    15:32:32 - INFO - Puma starting on port 2300 in development environment.
    15:32:32 - INFO - Guard is now watching at '/......./my_web_app'
    [25763] Puma starting in cluster mode...
    [25763] * Puma version: 6.2.2 (ruby 3.1.2-p20) ("Speaking of Now")
    [25763] *  Min threads: 5
    [25763] *  Max threads: 5
    [25763] *  Environment: development
    [25763] *   Master PID: 25763
    [25763] *      Workers: 2
    [25763] *     Restarts: (✔) hot (✖) phased
    [25763] * Preloading application
    [25763] * Listening on http://127.0.0.1:2300
    [25763] Use Ctrl-C to stop
    /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/gems/puma-6.2.2/lib/puma/binder.rb:335:in `initialize': Address already in use - bind(2) for "127.0.0.1" port 9293 (Errno::EADDRINUSE)
    	from /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/gems/puma-6.2.2/lib/puma/binder.rb:335:in `new'
    	from /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/gems/puma-6.2.2/lib/puma/binder.rb:335:in `add_tcp_listener'
    	from /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/gems/puma-6.2.2/lib/puma/binder.rb:329:in `block in add_tcp_listener'
    	from /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/gems/puma-6.2.2/lib/puma/binder.rb:328:in `each'
    	from /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/gems/puma-6.2.2/lib/puma/binder.rb:328:in `add_tcp_listener'
    	from /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/gems/puma-6.2.2/lib/puma/binder.rb:164:in `block in parse'
    	from /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/gems/puma-6.2.2/lib/puma/binder.rb:147:in `each'
    	from /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/gems/puma-6.2.2/lib/puma/binder.rb:147:in `parse'
    	from /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/gems/puma-6.2.2/lib/puma/runner.rb:78:in `start_control'
    	from /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/gems/puma-6.2.2/lib/puma/cluster.rb:410:in `run'
    	from /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/gems/puma-6.2.2/lib/puma/launcher.rb:194:in `run'
    	from /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/gems/puma-6.2.2/lib/puma/cli.rb:75:in `run'
    	from /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/gems/puma-6.2.2/bin/puma:10:in `<top (required)>'
    	from /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/bin/puma:25:in `load'
    	from /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/bin/puma:25:in `<main>'
    	from /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/bin/ruby_executable_hooks:22:in `eval'
    	from /home/jignesh/.rvm/gems/ruby-3.1.2@my_web_app/bin/ruby_executable_hooks:22:in `<main>'

Following is the content of the Puma config file `app/config/puma.rb`. The configuration is identical in both applications.

    # frozen_string_literal: true
    
    max_threads_count = ENV.fetch("HANAMI_MAX_THREADS", 5)
    min_threads_count = ENV.fetch("HANAMI_MIN_THREADS") { max_threads_count }
    threads min_threads_count, max_threads_count
    
    port        ENV.fetch("HANAMI_PORT", 2300)
    environment ENV.fetch("HANAMI_ENV", "development")
    workers     ENV.fetch("HANAMI_WEB_CONCURRENCY", 2)
    
    on_worker_boot do
      Hanami.shutdown
    end
    
    preload_app!

So for my web app I tried making the following change to `my_web_app/config/puma.rb`, based on the suggestions in #2113:

    #port        ENV.fetch("HANAMI_PORT", 2300)
    bind        "tcp://127.0.0.1:2300"

but no luck.
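
For reference, a throwaway check like the following (plain Ruby, nothing Puma- or Hanami-specific) shows that it is the control port, not the app port, that is taken while the API server is running:

    require "socket"

    # Try binding the web app's port and the control port on loopback.
    # While the API server is up, 2300 should be free and 9293 should
    # fail, pointing at the control server rather than the app listener.
    [2300, 9293].each do |port|
      TCPServer.new("127.0.0.1", port).close
      puts "port #{port} is free"
    rescue Errno::EADDRINUSE
      puts "port #{port} is already in use"
    end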

Exploring further, I found a few more related resources:

#782
#1022
#1318

but couldn't work out exactly what I should do in my case.

Just in case it helps, here are the contents of my /etc/hosts:

    127.0.0.1	localhost
    127.0.1.1	jignesh-Latitude-7290
    127.0.0.1 api.some_api.local

    # The following lines are desirable for IPv6 capable hosts
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters

So I sought help from the Hanami community and Stack Overflow, and I received a response to my Stack Overflow post suggesting the following:

    bundle exec puma --control-url tcp://127.0.0.1:9293 --port=2300
    bundle exec puma --control-url tcp://127.0.0.1:9294 --port=2400

Trying that out solved my problem. But it raised another question: why did the `--control-url` option work, but not the `bind` option I defined in my Puma config file (please refer to the code snippets shared above)?

As can be seen in the shared logs, Hanami automatically starts the Puma version available in my application's Gemfile. So the options defined in the Puma config should be used by Puma, right? Also, the documentation for the `hanami server` command implementation (linked below)

https://github.com/hanami/cli/blob/v2.0.3/lib/hanami/cli/commands/app/server.rb#L12-L27

says

The server is just a thin wrapper on top of Rack::Server

So, that way too, the Puma config options should get passed to Puma when Rack::Server internally invokes Puma (as part of the invocations referenced below), right?

https://github.com/puma/puma/blob/v6.2.2/lib/rack/handler/puma.rb#L136
https://github.com/hanami/cli/blob/v2.0.3/lib/hanami/cli/commands/app/server.rb#L49
https://github.com/hanami/cli/blob/v2.0.3/lib/hanami/cli/server.rb#L29

https://github.com/rack/rack/blob/v2.2.7/lib/rack/server.rb#L52
https://github.com/rack/rack/blob/v2.2.7/lib/rack/server.rb#L167
https://github.com/rack/rack/blob/v2.2.7/lib/rack/handler.rb#L60
https://github.com/rack/rack/blob/v2.2.7/lib/rack/handler.rb#L40
https://github.com/rack/rack/blob/v2.2.7/lib/rack/handler.rb#L13

Note: the above links refer to the versions of the code my applications use: hanami (2.0.3), hanami-cli (2.0.3), rack (2.2.7).

And `bind` is documented as a supported config option at https://github.com/puma/puma/tree/v6.2.2#binding-tcp--sockets, and also in the DSL at https://github.com/puma/puma/blob/master/lib/puma/dsl.rb#L245-L277 (the DSL reference is found under https://github.com/puma/puma/tree/v6.2.2#configuration-file).
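
One way to rule out the config file being ignored entirely would be a marker line at the top of config/puma.rb (just a temporary debugging aid, not a Puma feature):

    # temporary debugging aid: the config file is plain Ruby, so this
    # should print once when Puma evaluates the file at boot
    $stdout.puts "puma config loaded from #{__FILE__}"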

So the question is: are there differences between the `bind` and `--control-url` options that explain why defining `bind` in my Puma config file didn't work, while using `--control-url` directly from the command line did? If yes, in which contexts should each of those options be used?

A subjective thought: I assumed it would be easy to just use a different port for each instance, but I spent many hours tracking down the cause of this issue, still couldn't find a solution, and had to seek help from the community. Starting multiple server instances on the same machine is not a rare scenario, especially in development, and it's unfortunate if it can't be done smoothly.

Thanks.

@nateberkopec
Member

Hm, could we make it easier to discover this for people?

Maybe in `runner.rb:78`, in `start_control`, we could catch the binder error and put out a more informative message: hey, you're trying to start a control server and the control server port is taken.
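
Something like this, perhaps (only a sketch of the idea; the names and surrounding code are assumptions, not the actual Puma source):

    # Sketch for Puma::Runner#start_control: wrap the control server bind
    # so an occupied control port produces an actionable message instead
    # of a bare Errno::EADDRINUSE stack trace.
    def start_control
      str = @options[:control_url]
      return unless str
      # ... existing control server setup ...
      control.binder.parse [str], @log_writer
    rescue Errno::EADDRINUSE
      raise "Puma control server could not bind to #{str}: address already in use. " \
            "Is another Puma instance using the same control URL?"
    end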

@dentarg
Member

dentarg commented Aug 1, 2023

But it raised another question: why did the `--control-url` option work, but not the `bind` option I defined in my Puma config file (please refer to the code snippets shared above)?

Because they are different options; sometimes users have to read the docs :-) See the docs on bind.

The option you are looking for is `activate_control_app`; here's example usage:

    $ echo 'app { [200, {}, ["OK"]] }\nactivate_control_app "tcp://0.0.0.0:9393"' | puma --config /dev/stdin
    Puma starting in single mode...
    * Puma version: 6.3.0 (ruby 3.2.2-p53) ("Mugi No Toki Itaru")
    *  Min threads: 0
    *  Max threads: 5
    *  Environment: development
    *          PID: 28402
    * Listening on http://0.0.0.0:9292
    * Starting control server on http://0.0.0.0:9393
    Use Ctrl-C to stop
    ^C- Gracefully stopping, waiting for requests to finish
    === puma shutdown: 2023-08-01 19:09:03 +0200 ===
    - Goodbye!
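
Applied to the config files above, each app could set its own control address, for example (a sketch; `HANAMI_CONTROL_PORT` is just an illustrative variable name, not a Hanami or Puma convention):

    # config/puma.rb: give each app a distinct control server address
    activate_control_app "tcp://127.0.0.1:#{ENV.fetch("HANAMI_CONTROL_PORT", 9293)}"

Start the second app with HANAMI_CONTROL_PORT=9294 and the two control servers no longer collide.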

@dentarg
Member

dentarg commented Aug 1, 2023

I do agree Puma could be improved by showing a better error message. Here is a repro using only Puma:

    $ echo 'app { [200, {}, ["OK"]] }\nactivate_control_app "tcp://0.0.0.0:9292"' | puma --config /dev/stdin
    Puma starting in single mode...
    * Puma version: 6.3.0 (ruby 3.2.2-p53) ("Mugi No Toki Itaru")
    *  Min threads: 0
    *  Max threads: 5
    *  Environment: development
    *          PID: 28326
    * Listening on http://0.0.0.0:9292
    /Users/dentarg/.arm64_rubies/3.2.2/lib/ruby/gems/3.2.0/gems/puma-6.3.0/lib/puma/binder.rb:334:in `initialize': Address already in use - bind(2) for "0.0.0.0" port 9292 (Errno::EADDRINUSE)
    	from /Users/dentarg/.arm64_rubies/3.2.2/lib/ruby/gems/3.2.0/gems/puma-6.3.0/lib/puma/binder.rb:334:in `new'
    	from /Users/dentarg/.arm64_rubies/3.2.2/lib/ruby/gems/3.2.0/gems/puma-6.3.0/lib/puma/binder.rb:334:in `add_tcp_listener'
    	from /Users/dentarg/.arm64_rubies/3.2.2/lib/ruby/gems/3.2.0/gems/puma-6.3.0/lib/puma/binder.rb:163:in `block in parse'
    	from /Users/dentarg/.arm64_rubies/3.2.2/lib/ruby/gems/3.2.0/gems/puma-6.3.0/lib/puma/binder.rb:146:in `each'
    	from /Users/dentarg/.arm64_rubies/3.2.2/lib/ruby/gems/3.2.0/gems/puma-6.3.0/lib/puma/binder.rb:146:in `parse'
    	from /Users/dentarg/.arm64_rubies/3.2.2/lib/ruby/gems/3.2.0/gems/puma-6.3.0/lib/puma/runner.rb:78:in `start_control'
    	from /Users/dentarg/.arm64_rubies/3.2.2/lib/ruby/gems/3.2.0/gems/puma-6.3.0/lib/puma/single.rb:50:in `run'
    	from /Users/dentarg/.arm64_rubies/3.2.2/lib/ruby/gems/3.2.0/gems/puma-6.3.0/lib/puma/launcher.rb:194:in `run'
    	from /Users/dentarg/.arm64_rubies/3.2.2/lib/ruby/gems/3.2.0/gems/puma-6.3.0/lib/puma/cli.rb:75:in `run'
    	from /Users/dentarg/.arm64_rubies/3.2.2/lib/ruby/gems/3.2.0/gems/puma-6.3.0/bin/puma:10:in `<top (required)>'
    	from /Users/dentarg/.arm64_rubies/3.2.2/bin/puma:25:in `load'
    	from /Users/dentarg/.arm64_rubies/3.2.2/bin/puma:25:in `<main>'

@dhavalsingh
Contributor

#3204 should close this issue.
Let me know if I need to add tests, and where; I couldn't find the right place to add a test for this.
