
TCP server socket not optimized for low latency by default, contradicting docs #2623

Closed
FooBarWidget opened this issue May 7, 2021 · 1 comment · Fixed by #2631


FooBarWidget commented May 7, 2021

Describe the bug
dsl.rb says the following for #bind:

Set whether to optimize for low latency instead of throughput with +low_latency+, default is to optimize for low latency. This is done via +Socket::TCP_NODELAY+.

example Disable optimization for low latency
bind 'tcp://0.0.0.0:9292?low_latency=false'

This is not the actual behavior. The actual behavior is that low latency is off by default.

Root cause analysis

It's true that binder.rb #add_tcp_listener has the parameter optimize_for_latency set to true by default. However, this default value is never used, because the code that calls add_tcp_listener passes an explicit value.
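For context, the signature in binder.rb is roughly the following (paraphrased from the description above; only the unused true default matters here):

def add_tcp_listener(host, port, optimize_for_latency=true, backlog=1024)
  # ... sets Socket::TCP_NODELAY on the listener when optimize_for_latency is true ...
end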

When running Puma, runner.rb #load_and_bind is called. This in turn calls @launcher.binder.parse, which contains the following code:

opt = params.key?('low_latency')
...
io = add_tcp_listener uri.host, uri.port, opt, bak

So #parse sets optimize_for_latency based solely on whether the low_latency parameter appears in the TCP URI at all; it never checks the parameter's value.
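The behavior of key? is easy to confirm in plain Ruby (the URI below is just an example; Puma's own query parsing may differ in detail):

require 'uri'

query  = URI.parse('tcp://0.0.0.0:9292?low_latency=false').query
params = Hash[URI.decode_www_form(query)]

params.key?('low_latency')  # => true, even though the value is "false"
params['low_latency']       # => "false"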

Either the documentation and the parameter default are wrong (and optimize_for_latency is supposed to be false by default), or the #parse method is wrong (and it should set optimize_for_latency to false if and only if the low_latency parameter equals false).
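If the second interpretation were the intended one, a minimal sketch of a value-aware check in #parse could look like this (hypothetical, not an actual patch):

# Treat an absent low_latency parameter as "optimize for latency",
# and disable the optimization only when the URI explicitly says low_latency=false.
opt = params.fetch('low_latency', 'true') != 'false'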

The phrase "default is to optimize for low latency" was first introduced in 6dd084b, which was a documentation update. The actual behavior, i.e. whether low latency is enabled by default, doesn't appear to have ever changed. This makes me believe that the docs are wrong rather than the code. If the docs are indeed wrong, then optimize_for_latency's default value should be set to false to avoid confusion.

It looks like someone else found this issue too: #2457 (comment)

To Reproduce

Prepare a config.ru:

require 'socket'

app = lambda do |env|
  # Read the TCP_NODELAY flag from the client socket Puma exposes in the Rack env.
  opt = env['puma.socket'].getsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY)
  puts "tcp_nodelay = #{opt.inspect}"
  [200, {}, ["ok\n"]]
end
run app

Test 1:

  1. Run Puma normally: puma
  2. curl http://127.0.0.1:9292
  3. Observe that TCP_NODELAY is disabled: tcp_nodelay = #<Socket::Option: INET TCP NODELAY 0>

Test 2:

  1. Run Puma with low_latency parameter: puma -b 'tcp://127.0.0.1:9292?low_latency'
  2. curl http://127.0.0.1:9292
  3. Observe that TCP_NODELAY is enabled: tcp_nodelay = #<Socket::Option: INET TCP NODELAY 4>

Test 3:

  1. Run Puma with low_latency=false parameter: puma -b 'tcp://127.0.0.1:9292?low_latency=false'
  2. curl http://127.0.0.1:9292
  3. Observe that TCP_NODELAY is enabled: tcp_nodelay = #<Socket::Option: INET TCP NODELAY 4>

Expected behavior

  • Test 1 should report that tcp_nodelay is enabled.
  • Test 3 should report that tcp_nodelay is disabled.

Desktop (please complete the following information):

  • OS: macOS
  • Puma Version: 5.2.2 and git fb71323
@nateberkopec
Member

Looking at TCP_NODELAY, I can't see a compelling reason why it should be on by default. I think most of the time it will be overridden anyway by Puma's use of TCP_CORK.

If anyone has a compelling reason why we should enable TCP_NODELAY by default, I'm happy to discuss it in another issue.

Marking this as a docs fix; we should change the docs to match the code's behavior here.
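For reference, with the current behavior users who do want the optimization can opt in explicitly, mirroring Test 2 above:

puma -b 'tcp://127.0.0.1:9292?low_latency'

or, in a Puma config file:

bind 'tcp://0.0.0.0:9292?low_latency'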
