fix some spell errors (#2615)
calvinxiao committed Apr 29, 2021
1 parent 0e63c68 commit 3e80f7c
Showing 8 changed files with 10 additions and 10 deletions.
2 changes: 1 addition & 1 deletion docs/jungle/rc.d/README.md
@@ -1,6 +1,6 @@
# Puma as a service using rc.d

-Manage multilpe Puma servers as services on one box using FreeBSD's rc.d service.
+Manage multiple Puma servers as services on one box using FreeBSD's rc.d service.

## Dependencies

2 changes: 1 addition & 1 deletion docs/kubernetes.md
@@ -61,6 +61,6 @@ For some high-throughput systems, it is possible that some HTTP requests will re

There is a subtle race condition between step 2 and 3: The replication controller does not synchronously remove the pod from the Services AND THEN call the pre-stop hook of the pod, but rather it asynchronously sends "remove this pod from your endpoints" requests to the Services and then immediately proceeds to invoke the pods' pre-stop hook. If the Service controller (typically something like nginx or haproxy) receives this request handles this request "too" late (due to internal lag or network latency between the replication and Service controllers) then it is possible that the Service controller will send one or more requests to a Puma process which has already shut down its listening socket. These requests will then fail with 5XX error codes.

-The way Kubernetes works this way, rather than handling step 2 synchronously, is due to the CAP theorem: in a distributed system there is no way to guarantuee that any message will arrive promptly. In particular, waiting for all Service controllers to report back might get stuck for an indefinite time if one of them has already been terminated or if there has been a net split. A way to work around this is to add a sleep to the pre-stop hook of the same time as the `terminationGracePeriodSeconds` time. This will allow the Puma process to keep serving new requests during the entire grace period, although it will no longer receive new requests after all Service controllers have propagated the removal of the pod from their endpoint lists. Then, after `terminationGracePeriodSeconds`, the pod receives `SIGKILL` and closes down. If your process can't handle SIGKILL properly, for example because it needs to release locks in different services, you can also sleep for a shorter period (and/or increase `terminationGracePeriodSeconds`) as long as the time slept is longer than the time that your Service controllers take to propagate the pod removal. The downside of this workaround is that all pods will take at minimum the amount of time slept to shut down and this will increase the time required for your rolling deploy.
+The way Kubernetes works this way, rather than handling step 2 synchronously, is due to the CAP theorem: in a distributed system there is no way to guarantee that any message will arrive promptly. In particular, waiting for all Service controllers to report back might get stuck for an indefinite time if one of them has already been terminated or if there has been a net split. A way to work around this is to add a sleep to the pre-stop hook of the same time as the `terminationGracePeriodSeconds` time. This will allow the Puma process to keep serving new requests during the entire grace period, although it will no longer receive new requests after all Service controllers have propagated the removal of the pod from their endpoint lists. Then, after `terminationGracePeriodSeconds`, the pod receives `SIGKILL` and closes down. If your process can't handle SIGKILL properly, for example because it needs to release locks in different services, you can also sleep for a shorter period (and/or increase `terminationGracePeriodSeconds`) as long as the time slept is longer than the time that your Service controllers take to propagate the pod removal. The downside of this workaround is that all pods will take at minimum the amount of time slept to shut down and this will increase the time required for your rolling deploy.

More discussions and links to relevant articles can be found in https://github.com/puma/puma/issues/2343.
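
As a minimal sketch of the workaround described in this hunk (the container name, image, and 30-second figure are illustrative, not part of the commit), the pre-stop hook sleeps for roughly the grace period so the pod keeps serving while endpoint removal propagates:

```yaml
spec:
  terminationGracePeriodSeconds: 30   # illustrative value
  containers:
    - name: puma                      # hypothetical container name
      image: my-app:latest            # hypothetical image
      lifecycle:
        preStop:
          exec:
            # Keep the pod alive (and Puma serving) while the Service
            # controllers propagate the endpoint removal; SIGTERM and,
            # after the grace period, SIGKILL then proceed as usual.
            command: ["/bin/sh", "-c", "sleep 30"]
```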
2 changes: 1 addition & 1 deletion docs/stats.md
Expand Up @@ -53,7 +53,7 @@ end

### single mode and individual workers in cluster mode

-When Puma is run in single mode, these stats ar available at the top level. When Puma is run in cluster mode, these stats are available within the `worker_status` array in a hash labeled `last_status`, in an array of hashes, one hash for each worker.
+When Puma is run in single mode, these stats are available at the top level. When Puma is run in cluster mode, these stats are available within the `worker_status` array in a hash labeled `last_status`, in an array of hashes, one hash for each worker.

* backlog: requests that are waiting for an available thread to be available. if this is above 0, you need more capacity [always true?]
* running: how many threads are running
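
A hedged sketch of reading these two fields from inside a running Puma process (it assumes `Puma.stats` returns the JSON document this file describes; the single-vs-cluster handling follows the paragraph above):

```ruby
# Illustrative only: run this inside a Puma process (e.g. from a plugin),
# where Puma.stats returns the stats JSON described in this document.
require 'json'

def backlog_and_running
  stats = JSON.parse(Puma.stats, symbolize_names: true)

  # Cluster mode: per-worker stats sit under worker_status[n][:last_status].
  # Single mode: the same keys appear at the top level.
  per_worker = stats[:worker_status]&.map { |w| w[:last_status] } || [stats]

  per_worker.map { |s| { backlog: s[:backlog], running: s[:running] } }
end
```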
2 changes: 1 addition & 1 deletion lib/puma/binder.rb
@@ -13,7 +13,7 @@ module Puma
require 'puma/minissl'
require 'puma/minissl/context_builder'

-# Odd bug in 'pure Ruby' nio4r verion 2.5.2, which installs with Ruby 2.3.
+# Odd bug in 'pure Ruby' nio4r version 2.5.2, which installs with Ruby 2.3.
# NIO doesn't create any OpenSSL objects, but it rescues an OpenSSL error.
# The bug was that it did not require openssl.
# @todo remove when Ruby 2.3 support is dropped
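
A hedged sketch of the kind of guard this comment describes (the actual statement sits below this hunk and is not shown; the rescue clause is illustrative):

```ruby
# Load OpenSSL up front so nio4r's rescue of an OpenSSL error refers to a
# defined constant; do nothing on Rubies built without OpenSSL support.
begin
  require 'openssl'
rescue LoadError
  # No OpenSSL available, so there is nothing for the workaround to do.
end
```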
2 changes: 1 addition & 1 deletion lib/puma/const.rb
@@ -235,7 +235,7 @@ module Const

EARLY_HINTS = "rack.early_hints".freeze

-# Mininum interval to checks worker health
+# Minimum interval to checks worker health
WORKER_CHECK_INTERVAL = 5

# Illegal character in the key or value of response header
2 changes: 1 addition & 1 deletion lib/puma/dsl.rb
@@ -484,7 +484,7 @@ def workers(count)

# Disable warning message when running in cluster mode with a single worker.
#
-# Cluster mode has some overhead of running an addtional 'control' process
+# Cluster mode has some overhead of running an additional 'control' process
# in order to manage the cluster. If only running a single worker it is
# likely not worth paying that overhead vs running in single mode with
# additional threads instead.
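
As a hedged illustration of where this option is aimed, a `config/puma.rb` along these lines runs cluster mode with one worker and silences the warning (the DSL method name is not visible in this hunk; `silence_single_worker_warning` is assumed):

```ruby
# Hypothetical config/puma.rb sketch.
workers 1
silence_single_worker_warning   # method name assumed; not shown in this hunk

threads 1, 5
port 9292
```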
4 changes: 2 additions & 2 deletions lib/puma/error_logger.rb
@@ -23,7 +23,7 @@ def self.stdio
new $stderr
end

-# Print occured error details.
+# Print occurred error details.
# +options+ hash with additional options:
# - +error+ is an exception object
# - +req+ the http request
@@ -34,7 +34,7 @@ def info(options={})
log title(options)
end

-# Print occured error details only if
+# Print occurred error details only if
# environment variable PUMA_DEBUG is defined.
# +options+ hash with additional options:
# - +error+ is an exception object
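
A hedged usage sketch of this internal logger, using only what the hunks show (`self.stdio` wrapping `$stderr`, `info`, and the documented `error` option); the `debug` method name for the PUMA_DEBUG-gated variant is assumed:

```ruby
# Illustrative only; Puma::ErrorLogger is internal Puma API.
require 'puma/error_logger'

logger = Puma::ErrorLogger.stdio   # writes to $stderr, as defined above

begin
  raise "something went wrong"
rescue => e
  logger.info(error: e)    # always prints the error details
  logger.debug(error: e)   # prints only when the PUMA_DEBUG env var is set
end
```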
4 changes: 2 additions & 2 deletions lib/puma/thread_pool.rb
@@ -13,7 +13,7 @@ module Puma
# a thread pool via the `Puma::ThreadPool#<<` operator where it is stored in a `@todo` array.
#
# Each thread in the pool has an internal loop where it pulls a request from the `@todo` array
-# and proceses it.
+# and processes it.
class ThreadPool
class ForceShutdown < RuntimeError
end
@@ -220,7 +220,7 @@ def <<(work)
# then the `@todo` array would stay the same size as the reactor works
# to try to buffer the request. In that scenario the next call to this
# method would not block and another request would be added into the reactor
-# by the server. This would continue until a fully bufferend request
+# by the server. This would continue until a fully buffered request
# makes it through the reactor and can then be processed by the thread pool.
def wait_until_not_full
with_mutex do
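
To make the two comments in this file concrete, here is a deliberately tiny, self-contained sketch of the same pattern (this is not Puma's implementation): work handed in via `#<<` lands in a todo array, each pool thread loops shifting work off it, and `wait_until_not_full` blocks the producer while the pool has no spare capacity.

```ruby
# Minimal illustration of the ideas described above; Puma's real pool adds
# auto-trimming, reaping, shutdown handling, the reactor hand-off, and more.
class TinyPool
  def initialize(size)
    @todo      = []
    @size      = size
    @mutex     = Mutex.new
    @not_empty = ConditionVariable.new
    @not_full  = ConditionVariable.new
    @threads = Array.new(size) do
      Thread.new do
        loop do
          work = @mutex.synchronize do
            @not_empty.wait(@mutex) while @todo.empty?
            item = @todo.shift
            @not_full.signal          # a slot just opened up
            item
          end
          work.call                   # "process" one unit of work
        end
      end
    end
  end

  # The role of Puma::ThreadPool#<< : hand a unit of work to the pool.
  def <<(work)
    @mutex.synchronize do
      @todo << work
      @not_empty.signal
    end
  end

  # Block the caller until the backlog drops below the pool size.
  def wait_until_not_full
    @mutex.synchronize do
      @not_full.wait(@mutex) while @todo.size >= @size
    end
  end
end

pool = TinyPool.new(2)
6.times do |i|
  pool.wait_until_not_full            # simple backpressure on the producer
  pool << -> { sleep 0.01; puts "handled request #{i}" }
end
sleep 0.1                             # let the pool drain before exiting
```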
