docs: Add section about capacity planning (#4386)
kibertoad committed Nov 1, 2022
1 parent a02650e commit 041cf41
Showing 1 changed file with 37 additions and 1 deletion.
38 changes: 37 additions & 1 deletion docs/Guides/Recommendations.md
@@ -8,7 +8,8 @@ This document contains a set of recommendations when using Fastify.
- [HAProxy](#haproxy)
- [Nginx](#nginx)
- [Kubernetes](#kubernetes)
- [Capacity Planning For Production](#capacity)
## Use A Reverse Proxy
<a id="reverseproxy"></a>

@@ -298,3 +299,38 @@ readinessProbe:
timeoutSeconds: 3
successThreshold: 1
failureThreshold: 5
```

## Capacity Planning For Production
<a id="capacity"></a>

To rightsize the production environment for your Fastify application,
it is highly recommended that you perform your own measurements against
different configurations of the environment, which may
use real CPU cores, virtual CPU cores (vCPU), or even fractional
vCPU cores. We will use the term vCPU throughout this
recommendation to represent any CPU type.

You can use such tools as [k6](https://github.com/grafana/k6)
or [autocannon](https://github.com/mcollina/autocannon) for conducting
the necessary performance tests.
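
As a sketch, a basic autocannon run against a locally running Fastify
instance might look like the following (the port and route are assumptions;
point it at your own app):

```shell
# Load-test the app with 100 concurrent connections for 30 seconds.
# The URL and port are assumptions; adjust them for your deployment.
npx autocannon -c 100 -d 30 http://localhost:3000/
```

Repeat the same test against each candidate environment configuration and
compare latency percentiles and requests per second before settling on a size.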

That said, you may also consider the following rules of thumb:

* For the lowest possible latency, 2 vCPUs are recommended per app
  instance (e.g., a k8s pod). The second vCPU will mostly be used by the
  garbage collector (GC) and the libuv threadpool. This will minimize the
  latency for your users, as well as the memory usage, as the GC will run more
  frequently. Also, the main thread won't have to stop to let the GC run.

* To optimize for throughput (handling the largest possible number of
  requests per second per available vCPU), consider using a smaller number of
  vCPUs per app instance. It is totally fine to run a Node.js application with
  1 vCPU.

* You may experiment with an even smaller number of vCPUs, which may provide
  even better throughput in certain use cases. There are reports of, e.g., API
  gateway solutions working well with 100m-200m vCPU in Kubernetes.
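
In Kubernetes, these sizing choices are expressed as CPU requests and limits
in the pod spec. The fragment below is a minimal sketch of the low-latency
recommendation above; the memory values are assumptions and should come from
your own measurements:

```yaml
resources:
  requests:
    cpu: "2"         # two vCPUs: one for the event loop, one mostly for GC/libuv
    memory: 256Mi    # assumption; measure your app's actual footprint
  limits:
    cpu: "2"
    memory: 512Mi    # assumption; set from observed peak usage plus headroom
```

For the throughput-oriented options, the same fields would instead carry values
such as `cpu: "1"` or fractional requests like `cpu: 200m`.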

See [Node's Event Loop From the Inside Out](https://www.youtube.com/watch?v=P9csgxBgaZ8)
to understand the workings of Node.js in greater detail and make a
better determination about what your specific application needs.
