
docs: Document multiple app approach #4393

Merged (10 commits, Nov 7, 2022)
15 changes: 15 additions & 0 deletions docs/Guides/Recommendations.md
@@ -8,6 +8,7 @@ This document contains a set of recommendations when using Fastify.
- [HAProxy](#haproxy)
- [Nginx](#nginx)
- [Kubernetes](#kubernetes)
- [Running Multiple Instances](#multiple)

## Use A Reverse Proxy
<a id="reverseproxy"></a>
@@ -298,3 +299,17 @@ readinessProbe:
timeoutSeconds: 3
successThreshold: 1
failureThreshold: 5
```

## Running Multiple Instances
<a id="multiple"></a>

There are several use cases where running multiple Fastify
Member:

It is not clear to me why this setup is better.

The metrics data is calculated by the API server itself. If you expose the healthcheck and metrics in a dedicated instance, does that mean you are creating an extra proxy?

Reverse Proxy ---> Metrics Server ---> API Server

Then why don't you set up the metrics handler inside the reverse proxy?

Member Author:

In our situation, while it is possible to use an elaborate nginx setup to expose only a subset of endpoints, our platform team is asking us to expose metrics on a separate port instead, as that would reduce complexity on their end, and this seems to be a fairly popular practice in the industry.
Would you recommend otherwise?

It would be something close to this: SkeLLLa/fastify-metrics#43 (comment)

Member:

For me, two ports for a single application certainly increase complexity (for both the server and the applications).

Imagine scaling up horizontally: the number of open ports always doubles, and they cannot be the same.

Moreover, taking nginx as a configuration example, it requires two different upstreams instead of sharing one. That is duplicated work.

It is completely fine if this is a hard requirement for your company, but I wouldn't recommend it to others.

I can give a more solid example later of why it only increases complexity.
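As a rough sketch of the upstream duplication being described (all addresses, ports, and server names here are illustrative, not from the original discussion):

```nginx
# One backend fleet, but two ports per instance means two upstreams
# and two server blocks instead of one shared pair.
upstream api {
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
}
upstream metrics {
    server 10.0.0.1:9100;
    server 10.0.0.2:9100;
}

server {
    listen 80;
    location / {
        proxy_pass http://api;
    }
}
server {
    listen 9100;
    location /metrics {
        proxy_pass http://metrics;
    }
}
```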

Member Author:

It's not a hard requirement; it's an open discussion between engineers and platform. If one approach is preferable over the other, we should clearly document it and then follow it.
@mcollina Can you chime in on this?

Member:

> our platform team is asking us to expose metrics on a separate port instead, as that would reduce complexity on their end, and this seems to be a fairly popular practice in the industry.

Is it? This is the first time I have heard of doing such a thing. In fact, it makes no sense to me. The healthcheck/metrics endpoint, on a different port, is going to give you insight into an instance that is not processing requests.

Member:

Exposing a separate port is the approach recommended by Prometheus.

Member:

> Exposing a separate port is the recommended approach by prometheus.

I searched around a bit and couldn't find any related article about this approach, and it is not in the Prometheus documentation either.
May I know where you read this?

apps on the same server is a recommended approach. Most common examples
would be exposing healthcheck and metrics endpoints on a separate port, in
order to prevent public access.

It is perfectly fine to spin up several Fastify instances within the same
Node.js process and run them concurrently, even in high-load systems.
Each Fastify instance only generates as much load as the traffic it receives,
plus the memory used for that Fastify instance.