
GOMAXPROCS usage instead of NumCPU for the pool size #1801

Merged

merged 1 commit into redis:master from makhov:gomaxprocs-instead-of-numcpu on Jun 29, 2021

Conversation

@makhov (Contributor) commented Jun 28, 2021

What

runtime.GOMAXPROCS(0) is used instead of runtime.NumCPU() to determine the default pool size.
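A minimal sketch of the kind of change (the exact multiplier and its location in go-redis may differ):

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Before: the default pool size scaled with the node's physical CPU count.
	before := 5 * runtime.NumCPU()

	// After: it scales with the Go scheduler's processor limit, which honors
	// the GOMAXPROCS environment variable. GOMAXPROCS(0) queries the current
	// setting without changing it.
	after := 5 * runtime.GOMAXPROCS(0)

	fmt.Println("old default:", before, "new default:", after)
}
```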

Why

We are running our services in a Kubernetes cluster, and each of them usually consumes (and is limited to) 2-3 CPUs.
But that is a logical restriction: runtime.NumCPU() still returns all CPUs available on the node (64 in our case).
That's why we set the GOMAXPROCS environment variable, to improve the system's performance.
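To illustrate the divergence, a small sketch (the printed values are hypothetical, for a container limited to 2 CPUs on a 64-CPU node):

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// In a container with GOMAXPROCS=2 set on a 64-CPU node:
	fmt.Println(runtime.NumCPU())      // 64 — all CPUs on the node
	fmt.Println(runtime.GOMAXPROCS(0)) // 2  — honors the env variable
}
```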

In the current implementation, each of our services can create 5 * 64 connections to each Redis node, which in our case means roughly 300 * 5 * 64 ≈ 100k connections to a single node (if I'm not mistaken).

Thanks to lazy connection initialization this happens rarely and completely unexpectedly. Once a service or a Redis node gets busy, the client starts opening a huge number of connections:

[Screenshot: connection count spiking, 2021-06-28]

Pros and cons

The main benefit of the change is more reasonable defaults: less unexpected behavior and fewer incidents in production :)

The main downside is that the default behavior changes in some setups. In most cases GOMAXPROCS equals the CPU count, but sometimes it does not. I do believe, though, that it is a more reasonable value and better aligned with the original purpose of using NumCPU() for the pool size.
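Callers who prefer to pin the pool size themselves can still opt out of the default entirely. A minimal sketch, assuming the go-redis v8 API (the Addr value is a placeholder):

```go
package main

import (
	"runtime"

	"github.com/go-redis/redis/v8"
)

func main() {
	// Setting PoolSize explicitly overrides the default, restoring
	// (for example) the old CPU-count-based sizing.
	rdb := redis.NewClient(&redis.Options{
		Addr:     "localhost:6379",      // placeholder address
		PoolSize: 10 * runtime.NumCPU(), // explicit override
	})
	_ = rdb
}
```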

@monkey92t (Collaborator) commented:

Thanks

@monkey92t monkey92t merged commit b7d23d4 into redis:master Jun 29, 2021
@makhov makhov deleted the gomaxprocs-instead-of-numcpu branch August 13, 2021 13:13