
[Feature] auto scaling configurations #120

Open
AyWa opened this issue Sep 21, 2020 · 1 comment
AyWa commented Sep 21, 2020

In a cluster with highly variable usage, it is useful to be able to set different auto-scaling configurations depending on the size of the cluster.

It would be good to set different values for minShardsPerNode, maxShardsPerNode, and scaleUpCPUBoundary depending on the size of the cluster.
I am not sure what the correct syntax would be, but one option is adding a rules or overwrites section with a selector such as replicaLte (replica count less than or equal to). The operator could check the overwrite section first and fall back to the defaults if no rule matches.

Before

  scaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 99
    minIndexReplicas: 1
    maxIndexReplicas: 40
    minShardsPerNode: 3
    maxShardsPerNode: 3
    scaleUpCPUBoundary: 75
    scaleUpThresholdDurationSeconds: 240
    scaleUpCooldownSeconds: 1000
    scaleDownCPUBoundary: 40
    scaleDownThresholdDurationSeconds: 1200
    scaleDownCooldownSeconds: 1200
    diskUsagePercentScaledownWatermark: 80

After

  scaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 99
    minIndexReplicas: 1
    maxIndexReplicas: 40
    minShardsPerNode: 3
    maxShardsPerNode: 3
    scaleUpCPUBoundary: 75
    scaleUpThresholdDurationSeconds: 240
    scaleUpCooldownSeconds: 1000
    scaleDownCPUBoundary: 40
    scaleDownThresholdDurationSeconds: 1200
    scaleDownCooldownSeconds: 1200
    diskUsagePercentScaledownWatermark: 80
    rules:
      - replicaLte: 2
        scaleUpCPUBoundary: 30
      - replicaLte: 4
        scaleUpCPUBoundary: 40
      - replicaLte: 10
        scaleUpCPUBoundary: 60
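The rule lookup could be sketched roughly like this (a hypothetical Go sketch, not the operator's actual code; `ScalingRule` and `effectiveCPUBoundary` are invented names): walk the rules in ascending `replicaLte` order and take the first match, otherwise fall back to the default.

```go
package main

import "fmt"

// ScalingRule is a hypothetical override entry: it applies when the
// current replica count is less than or equal to ReplicaLte.
type ScalingRule struct {
	ReplicaLte         int
	ScaleUpCPUBoundary int
}

// effectiveCPUBoundary returns the boundary of the first matching rule,
// falling back to the default when no rule matches. Rules are assumed
// to be sorted by ReplicaLte ascending, as in the example above.
func effectiveCPUBoundary(replicas, defaultBoundary int, rules []ScalingRule) int {
	for _, r := range rules {
		if replicas <= r.ReplicaLte {
			return r.ScaleUpCPUBoundary
		}
	}
	return defaultBoundary
}

func main() {
	rules := []ScalingRule{
		{ReplicaLte: 2, ScaleUpCPUBoundary: 30},
		{ReplicaLte: 4, ScaleUpCPUBoundary: 40},
		{ReplicaLte: 10, ScaleUpCPUBoundary: 60},
	}
	fmt.Println(effectiveCPUBoundary(2, 75, rules))  // 30
	fmt.Println(effectiveCPUBoundary(7, 75, rules))  // 60
	fmt.Println(effectiveCPUBoundary(50, 75, rules)) // 75 (fallback to default)
}
```

With the example config, a 2-node cluster would scale up at 30% CPU, a 7-node cluster at 60%, and anything above 10 nodes at the default 75%.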

The main motivation is cost optimization. At night a cluster can be very small, but in the early morning it needs to be able to scale aggressively; once the cluster gets big, it can scale more slowly.

I am willing to implement this feature if it makes sense for this project.

@otrosien
Member

@AyWa Thanks for the suggestion. As horizontal auto-scaling becomes more powerful in recent Kubernetes releases, we should consider dropping our custom route and tying it back to the HPA. I'm interested in hearing your thoughts on this.
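For context, tying scaling back to the HPA could look roughly like the following manifest (a sketch only; the `autoscaling/v2beta2` API was current around this time, and the metadata/target names here are hypothetical, not from the operator):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: es-data-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: es-data          # hypothetical scale target
  minReplicas: 2
  maxReplicas: 99
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 240
    scaleDown:
      stabilizationWindowSeconds: 1200
```

The `behavior` field's scaling policies cover some of the same ground as the per-size rules proposed above, though shard-aware settings like minShardsPerNode would still need custom handling.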
