Document the MaxUnavailableStatefulSet feature #32596

Merged
25 changes: 24 additions & 1 deletion content/en/docs/concepts/workloads/controllers/statefulset.md
@@ -266,7 +266,9 @@ in the same order as Pod termination (from the largest ordinal to the smallest),
each Pod one at a time.

The Kubernetes control plane waits until an updated Pod is Running and Ready prior
to updating its predecessor. If you have set `.spec.minReadySeconds` (see
[Minimum Ready Seconds](#minimum-ready-seconds)), the control plane additionally waits that
amount of time after the Pod turns ready, before moving on.
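
For example, a minimal sketch of the fields involved (the value `10` and the rest of the
StatefulSet spec are placeholders, not part of this change):

```yaml
# Sketch only: the remainder of the StatefulSet spec is omitted.
spec:
  minReadySeconds: 10      # after an updated Pod turns Ready, wait 10 more seconds
  updateStrategy:
    type: RollingUpdate    # the default update strategy
```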

### Partitioned rolling updates {#partitions}

@@ -280,6 +282,27 @@ updates to its `.spec.template` will not be propagated to its Pods.
In most cases you will not need to use a partition, but partitions are useful if you want to
stage an update, roll out a canary, or perform a phased roll out.
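
For example, a sketch of the stanza involved (the partition value is a placeholder and the
rest of the spec is omitted):

```yaml
# Sketch only: Pods with an ordinal >= the partition are updated when
# .spec.template changes; Pods with a lower ordinal keep the old revision.
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 3
```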

### Maximum unavailable Pods

{{< feature-state for_k8s_version="v1.24" state="alpha" >}}

You can control the maximum number of Pods that can be unavailable during an update
by specifying the `.spec.updateStrategy.rollingUpdate.maxUnavailable` field.
The value can be an absolute number (for example, `5`) or a percentage of desired
Pods (for example, `10%`). The absolute number is calculated from the percentage
by rounding up. The field cannot be 0; the default setting is 1.

This field applies to all Pods in the range `0` to `replicas - 1`. Any unavailable
Pod in that range counts towards `maxUnavailable`.
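
For example, with 5 replicas, a sketch of the stanza involved might look like this
(the values are placeholders):

```yaml
# Sketch only: at most 2 of the 5 Pods may be unavailable at once during
# a rolling update. A percentage such as "40%" would also yield 2 (rounded up).
spec:
  replicas: 5
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2
```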

{{< note >}}
The `maxUnavailable` field is in Alpha stage and is honored only by API servers
that are running with the `MaxUnavailableStatefulSet`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
enabled.
{{< /note >}}

### Forced rollback

When using [Rolling Updates](#rolling-updates) with the default
@@ -146,6 +146,7 @@ different Kubernetes components.
| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha | 1.15 | |
| `LogarithmicScaleDown` | `false` | Alpha | 1.21 | 1.21 |
| `LogarithmicScaleDown` | `true` | Beta | 1.22 | |
| `MaxUnavailableStatefulSet` | `false` | Alpha | 1.24 | |
| `MemoryManager` | `false` | Alpha | 1.21 | 1.21 |
| `MemoryManager` | `true` | Beta | 1.22 | |
| `MemoryQoS` | `false` | Alpha | 1.22 | |
@@ -941,6 +942,10 @@ Each feature gate is designed for enabling/disabling a specific feature:
filesystem walk for better performance and accuracy.
- `LogarithmicScaleDown`: Enable semi-random selection of pods to evict on controller scaledown
based on logarithmic bucketing of pod timestamps.
- `MaxUnavailableStatefulSet`: Enables setting the `maxUnavailable` field for the
[rolling update strategy](/docs/concepts/workloads/controllers/statefulset/#rolling-updates)
of a StatefulSet. The field specifies the maximum number of Pods
that can be unavailable during the update.
- `MemoryManager`: Allows setting memory affinity for a container based on
NUMA topology.
- `MemoryQoS`: Enable memory protection and usage throttle on pod / container using cgroup v2 memory controller.