diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 13590450bf611..3f82ad5d3d593 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -1,4 +1,4 @@
-
+
+
\ No newline at end of file
diff --git a/content/en/blog/_posts/2022-12-19-devicemanager-ga.md/index.md b/content/en/blog/_posts/2022-12-19-devicemanager-ga.md/index.md
new file mode 100644
index 0000000000000..d3b7102efb075
--- /dev/null
+++ b/content/en/blog/_posts/2022-12-19-devicemanager-ga.md/index.md
@@ -0,0 +1,93 @@
+---
+layout: blog
+title: 'Kubernetes 1.26: Device Manager graduates to GA'
+date: 2022-12-19
+slug: devicemanager-ga
+---
+
+**Author:** Swati Sehgal (Red Hat)
+
+The Device Plugin framework was introduced in the Kubernetes v1.8 release as a vendor-independent
+framework to enable discovery, advertisement, and allocation of external
+devices without modifying core Kubernetes. The feature graduated to Beta in v1.10.
+With the recent release of Kubernetes v1.26, Device Manager is now generally
+available (GA).
+
+Within the kubelet, the Device Manager facilitates communication with device plugins
+using gRPC through Unix sockets. Device Manager and Device plugins both act as gRPC
+servers and clients by serving and connecting to the exposed gRPC services respectively.
+Device plugins serve a gRPC service that kubelet connects to for device discovery,
+advertisement (as extended resources) and allocation. Device plugins connect to
+the `Registration` gRPC service served by the kubelet to register themselves with the kubelet.
+
+Please refer to the documentation for an [example](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#example-pod) on how a pod can request a device exposed to the cluster by a device plugin.
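+
+As a rough illustration (not the exact example from the linked documentation), a pod requests such a device through an extended resource limit; the resource name `example.com/gpu` and the image below are placeholders that a real device plugin and workload would replace:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: demo-device-pod
+spec:
+  containers:
+  - name: demo
+    image: registry.k8s.io/pause:3.8   # placeholder image
+    resources:
+      limits:
+        example.com/gpu: 1   # extended resource advertised by a (hypothetical) device plugin
+```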
+
+Here are some example implementations of device plugins:
+- [AMD GPU device plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin)
+- [Collection of Intel device plugins for Kubernetes](https://github.com/intel/intel-device-plugins-for-kubernetes)
+- [NVIDIA device plugin for Kubernetes](https://github.com/NVIDIA/k8s-device-plugin)
+- [SRIOV network device plugin for Kubernetes](https://github.com/k8snetworkplumbingwg/sriov-network-device-plugin)
+
+## Noteworthy developments since Device Plugin framework introduction
+
+### Kubelet APIs moved to kubelet staging repo
+The external-facing `deviceplugin` API packages moved from `k8s.io/kubernetes/pkg/kubelet/apis/`
+to `k8s.io/kubelet/pkg/apis/` in v1.17. Refer to [Move external facing kubelet apis to staging](https://github.com/kubernetes/kubernetes/pull/83551) for more details on the rationale behind this change.
+
+### Device Plugin API updates
+Additional gRPC endpoints were introduced:
+ 1. `GetDevicePluginOptions` is used by device plugins to communicate
+ options to the `DeviceManager` in order to indicate if `PreStartContainer`,
+ `GetPreferredAllocation` or other future optional calls are supported and
+ can be called before making devices available to the container.
+ 1. `GetPreferredAllocation` allows a device plugin to forward allocation
+ preference to the `DeviceManager` so it can incorporate this information
+ into its allocation decisions. The `DeviceManager` will call out to a
+ plugin at pod admission time asking for a preferred device allocation
+ of a given size from a list of available devices to make a more informed
+ decision. E.g. specifying inter-device constraints to indicate a preference
+ for the best-connected set of devices when allocating devices to a container.
+ 1. `PreStartContainer` is called before each container start if indicated by
+ device plugins during the registration phase. It allows device plugins to run
+ device-specific operations on the requested devices. E.g. reconfiguring or
+ reprogramming FPGAs before the container starts running.
+
+Pull Requests that introduced these changes are here:
+1. [Invoke preStart RPC call before container start, if desired by plugin](https://github.com/kubernetes/kubernetes/pull/58282)
+1. [Add GetPreferredAllocation() call to the v1beta1 device plugin API](https://github.com/kubernetes/kubernetes/pull/92665)
+
+With the introduction of the above endpoints, the interaction between the Device Manager in
+the kubelet and the device plugins can be shown as below:
+
+{{< figure src="deviceplugin-framework-overview.svg" alt="Representation of the Device Plugin framework showing the relationship between the kubelet and a device plugin" class="diagram-large" caption="Device Plugin framework Overview" >}}
+
+### Change in semantics of device plugin registration process
+Device plugin code was refactored into a separate `plugin` package under the `devicemanager`
+package to lay the groundwork for introducing a `v1beta2` device plugin API. This would
+allow adding support in `devicemanager` to service multiple device plugin APIs at the
+same time.
+
+With this refactoring work, it is now mandatory for a device plugin to start serving its gRPC
+service before registering itself with kubelet. Previously, these two operations were asynchronous
+and a device plugin could register itself before starting its gRPC server; this is no longer the
+case. For more details, refer to [PR #109016](https://github.com/kubernetes/kubernetes/pull/109016) and [Issue #112395](https://github.com/kubernetes/kubernetes/issues/112395).
+
+### Dynamic resource allocation
+In Kubernetes 1.26, inspired by how [Persistent Volumes](/docs/concepts/storage/persistent-volumes)
+are handled in Kubernetes, [Dynamic Resource Allocation](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/)
+has been introduced to cater to devices that have more sophisticated resource requirements. The goals are to:
+
+1. Decouple device initialization and allocation from the pod lifecycle.
+1. Facilitate dynamic sharing of devices between containers and pods.
+1. Support custom resource-specific parameters.
+1. Enable resource-specific setup and cleanup actions.
+1. Enable support for network-attached resources, not just node-local resources.
+
+## Is the Device Plugin API stable now?
+No, the Device Plugin API is still not stable; the latest Device Plugin API version
+available is `v1beta1`. There are plans in the community to introduce a `v1beta2` API
+to service multiple plugin APIs at once. A per-API call with request/response types
+would allow adding support for newer API versions without explicitly bumping the API.
+
+In addition to that, there are existing proposals in the community to introduce additional
+endpoints, such as [KEP-3162: Add Deallocate and PostStopContainer to Device Manager API](https://github.com/kubernetes/enhancements/issues/3162).
diff --git a/content/en/blog/_posts/2022-12-20-validating-admission-policies-alpha/index.md b/content/en/blog/_posts/2022-12-20-validating-admission-policies-alpha/index.md
new file mode 100644
index 0000000000000..fbbee4ab6dfb8
--- /dev/null
+++ b/content/en/blog/_posts/2022-12-20-validating-admission-policies-alpha/index.md
@@ -0,0 +1,160 @@
+---
+layout: blog
+title: "Kubernetes 1.26: Introducing Validating Admission Policies"
+date: 2022-12-20
+slug: validating-admission-policies-alpha
+---
+
+**Authors:** Joe Betz (Google), Cici Huang (Google)
+
+In Kubernetes 1.26, the first alpha release of validating admission policies is
+available!
+
+Validating admission policies use the [Common Expression
+Language](https://github.com/google/cel-spec) (CEL) to offer a declarative,
+in-process alternative to [validating admission
+webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks).
+
+CEL was first introduced to Kubernetes for the [Validation rules for
+CustomResourceDefinitions](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules).
+This enhancement expands the use of CEL in Kubernetes to support a far wider
+range of admission use cases.
+
+Admission webhooks can be burdensome to develop and operate. Webhook developers
+must implement and maintain a webhook binary to handle admission requests. Also,
+admission webhooks are complex to operate. Each webhook must be deployed,
+monitored, and have a well-defined upgrade and rollback plan. To make matters
+worse, if a webhook times out or becomes unavailable, the Kubernetes control
+plane can become unavailable. This enhancement avoids much of this complexity of
+admission webhooks by embedding CEL expressions into Kubernetes resources
+instead of calling out to a remote webhook binary.
+
+For example, to set a limit on how many replicas a Deployment can have,
+start by defining a validation policy:
+
+```yaml
+apiVersion: admissionregistration.k8s.io/v1alpha1
+kind: ValidatingAdmissionPolicy
+metadata:
+ name: "demo-policy.example.com"
+spec:
+ matchConstraints:
+ resourceRules:
+ - apiGroups: ["apps"]
+ apiVersions: ["v1"]
+ operations: ["CREATE", "UPDATE"]
+ resources: ["deployments"]
+ validations:
+ - expression: "object.spec.replicas <= 5"
+```
+
+The `expression` field contains the CEL expression that is used to validate
+admission requests. `matchConstraints` declares what types of requests this
+`ValidatingAdmissionPolicy` may validate.
+
+Next, bind the policy to the appropriate resources:
+
+```yaml
+apiVersion: admissionregistration.k8s.io/v1alpha1
+kind: ValidatingAdmissionPolicyBinding
+metadata:
+ name: "demo-binding-test.example.com"
+spec:
+ policyName: "demo-policy.example.com"
+ matchResources:
+ namespaceSelector:
+ matchExpressions:
+ - key: environment
+ operator: In
+ values:
+ - test
+```
+
+This `ValidatingAdmissionPolicyBinding` resource binds the above policy only to
+namespaces where the `environment` label is set to `test`. Once this binding
+is created, the kube-apiserver will begin enforcing this admission policy.
+
+To emphasize how much simpler this approach is than admission webhooks, if this example
+were instead implemented with a webhook, an entire binary would need to be
+developed and maintained just to perform a `<=` check. In our review of a wide
+range of admission webhooks used in production, the vast majority performed
+relatively simple checks, all of which can easily be expressed using CEL.
+
+Validating admission policies are highly configurable, enabling policy authors
+to define policies that can be parameterized and scoped to resources as needed
+by cluster administrators.
+
+For example, the above admission policy can be modified to make it configurable:
+
+```yaml
+apiVersion: admissionregistration.k8s.io/v1alpha1
+kind: ValidatingAdmissionPolicy
+metadata:
+ name: "demo-policy.example.com"
+spec:
+ paramKind:
+ apiVersion: rules.example.com/v1 # You also need a CustomResourceDefinition for this API
+ kind: ReplicaLimit
+ matchConstraints:
+ resourceRules:
+ - apiGroups: ["apps"]
+ apiVersions: ["v1"]
+ operations: ["CREATE", "UPDATE"]
+ resources: ["deployments"]
+ validations:
+ - expression: "object.spec.replicas <= params.maxReplicas"
+```
+
+Here, `paramKind` defines the resources used to configure the policy and the
+`expression` uses the `params` variable to access the parameter resource.
+
+This allows multiple bindings to be defined, each configured differently. For
+example:
+
+```yaml
+apiVersion: admissionregistration.k8s.io/v1alpha1
+kind: ValidatingAdmissionPolicyBinding
+metadata:
+ name: "demo-binding-production.example.com"
+spec:
+ policyName: "demo-policy.example.com"
+ paramRef:
+ name: "demo-params-production.example.com"
+ matchResources:
+ namespaceSelector:
+ matchExpressions:
+ - key: environment
+ operator: In
+ values:
+ - production
+```
+
+```yaml
+apiVersion: rules.example.com/v1 # defined via a CustomResourceDefinition
+kind: ReplicaLimit
+metadata:
+ name: "demo-params-production.example.com"
+maxReplicas: 1000
+```
+
+This binding and parameter resource pair limits deployments in namespaces with the
+`environment` label set to `production` to a max of 1000 replicas.
+
+You can then use a separate binding and parameter pair to set a different limit
+for namespaces in the `test` environment.
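+
+As a sketch of what such a pair might look like (the parameter name and the limit value below are arbitrary, mirroring the production example above):
+
+```yaml
+apiVersion: admissionregistration.k8s.io/v1alpha1
+kind: ValidatingAdmissionPolicyBinding
+metadata:
+  name: "demo-binding-test.example.com"
+spec:
+  policyName: "demo-policy.example.com"
+  paramRef:
+    name: "demo-params-test.example.com"
+  matchResources:
+    namespaceSelector:
+      matchExpressions:
+      - key: environment
+        operator: In
+        values:
+        - test
+```
+
+```yaml
+apiVersion: rules.example.com/v1 # defined via a CustomResourceDefinition
+kind: ReplicaLimit
+metadata:
+  name: "demo-params-test.example.com"
+maxReplicas: 3
+```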
+
+I hope this has given you a glimpse of what is possible with validating
+admission policies! There are many features that we have not yet touched on.
+
+To learn more, read
+[Validating Admission Policy](/docs/reference/access-authn-authz/validating-admission-policy/).
+
+We are working hard to add more features to admission policies and make the
+enhancement easier to use. Try it out, send us your feedback and help us build
+a simpler alternative to admission webhooks!
+
+## How do I get involved?
+
+If you want to get involved in development of admission policies, discuss enhancement
+roadmaps, or report a bug, you can get in touch with developers at
+[SIG API Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery).
diff --git a/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/index.md b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/index.md
new file mode 100644
index 0000000000000..58bb57366b266
--- /dev/null
+++ b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/index.md
@@ -0,0 +1,129 @@
+---
+layout: blog
+title: 'Kubernetes v1.26: GA Support for Kubelet Credential Providers'
+date: 2022-12-22
+slug: kubelet-credential-providers
+---
+
+**Authors:** Andrew Sy Kim (Google), Dixita Narang (Google)
+
+Kubernetes v1.26 introduced generally available (GA) support for [_kubelet credential
+provider plugins_](/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/),
+offering an extensible plugin framework to dynamically fetch credentials
+for any container image registry.
+
+## Background
+
+Kubernetes supports the ability to dynamically fetch credentials for a container registry service.
+Prior to Kubernetes v1.20, this capability was compiled into the kubelet and only available for
+Amazon Elastic Container Registry, Azure Container Registry, and Google Cloud Container Registry.
+
+{{< figure src="kubelet-credential-providers-in-tree.png" caption="Figure 1: Kubelet built-in credential provider support for Amazon Elastic Container Registry, Azure Container Registry, and Google Cloud Container Registry." >}}
+
+Kubernetes v1.20 introduced alpha support for kubelet credential provider plugins,
+which provides a mechanism for the kubelet to dynamically authenticate and pull images
+for arbitrary container registries - whether these are public registries, managed services,
+or even a self-hosted registry.
+In Kubernetes v1.26, this feature is now GA.
+
+{{< figure src="kubelet-credential-providers-plugin.png" caption="Figure 2: Kubelet credential provider overview" >}}
+
+## Why is it important?
+
+Prior to Kubernetes v1.20, if you wanted to dynamically fetch credentials for image registries
+other than ACR (Azure Container Registry), ECR (Elastic Container Registry), or GCR
+(Google Container Registry), you needed to modify the kubelet code.
+The new plugin mechanism can be used in any cluster, and lets you authenticate to new registries without
+any changes to Kubernetes itself. Any cloud provider or vendor can publish a plugin that lets you authenticate with their image registry.
+
+## How it works
+
+The kubelet and the exec plugin binary communicate through stdio (stdin, stdout, and stderr) by sending and receiving
+JSON-serialized, API-versioned types. If the exec plugin is enabled and the kubelet requires authentication information for an image
+that matches against a plugin, the kubelet will execute the plugin binary, passing the `CredentialProviderRequest` API via stdin. Then
+the exec plugin communicates with the container registry to dynamically fetch the credentials and returns the credentials in an
+encoded response of the `CredentialProviderResponse` API to the kubelet via stdout.
+
+{{< figure src="kubelet-credential-providers-how-it-works.png" caption="Figure 3: Kubelet credential provider plugin flow" >}}
+
+When returning credentials to the kubelet, the plugin can also indicate how long the credentials can be cached for, to prevent unnecessary
+execution of the plugin by the kubelet for subsequent image pull requests to the same registry. In cases where the cache duration
+is not specified by the plugin, a default cache duration can be specified by the kubelet (more details below).
+
+```json
+{
+ "apiVersion": "kubelet.k8s.io/v1",
+ "kind": "CredentialProviderResponse",
+ "auth": {
+ "cacheDuration": "6h",
+ "private-registry.io/my-app": {
+ "username": "exampleuser",
+ "password": "token12345"
+ }
+ }
+}
+```
+
+In addition, the plugin can specify the scope for which cached credentials are valid. This is specified through the `cacheKeyType` field
+in `CredentialProviderResponse`. When the value is `Image`, the kubelet will only use cached credentials for future image pulls that exactly
+match the image of the first request. When the value is `Registry`, the kubelet will use cached credentials for any subsequent image pulls
+destined for the same registry host but using different paths (for example, `gcr.io/foo/bar` and `gcr.io/bar/foo` refer to different images
+from the same registry). Lastly, when the value is `Global`, the kubelet will use returned credentials for all images that match against
+the plugin, including images that can map to different registry hosts (for example, `gcr.io` vs `k8s.gcr.io`). The `cacheKeyType` field is required by plugin
+implementations.
+
+```json
+{
+ "apiVersion": "kubelet.k8s.io/v1",
+ "kind": "CredentialProviderResponse",
+ "auth": {
+ "cacheKeyType": "Registry",
+ "private-registry.io/my-app": {
+ "username": "exampleuser",
+ "password": "token12345"
+ }
+ }
+}
+```
+
+## Using kubelet credential providers
+
+You can configure credential providers by installing the exec plugin(s) into
+a local directory accessible by the kubelet on every node. Then you set two command line arguments for the kubelet:
+* `--image-credential-provider-config`: the path to the credential provider plugin config file.
+* `--image-credential-provider-bin-dir`: the path to the directory where credential provider plugin binaries are located.
+
+The configuration file passed into `--image-credential-provider-config` is read by the kubelet to determine which exec plugins should be invoked for a container image used by a Pod.
+Note that the name of each _provider_ must match the name of the binary located in the local directory specified in `--image-credential-provider-bin-dir`, otherwise the kubelet
+cannot locate the path of the plugin to invoke.
+
+```yaml
+kind: CredentialProviderConfig
+apiVersion: kubelet.config.k8s.io/v1
+providers:
+- name: auth-provider-gcp
+ apiVersion: credentialprovider.kubelet.k8s.io/v1
+ matchImages:
+ - "container.cloud.google.com"
+ - "gcr.io"
+ - "*.gcr.io"
+ - "*.pkg.dev"
+ args:
+ - get-credentials
+ - --v=3
+ defaultCacheDuration: 1m
+```
+
+Below is an overview of how the Kubernetes project is using kubelet credential providers for end-to-end testing.
+
+{{< figure src="kubelet-credential-providers-enabling.png" caption="Figure 4: Kubelet credential provider configuration used for Kubernetes e2e testing" >}}
+
+For more configuration details, see [Kubelet Credential Providers](https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/).
+
+## Getting Involved
+
+Come join SIG Node if you want to report bugs or have feature requests for the Kubelet Credential Provider. You can reach us in the following ways:
+* Slack: [#sig-node](https://kubernetes.slack.com/messages/sig-node)
+* [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-node)
+* [Open Community Issues/PRs](https://github.com/kubernetes/community/labels/sig%2Fnode)
+* [Biweekly meetings](https://github.com/kubernetes/community/tree/master/sig-node#meetings)
diff --git a/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-enabling.png b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-enabling.png
new file mode 100644
index 0000000000000..5aa0886e90686
Binary files /dev/null and b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-enabling.png differ
diff --git a/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-how-it-works.png b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-how-it-works.png
new file mode 100644
index 0000000000000..11054229f88ca
Binary files /dev/null and b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-how-it-works.png differ
diff --git a/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-in-tree.png b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-in-tree.png
new file mode 100644
index 0000000000000..f26b42d45e8e7
Binary files /dev/null and b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-in-tree.png differ
diff --git a/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-plugin.png b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-plugin.png
new file mode 100644
index 0000000000000..2aeedb738f445
Binary files /dev/null and b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-plugin.png differ
diff --git a/content/en/blog/_posts/2022-12-23-fsgroup-on-mount.md b/content/en/blog/_posts/2022-12-23-fsgroup-on-mount.md
new file mode 100644
index 0000000000000..671334c4891ac
--- /dev/null
+++ b/content/en/blog/_posts/2022-12-23-fsgroup-on-mount.md
@@ -0,0 +1,72 @@
+---
+layout: blog
+title: "Kubernetes 1.26: Support for Passing Pod fsGroup to CSI Drivers At Mount Time"
+date: 2022-12-23
+slug: kubernetes-12-06-fsgroup-on-mount
+---
+
+**Authors:** Fabio Bertinatto (Red Hat), Hemant Kumar (Red Hat)
+
+Delegation of `fsGroup` to CSI drivers was first introduced as alpha in Kubernetes 1.22,
+and graduated to beta in Kubernetes 1.25.
+For Kubernetes 1.26, we are happy to announce that this feature has graduated to
+General Availability (GA).
+
+In this release, if you specify an `fsGroup` in the
+[security context](/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod)
+for a (Linux) Pod, all processes in the pod's containers are part of the additional group
+that you specified.
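+
+As a minimal sketch (the image, claim name, and group ID are placeholders), the relevant part of such a Pod manifest looks like this:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: demo-fsgroup
+spec:
+  securityContext:
+    fsGroup: 2000   # processes in the pod's containers also get this supplementary group
+  containers:
+  - name: demo
+    image: registry.k8s.io/pause:3.8
+    volumeMounts:
+    - name: data
+      mountPath: /data
+  volumes:
+  - name: data
+    persistentVolumeClaim:
+      claimName: my-claim   # hypothetical PVC backed by a CSI driver
+```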
+
+In previous Kubernetes releases, the kubelet would *always* apply the
+`fsGroup` ownership and permission changes to files in the volume according to the policy
+you specified in the Pod's `.spec.securityContext.fsGroupChangePolicy` field.
+
+Starting with Kubernetes 1.26, CSI drivers have the option to apply the `fsGroup` settings during
+volume mount time, which frees the kubelet from changing the permissions of files and directories
+in those volumes.
+
+## How does it work?
+
+CSI drivers that support this feature should advertise the
+[`VOLUME_MOUNT_GROUP`](https://github.com/container-storage-interface/spec/blob/master/spec.md#nodegetcapabilities) node capability.
+
+After recognizing this information, the kubelet passes the `fsGroup` information to
+the CSI driver during pod startup. This is done through the
+[`NodeStageVolumeRequest`](https://github.com/container-storage-interface/spec/blob/v1.7.0/spec.md#nodestagevolume) and
+[`NodePublishVolumeRequest`](https://github.com/container-storage-interface/spec/blob/v1.7.0/spec.md#nodepublishvolume)
+CSI calls.
+
+Consequently, the CSI driver is expected to apply the `fsGroup` to the files in the volume using a
+_mount option_. As an example, [Azure File CSIDriver](https://github.com/kubernetes-sigs/azurefile-csi-driver) utilizes the `gid` mount option to map
+the `fsGroup` information to all the files in the volume.
+
+It should be noted that in the example above the kubelet refrains from directly
+applying the permission changes to the files and directories in that volume.
+Additionally, two policy definitions no longer have an effect: neither
+`.spec.fsGroupPolicy` for the CSIDriver object, nor
+`.spec.securityContext.fsGroupChangePolicy` for the Pod.
+
+For more details about the inner workings of this feature, check out the
+[enhancement proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/2317-fsgroup-on-mount/)
+and the [CSI Driver `fsGroup` Support](https://kubernetes-csi.github.io/docs/support-fsgroup.html)
+in the CSI developer documentation.
+
+## Why is it important?
+
+Without this feature, applying the `fsGroup` information to files is not possible in certain storage environments.
+
+For instance, Azure File does not support a concept of POSIX-style ownership and permissions
+of files. The CSI driver is only able to set the file permissions at the volume level.
+
+## How do I use it?
+
+This feature should be mostly transparent to users. If you maintain a CSI driver that should
+support this feature, read
+[CSI Driver `fsGroup` Support](https://kubernetes-csi.github.io/docs/support-fsgroup.html)
+for more information on how to support this feature in your CSI driver.
+
+Existing CSI drivers that do not support this feature will continue to work as usual:
+they will not receive any `fsGroup` information from the kubelet. In addition to that,
+the kubelet will continue to perform the ownership and permissions changes to files
+for those volumes, according to the policies specified in `.spec.fsGroupPolicy` for the
+CSIDriver and `.spec.securityContext.fsGroupChangePolicy` for the relevant Pod.
diff --git a/content/en/blog/_posts/2022-12-27-cpumanager-goes-GA.md b/content/en/blog/_posts/2022-12-27-cpumanager-goes-GA.md
new file mode 100644
index 0000000000000..d1edc4575b019
--- /dev/null
+++ b/content/en/blog/_posts/2022-12-27-cpumanager-goes-GA.md
@@ -0,0 +1,71 @@
+---
+layout: blog
+title: 'Kubernetes v1.26: CPUManager goes GA'
+date: 2022-12-27
+slug: cpumanager-ga
+---
+
+**Author:**
+Francesco Romani (Red Hat)
+
+The CPU Manager is a part of the kubelet, the Kubernetes node agent, which enables the user to allocate exclusive CPUs to containers.
+Since Kubernetes v1.10, where it [graduated to Beta](/blog/2018/07/24/feature-highlight-cpu-manager/), the CPU Manager proved itself reliable and
+fulfilled its role of allocating exclusive CPUs to containers, so adoption has steadily grown, making it a staple component of performance-critical
+and low-latency setups. Over time, most changes were about bugfixes or internal refactoring, with the following noteworthy user-visible changes:
+
+- [support explicit reservation of CPUs](https://github.com/kubernetes/kubernetes/pull/83592): it was already possible to request to reserve a given
+ number of CPUs for system resources, including the kubelet itself, which will not be used for exclusive CPU allocation. Now it is possible to also
+ explicitly select which CPUs to reserve instead of letting the kubelet pick them up automatically.
+- [report the exclusively allocated CPUs](https://github.com/kubernetes/kubernetes/pull/97415) to containers, much as is already done for devices,
+ using the kubelet-local [PodResources API](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources).
+- [optimize the usage of system resources](https://github.com/kubernetes/kubernetes/pull/101771), eliminating unnecessary sysfs changes.
+
+The CPU Manager reached the point where it "just works", so in Kubernetes v1.26 it has graduated to generally available (GA).
+
+## Customization options for CPU Manager {#cpu-managed-customization}
+
+The CPU Manager supports two operation modes, configured using its _policies_. With the `none` policy, the CPU Manager allocates CPUs to containers
+without any specific constraint except the (optional) quota set in the Pod spec.
+With the `static` policy, provided that the pod is in the Guaranteed QoS class and every container in that Pod requests an integer amount of vCPU cores,
+the CPU Manager allocates CPUs exclusively. Exclusive assignment means that other containers (whether from the same Pod, or from a different Pod) do not
+get scheduled onto that CPU.
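+
+For example, a Pod like the following sketch is eligible for exclusive CPU assignment under the `static` policy, because its requests equal its limits and the CPU count is an integer (the name and image are placeholders):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: exclusive-cpus-demo
+spec:
+  containers:
+  - name: worker
+    image: registry.k8s.io/pause:3.8
+    resources:
+      requests:
+        cpu: "2"      # integer CPU request...
+        memory: 1Gi
+      limits:
+        cpu: "2"      # ...equal to the limit, so the Pod is in the Guaranteed QoS class
+        memory: 1Gi
+```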
+
+This simple operational model served the user base pretty well, but as the CPU Manager matured more and more, users started to look at more elaborate use
+cases and how to better support them.
+
+Rather than add more policies, the community realized that pretty much all the novel use cases are some variation of the behavior enabled by the `static`
+CPU Manager policy. Hence, it was decided to add [options to tune the behavior of the static policy](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2625-cpumanager-policies-thread-placement#proposed-change).
+The options have a varying degree of maturity, like any other Kubernetes feature. In order to be accepted, each new option must provide backward-compatible
+behavior when disabled, and must document how it interacts with the other options, should they interact at all.
+
+This enabled the Kubernetes project to graduate the CPU Manager core component and core CPU allocation algorithms to GA,
+while also enabling a new age of experimentation in this area.
+In Kubernetes v1.26, the CPU Manager supports [three different policy options](/docs/tasks/administer-cluster/cpu-management-policies/#static-policy-options):
+
+`full-pcpus-only`
+: restrict the CPU Manager core allocation algorithm to full physical cores only, reducing noisy neighbor issues from hardware technologies that allow sharing cores.
+
+`distribute-cpus-across-numa`
+: drive the CPU Manager to evenly distribute CPUs across NUMA nodes, for cases where more than one NUMA node is required to satisfy the allocation.
+
+`align-by-socket`
+: change how the CPU Manager allocates CPUs to a container: consider CPUs to be aligned at the socket boundary, instead of NUMA node boundary.
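+
+These options are set through the kubelet configuration. A hedged sketch, assuming the `static` policy and only the `full-pcpus-only` option (availability of individual options can also depend on feature gates, according to their maturity level):
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+cpuManagerPolicy: static
+# the static policy needs some CPUs reserved for system daemons and the kubelet itself
+reservedSystemCPUs: "0,1"
+cpuManagerPolicyOptions:
+  full-pcpus-only: "true"   # one of the options described above
+```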
+
+## Further development
+
+After graduating the main CPU Manager feature, each existing policy option will follow its own graduation process, independent of the CPU Manager and of each other.
+There is room for new options to be added, but there's also a growing demand for even more flexibility than what the CPU Manager, and its policy options, currently grant.
+
+Conversations are in progress in the community about splitting the CPU Manager and the other resource managers currently part of the kubelet executable
+into pluggable, independent kubelet plugins. If you are interested in this effort, please join the conversation on SIG Node communication channels (Slack, mailing list, weekly meeting).
+
+## Further reading
+
+Please check out the [Control CPU Management Policies on the Node](/docs/tasks/administer-cluster/cpu-management-policies/)
+task page to learn more about the CPU Manager, and how it fits in relation to the other node-level resource managers.
+
+## Getting involved
+
+This feature is driven by the [SIG Node](https://github.com/kubernetes/community/blob/master/sig-node/README.md) community.
+Please join us to connect with the community and share your ideas and feedback around the above feature and
+beyond. We look forward to hearing from you!
diff --git a/content/en/blog/_posts/2022-12-29-scalable-job-tracking-ga/index.md b/content/en/blog/_posts/2022-12-29-scalable-job-tracking-ga/index.md
new file mode 100644
index 0000000000000..6d0685e43cae9
--- /dev/null
+++ b/content/en/blog/_posts/2022-12-29-scalable-job-tracking-ga/index.md
@@ -0,0 +1,155 @@
+---
+layout: blog
+title: "Kubernetes 1.26: Job Tracking, to Support Massively Parallel Batch Workloads, Is Generally Available"
+date: 2022-12-29
+slug: "scalable-job-tracking-ga"
+---
+
+**Author:** Aldo Culquicondor (Google)
+
+The Kubernetes 1.26 release includes a stable implementation of the [Job](/docs/concepts/workloads/controllers/job/)
+controller that can reliably track a large number of Jobs with high levels of
+parallelism. [SIG Apps](https://github.com/kubernetes/community/tree/master/sig-apps)
+and [WG Batch](https://github.com/kubernetes/community/tree/master/wg-batch)
+have worked on this foundational improvement since Kubernetes 1.22. After
+multiple iterations and scale verifications, this is now the default
+implementation of the Job controller.
+
+Paired with the Indexed [completion mode](/docs/concepts/workloads/controllers/job/#completion-mode),
+the Job controller can handle massively parallel batch Jobs, supporting up to
+100k concurrent Pods.
+
+The new implementation also made possible the development of [Pod failure policy](/docs/concepts/workloads/controllers/job/#pod-failure-policy),
+which is in beta in the 1.26 release.
+
+## How do I use this feature?
+
+To use Job tracking with finalizers, upgrade to Kubernetes 1.25 or newer and
+create new Jobs. You can also use this feature in v1.23 and v1.24, if you have the
+ability to enable the `JobTrackingWithFinalizers` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
+
+If your cluster runs Kubernetes 1.26, Job tracking with finalizers is a stable
+feature. For v1.25, it's behind that feature gate, and your cluster administrators may have
+explicitly disabled it - for example, if you have a policy of not using
+beta features.
+
+Jobs created before the upgrade will still be tracked using the legacy behavior.
+This is to avoid retroactively adding finalizers to running Pods, which might
+introduce race conditions.
+
+For maximum performance on large Jobs, the Kubernetes project recommends
+using the [Indexed completion mode](/docs/concepts/workloads/controllers/job/#completion-mode).
+In this mode, the control plane is able to track Job progress with fewer API calls.
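+
+A minimal sketch of an Indexed Job (the name, sizes, and image are placeholders; a real workload would read its completion index, for example from the `JOB_COMPLETION_INDEX` environment variable):
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: parallel-work
+spec:
+  completionMode: Indexed   # each Pod gets its own completion index
+  completions: 10000
+  parallelism: 500
+  template:
+    spec:
+      restartPolicy: Never
+      containers:
+      - name: worker
+        image: registry.k8s.io/pause:3.8   # placeholder image
+```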
+
+If you are a developer of operator(s) for batch, [HPC](https://en.wikipedia.org/wiki/High-performance_computing),
+[AI](https://en.wikipedia.org/wiki/Artificial_intelligence), [ML](https://en.wikipedia.org/wiki/Machine_learning)
+or related workloads, we encourage you to use the Job API to delegate accurate
+progress tracking to Kubernetes. If there is something missing in the Job API
+that forces you to manage plain Pods, the [Working Group Batch](https://github.com/kubernetes/community/tree/master/wg-batch)
+welcomes your feedback and contributions.
+
+### Deprecation notices
+
+During the development of the feature, the control plane added the annotation
+[`batch.kubernetes.io/job-tracking`](/docs/reference/labels-annotations-taints/#batch-kubernetes-io-job-tracking)
+to the Jobs that were created when the feature was enabled.
+This allowed a safe transition for older Jobs, but it was never meant to stay.
+
+In the 1.26 release, we deprecated the annotation `batch.kubernetes.io/job-tracking`
+and the control plane will stop adding it in Kubernetes 1.27.
+Along with that change, we will remove the legacy Job tracking implementation.
+As a result, the Job controller will track all Jobs using finalizers and it will
+ignore Pods that don't have the aforementioned finalizer.
+
+Before you upgrade your cluster to 1.27, we recommend that you verify that there
+are no running Jobs that don't have the annotation, or you wait for those jobs
+to complete.
+Otherwise, you might observe the control plane recreating some Pods.
+We expect that this shouldn't affect any users, as the feature is enabled by
+default since Kubernetes 1.25, giving enough buffer for old jobs to complete.
+
+## What problem does the new implementation solve?
+
+Generally, Kubernetes workload controllers, such as ReplicaSet or StatefulSet,
+rely on the existence of Pods or other objects in the API to determine the
+status of the workload and whether replacements are needed.
+For example, if a Pod that belonged to a ReplicaSet terminates or ceases to
+exist, the ReplicaSet controller needs to create a replacement Pod to satisfy
+the desired number of replicas (`.spec.replicas`).
+
+Since its inception, the Job controller also relied on the existence of Pods in
+the API to track Job status. A Job has [completion](/docs/concepts/workloads/controllers/job/#completion-mode)
+and [failure handling](/docs/concepts/workloads/controllers/job/#handling-pod-and-container-failures)
+policies, requiring the end state of a finished Pod to determine whether to
+create a replacement Pod or mark the Job as completed or failed. As a result,
+the Job controller depended on Pods, even terminated ones, to remain in the API
+in order to keep track of the status.
+
+This dependency made the tracking of Job status unreliable, because Pods can be
+deleted from the API for a number of reasons, including:
+- The garbage collector removing orphan Pods when a Node goes down.
+- The garbage collector removing terminated Pods when they reach a threshold.
+- The Kubernetes scheduler preempting a Pod to accommodate higher priority Pods.
+- The taint manager evicting a Pod that doesn't tolerate a `NoExecute` taint.
+- External controllers, not included as part of Kubernetes, or humans deleting
+ Pods.
+
+### The new implementation
+
+When a controller needs to take an action on objects before they are removed, it
+should add a [finalizer](/docs/concepts/overview/working-with-objects/finalizers/)
+to the objects that it manages.
+A finalizer prevents the objects from being deleted from the API until the
+finalizers are removed. Once the controller is done with the cleanup and
+accounting for the deleted object, it can remove the finalizer from the object and the
+control plane removes the object from the API.
+
+This is what the new Job controller is doing: adding a finalizer during Pod
+creation, and removing the finalizer after the Pod has terminated and has been
+accounted for in the Job status. However, it wasn't that simple.
+
+The main challenge is that there are at least two objects involved: the Pod
+and the Job. While the finalizer lives in the Pod object, the accounting lives
+in the Job object. There is no mechanism to atomically remove the finalizer in
+the Pod and update the counters in the Job status. Additionally, there could be
+more than one terminated Pod at a given time.
+
+To solve this problem, we implemented a three-stage approach, each stage translating
+to an API call.
+1. For each terminated Pod, add the unique ID (UID) of the Pod into short-lived
+ lists stored in the `.status` of the owning Job
+ ([.status.uncountedTerminatedPods](/docs/reference/kubernetes-api/workload-resources/job-v1/#JobStatus)).
+2. Remove the finalizer from the Pod(s).
+3. Atomically do the following operations:
+ - remove UIDs from the short-lived lists
+ - increment the overall `succeeded` and `failed` counters in the `status` of
+ the Job.
+
+Additional complications come from the fact that the Job controller might
+receive the results of the API changes in steps 1 and 2 out of order. We solved
+this by adding an in-memory cache for removed finalizers.
+
+Still, we faced some issues during the beta stage, leaving some pods stuck
+with finalizers in some conditions ([#108645](https://github.com/kubernetes/kubernetes/issues/108645),
+[#109485](https://github.com/kubernetes/kubernetes/issues/109485), and
+[#111646](https://github.com/kubernetes/kubernetes/pull/111646)). As a result,
+we decided to switch that feature gate to be disabled by default for the 1.23
+and 1.24 releases.
+
+Once resolved, we re-enabled the feature for the 1.25 release. Since then, we
+have received reports from our customers running tens of thousands of Pods at a
+time in their clusters through the Job API. Seeing this success, we decided to
+graduate the feature to stable in 1.26, as part of our long term commitment to
+make the Job API the best way to run large batch Jobs in a Kubernetes cluster.
+
+To learn more about the feature, you can read the [KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/2307-job-tracking-without-lingering-pods).
+
+## Acknowledgments
+
+As with any Kubernetes feature, multiple people contributed to getting this
+done, from testing and filing bugs to reviewing code.
+
+On behalf of SIG Apps, I would like to especially thank Jordan Liggitt (Google)
+for helping me debug and brainstorm solutions for more than one race condition
+and Maciej Szulik (Red Hat) for his thorough reviews.
diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-overview.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-overview.png
new file mode 100644
index 0000000000000..c6cdbef25ff99
Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-overview.png differ
diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-with-terminating-pod.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-with-terminating-pod.png
new file mode 100644
index 0000000000000..b5a516a01d2d0
Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-with-terminating-pod.png differ
diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/index.md b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/index.md
new file mode 100644
index 0000000000000..91ecd167ccf6f
--- /dev/null
+++ b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/index.md
@@ -0,0 +1,117 @@
+---
+layout: blog
+title: "Kubernetes v1.26: Advancements in Kubernetes Traffic Engineering"
+date: 2022-12-30
+slug: advancements-in-kubernetes-traffic-engineering
+---
+
+**Author:** Andrew Sy Kim (Google)
+
+Kubernetes v1.26 includes significant advancements in network traffic engineering with the graduation of
+two features (Service internal traffic policy support, and EndpointSlice terminating conditions) to GA,
+and a third feature (Proxy terminating endpoints) to beta. The combination of these enhancements aims
+to address shortcomings in traffic engineering that people face today, and unlock new capabilities for the future.
+
+## Traffic Loss from Load Balancers During Rolling Updates
+
+Prior to Kubernetes v1.26, clusters could experience [loss of traffic](https://github.com/kubernetes/kubernetes/issues/85643)
+from Service load balancers during rolling updates when setting the `externalTrafficPolicy` field to `Local`.
+There are a lot of moving parts at play here so a quick overview of how Kubernetes manages load balancers might help!
+
+In Kubernetes, you can create a Service with `type: LoadBalancer` to expose an application externally with a load balancer.
+The load balancer implementation varies between clusters and platforms, but the Service provides a generic abstraction
+representing the load balancer that is consistent across all Kubernetes installations.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+spec:
+ selector:
+ app.kubernetes.io/name: my-app
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
+ type: LoadBalancer
+```
+
+Under the hood, Kubernetes allocates a NodePort for the Service, which is then used by kube-proxy to provide a
+network data path from the NodePort to the Pod. A controller will then add all available Nodes in the cluster
+to the load balancer’s backend pool, using the designated NodePort for the Service as the backend target port.
+
+{{< figure src="traffic-engineering-service-load-balancer.png" caption="Figure 1: Overview of Service load balancers" >}}
+
+Oftentimes it is beneficial to set `externalTrafficPolicy: Local` for Services, to avoid extra hops between
+Nodes that are not running healthy Pods backing that Service. When using `externalTrafficPolicy: Local`,
+an additional NodePort is allocated for health checking purposes, such that Nodes that do not contain healthy
+Pods are excluded from the backend pool for a load balancer.
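+
+Concretely, this is a single additional field on the Service from the earlier example (sketch):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  selector:
+    app.kubernetes.io/name: my-app
+  ports:
+  - protocol: TCP
+    port: 80
+    targetPort: 9376
+  type: LoadBalancer
+  externalTrafficPolicy: Local   # only Nodes with local endpoints pass the load balancer health check
+```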
+
+{{< figure src="traffic-engineering-lb-healthy.png" caption="Figure 2: Load balancer traffic to a healthy Node, when externalTrafficPolicy is Local" >}}
+
+One such scenario where traffic can be lost is when a Node loses all Pods for a Service,
+but the external load balancer has not probed the health check NodePort yet. The likelihood of this situation
+is largely dependent on the health checking interval configured on the load balancer. The larger the interval,
+the more likely this will happen, since the load balancer will continue to send traffic to a node
+even after kube-proxy has removed forwarding rules for that Service. This also occurs when Pods start terminating
+during rolling updates. Since Kubernetes does not consider terminating Pods as “Ready”, traffic can be lost
+when there are only terminating Pods on any given Node during a rolling update.
+
+{{< figure src="traffic-engineering-lb-without-proxy-terminating-endpoints.png" caption="Figure 3: Load balancer traffic to terminating endpoints, when externalTrafficPolicy is Local" >}}
+
+Starting in Kubernetes v1.26, kube-proxy enables the `ProxyTerminatingEndpoints` feature by default, which
+adds automatic failover and routing to terminating endpoints in scenarios where the traffic would otherwise
+be dropped. More specifically, when there is a rolling update and a Node only contains terminating Pods,
+kube-proxy will route traffic to the terminating Pods based on their readiness. In addition, kube-proxy will
+actively fail the health check NodePort if there are only terminating Pods available. By doing so,
+kube-proxy alerts the external load balancer that new connections should not be sent to that Node, while
+continuing to gracefully handle requests for existing connections.
+
+{{< figure src="traffic-engineering-lb-with-proxy-terminating-endpoints.png" caption="Figure 4: Load Balancer traffic to terminating endpoints with ProxyTerminatingEndpoints enabled, when externalTrafficPolicy is Local" >}}
+
+### EndpointSlice Conditions
+
+In order to support this new capability in kube-proxy, the EndpointSlice API introduced new conditions for endpoints:
+`serving` and `terminating`.
+
+{{< figure src="endpointslice-overview.png" caption="Figure 5: Overview of EndpointSlice conditions" >}}
+
+The `serving` condition is semantically identical to `ready`, except that it can be `true` or `false`
+while a Pod is terminating, unlike `ready` which will always be `false` for terminating Pods for compatibility reasons.
+The `terminating` condition is true for Pods undergoing termination (non-empty deletionTimestamp), false otherwise.
+
+The addition of these two conditions enables consumers of this API to understand Pod states that were previously not possible.
+For example, we can now track "ready" and "not ready" Pods that are also terminating.
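+
+For illustration, an excerpt of an EndpointSlice for a Pod that is terminating but still passing its readiness probe might look like this (the address is a placeholder):
+
+```yaml
+# excerpt of an EndpointSlice (discovery.k8s.io/v1)
+endpoints:
+- addresses:
+  - "10.1.2.3"
+  conditions:
+    ready: false        # always false while the Pod is terminating, for compatibility
+    serving: true       # the Pod still reports ready
+    terminating: true   # the Pod has a non-empty deletionTimestamp
+```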
+
+{{< figure src="endpointslice-with-terminating-pod.png" caption="Figure 6: EndpointSlice conditions with a terminating Pod" >}}
+
+Consumers of the EndpointSlice API, such as kube-proxy and ingress controllers, can now use these conditions to coordinate connection draining
+events, by continuing to forward traffic for existing connections but rerouting new connections to other non-terminating endpoints.
+
+## Optimizing Internal Node-Local Traffic
+
+Similar to how Services can set `externalTrafficPolicy: Local` to avoid extra hops for externally sourced traffic, Kubernetes
+now supports `internalTrafficPolicy: Local`, to enable the same optimization for traffic originating within the cluster, specifically
+for traffic using the Service Cluster IP as the destination address. This feature graduated to Beta in Kubernetes v1.24 and is graduating to GA in v1.26.
+
+Services default the `internalTrafficPolicy` field to `Cluster`, where traffic is randomly distributed to all endpoints.
+
+{{< figure src="service-internal-traffic-policy-cluster.png" caption="Figure 7: Service routing when internalTrafficPolicy is Cluster" >}}
+
+When `internalTrafficPolicy` is set to `Local`, kube-proxy will forward internal traffic for a Service only if there is an available endpoint
+that is local to the same Node.
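+
+A sketch of such a Service (reusing the placeholder selector and ports from the earlier example):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-internal-service
+spec:
+  selector:
+    app.kubernetes.io/name: my-app
+  internalTrafficPolicy: Local   # only endpoints on the same Node as the client receive traffic
+  ports:
+  - protocol: TCP
+    port: 80
+    targetPort: 9376
+```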
+
+{{< figure src="service-internal-traffic-policy-local.png" caption="Figure 8: Service routing when internalTrafficPolicy is Local" >}}
+
+{{< caution >}}
+When using `internalTrafficPolicy: Local`, traffic will be dropped by kube-proxy when no local endpoints are available.
+{{< /caution >}}
+
+## Getting Involved
+
+If you're interested in future discussions on Kubernetes traffic engineering, you can get involved in SIG Network in the following ways:
+* Slack: [#sig-network](https://kubernetes.slack.com/messages/sig-network)
+* [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-network)
+* [Open Community Issues/PRs](https://github.com/kubernetes/community/labels/sig%2Fnetwork)
+* [Biweekly meetings](https://github.com/kubernetes/community/tree/master/sig-network#meetings)
diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-cluster.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-cluster.png
new file mode 100644
index 0000000000000..e0f477aa2e39e
Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-cluster.png differ
diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-local.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-local.png
new file mode 100644
index 0000000000000..407a0db0ed8f8
Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-local.png differ
diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-healthy.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-healthy.png
new file mode 100644
index 0000000000000..74ac7f4f5c931
Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-healthy.png differ
diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-with-proxy-terminating-endpoints.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-with-proxy-terminating-endpoints.png
new file mode 100644
index 0000000000000..0faa5d960a526
Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-with-proxy-terminating-endpoints.png differ
diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-without-proxy-terminating-endpoints.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-without-proxy-terminating-endpoints.png
new file mode 100644
index 0000000000000..43db9c9efb9a6
Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-without-proxy-terminating-endpoints.png differ
diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-service-load-balancer.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-service-load-balancer.png
new file mode 100644
index 0000000000000..a4e58c6207cb3
Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-service-load-balancer.png differ
diff --git a/content/en/blog/_posts/2023-01-02-cross-namespace-data-sources-alpha.md b/content/en/blog/_posts/2023-01-02-cross-namespace-data-sources-alpha.md
new file mode 100644
index 0000000000000..2f7cd683e029a
--- /dev/null
+++ b/content/en/blog/_posts/2023-01-02-cross-namespace-data-sources-alpha.md
@@ -0,0 +1,159 @@
+---
+layout: blog
+title: "Kubernetes v1.26: Alpha support for cross-namespace storage data sources"
+date: 2023-01-02
+slug: cross-namespace-data-sources-alpha
+---
+
+**Author:** Takafumi Takahashi (Hitachi Vantara)
+
+Kubernetes v1.26, released last month, introduced an alpha feature that
+lets you specify a data source for a PersistentVolumeClaim, even where the source
+data belong to a different namespace.
+With the new feature enabled, you specify a namespace in the `dataSourceRef` field of
+a new PersistentVolumeClaim. Once Kubernetes checks that access is OK, the new
+PersistentVolume can populate its data from the storage source specified in that other
+namespace.
+Before Kubernetes v1.26, provided your cluster had the `AnyVolumeDataSource` feature enabled,
+you could already provision new volumes from a data source in the **same**
+namespace.
+However, that only worked for data sources in the same namespace;
+users couldn't provision a PersistentVolume with a claim
+in one namespace from a data source in another namespace.
+To solve this problem, Kubernetes v1.26 added a new alpha `namespace` field
+to the `dataSourceRef` field of the PersistentVolumeClaim API.
+
+## How it works
+
+Once the csi-provisioner finds that a data source is specified with a `dataSourceRef` that
+has a non-empty namespace name,
+it checks all reference grants within the namespace that's specified by the `.spec.dataSourceRef.namespace`
+field of the PersistentVolumeClaim, in order to see if access to the data source is allowed.
+If any ReferenceGrant allows access, the csi-provisioner provisions a volume from the data source.
+
+## Trying it out
+
+The following things are required to use cross namespace volume provisioning:
+
+* Enable the `AnyVolumeDataSource` and `CrossNamespaceVolumeDataSource` [feature gates](/docs/reference/command-line-tools-reference/feature-gates/) for the kube-apiserver and kube-controller-manager
+* Install a CRD for the specific `VolumeSnapshot` controller
+* Install the CSI Provisioner controller and enable the `CrossNamespaceVolumeDataSource` feature gate
+* Install the CSI driver
+* Install a CRD for ReferenceGrants
+
+## Putting it all together
+
+To see how this works, you can install the sample and try it out.
+This sample creates a PVC in the dev namespace from a VolumeSnapshot in the prod namespace.
+That is a simple example. For real-world use, you might want to use a more complex approach.
+
+### Assumptions for this example {#example-assumptions}
+
+* Your Kubernetes cluster was deployed with `AnyVolumeDataSource` and `CrossNamespaceVolumeDataSource` feature gates enabled
+* There are two namespaces, dev and prod
+* CSI driver is being deployed
+* There is an existing VolumeSnapshot named `new-snapshot-demo` in the _prod_ namespace
+* The ReferenceGrant CRD (from the Gateway API project) is already deployed
+
+### Grant ReferenceGrants read permission to the CSI Provisioner
+
+Access to ReferenceGrants is only needed when the CSI driver
+has the `CrossNamespaceVolumeDataSource` controller capability.
+For this example, the external-provisioner needs **get**, **list**, and **watch**
+permissions for `referencegrants` (API group `gateway.networking.k8s.io`).
+
+```yaml
+ - apiGroups: ["gateway.networking.k8s.io"]
+ resources: ["referencegrants"]
+ verbs: ["get", "list", "watch"]
+```
+
+### Enable the CrossNamespaceVolumeDataSource feature gate for the CSI Provisioner
+
+Add `--feature-gates=CrossNamespaceVolumeDataSource=true` to the csi-provisioner command line.
+For example, use this manifest snippet to redefine the container:
+
+```yaml
+ - args:
+ - -v=5
+ - --csi-address=/csi/csi.sock
+ - --feature-gates=Topology=true
+ - --feature-gates=CrossNamespaceVolumeDataSource=true
+ image: csi-provisioner:latest
+ imagePullPolicy: IfNotPresent
+ name: csi-provisioner
+```
+
+### Create a ReferenceGrant
+
+Here's a manifest for an example ReferenceGrant.
+
+```yaml
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: ReferenceGrant
+metadata:
+ name: allow-prod-pvc
+ namespace: prod
+spec:
+ from:
+ - group: ""
+ kind: PersistentVolumeClaim
+ namespace: dev
+ to:
+ - group: snapshot.storage.k8s.io
+ kind: VolumeSnapshot
+ name: new-snapshot-demo
+```
+
+### Create a PersistentVolumeClaim by using cross namespace data source
+
+Kubernetes creates a PersistentVolumeClaim in the dev namespace, and the CSI driver populates
+the PersistentVolume used in dev from the snapshot in prod.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: example-pvc
+ namespace: dev
+spec:
+ storageClassName: example
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ dataSourceRef:
+ apiGroup: snapshot.storage.k8s.io
+ kind: VolumeSnapshot
+ name: new-snapshot-demo
+ namespace: prod
+ volumeMode: Filesystem
+```
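+
+As a quick check, you could apply the manifest and watch the claim (assuming you saved it as
+`example-pvc.yaml`; the filename is arbitrary):
+
+```console
+$ kubectl apply -f example-pvc.yaml
+$ kubectl get pvc example-pvc --namespace dev
+```
+
+Once the csi-provisioner has verified the ReferenceGrant and provisioned the volume, the claim
+should report a `Bound` status.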
+
+## How can I learn more?
+
+The enhancement proposal,
+[Provision volumes from cross-namespace snapshots](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/3294-provision-volumes-from-cross-namespace-snapshots), includes lots of detail about the history and technical implementation of this feature.
+
+Please get involved by joining the [Kubernetes Storage Special Interest Group (SIG)](https://github.com/kubernetes/community/tree/master/sig-storage)
+to help us enhance this feature.
+There are a lot of good ideas already and we'd be thrilled to have more!
+
+## Acknowledgments
+
+It takes a wonderful group to make wonderful software.
+Special thanks to the following people for the insightful reviews,
+thorough consideration and valuable contributions to the CrossNamespaceVolumeDataSource feature:
+
+* Michelle Au (msau42)
+* Xing Yang (xing-yang)
+* Masaki Kimura (mkimuram)
+* Tim Hockin (thockin)
+* Ben Swartzlander (bswartz)
+* Rob Scott (robscott)
+* John Griffith (j-griffith)
+* Michael Henriksen (mhenriks)
+* Mustafa Elbehery (Elbehery)
+
+It’s been a joy to work with y'all on this.
diff --git a/content/en/blog/_posts/2023-01-05-retroactive-default-storage-class.md b/content/en/blog/_posts/2023-01-05-retroactive-default-storage-class.md
new file mode 100644
index 0000000000000..1a6f7374b9a08
--- /dev/null
+++ b/content/en/blog/_posts/2023-01-05-retroactive-default-storage-class.md
@@ -0,0 +1,170 @@
+---
+layout: blog
+title: "Kubernetes 1.26: Retroactive Default StorageClass"
+date: 2023-01-05
+slug: retroactive-default-storage-class
+---
+
+**Author:** Roman Bednář (Red Hat)
+
+The v1.25 release of Kubernetes introduced an alpha feature to change how a default StorageClass was assigned to a PersistentVolumeClaim (PVC).
+With the feature enabled, you no longer need to create a default StorageClass first and PVC second to assign the class. Additionally, any PVCs without a StorageClass assigned can be updated later.
+This feature was graduated to beta in Kubernetes 1.26.
+
+You can read [retroactive default StorageClass assignment](/docs/concepts/storage/persistent-volumes/#retroactive-default-storageclass-assignment) in the Kubernetes documentation for more details about how to use it,
+or you can read on to learn about why the Kubernetes project is making this change.
+
+## Why did StorageClass assignment need improvements
+
+Users might already be familiar with a similar feature that assigns default StorageClasses to **new** PVCs at the time of creation. This is currently handled by the [admission controller](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass).
+
+But what if there wasn't a default StorageClass defined at the time of PVC creation?
+Users would end up with a PVC that would never be assigned a class.
+As a result, no storage would be provisioned, and the PVC would be somewhat "stuck" at this point.
+Generally, two main scenarios could result in "stuck" PVCs and cause problems later down the road.
+Let's take a closer look at each of them.
+
+### Changing default StorageClass
+
+There were two options admins had when they wanted to change the default StorageClass:
+
+1. Creating a new StorageClass as default before removing the old one associated with the PVC.
+This would result in having two defaults for a short period.
+At this point, if a user were to create a PersistentVolumeClaim with storageClassName set to null (implying default StorageClass), the newest default StorageClass would be chosen and assigned to this PVC.
+
+2. Removing the old default first and creating a new default StorageClass.
+This would result in having no default for a short time.
+Subsequently, if a user were to create a PersistentVolumeClaim with storageClassName set to null (implying default StorageClass), the PVC would be in Pending state forever.
+The user would have to fix this by deleting the PVC and recreating it once the default StorageClass was available.
+
+
+### Resource ordering during cluster installation
+
+If a cluster installation tool needed to create resources that required storage, for example, an image registry, it was difficult to get the ordering right.
+This is because any Pods that required storage would rely on the presence of a default StorageClass and would fail to be created if it wasn't defined.
+
+## What changed
+
+We've changed the PersistentVolume (PV) controller to assign a default StorageClass to any unbound PersistentVolumeClaim that has the storageClassName set to null.
+We've also modified the PersistentVolumeClaim admission within the API server to allow the change of values from an unset value to an actual StorageClass name.
+
+### Null `storageClassName` versus `storageClassName: ""` - does it matter? { #null-vs-empty-string }
+
+Before this feature was introduced, those values were equal in terms of behavior. Any PersistentVolumeClaim with the storageClassName set to null or "" would bind to an existing PersistentVolume resource with storageClassName also set to null or "".
+
+With this new feature enabled, we wanted to maintain this behavior but also be able to update the StorageClass name.
+With these constraints in mind, the feature changes the semantics of null. If a default StorageClass is present, null would translate to "Give me a default" and "" would mean "Give me a PersistentVolume that also has "" as its StorageClass name." In the absence of a default StorageClass, the behavior would remain unchanged.
+
+Summarizing the above, we've changed the semantics of null so that its behavior depends on the presence or absence of a definition of default StorageClass.
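+
+For example, with a default StorageClass present, the two claims sketched below behave
+differently (the names are illustrative): the first, with an explicitly empty class, only binds
+to a PersistentVolume whose StorageClass is also empty or unset, while the second, which leaves
+the field unset, has the default StorageClass assigned retroactively:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: pvc-empty-class
+spec:
+  storageClassName: ""   # explicitly empty: never receives a default class
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1Gi
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: pvc-default-class
+spec:
+  # storageClassName omitted (null): the default StorageClass is assigned retroactively
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1Gi
+```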
+
+The table below shows all these cases to better describe when a PVC binds and when its StorageClass gets updated.
+
+
+**PVC binding behavior with Retroactive default StorageClass**
+
+| Default StorageClass   | PV                            | PVC `storageClassName` = `""` | PVC `storageClassName` = null |
+|------------------------|-------------------------------|-------------------------------|-------------------------------|
+| Without default class  | PV `storageClassName` = `""`  | binds                         | binds                         |
+| Without default class  | PV without `storageClassName` | binds                         | binds                         |
+| With default class     | PV `storageClassName` = `""`  | binds                         | class updates                 |
+| With default class     | PV without `storageClassName` | binds                         | class updates                 |
+
+## How to use it
+
+If the feature is not already enabled in your cluster, you need to enable the relevant feature gate in the kube-controller-manager and the kube-apiserver. Use the `--feature-gates` command line argument:
+
+```
+--feature-gates="...,RetroactiveDefaultStorageClass=true"
+```
+
+### Test drive
+
+If you would like to see the feature in action and verify that it works correctly in your cluster, here's what you can try:
+
+1. Define a basic PersistentVolumeClaim:
+
+ ```yaml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: pvc-1
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ ```
+
+2. Create the PersistentVolumeClaim when there is no default StorageClass. The PVC won't provision or bind (unless there is an existing, suitable PV already present) and will remain in Pending state.
+
+ ```
+   $ kubectl get pvc
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ pvc-1 Pending
+ ```
+
+3. Configure one StorageClass as default.
+
+ ```
+   $ kubectl patch storageclass my-storageclass -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
+ storageclass.storage.k8s.io/my-storageclass patched
+ ```
+
+4. Verify that the PersistentVolumeClaim is now provisioned correctly and was retroactively updated with the new default StorageClass.
+
+ ```
+   $ kubectl get pvc
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ pvc-1 Bound pvc-06a964ca-f997-4780-8627-b5c3bf5a87d8 1Gi RWO my-storageclass 87m
+ ```
+
+### New metrics
+
+To help you see that the feature is working as expected, we also introduced a new `retroactive_storageclass_total` metric to show how many times the PV controller attempted to update a PersistentVolumeClaim, and `retroactive_storageclass_errors_total` to show how many of those attempts failed.
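+
+If you want to inspect these metrics directly, they are exposed by the kube-controller-manager;
+as a rough sketch, assuming you can reach its secure metrics endpoint (port 10257 by default)
+with a token that is authorized for `/metrics`:
+
+```console
+$ curl -sk -H "Authorization: Bearer ${TOKEN}" https://localhost:10257/metrics | grep retroactive_storageclass
+```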
+
+## Getting involved
+
+We always welcome new contributors, so if you would like to get involved you can join our [Kubernetes Storage Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG).
+
+If you would like to share feedback, you can do so on our [public Slack channel](https://app.slack.com/client/T09NY5SBT/C09QZFCE5).
+
+Special thanks to all the contributors that provided great reviews, shared valuable insight and helped implement this feature (alphabetical order):
+
+- Deep Debroy ([ddebroy](https://github.com/ddebroy))
+- Divya Mohan ([divya-mohan0209](https://github.com/divya-mohan0209))
+- Jan Šafránek ([jsafrane](https://github.com/jsafrane/))
+- Joe Betz ([jpbetz](https://github.com/jpbetz))
+- Jordan Liggitt ([liggitt](https://github.com/liggitt))
+- Michelle Au ([msau42](https://github.com/msau42))
+- Seokho Son ([seokho-son](https://github.com/seokho-son))
+- Shannon Kularathna ([shannonxtreme](https://github.com/shannonxtreme))
+- Tim Bannister ([sftim](https://github.com/sftim))
+- Tim Hockin ([thockin](https://github.com/thockin))
+- Wojciech Tyczynski ([wojtek-t](https://github.com/wojtek-t))
+- Xing Yang ([xing-yang](https://github.com/xing-yang))
diff --git a/content/en/blog/_posts/2023-01-06-unhealthy-pod-eviction-policy-for-pdb.md b/content/en/blog/_posts/2023-01-06-unhealthy-pod-eviction-policy-for-pdb.md
new file mode 100644
index 0000000000000..09c03926b7031
--- /dev/null
+++ b/content/en/blog/_posts/2023-01-06-unhealthy-pod-eviction-policy-for-pdb.md
@@ -0,0 +1,107 @@
+---
+layout: blog
+title: "Kubernetes 1.26: Eviction policy for unhealthy pods guarded by PodDisruptionBudgets"
+date: 2023-01-06
+slug: "unhealthy-pod-eviction-policy-for-pdbs"
+---
+
+**Authors:** Filip Křepinský (Red Hat), Morten Torkildsen (Google), Ravi Gudimetla (Apple)
+
+
+Ensuring that disruptions to your applications do not affect their availability isn't a simple
+task. Last month's release of Kubernetes v1.26 lets you specify an _unhealthy pod eviction policy_
+for [PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets) (PDBs)
+to help you maintain that availability during node management operations.
+In this article, we will dive deeper into what modifications were introduced for PDBs to
+give application owners greater flexibility in managing disruptions.
+
+## What problems does this solve?
+
+API-initiated eviction of pods respects PodDisruptionBudgets (PDBs). This means that a requested [voluntary disruption](https://kubernetes.io/docs/concepts/scheduling-eviction/#pod-disruption)
+via an eviction of a Pod should not disrupt a guarded application, and `.status.currentHealthy` of a PDB should not fall
+below `.status.desiredHealthy`. Running pods that are [Unhealthy](/docs/tasks/run-application/configure-pdb/#healthiness-of-a-pod)
+do not count towards the PDB status, but evicting them is only possible if the application
+is not disrupted. This helps disrupted or not-yet-started applications to achieve availability
+as soon as possible without the additional downtime that evictions would cause.
+
+Unfortunately, this poses a problem for cluster administrators who would like to drain nodes
+without any manual intervention. Misbehaving applications with pods in `CrashLoopBackOff`
+state (due to a bug or misconfiguration), or pods that are simply failing to become ready,
+make this task much harder. When all pods of an application are unhealthy, any eviction
+request will fail because it would violate the PDB, and draining the node cannot make any
+progress in that case.
+
+On the other hand there are users that depend on the existing behavior, in order to:
+- prevent data-loss that would be caused by deleting pods that are guarding an underlying resource or storage
+- achieve the best availability possible for their application
+
+Kubernetes 1.26 introduced a new experimental field to the PodDisruptionBudget API: `.spec.unhealthyPodEvictionPolicy`.
+When enabled, this field lets you support both of those requirements.
+
+## How does it work?
+
+API-initiated eviction is the process that triggers graceful pod termination.
+The process can be initiated either by calling the API directly,
+by using a `kubectl drain` command, or other actors in the cluster.
+During this process every pod removal is consulted with appropriate PDBs,
+to ensure that a sufficient number of pods is always running in the cluster.
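+
+For example, draining a node issues an API-initiated eviction for each Pod running on it
+(the node name below is a placeholder):
+
+```console
+$ kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data
+```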
+
+The following policies allow PDB authors to have greater control over how the process deals with unhealthy pods.
+
+There are two policies to choose from: `IfHealthyBudget` and `AlwaysAllow`.
+
+The former, `IfHealthyBudget`, follows the existing behavior to achieve the best availability
+that you get by default. Unhealthy pods can be disrupted only if their application
+has a minimum available `.status.desiredHealthy` number of pods.
+
+By setting the `spec.unhealthyPodEvictionPolicy` field of your PDB to `AlwaysAllow`,
+you are choosing the best effort availability for your application.
+With this policy it is always possible to evict unhealthy pods.
+This will make it easier to maintain and upgrade your clusters.
+
+We think that `AlwaysAllow` will often be a better choice, but for some critical workloads you may
+still prefer to protect even unhealthy Pods from node drains or other forms of API-initiated
+eviction.
+
+## How do I use it?
+
+This is an alpha feature, which means you have to enable the `PDBUnhealthyPodEvictionPolicy`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+for the kube-apiserver, by passing the command line argument
+`--feature-gates=PDBUnhealthyPodEvictionPolicy=true`.
+
+Here's an example. Assume that you've enabled the feature gate in your cluster, and that you
+already defined a Deployment that runs a plain webserver. You labelled the Pods for that
+Deployment with `app: nginx`.
+You want to limit avoidable disruption, and you know that best effort availability is
+sufficient for this app.
+You decide to allow evictions even if those webserver pods are unhealthy.
+You create a PDB to guard this application, with the `AlwaysAllow` policy for evicting
+unhealthy pods:
+
+```yaml
+apiVersion: policy/v1
+kind: PodDisruptionBudget
+metadata:
+ name: nginx-pdb
+spec:
+ selector:
+ matchLabels:
+ app: nginx
+ maxUnavailable: 1
+ unhealthyPodEvictionPolicy: AlwaysAllow
+```
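+
+After creating that PDB (for example with `kubectl apply -f nginx-pdb.yaml`, assuming you saved
+the manifest under that name), an eviction request or a `kubectl drain` of the node can remove
+the nginx Pods even while they are unhealthy, so the drain no longer gets stuck on them.
+You can confirm the policy is set with:
+
+```console
+$ kubectl get poddisruptionbudget nginx-pdb -o yaml
+```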
+
+
+## How can I learn more?
+
+
+- Read the KEP: [Unhealthy Pod Eviction Policy for PDBs](https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/3017-pod-healthy-policy-for-pdb)
+- Read the documentation: [Unhealthy Pod Eviction Policy](/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy) for PodDisruptionBudgets
+- Review the Kubernetes documentation for [PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets), [draining of Nodes](/docs/tasks/administer-cluster/safely-drain-node/) and [evictions](/docs/concepts/scheduling-eviction/api-eviction/)
+
+
+## How do I get involved?
+
+If you have any feedback, please reach out to us in the [#sig-apps](https://kubernetes.slack.com/archives/C18NZM5K9) channel on Slack (visit https://slack.k8s.io/ for an invitation if you need one), or on the SIG Apps mailing list: kubernetes-sig-apps@googlegroups.com
+
diff --git a/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/decision-tree.svg b/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/decision-tree.svg
new file mode 100644
index 0000000000000..c9e57f34b6c5f
--- /dev/null
+++ b/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/decision-tree.svg
@@ -0,0 +1,3 @@
+
+
+
\ No newline at end of file
diff --git a/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/index.md b/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/index.md
new file mode 100644
index 0000000000000..7e1ec725c607f
--- /dev/null
+++ b/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/index.md
@@ -0,0 +1,329 @@
+---
+layout: blog
+title: "Protect Your Mission-Critical Pods From Eviction With PriorityClass"
+date: 2023-01-12
+slug: protect-mission-critical-pods-priorityclass
+description: "Pod priority and preemption help to make sure that mission-critical pods are up in the event of a resource crunch by deciding order of scheduling and eviction."
+---
+
+
+**Author:** Sunny Bhambhani (InfraCloud Technologies)
+
+Kubernetes has been widely adopted, and many organizations use it as their de-facto orchestration engine for running workloads that need to be created and deleted frequently.
+
+Therefore, proper scheduling of the pods is key to ensuring that application pods are up and running within the Kubernetes cluster without any issues. This article delves into the use cases around resource management by leveraging the [PriorityClass](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) object to protect mission-critical or high-priority pods from getting evicted and making sure that the application pods are up, running, and serving traffic.
+
+## Resource management in Kubernetes
+
+The control plane consists of multiple components; among them, the scheduler (usually the built-in [kube-scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/)) is the component responsible for assigning a node to a pod.
+
+Whenever a pod is created, it enters a "pending" state, after which the scheduler determines which node is best suited for the placement of the new pod.
+
+In the background, the scheduler runs as an infinite loop looking for pods without a `nodeName` set that are [ready for scheduling](/docs/concepts/scheduling-eviction/pod-scheduling-readiness/). For each Pod that needs scheduling, the scheduler tries to decide which node should run that Pod.
+
+If the scheduler cannot find any node, the pod remains in the pending state, which is not ideal.
+
+{{< note >}}
+To name a few, `nodeSelector`, `taints and tolerations`, `nodeAffinity`, the ranking of nodes based on available resources (for example, CPU and memory), and several other criteria are used to determine the pod's placement.
+{{< /note >}}
+
+The diagram below, following points 1 through 4, explains the request flow:
+
+{{< figure src=kube-scheduler.svg alt="A diagram showing the scheduling of three Pods that a client has directly created." title="Scheduling in Kubernetes">}}
+
+## Typical use cases
+
+Below are some real-life scenarios where control over the scheduling and eviction of pods may be required.
+
+1. Let's say the pod you plan to deploy is critical, and you have some resource constraints. An example would be the DaemonSet of an infrastructure component like Grafana Loki. The Loki pods must run on every node before other pods can. In such cases, you could ensure resource availability by manually identifying and deleting the pods that are not required or by adding a new node to the cluster. Both these approaches are unsuitable since the former would be tedious to execute, and the latter could involve an expenditure of time and money.
+
+
+2. Another use case could be a single cluster that holds the pods for the below environments with associated priorities:
+ - Production (`prod`): top priority
+ - Preproduction (`preprod`): intermediate priority
+ - Development (`dev`): least priority
+
+ In the event of high resource consumption in the cluster, there is competition for CPU and memory resources on the nodes. While cluster-level autoscaling _may_ add more nodes, it takes time. In the interim, if there are no further nodes to scale the cluster, some Pods could remain in a Pending state, or the service could be degraded as they compete for resources. If the kubelet does evict a Pod from the node, that eviction would be random because the kubelet doesn’t have any special information about which Pods to evict and which to keep.
+
+3. A third example could be a microservice backed by a queuing application or a database running into a resource crunch and the queue or database getting evicted. In such a case, all the other services would be rendered useless until the database can serve traffic again.
+
+There can also be other scenarios where you want to control the order of scheduling or order of eviction of pods.
+
+## PriorityClasses in Kubernetes
+
+PriorityClass is a cluster-wide API object in Kubernetes and part of the `scheduling.k8s.io/v1` API group. It contains a mapping of the PriorityClass name (defined in `.metadata.name`) and an integer value (defined in `.value`). This represents the value that the scheduler uses to determine a Pod's relative priority.
+
+Additionally, when you create a cluster using kubeadm or a managed Kubernetes service (for example, Azure Kubernetes Service), Kubernetes uses PriorityClasses to safeguard the pods that are hosted on the control plane nodes. This ensures that critical cluster components such as CoreDNS and kube-proxy can run even if resources are constrained.
+
+This availability of pods is achieved through the use of a special PriorityClass that ensures the pods are up and running and that the overall cluster is not affected.
+
+```console
+$ kubectl get priorityclass
+NAME VALUE GLOBAL-DEFAULT AGE
+system-cluster-critical 2000000000 false 82m
+system-node-critical 2000001000 false 82m
+```
+
+The diagram below shows exactly how it works with the help of an example, which will be detailed in the upcoming section.
+
+{{< figure src="decision-tree.svg" alt="A flow chart that illustrates how the kube-scheduler prioritizes new Pods and potentially preempts existing Pods" title="Pod scheduling and preemption">}}
+
+### Pod priority and preemption
+
+[Pod preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption) is a Kubernetes feature that allows the cluster to preempt pods (removing an existing Pod in favor of a new Pod) on the basis of priority. [Pod priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority) indicates the importance of a pod relative to other pods while scheduling. If there aren't enough resources to run all the current pods, the scheduler tries to evict lower-priority pods over high-priority ones.
+
+Also, when a healthy cluster experiences a node failure, typically, lower-priority pods get preempted to create room for higher-priority pods on the available node. This happens even if the cluster can bring up a new node automatically since pod creation is usually much faster than bringing up a new node.
+
+### PriorityClass requirements
+
+Before you set up PriorityClasses, there are a few things to consider.
+
+1. Decide which PriorityClasses are needed. For instance, based on environment, type of pods, type of applications, etc.
+2. Decide whether you need a default PriorityClass resource for your cluster. Pods without a `priorityClassName` are treated as having priority 0.
+3. Use a consistent naming convention for all PriorityClasses.
+4. Make sure that the pods for your workloads are running with the right PriorityClass.
+
+## PriorityClass hands-on example
+
+Let’s say there are 3 application pods: one for prod, one for preprod, and one for development. Below are three sample YAML manifest files for each of those.
+
+```yaml
+---
+# development
+apiVersion: v1
+kind: Pod
+metadata:
+ name: dev-nginx
+ labels:
+ env: dev
+spec:
+ containers:
+ - name: dev-nginx
+ image: nginx
+ resources:
+ requests:
+ memory: "256Mi"
+ cpu: "0.2"
+ limits:
+ memory: ".5Gi"
+ cpu: "0.5"
+```
+
+```yaml
+---
+# preproduction
+apiVersion: v1
+kind: Pod
+metadata:
+ name: preprod-nginx
+ labels:
+ env: preprod
+spec:
+ containers:
+ - name: preprod-nginx
+ image: nginx
+ resources:
+ requests:
+ memory: "1.5Gi"
+ cpu: "1.5"
+ limits:
+ memory: "2Gi"
+ cpu: "2"
+```
+
+```yaml
+---
+# production
+apiVersion: v1
+kind: Pod
+metadata:
+ name: prod-nginx
+ labels:
+ env: prod
+spec:
+ containers:
+ - name: prod-nginx
+ image: nginx
+ resources:
+ requests:
+ memory: "2Gi"
+ cpu: "2"
+ limits:
+ memory: "2Gi"
+ cpu: "2"
+```
+
+You can create these pods with the `kubectl create -f <FILE>` command, and then check their status
+using the `kubectl get pods` command, to see whether they are up and ready to serve traffic:
+
+```console
+$ kubectl get pods --show-labels
+NAME READY STATUS RESTARTS AGE LABELS
+dev-nginx 1/1 Running 0 55s env=dev
+preprod-nginx 1/1 Running 0 55s env=preprod
+prod-nginx 0/1 Pending 0 55s env=prod
+```
+
+Bad news. The pod for the Production environment is still Pending and isn't serving any traffic.
+
+Let's see why this is happening:
+```console
+$ kubectl get events
+...
+...
+5s Warning FailedScheduling pod/prod-nginx 0/2 nodes are available: 1 Insufficient cpu, 2 Insufficient memory.
+```
+
+In this example, there is only one worker node, and that node has a resource crunch.
+
+Now, let's look at how PriorityClass can help in this situation since prod should be given higher priority than the other environments.
+
+## PriorityClass API
+
+Before creating PriorityClasses based on these requirements, let's see what a basic manifest for a PriorityClass looks like and outline some prerequisites:
+
+```yaml
+apiVersion: scheduling.k8s.io/v1
+kind: PriorityClass
+metadata:
+ name: PRIORITYCLASS_NAME
+value: 0 # any integer value between -1000000000 and 1000000000
+description: >-
+ (Optional) description goes here!
+globalDefault: false # or true. Only one PriorityClass can be the global default.
+```
+
+Below are some prerequisites for PriorityClasses:
+
+- The name of a PriorityClass must be a valid DNS subdomain name.
+- When you make your own PriorityClass, the name should not start with `system-`, as those names are
+ reserved by Kubernetes itself (for example, they are used for two built-in PriorityClasses).
+- Its value should be between -1000000000 and 1000000000 (1 billion).
+- Larger numbers are reserved for built-in PriorityClasses such as `system-cluster-critical`
+ (this Pod is critically important to the cluster) and `system-node-critical` (the node
+ critically relies on this Pod).
+ `system-node-critical` is a higher priority than `system-cluster-critical`, because a
+ cluster-critical Pod can only work well if the node where it is running has all its node-level
+ critical requirements met.
+- There are two optional fields:
+ - `globalDefault`: When true, this PriorityClass is used for pods where a `priorityClassName` is not specified.
+ Only one PriorityClass with `globalDefault` set to true can exist in a cluster.
+ If there is no PriorityClass defined with globalDefault set to true, all the pods with no priorityClassName defined will be treated with 0 priority (i.e. the least priority).
+ - `description`: A string with a meaningful value so that people know when to use this PriorityClass.
+
+{{< note >}}
+Adding a PriorityClass with `globalDefault` set to `true` does not mean it will apply the same to the existing pods that are already running. This will be applicable only to the pods that came into existence after the PriorityClass was created.
+{{< /note >}}
+
+### PriorityClass in action
+
+Next, create some environment-specific PriorityClasses for this example:
+
+```yaml
+apiVersion: scheduling.k8s.io/v1
+kind: PriorityClass
+metadata:
+ name: dev-pc
+value: 1000000
+globalDefault: false
+description: >-
+ (Optional) This priority class should only be used for all development pods.
+```
+
+```yaml
+apiVersion: scheduling.k8s.io/v1
+kind: PriorityClass
+metadata:
+ name: preprod-pc
+value: 2000000
+globalDefault: false
+description: >-
+ (Optional) This priority class should only be used for all preprod pods.
+```
+
+```yaml
+apiVersion: scheduling.k8s.io/v1
+kind: PriorityClass
+metadata:
+ name: prod-pc
+value: 4000000
+globalDefault: false
+description: >-
+ (Optional) This priority class should only be used for all prod pods.
+```
+
+Use the `kubectl create -f <FILE>` command to create each PriorityClass, and `kubectl get pc` to check their status.
+
+```console
+$ kubectl get pc
+NAME VALUE GLOBAL-DEFAULT AGE
+dev-pc 1000000 false 3m13s
+preprod-pc 2000000 false 2m3s
+prod-pc 4000000 false 7s
+system-cluster-critical 2000000000 false 82m
+system-node-critical 2000001000 false 82m
+```
+
+The new PriorityClasses are in place now. A small change is needed in the pod manifest or pod template (in a ReplicaSet or Deployment). In other words, you need to specify the priority class name at `.spec.priorityClassName` (which is a string value).
+
+First update the previous production pod manifest file to have a PriorityClass assigned, then delete the Production pod and recreate it. You can't edit the priority class for a Pod that already exists.
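+
+For example, the updated production Pod manifest, with the `prod-pc` PriorityClass assigned,
+would look like this:
+
+```yaml
+---
+# production
+apiVersion: v1
+kind: Pod
+metadata:
+  name: prod-nginx
+  labels:
+    env: prod
+spec:
+  priorityClassName: prod-pc
+  containers:
+  - name: prod-nginx
+    image: nginx
+    resources:
+      requests:
+        memory: "2Gi"
+        cpu: "2"
+      limits:
+        memory: "2Gi"
+        cpu: "2"
+```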
+
+In my cluster, when I tried this, here's what happened.
+First, that change seems successful; the status of pods has been updated:
+
+```console
+$ kubectl get pods --show-labels
+NAME READY STATUS RESTARTS AGE LABELS
+dev-nginx 1/1 Terminating 0 55s env=dev
+preprod-nginx 1/1 Running 0 55s env=preprod
+prod-nginx 0/1 Pending 0 55s env=prod
+```
+
+The dev-nginx pod is getting terminated. Once that is successfully terminated and there are enough resources for the prod pod, the control plane can schedule the prod pod:
+
+```console
+Warning FailedScheduling pod/prod-nginx 0/2 nodes are available: 1 Insufficient cpu, 2 Insufficient memory.
+Normal Preempted pod/dev-nginx by default/prod-nginx on node node01
+Normal Killing pod/dev-nginx Stopping container dev-nginx
+Normal Scheduled pod/prod-nginx Successfully assigned default/prod-nginx to node01
+Normal Pulling pod/prod-nginx Pulling image "nginx"
+Normal Pulled pod/prod-nginx Successfully pulled image "nginx"
+Normal Created pod/prod-nginx Created container prod-nginx
+Normal Started pod/prod-nginx Started container prod-nginx
+```
+
+## Enforcement
+
+When you set up PriorityClasses, they exist just how you defined them. However, people
+(and tools) that make changes to your cluster are free to set any PriorityClass, or to not
+set any PriorityClass at all.
+However, you can use other Kubernetes features to make sure that the priorities you wanted
+are actually applied.
+
+As an alpha feature, you can define a [ValidatingAdmissionPolicy](/blog/2022/12/20/validating-admission-policies-alpha/) and a ValidatingAdmissionPolicyBinding so that, for example,
+Pods that go into the `prod` namespace must use the `prod-pc` PriorityClass.
+With another ValidatingAdmissionPolicyBinding you ensure that the `preprod` namespace
+uses the `preprod-pc` PriorityClass, and so on.
+In *any* cluster, you can enforce similar controls using external projects such as
+[Kyverno](https://kyverno.io/) or [Gatekeeper](https://open-policy-agent.github.io/gatekeeper/),
+through validating admission webhooks.
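+
+As a rough sketch of the ValidatingAdmissionPolicy approach (assuming the alpha
+`admissionregistration.k8s.io/v1alpha1` API described in the linked post, and reusing the `prod`
+namespace and `prod-pc` PriorityClass from this example; the object names are made up):
+
+```yaml
+apiVersion: admissionregistration.k8s.io/v1alpha1
+kind: ValidatingAdmissionPolicy
+metadata:
+  name: require-prod-priorityclass   # illustrative name
+spec:
+  failurePolicy: Fail
+  matchConstraints:
+    resourceRules:
+    - apiGroups:   [""]
+      apiVersions: ["v1"]
+      operations:  ["CREATE", "UPDATE"]
+      resources:   ["pods"]
+  validations:
+  - expression: "object.spec.priorityClassName == 'prod-pc'"
+    message: "Pods in the prod namespace must use the prod-pc PriorityClass"
+---
+apiVersion: admissionregistration.k8s.io/v1alpha1
+kind: ValidatingAdmissionPolicyBinding
+metadata:
+  name: require-prod-priorityclass   # illustrative name
+spec:
+  policyName: require-prod-priorityclass
+  matchResources:
+    namespaceSelector:
+      matchLabels:
+        kubernetes.io/metadata.name: prod
+```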
+
+However you do it, Kubernetes gives you options to make sure that the PriorityClasses are
+used how you wanted them to be, or perhaps just to
+[warn](https://open-policy-agent.github.io/gatekeeper/website/docs/violations/#warn-enforcement-action)
+users when they pick an unsuitable option.
+
+## Summary
+
+The above example and its events show you what this feature of Kubernetes brings to the table, along with several scenarios where you can use this feature. To reiterate, this helps ensure that mission-critical pods are up and available to serve the traffic and, in the case of a resource crunch, determines cluster behavior.
+
+It gives you some power to decide the order of scheduling and order of [preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption) for Pods. Therefore, you need to define the PriorityClasses sensibly.
+For example, if you have a cluster autoscaler to add nodes on demand,
+make sure to run it with the `system-cluster-critical` PriorityClass. You don't want to
+get in a situation where the autoscaler has been preempted and there are no new nodes
+coming online.
+
+If you have any queries or feedback, feel free to reach out to me on [LinkedIn](http://www.linkedin.com/in/sunnybhambhani).
+
+
+
diff --git a/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/kube-scheduler.svg b/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/kube-scheduler.svg
new file mode 100644
index 0000000000000..53f5c1fb7b7a3
--- /dev/null
+++ b/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/kube-scheduler.svg
@@ -0,0 +1,4 @@
+
+
+
+
\ No newline at end of file
diff --git a/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/Example.png b/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/Example.png
new file mode 100644
index 0000000000000..175c21a889626
Binary files /dev/null and b/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/Example.png differ
diff --git a/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/Microservices.png b/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/Microservices.png
new file mode 100644
index 0000000000000..da0f60a5054a4
Binary files /dev/null and b/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/Microservices.png differ
diff --git a/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/index.md b/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/index.md
new file mode 100644
index 0000000000000..8ecad96975dd3
--- /dev/null
+++ b/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/index.md
@@ -0,0 +1,84 @@
+---
+layout: blog
+title: Consider All Microservices Vulnerable — And Monitor Their Behavior
+date: 2023-01-20
+slug: security-behavior-analysis
+---
+
+**Author:**
+David Hadas (IBM Research Labs)
+
+_This post warns DevOps against a false sense of security. Following security best practices when developing and configuring microservices does not result in non-vulnerable microservices. The post shows that although all deployed microservices are vulnerable, there is much that can be done to ensure microservices are not exploited. It explains how analyzing the behavior of clients and services from a security standpoint, named here **"Security-Behavior Analysis"**, can protect the deployed vulnerable microservices. It points to [Guard](http://knative.dev/security-guard), an open source project offering security-behavior monitoring and control of Kubernetes microservices presumed vulnerable._
+
+As cyber attacks continue to intensify in sophistication, organizations deploying cloud services continue to grow their cyber investments aiming to produce safe and non-vulnerable services. However, the year-by-year growth in cyber investments does not result in a parallel reduction in cyber incidents. Instead, the number of cyber incidents continues to grow annually. Evidently, organizations are doomed to fail in this struggle - no matter how much effort is made to detect and remove cyber weaknesses from deployed services, it seems offenders always have the upper hand.
+
+Considering the current spread of offensive tools, sophistication of offensive players, and ever-growing cyber financial gains to offenders, any cyber strategy that relies on constructing a non-vulnerable, weakness-free service in 2023 is clearly too naïve. It seems the only viable strategy is to:
+
+➥ **Admit that your services are vulnerable!**
+
+In other words, consciously accept that you will never create completely invulnerable services. If your opponents find even a single weakness as an entry-point, you lose! Admitting that in spite of your best efforts, all your services are still vulnerable is an important first step. Next, this post discusses what you can do about it...
+
+## How to protect microservices from being exploited
+
+Being vulnerable does not necessarily mean that your service will be exploited. Though your services are vulnerable in some ways unknown to you, offenders still need to identify these vulnerabilities and then exploit them. If offenders fail to exploit your service vulnerabilities, you win! In other words, having a vulnerability that can’t be exploited, represents a risk that can’t be realized.
+
+{{< figure src="Example.png" alt="Image of an example of offender gaining foothold in a service" class="diagram-large" caption="Figure 1. An Offender gaining foothold in a vulnerable service" >}}
+
+The above diagram shows an example in which the offender does not yet have a foothold in the service; that is, it is assumed that your service does not run code controlled by the offender on day 1. In our example the service has vulnerabilities in the API exposed to clients. To gain an initial foothold the offender uses a malicious client to try and exploit one of the service API vulnerabilities. The malicious client sends an exploit that triggers some unplanned behavior of the service.
+
+More specifically, let’s assume the service is vulnerable to an SQL injection. The developer failed to sanitize the user input properly, thereby allowing clients to send values that would change the intended behavior. In our example, if a client sends a query string with key “username” and value of _“tom or 1=1”_, the client will receive the data of all users. Exploiting this vulnerability requires the client to send an irregular string as the value. Note that benign users will not be sending a string with spaces or with the equal sign character as a username, instead they will normally send legal usernames which for example may be defined as a short sequence of characters a-z. No legal username can trigger service unplanned behavior.
+
+In this simple example, one can already identify several opportunities to detect and block an attempt to exploit the vulnerability (un)intentionally left behind by the developer, making the vulnerability unexploitable. First, the malicious client behavior differs from the behavior of benign clients, as it sends irregular requests. If such a change in behavior is detected and blocked, the exploit will never reach the service. Second, the service behavior in response to the exploit differs from the service behavior in response to a regular request. Such behavior may include making subsequent irregular calls to other services such as a data store, taking irregular time to respond, and/or responding to the malicious client with an irregular response (for example, containing much more data than normally sent in case of benign clients making regular requests). Service behavioral changes, if detected, will also allow blocking the exploit in different stages of the exploitation attempt.
+
+More generally:
+
+- Monitoring the behavior of clients can help detect and block exploits against service API vulnerabilities. In fact, deploying efficient client behavior monitoring makes many vulnerabilities unexploitable and others very hard to achieve. To succeed, the offender needs to create an exploit undetectable from regular requests.
+
+- Monitoring the behavior of services can help detect services as they are being exploited regardless of the attack vector used. Efficient service behavior monitoring limits what an attacker may be able to achieve as the offender needs to ensure the service behavior is undetectable from regular service behavior.
+
+Combining both approaches may add a protection layer to the deployed vulnerable services, drastically decreasing the probability for anyone to successfully exploit any of the deployed vulnerable services. Next, let us identify four use cases where you need to use security-behavior monitoring.
+
+## Use cases
+
+One can identify the following four different stages in the life of any service from a security standpoint. In each stage, security-behavior monitoring is required to meet different challenges:
+
+Service State | Use case | What do you need in order to cope with this use case?
+------------- | ------------- | -----------------------------------------
+**Normal** | **No known vulnerabilities:** The service owner is normally not aware of any known vulnerabilities in the service image or configuration. Yet, it is reasonable to assume that the service has weaknesses. | **Provide generic protection against any unknown, zero-day, service vulnerabilities** - Detect/block irregular patterns sent as part of incoming client requests that may be used as exploits.
+**Vulnerable** | **An applicable CVE is published:** The service owner is required to release a new non-vulnerable revision of the service. Research shows that in practice this process of removing a known vulnerability may take many weeks to accomplish (2 months on average). | **Add protection based on the CVE analysis** - Detect/block incoming requests that include specific patterns that may be used to exploit the discovered vulnerability. Continue to offer services, although the service has a known vulnerability.
+**Exploitable** | **A known exploit is published:** The service owner needs a way to filter incoming requests that contain the known exploit. | **Add protection based on a known exploit signature** - Detect/block incoming client requests that carry signatures identifying the exploit. Continue to offer services, despite the presence of a known exploit.
+**Misused** | **An offender misuses pods backing the service:** The offender can follow an attack pattern enabling him/her to misuse pods. The service owner needs to restart any compromised pods while using non compromised pods to continue offering the service. Note that once a pod is restarted, the offender needs to repeat the attack pattern before he/she may again misuse it. | **Identify and restart instances of the component that is being misused** - At any given time, some backing pods may be compromised and misused, while others behave as designed. Detect/remove the misused pods while allowing other pods to continue servicing client requests.
+
+Fortunately, microservice architecture is well suited to security-behavior monitoring as discussed next.
+
+## Security-Behavior of microservices versus monoliths {#microservices-vs-monoliths}
+
+Kubernetes is often used to support workloads designed with microservice architecture. By design, microservices aim to follow the UNIX philosophy of "Do One Thing And Do It Well". Each microservice has a bounded context and a clear interface. In other words, you can expect the microservice clients to send relatively regular requests and the microservice to present a relatively regular behavior as a response to these requests. Consequently, a microservice architecture is an excellent candidate for security-behavior monitoring.
+
+{{< figure src="Microservices.png" alt="Image showing why microservices are well suited for security-behavior monitoring" class="diagram-large" caption="Figure 2. Microservices are well suited for security-behavior monitoring" >}}
+
+The diagram above clarifies how dividing a monolithic service into a set of microservices improves our ability to perform security-behavior monitoring and control. In a monolithic service approach, different client requests are intertwined, resulting in a diminished ability to identify irregular client behaviors. Without prior knowledge, an observer of the intertwined client requests will find it hard to distinguish between types of requests and their related characteristics. Further, internal client requests are not exposed to the observer. Lastly, the aggregated behavior of the monolithic service is a compound of the many different internal behaviors of its components, making it hard to identify irregular service behavior.
+
+In a microservice environment, each microservice is expected by design to offer a more well-defined service and to serve a better-defined type of request. This makes it easier for an observer to identify irregular client behavior and irregular service behavior. Further, a microservice design exposes the internal requests and internal services which offer more security-behavior data to identify irregularities by an observer. Overall, this makes the microservice design pattern better suited for security-behavior monitoring and control.
+
+## Security-Behavior monitoring on Kubernetes
+
+Kubernetes deployments seeking to add Security-Behavior may use [Guard](http://knative.dev/security-guard), developed under the CNCF project Knative. Guard is integrated into the full Knative automation suite that runs on top of Kubernetes. Alternatively, **you can deploy Guard as a standalone tool** to protect any HTTP-based workload on Kubernetes.
+
+See:
+
+- [Guard](https://github.com/knative-sandbox/security-guard) on Github, for using Guard as a standalone tool.
+- The Knative automation suite - Read about Knative, in the blog post [Opinionated Kubernetes](https://davidhadas.wordpress.com/2022/08/29/knative-an-opinionated-kubernetes) which describes how Knative simplifies and unifies the way web services are deployed on Kubernetes.
+- You may contact Guard maintainers on the [SIG Security](https://kubernetes.slack.com/archives/C019LFTGNQ3) Slack channel or on the Knative community [security](https://knative.slack.com/archives/CBYV1E0TG) Slack channel. The Knative community channel will move soon to the [CNCF Slack](https://communityinviter.com/apps/cloud-native/cncf) under the name `#knative-security`.
+
+The goal of this post is to invite the Kubernetes community to action and introduce Security-Behavior monitoring and control to help secure Kubernetes-based deployments. Hopefully, as a follow-up, the community will:
+
+1. Analyze the cyber challenges presented for different Kubernetes use cases
+1. Add appropriate security documentation for users on how to introduce Security-Behavior monitoring and control.
+1. Consider how to integrate with tools that can help users monitor and control their vulnerable services.
+
+## Getting involved
+
+You are welcome to get involved and join the effort to develop security behavior monitoring
+and control for Kubernetes; to share feedback and contribute to code or documentation;
+and to make or suggest improvements of any kind.
diff --git a/content/en/docs/concepts/architecture/garbage-collection.md b/content/en/docs/concepts/architecture/garbage-collection.md
index 70fd8423de086..a6e4290710563 100644
--- a/content/en/docs/concepts/architecture/garbage-collection.md
+++ b/content/en/docs/concepts/architecture/garbage-collection.md
@@ -144,7 +144,7 @@ which you can define:
* `MinAge`: the minimum age at which the kubelet can garbage collect a
container. Disable by setting to `0`.
- * `MaxPerPodContainer`: the maximum number of dead containers each Pod pair
+ * `MaxPerPodContainer`: the maximum number of dead containers each Pod
can have. Disable by setting to less than `0`.
* `MaxContainers`: the maximum number of dead containers the cluster can have.
Disable by setting to less than `0`.
diff --git a/content/en/docs/concepts/architecture/leases.md b/content/en/docs/concepts/architecture/leases.md
index f7fbd3906da61..2eb2fdc2cb605 100644
--- a/content/en/docs/concepts/architecture/leases.md
+++ b/content/en/docs/concepts/architecture/leases.md
@@ -6,7 +6,7 @@ weight: 30
-Distrbuted systems often have a need for "leases", which provides a mechanism to lock shared resources and coordinate activity between nodes.
+Distributed systems often have a need for "leases", which provide a mechanism to lock shared resources and coordinate activity between nodes.
In Kubernetes, the "lease" concept is represented by `Lease` objects in the `coordination.k8s.io` API group, which are used for system-critical
capabilities like node heart beats and component-level leader election.
diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md
index d36d82174b70d..9cf68b6b84150 100644
--- a/content/en/docs/concepts/architecture/nodes.md
+++ b/content/en/docs/concepts/architecture/nodes.md
@@ -9,7 +9,7 @@ weight: 10
-Kubernetes runs your workload by placing containers into Pods to run on _Nodes_.
+Kubernetes runs your {{< glossary_tooltip text="workload" term_id="workload" >}} by placing containers into Pods to run on _Nodes_.
A node may be a virtual or physical machine, depending on the cluster. Each node
is managed by the
{{< glossary_tooltip text="control plane" term_id="control-plane" >}}
@@ -454,50 +454,6 @@ Message: Pod was terminated in response to imminent node shutdown.
{{< /note >}}
-## Non Graceful node shutdown {#non-graceful-node-shutdown}
-
-{{< feature-state state="beta" for_k8s_version="v1.26" >}}
-
-A node shutdown action may not be detected by kubelet's Node Shutdown Manager,
-either because the command does not trigger the inhibitor locks mechanism used by
-kubelet or because of a user error, i.e., the ShutdownGracePeriod and
-ShutdownGracePeriodCriticalPods are not configured properly. Please refer to above
-section [Graceful Node Shutdown](#graceful-node-shutdown) for more details.
-
-When a node is shutdown but not detected by kubelet's Node Shutdown Manager, the pods
-that are part of a StatefulSet will be stuck in terminating status on
-the shutdown node and cannot move to a new running node. This is because kubelet on
-the shutdown node is not available to delete the pods so the StatefulSet cannot
-create a new pod with the same name. If there are volumes used by the pods, the
-VolumeAttachments will not be deleted from the original shutdown node so the volumes
-used by these pods cannot be attached to a new running node. As a result, the
-application running on the StatefulSet cannot function properly. If the original
-shutdown node comes up, the pods will be deleted by kubelet and new pods will be
-created on a different running node. If the original shutdown node does not come up,
-these pods will be stuck in terminating status on the shutdown node forever.
-
-To mitigate the above situation, a user can manually add the taint `node.kubernetes.io/out-of-service` with either `NoExecute`
-or `NoSchedule` effect to a Node marking it out-of-service.
-If the `NodeOutOfServiceVolumeDetach`[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-is enabled on `kube-controller-manager`, and a Node is marked out-of-service with this taint, the
-pods on the node will be forcefully deleted if there are no matching tolerations on it and volume
-detach operations for the pods terminating on the node will happen immediately. This allows the
-Pods on the out-of-service node to recover quickly on a different node.
-
-During a non-graceful shutdown, Pods are terminated in the two phases:
-
-1. Force delete the Pods that do not have matching `out-of-service` tolerations.
-2. Immediately perform detach volume operation for such pods.
-
-{{< note >}}
-- Before adding the taint `node.kubernetes.io/out-of-service` , it should be verified
- that the node is already in shutdown or power off state (not in the middle of
- restarting).
-- The user is required to manually remove the out-of-service taint after the pods are
- moved to a new node and the user has checked that the shutdown node has been
- recovered since the user was the one who originally added the taint.
-{{< /note >}}
-
### Pod Priority based graceful node shutdown {#pod-priority-graceful-node-shutdown}
{{< feature-state state="alpha" for_k8s_version="v1.23" >}}
@@ -596,6 +552,50 @@ the feature is Beta and is enabled by default.
Metrics `graceful_shutdown_start_time_seconds` and `graceful_shutdown_end_time_seconds`
are emitted under the kubelet subsystem to monitor node shutdowns.
+## Non Graceful node shutdown {#non-graceful-node-shutdown}
+
+{{< feature-state state="beta" for_k8s_version="v1.26" >}}
+
+A node shutdown action may not be detected by kubelet's Node Shutdown Manager,
+either because the command does not trigger the inhibitor locks mechanism used by
+kubelet or because of a user error, i.e., the ShutdownGracePeriod and
+ShutdownGracePeriodCriticalPods are not configured properly. Please refer to above
+section [Graceful Node Shutdown](#graceful-node-shutdown) for more details.
+
+When a node is shutdown but not detected by kubelet's Node Shutdown Manager, the pods
+that are part of a {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} will be stuck in terminating status on
+the shutdown node and cannot move to a new running node. This is because kubelet on
+the shutdown node is not available to delete the pods so the StatefulSet cannot
+create a new pod with the same name. If there are volumes used by the pods, the
+VolumeAttachments will not be deleted from the original shutdown node so the volumes
+used by these pods cannot be attached to a new running node. As a result, the
+application running on the StatefulSet cannot function properly. If the original
+shutdown node comes up, the pods will be deleted by kubelet and new pods will be
+created on a different running node. If the original shutdown node does not come up,
+these pods will be stuck in terminating status on the shutdown node forever.
+
+To mitigate the above situation, a user can manually add the taint `node.kubernetes.io/out-of-service` with either `NoExecute`
+or `NoSchedule` effect to a Node marking it out-of-service.
+If the `NodeOutOfServiceVolumeDetach` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+is enabled on {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}, and a Node is marked out-of-service with this taint, the
+pods on the node will be forcefully deleted if there are no matching tolerations on it and volume
+detach operations for the pods terminating on the node will happen immediately. This allows the
+Pods on the out-of-service node to recover quickly on a different node.
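+
+For example, after verifying that a node named `node-1` is actually shut down, you could mark it
+out-of-service with a command like the following (the taint value is arbitrary; only the key and
+effect matter):
+
+```shell
+kubectl taint nodes node-1 node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
+```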
+
+During a non-graceful shutdown, Pods are terminated in two phases:
+
+1. Force delete the Pods that do not have matching `out-of-service` tolerations.
+2. Immediately perform detach volume operation for such pods.
+
+{{< note >}}
+- Before adding the taint `node.kubernetes.io/out-of-service` , it should be verified
+ that the node is already in shutdown or power off state (not in the middle of
+ restarting).
+- The user is required to manually remove the out-of-service taint after the pods are
+ moved to a new node and the user has checked that the shutdown node has been
+ recovered since the user was the one who originally added the taint.
+{{< /note >}}
+
## Swap memory management {#swap-memory}
{{< feature-state state="alpha" for_k8s_version="v1.22" >}}
@@ -646,9 +646,11 @@ see [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) and its
## {{% heading "whatsnext" %}}
-* Learn about the [components](/docs/concepts/overview/components/#node-components) that make up a node.
-* Read the [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core).
-* Read the [Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node)
- section of the architecture design document.
-* Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
+Learn more about the following:
+ * [Components](/docs/concepts/overview/components/#node-components) that make up a node.
+ * [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core).
+ * [Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node) section of the architecture design document.
+ * [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
+ * [Node Resource Managers](/docs/concepts/policy/node-resource-managers/).
+ * [Resource Management for Windows nodes](/docs/concepts/configuration/windows-resource-management/).
diff --git a/content/en/docs/concepts/cluster-administration/manage-deployment.md b/content/en/docs/concepts/cluster-administration/manage-deployment.md
index c90715da09956..0dbbe9b6deb67 100644
--- a/content/en/docs/concepts/cluster-administration/manage-deployment.md
+++ b/content/en/docs/concepts/cluster-administration/manage-deployment.md
@@ -1,20 +1,26 @@
---
-reviewers:
-- janetkuo
title: Managing Resources
content_type: concept
+reviewers:
+- janetkuo
weight: 40
---
-You've deployed your application and exposed it via a service. Now what? Kubernetes provides a number of tools to help you manage your application deployment, including scaling and updating. Among the features that we will discuss in more depth are [configuration files](/docs/concepts/configuration/overview/) and [labels](/docs/concepts/overview/working-with-objects/labels/).
+You've deployed your application and exposed it via a service. Now what? Kubernetes provides a
+number of tools to help you manage your application deployment, including scaling and updating.
+Among the features that we will discuss in more depth are
+[configuration files](/docs/concepts/configuration/overview/) and
+[labels](/docs/concepts/overview/working-with-objects/labels/).
## Organizing resource configurations
-Many applications require multiple resources to be created, such as a Deployment and a Service. Management of multiple resources can be simplified by grouping them together in the same file (separated by `---` in YAML). For example:
+Many applications require multiple resources to be created, such as a Deployment and a Service.
+Management of multiple resources can be simplified by grouping them together in the same file
+(separated by `---` in YAML). For example:
{{< codenew file="application/nginx-app.yaml" >}}
@@ -24,89 +30,99 @@ Multiple resources can be created the same way as a single resource:
kubectl apply -f https://k8s.io/examples/application/nginx-app.yaml
```
-```shell
+```none
service/my-nginx-svc created
deployment.apps/my-nginx created
```
-The resources will be created in the order they appear in the file. Therefore, it's best to specify the service first, since that will ensure the scheduler can spread the pods associated with the service as they are created by the controller(s), such as Deployment.
+The resources will be created in the order they appear in the file. Therefore, it's best to
+specify the service first, since that will ensure the scheduler can spread the pods associated
+with the service as they are created by the controller(s), such as Deployment.
`kubectl apply` also accepts multiple `-f` arguments:
```shell
-kubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
+kubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml \
+ -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
```
-And a directory can be specified rather than or in addition to individual files:
-```shell
-kubectl apply -f https://k8s.io/examples/application/nginx/
-```
-
-`kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`.
-
-It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, you can deploy all of the components of your stack together.
+It is a recommended practice to put resources related to the same microservice or application tier
+into the same file, and to group all of the files associated with your application in the same
+directory. If the tiers of your application bind to each other using DNS, you can deploy all of
+the components of your stack together.
-A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into GitHub:
+A URL can also be specified as a configuration source, which is handy for deploying directly from
+configuration files checked into GitHub:
```shell
-kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/nginx/nginx-deployment.yaml
+kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
```
-```shell
+```none
deployment.apps/my-nginx created
```
## Bulk operations in kubectl
-Resource creation isn't the only operation that `kubectl` can perform in bulk. It can also extract resource names from configuration files in order to perform other operations, in particular to delete the same resources you created:
+Resource creation isn't the only operation that `kubectl` can perform in bulk. It can also extract
+resource names from configuration files in order to perform other operations, in particular to
+delete the same resources you created:
```shell
kubectl delete -f https://k8s.io/examples/application/nginx-app.yaml
```
-```shell
+```none
deployment.apps "my-nginx" deleted
service "my-nginx-svc" deleted
```
-In the case of two resources, you can specify both resources on the command line using the resource/name syntax:
+In the case of two resources, you can specify both resources on the command line using the
+resource/name syntax:
```shell
kubectl delete deployments/my-nginx services/my-nginx-svc
```
-For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using `-l` or `--selector`, to filter resources by their labels:
+For larger numbers of resources, you'll find it easier to use the selector (label query),
+specified with `-l` or `--selector`, to filter resources by their labels:
```shell
kubectl delete deployment,services -l app=nginx
```
-```shell
+```none
deployment.apps "my-nginx" deleted
service "my-nginx-svc" deleted
```
-Because `kubectl` outputs resource names in the same syntax it accepts, you can chain operations using `$()` or `xargs`:
+Because `kubectl` outputs resource names in the same syntax it accepts, you can chain operations
+using `$()` or `xargs`:
```shell
kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service)
kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service | xargs -i kubectl get {}
```
-```shell
+```none
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx-svc LoadBalancer 10.0.0.208 80/TCP 0s
```
-With the above commands, we first create resources under `examples/application/nginx/` and print the resources created with `-o name` output format
-(print each resource as resource/name). Then we `grep` only the "service", and then print it with `kubectl get`.
+With the above commands, we first create resources under `examples/application/nginx/` and print
+the resources created with `-o name` output format (print each resource as resource/name).
+Then we `grep` only the "service", and print it with `kubectl get`.
-If you happen to organize your resources across several subdirectories within a particular directory, you can recursively perform the operations on the subdirectories also, by specifying `--recursive` or `-R` alongside the `--filename,-f` flag.
+If you happen to organize your resources across several subdirectories within a particular
+directory, you can recursively perform the operations on the subdirectories also, by specifying
+`--recursive` or `-R` alongside the `--filename,-f` flag.
-For instance, assume there is a directory `project/k8s/development` that holds all of the {{< glossary_tooltip text="manifests" term_id="manifest" >}} needed for the development environment, organized by resource type:
+For instance, assume there is a directory `project/k8s/development` that holds all of the
+{{< glossary_tooltip text="manifests" term_id="manifest" >}} needed for the development environment,
+organized by resource type:
-```
+```none
project/k8s/development
├── configmap
│ └── my-configmap.yaml
@@ -116,13 +132,15 @@ project/k8s/development
└── my-pvc.yaml
```
-By default, performing a bulk operation on `project/k8s/development` will stop at the first level of the directory, not processing any subdirectories. If we had tried to create the resources in this directory using the following command, we would have encountered an error:
+By default, performing a bulk operation on `project/k8s/development` will stop at the first level
+of the directory, not processing any subdirectories. If we had tried to create the resources in
+this directory using the following command, we would have encountered an error:
```shell
kubectl apply -f project/k8s/development
```
-```shell
+```none
error: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin)
```
@@ -132,13 +150,14 @@ Instead, specify the `--recursive` or `-R` flag with the `--filename,-f` flag as
kubectl apply -f project/k8s/development --recursive
```
-```shell
+```none
configmap/my-config created
deployment.apps/my-deployment created
persistentvolumeclaim/my-pvc created
```
-The `--recursive` flag works with any operation that accepts the `--filename,-f` flag such as: `kubectl {create,get,delete,describe,rollout}` etc.
+The `--recursive` flag works with any operation that accepts the `--filename,-f` flag, such as
+`kubectl {create,get,delete,describe,rollout}`.
The `--recursive` flag also works when multiple `-f` arguments are provided:
@@ -146,7 +165,7 @@ The `--recursive` flag also works when multiple `-f` arguments are provided:
kubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive
```
-```shell
+```none
namespace/development created
namespace/staging created
configmap/my-config created
@@ -154,36 +173,41 @@ deployment.apps/my-deployment created
persistentvolumeclaim/my-pvc created
```
-If you're interested in learning more about `kubectl`, go ahead and read [Command line tool (kubectl)](/docs/reference/kubectl/).
+If you're interested in learning more about `kubectl`, go ahead and read
+[Command line tool (kubectl)](/docs/reference/kubectl/).
## Using labels effectively
-The examples we've used so far apply at most a single label to any resource. There are many scenarios where multiple labels should be used to distinguish sets from one another.
+The examples we've used so far apply at most a single label to any resource. There are many
+scenarios where multiple labels should be used to distinguish sets from one another.
-For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](https://github.com/kubernetes/examples/tree/master/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
+For instance, different applications would use different values for the `app` label, but a
+multi-tier application, such as the [guestbook example](https://github.com/kubernetes/examples/tree/master/guestbook/),
+would additionally need to distinguish each tier. The frontend could carry the following labels:
```yaml
- labels:
- app: guestbook
- tier: frontend
+labels:
+ app: guestbook
+ tier: frontend
```
-while the Redis master and slave would have different `tier` labels, and perhaps even an additional `role` label:
+while the Redis master and slave would have different `tier` labels, and perhaps even an
+additional `role` label:
```yaml
- labels:
- app: guestbook
- tier: backend
- role: master
+labels:
+ app: guestbook
+ tier: backend
+ role: master
```
and
```yaml
- labels:
- app: guestbook
- tier: backend
- role: slave
+labels:
+ app: guestbook
+ tier: backend
+ role: slave
```
The labels allow us to slice and dice our resources along any dimension specified by a label:
@@ -193,7 +217,7 @@ kubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml
kubectl get pods -Lapp -Ltier -Lrole
```
-```shell
+```none
NAME READY STATUS RESTARTS AGE APP TIER ROLE
guestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend
guestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend
@@ -208,7 +232,8 @@ my-nginx-o0ef1 1/1 Running 0 29m nginx
```shell
kubectl get pods -lapp=guestbook,role=slave
```
-```shell
+
+```none
NAME READY STATUS RESTARTS AGE
guestbook-redis-slave-2q2yf 1/1 Running 0 3m
guestbook-redis-slave-qgazl 1/1 Running 0 3m
@@ -216,62 +241,72 @@ guestbook-redis-slave-qgazl 1/1 Running 0 3m
## Canary deployments
-Another scenario where multiple labels are needed is to distinguish deployments of different releases or configurations of the same component. It is common practice to deploy a *canary* of a new application release (specified via image tag in the pod template) side by side with the previous release so that the new release can receive live production traffic before fully rolling it out.
+Another scenario where multiple labels are needed is to distinguish deployments of different
+releases or configurations of the same component. It is common practice to deploy a *canary* of a
+new application release (specified via image tag in the pod template) side by side with the
+previous release so that the new release can receive live production traffic before fully rolling
+it out.
For instance, you can use a `track` label to differentiate different releases.
The primary, stable release would have a `track` label with value as `stable`:
-```yaml
- name: frontend
- replicas: 3
- ...
- labels:
- app: guestbook
- tier: frontend
- track: stable
- ...
- image: gb-frontend:v3
+```none
+name: frontend
+replicas: 3
+...
+labels:
+ app: guestbook
+ tier: frontend
+ track: stable
+...
+image: gb-frontend:v3
```
-and then you can create a new release of the guestbook frontend that carries the `track` label with different value (i.e. `canary`), so that two sets of pods would not overlap:
+and then you can create a new release of the guestbook frontend that carries the `track` label
+with a different value (in this case, `canary`), so that the two sets of pods do not overlap:
-```yaml
- name: frontend-canary
- replicas: 1
- ...
- labels:
- app: guestbook
- tier: frontend
- track: canary
- ...
- image: gb-frontend:v4
+```none
+name: frontend-canary
+replicas: 1
+...
+labels:
+ app: guestbook
+ tier: frontend
+ track: canary
+...
+image: gb-frontend:v4
```
-
-The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. omitting the `track` label), so that the traffic will be redirected to both applications:
+The frontend service would span both sets of replicas by selecting the common subset of their
+labels (i.e. omitting the `track` label), so that the traffic will be redirected to both
+applications:
```yaml
- selector:
- app: guestbook
- tier: frontend
+selector:
+ app: guestbook
+ tier: frontend
```
-You can tweak the number of replicas of the stable and canary releases to determine the ratio of each release that will receive live production traffic (in this case, 3:1).
-Once you're confident, you can update the stable track to the new application release and remove the canary one.
+You can tweak the number of replicas of the stable and canary releases to determine the ratio of
+each release that will receive live production traffic (in this case, 3:1).
+Once you're confident, you can update the stable track to the new application release and remove
+the canary one.
-For a more concrete example, check the [tutorial of deploying Ghost](https://github.com/kelseyhightower/talks/tree/master/kubecon-eu-2016/demo#deploy-a-canary).
+For a more concrete example, check the
+[tutorial of deploying Ghost](https://github.com/kelseyhightower/talks/tree/master/kubecon-eu-2016/demo#deploy-a-canary).
## Updating labels
-Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`.
+Sometimes existing pods and other resources need to be relabeled before creating new resources.
+This can be done with `kubectl label`.
For example, if you want to label all your nginx pods as frontend tier, run:
```shell
kubectl label pods -l app=nginx tier=fe
```
-```shell
+```none
pod/my-nginx-2035384211-j5fhi labeled
pod/my-nginx-2035384211-u2c7e labeled
pod/my-nginx-2035384211-u3t6x labeled
@@ -283,20 +318,25 @@ To see the pods you labeled, run:
```shell
kubectl get pods -l app=nginx -L tier
```
-```shell
+
+```none
NAME READY STATUS RESTARTS AGE TIER
my-nginx-2035384211-j5fhi 1/1 Running 0 23m fe
my-nginx-2035384211-u2c7e 1/1 Running 0 23m fe
my-nginx-2035384211-u3t6x 1/1 Running 0 23m fe
```
-This outputs all "app=nginx" pods, with an additional label column of pods' tier (specified with `-L` or `--label-columns`).
+This outputs all "app=nginx" pods, with an additional column showing each pod's tier
+(specified with `-L` or `--label-columns`).
-For more information, please see [labels](/docs/concepts/overview/working-with-objects/labels/) and [kubectl label](/docs/reference/generated/kubectl/kubectl-commands/#label).
+For more information, please see [labels](/docs/concepts/overview/working-with-objects/labels/)
+and [kubectl label](/docs/reference/generated/kubectl/kubectl-commands/#label).
## Updating annotations
-Sometimes you would want to attach annotations to resources. Annotations are arbitrary non-identifying metadata for retrieval by API clients such as tools, libraries, etc. This can be done with `kubectl annotate`. For example:
+Sometimes you may want to attach annotations to resources. Annotations are arbitrary
+non-identifying metadata for retrieval by API clients such as tools and libraries.
+This can be done with `kubectl annotate`. For example:
```shell
kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
@@ -312,17 +352,19 @@ metadata:
...
```
-For more information, please see [annotations](/docs/concepts/overview/working-with-objects/annotations/) and [kubectl annotate](/docs/reference/generated/kubectl/kubectl-commands/#annotate) document.
+For more information, see [annotations](/docs/concepts/overview/working-with-objects/annotations/)
+and the [kubectl annotate](/docs/reference/generated/kubectl/kubectl-commands/#annotate) documentation.
## Scaling your application
-When load on your application grows or shrinks, use `kubectl` to scale your application. For instance, to decrease the number of nginx replicas from 3 to 1, do:
+When load on your application grows or shrinks, use `kubectl` to scale your application.
+For instance, to decrease the number of nginx replicas from 3 to 1, do:
```shell
kubectl scale deployment/my-nginx --replicas=1
```
-```shell
+```none
deployment.apps/my-nginx scaled
```
@@ -332,25 +374,27 @@ Now you only have one pod managed by the deployment.
kubectl get pods -l app=nginx
```
-```shell
+```none
NAME READY STATUS RESTARTS AGE
my-nginx-2035384211-j5fhi 1/1 Running 0 30m
```
-To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:
+To have the system automatically choose the number of nginx replicas as needed,
+ranging from 1 to 3, do:
```shell
kubectl autoscale deployment/my-nginx --min=1 --max=3
```
-```shell
+```none
horizontalpodautoscaler.autoscaling/my-nginx autoscaled
```
Now your nginx replicas will be scaled up and down as needed, automatically.
-For more information, please see [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands/#scale), [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale) and [horizontal pod autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/) document.
-
+For more information, please see [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands/#scale),
+[kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale) and the
+[horizontal pod autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/) documentation.
## In-place updates of resources
@@ -361,20 +405,34 @@ Sometimes it's necessary to make narrow, non-disruptive updates to resources you
It is suggested to maintain a set of configuration files in source control
(see [configuration as code](https://martinfowler.com/bliki/InfrastructureAsCode.html)),
so that they can be maintained and versioned along with the code for the resources they configure.
-Then, you can use [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) to push your configuration changes to the cluster.
+Then, you can use [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply)
+to push your configuration changes to the cluster.
-This command will compare the version of the configuration that you're pushing with the previous version and apply the changes you've made, without overwriting any automated changes to properties you haven't specified.
+This command will compare the version of the configuration that you're pushing with the previous
+version and apply the changes you've made, without overwriting any automated changes to properties
+you haven't specified.
```shell
kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
+```
+
+```none
deployment.apps/my-nginx configured
```
-Note that `kubectl apply` attaches an annotation to the resource in order to determine the changes to the configuration since the previous invocation. When it's invoked, `kubectl apply` does a three-way diff between the previous configuration, the provided input and the current configuration of the resource, in order to determine how to modify the resource.
+Note that `kubectl apply` attaches an annotation to the resource in order to determine the changes
+to the configuration since the previous invocation. When it's invoked, `kubectl apply` does a
+three-way diff between the previous configuration, the provided input and the current
+configuration of the resource, in order to determine how to modify the resource.
-Currently, resources are created without this annotation, so the first invocation of `kubectl apply` will fall back to a two-way diff between the provided input and the current configuration of the resource. During this first invocation, it cannot detect the deletion of properties set when the resource was created. For this reason, it will not remove them.
+Currently, resources are created without this annotation, so the first invocation of
+`kubectl apply` will fall back to a two-way diff between the provided input and the current
+configuration of the resource. During this first invocation, it cannot detect the deletion of
+properties set when the resource was created. For this reason, it will not remove them.
-All subsequent calls to `kubectl apply`, and other commands that modify the configuration, such as `kubectl replace` and `kubectl edit`, will update the annotation, allowing subsequent calls to `kubectl apply` to detect and perform deletions using a three-way diff.
+All subsequent calls to `kubectl apply`, and other commands that modify the configuration, such as
+`kubectl replace` and `kubectl edit`, will update the annotation, allowing subsequent calls to
+`kubectl apply` to detect and perform deletions using a three-way diff.
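+
+To check what `kubectl apply` recorded for a resource, one approach (a sketch;
+`deployment/my-nginx` is the Deployment used in the examples above) is to view the last applied
+configuration:
+
+```shell
+kubectl apply view-last-applied deployment/my-nginx
+```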
### kubectl edit
@@ -384,7 +442,8 @@ Alternatively, you may also update resources with `kubectl edit`:
kubectl edit deployment/my-nginx
```
-This is equivalent to first `get` the resource, edit it in text editor, and then `apply` the resource with the updated version:
+This is equivalent to first `get` the resource, edit it in text editor, and then `apply` the
+resource with the updated version:
```shell
kubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml
@@ -397,7 +456,8 @@ deployment.apps/my-nginx configured
rm /tmp/nginx.yaml
```
-This allows you to do more significant changes more easily. Note that you can specify the editor with your `EDITOR` or `KUBE_EDITOR` environment variables.
+This allows you to make more significant changes more easily. Note that you can specify the
+editor with your `EDITOR` or `KUBE_EDITOR` environment variables.
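+
+For example, to use a specific editor for a single invocation (a sketch; `nano` is just an
+illustrative choice):
+
+```shell
+KUBE_EDITOR="nano" kubectl edit deployment/my-nginx
+```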
For more information, please see [kubectl edit](/docs/reference/generated/kubectl/kubectl-commands/#edit) document.
@@ -411,20 +471,25 @@ and
## Disruptive updates
-In some cases, you may need to update resource fields that cannot be updated once initialized, or you may want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can modify your original configuration file:
+In some cases, you may need to update resource fields that cannot be updated once initialized, or
+you may want to make a recursive change immediately, such as to fix broken pods created by a
+Deployment. To change such fields, use `replace --force`, which deletes and re-creates the
+resource. In this case, you can modify your original configuration file:
```shell
kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force
```
-```shell
+```none
deployment.apps/my-nginx deleted
deployment.apps/my-nginx replaced
```
## Updating your application without a service outage
-At some point, you'll eventually need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several update operations, each of which is applicable to different scenarios.
+At some point, you'll need to update your deployed application, typically by specifying
+a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several
+update operations, each of which is applicable to different scenarios.
We'll guide you through how to create and update applications with Deployments.
@@ -434,7 +499,7 @@ Let's say you were running version 1.14.2 of nginx:
kubectl create deployment my-nginx --image=nginx:1.14.2
```
-```shell
+```none
deployment.apps/my-nginx created
```
@@ -444,24 +509,24 @@ with 3 replicas (so the old and new revisions can coexist):
kubectl scale deployment my-nginx --current-replicas=1 --replicas=3
```
-```
+```none
deployment.apps/my-nginx scaled
```
-To update to version 1.16.1, change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1` using the previous kubectl commands.
+To update to version 1.16.1, change `.spec.template.spec.containers[0].image` from `nginx:1.14.2`
+to `nginx:1.16.1` using the previous kubectl commands.
```shell
kubectl edit deployment/my-nginx
```
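+
+Alternatively, a single non-interactive command can make the same change. This is a sketch that
+assumes the container is named `nginx`, the default name when the Deployment was created with
+`kubectl create deployment --image=nginx:1.14.2`:
+
+```shell
+kubectl set image deployment/my-nginx nginx=nginx:1.16.1
+```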
-That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods. To learn more details about it, visit [Deployment page](/docs/concepts/workloads/controllers/deployment/).
-
-
+That's it! The Deployment will declaratively update the deployed nginx application progressively
+behind the scenes. It ensures that only a certain number of old replicas may be down while they
+are being updated, and only a certain number of new replicas may be created above the desired
+number of pods. To learn more details, visit the [Deployment page](/docs/concepts/workloads/controllers/deployment/).
## {{% heading "whatsnext" %}}
-
- Learn about [how to use `kubectl` for application introspection and debugging](/docs/tasks/debug/debug-application/debug-running-pod/).
- See [Configuration Best Practices and Tips](/docs/concepts/configuration/overview/).
-
diff --git a/content/en/docs/concepts/cluster-administration/system-traces.md b/content/en/docs/concepts/cluster-administration/system-traces.md
index 664e8951bfa47..04bd58ce38b92 100644
--- a/content/en/docs/concepts/cluster-administration/system-traces.md
+++ b/content/en/docs/concepts/cluster-administration/system-traces.md
@@ -84,7 +84,7 @@ The kubelet CRI interface and authenticated http servers are instrumented to gen
trace spans. As with the apiserver, the endpoint and sampling rate are configurable.
Trace context propagation is also configured. A parent span's sampling decision is always respected.
A provided tracing configuration sampling rate will apply to spans without a parent.
-Enabled without a configured endpoint, the default OpenTelemetry Collector reciever address of "localhost:4317" is set.
+If tracing is enabled without a configured endpoint, the default OpenTelemetry Collector receiver address of "localhost:4317" is used.
#### Enabling tracing in the kubelet
diff --git a/content/en/docs/concepts/configuration/overview.md b/content/en/docs/concepts/configuration/overview.md
index 33e04e1a914a3..7266f2b7ebc2a 100644
--- a/content/en/docs/concepts/configuration/overview.md
+++ b/content/en/docs/concepts/configuration/overview.md
@@ -102,13 +102,13 @@ to others, please don't hesitate to file an issue or submit a PR.
See the [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) app
for examples of this approach.
-A Service can be made to span multiple Deployments by omitting release-specific labels from its
-selector. When you need to update a running service without downtime, use a
-[Deployment](/docs/concepts/workloads/controllers/deployment/).
+ A Service can be made to span multiple Deployments by omitting release-specific labels from its
+ selector. When you need to update a running service without downtime, use a
+ [Deployment](/docs/concepts/workloads/controllers/deployment/).
-A desired state of an object is described by a Deployment, and if changes to that spec are
-_applied_, the deployment controller changes the actual state to the desired state at a controlled
-rate.
+ A desired state of an object is described by a Deployment, and if changes to that spec are
+ _applied_, the deployment controller changes the actual state to the desired state at a controlled
+ rate.
- Use the [Kubernetes common labels](/docs/concepts/overview/working-with-objects/common-labels/)
for common use cases. These standardized labels enrich the metadata in a way that allows tools,
diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md
index 52408e6022bdd..437bbce57a10e 100644
--- a/content/en/docs/concepts/configuration/secret.md
+++ b/content/en/docs/concepts/configuration/secret.md
@@ -490,13 +490,6 @@ the kubelet on each node to authenticate to that repository. You can configure
_image pull secrets_ to make this possible. These secrets are configured at the Pod
level.
-The `imagePullSecrets` field for a Pod is a list of references to Secrets in the same namespace
-as the Pod.
-You can use an `imagePullSecrets` to pass image registry access credentials to
-the kubelet. The kubelet uses this information to pull a private image on behalf of your Pod.
-See `PodSpec` in the [Pod API reference](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec)
-for more information about the `imagePullSecrets` field.
-
#### Using imagePullSecrets
The `imagePullSecrets` field is a list of references to secrets in the same namespace.
diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md
index a135d1d1b6d46..6e00eb46b67a7 100644
--- a/content/en/docs/concepts/containers/images.md
+++ b/content/en/docs/concepts/containers/images.md
@@ -15,8 +15,7 @@ software dependencies. Container images are executable software bundles that can
standalone and that make very well defined assumptions about their runtime environment.
You typically create a container image of your application and push it to a registry
-before referring to it in a
-{{< glossary_tooltip text="Pod" term_id="pod" >}}
+before referring to it in a {{< glossary_tooltip text="Pod" term_id="pod" >}}.
This page provides an outline of the container image concept.
@@ -36,8 +35,8 @@ and possibly a port number as well; for example: `fictional.registry.example:104
If you don't specify a registry hostname, Kubernetes assumes that you mean the Docker public registry.
-After the image name part you can add a _tag_ (in the same way you would when using with commands like `docker` or `podman`).
-Tags let you identify different versions of the same series of images.
+After the image name part you can add a _tag_ (in the same way you would when using commands
+like `docker` or `podman`). Tags let you identify different versions of the same series of images.
Image tags consist of lowercase and uppercase letters, digits, underscores (`_`),
periods (`.`), and dashes (`-`).
@@ -69,10 +68,10 @@ these values have:
`Always`
: every time the kubelet launches a container, the kubelet queries the container
image registry to resolve the name to an image
- [digest](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier). If the kubelet has a
- container image with that exact digest cached locally, the kubelet uses its cached
- image; otherwise, the kubelet pulls the image with the resolved digest,
- and uses that image to launch the container.
+ [digest](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier).
+ If the kubelet has a container image with that exact digest cached locally, the kubelet uses its
+ cached image; otherwise, the kubelet pulls the image with the resolved digest, and uses that image
+ to launch the container.
`Never`
: the kubelet does not try fetching the image. If the image is somehow already present
@@ -97,7 +96,11 @@ the image's digest;
replace `:` with `@`
(for example, `image@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`).
-When using image tags, if the image registry were to change the code that the tag on that image represents, you might end up with a mix of Pods running the old and new code. An image digest uniquely identifies a specific version of the image, so Kubernetes runs the same code every time it starts a container with that image name and digest specified. Specifying an image by digest fixes the code that you run so that a change at the registry cannot lead to that mix of versions.
+When using image tags, if the image registry were to change the code that the tag on that image
+represents, you might end up with a mix of Pods running the old and new code. An image digest
+uniquely identifies a specific version of the image, so Kubernetes runs the same code every time
+it starts a container with that image name and digest specified. Specifying an image by digest
+fixes the code that you run so that a change at the registry cannot lead to that mix of versions.
There are third-party [admission controllers](/docs/reference/access-authn-authz/admission-controllers/)
that mutate Pods (and pod templates) when they are created, so that the
@@ -137,8 +140,8 @@ If you would like to always force a pull, you can do one of the following:
Kubernetes will set the policy to `Always` when you submit the Pod.
- Omit the `imagePullPolicy` and the tag for the image to use;
Kubernetes will set the policy to `Always` when you submit the Pod.
-- Enable the [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) admission controller.
-
+- Enable the [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)
+ admission controller.
### ImagePullBackOff
@@ -156,35 +159,46 @@ which is 300 seconds (5 minutes).
## Multi-architecture images with image indexes
-As well as providing binary images, a container registry can also serve a [container image index](https://github.com/opencontainers/image-spec/blob/master/image-index.md). An image index can point to multiple [image manifests](https://github.com/opencontainers/image-spec/blob/master/manifest.md) for architecture-specific versions of a container. The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to fetch the right binary image for the machine architecture they are using.
+As well as providing binary images, a container registry can also serve a
+[container image index](https://github.com/opencontainers/image-spec/blob/master/image-index.md).
+An image index can point to multiple [image manifests](https://github.com/opencontainers/image-spec/blob/master/manifest.md)
+for architecture-specific versions of a container. The idea is that you can have a name for an image
+(for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to
+fetch the right binary image for the machine architecture they are using.
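+
+If you want to see which architectures a particular image index references, one way (a sketch
+using the Docker CLI, which may require a recent Docker version; `nginx` is only an example image
+name) is:
+
+```shell
+docker manifest inspect nginx
+```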
-Kubernetes itself typically names container images with a suffix `-$(ARCH)`. For backward compatibility, please generate the older images with suffixes. The idea is to generate say `pause` image which has the manifest for all the arch(es) and say `pause-amd64` which is backwards compatible for older configurations or YAML files which may have hard coded the images with suffixes.
+Kubernetes itself typically names container images with a suffix `-$(ARCH)`. For backward
+compatibility, please generate the older images with suffixes. The idea is to generate, say, the
+`pause` image, which has the manifest for all architectures, as well as, say, `pause-amd64`, which
+is backwards compatible for older configurations or YAML files that may have hard-coded the
+images with suffixes.
## Using a private registry
Private registries may require keys to read images from them.
Credentials can be provided in several ways:
- - Configuring Nodes to Authenticate to a Private Registry
- - all pods can read any configured private registries
- - requires node configuration by cluster administrator
- - Kubelet Credential Provider to dynamically fetch credentials for private registries
- - kubelet can be configured to use credential provider exec plugin
- for the respective private registry.
- - Pre-pulled Images
- - all pods can use any images cached on a node
- - requires root access to all nodes to set up
- - Specifying ImagePullSecrets on a Pod
- - only pods which provide own keys can access the private registry
- - Vendor-specific or local extensions
- - if you're using a custom node configuration, you (or your cloud
- provider) can implement your mechanism for authenticating the node
- to the container registry.
+
+- Configuring Nodes to Authenticate to a Private Registry
+ - all pods can read any configured private registries
+ - requires node configuration by cluster administrator
+- Kubelet Credential Provider to dynamically fetch credentials for private registries
+ - kubelet can be configured to use credential provider exec plugin
+ for the respective private registry.
+- Pre-pulled Images
+ - all pods can use any images cached on a node
+ - requires root access to all nodes to set up
+- Specifying ImagePullSecrets on a Pod
+  - only pods which provide their own keys can access the private registry
+- Vendor-specific or local extensions
+ - if you're using a custom node configuration, you (or your cloud
+ provider) can implement your mechanism for authenticating the node
+ to the container registry.
These options are explained in more detail below.
### Configuring nodes to authenticate to a private registry
-Specific instructions for setting credentials depends on the container runtime and registry you chose to use. You should refer to your solution's documentation for the most accurate information.
+Specific instructions for setting credentials depend on the container runtime and registry you
+choose to use. You should refer to your solution's documentation for the most accurate information.
For an example of configuring a private container image registry, see the
[Pull an Image from a Private Registry](/docs/tasks/configure-pod-container/pull-image-private-registry)
@@ -269,7 +283,6 @@ If now a container specifies an image `my-registry.io/images/subpath/my-image`
to be pulled, then the kubelet will try to download them from both
authentication sources if one of them fails.
-
### Pre-pulled images
{{< note >}}
@@ -285,7 +298,8 @@ then a local image is used (preferentially or exclusively, respectively).
If you want to rely on pre-pulled images as a substitute for registry authentication,
you must ensure all nodes in the cluster have the same pre-pulled images.
-This can be used to preload certain images for speed or as an alternative to authenticating to a private registry.
+This can be used to preload certain images for speed or as an alternative to authenticating to a
+private registry.
All pods will have read access to any pre-pulled images.
@@ -307,13 +321,18 @@ to the registry, as well as its hostname.
Run the following command, substituting the appropriate uppercase values:
```shell
-kubectl create secret docker-registry --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
+kubectl create secret docker-registry <name> \
+ --docker-server=DOCKER_REGISTRY_SERVER \
+ --docker-username=DOCKER_USER \
+ --docker-password=DOCKER_PASSWORD \
+ --docker-email=DOCKER_EMAIL
```
If you already have a Docker credentials file then, rather than using the above
command, you can import the credentials file as a Kubernetes
{{< glossary_tooltip text="Secrets" term_id="secret" >}}.
-[Create a Secret based on existing Docker credentials](/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials) explains how to set this up.
+[Create a Secret based on existing Docker credentials](/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials)
+explains how to set this up.
This is particularly useful if you are using multiple private container
registries, as `kubectl create secret docker-registry` creates a Secret that
@@ -358,7 +377,8 @@ This needs to be done for each pod that is using a private registry.
However, setting of this field can be automated by setting the imagePullSecrets
in a [ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/) resource.
-Check [Add ImagePullSecrets to a Service Account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) for detailed instructions.
+Check [Add ImagePullSecrets to a Service Account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)
+for detailed instructions.
You can use this in conjunction with a per-node `.docker/config.json`. The credentials
will be merged.
@@ -371,7 +391,8 @@ common use cases and suggested solutions.
1. Cluster running only non-proprietary (e.g. open-source) images. No need to hide images.
- Use public images from a public registry
- No configuration required.
- - Some cloud providers automatically cache or mirror public images, which improves availability and reduces the time to pull images.
+ - Some cloud providers automatically cache or mirror public images, which improves
+ availability and reduces the time to pull images.
1. Cluster running some proprietary images which should be hidden to those outside the company, but
visible to all cluster users.
- Use a hosted private registry
@@ -382,15 +403,17 @@ common use cases and suggested solutions.
- It will work better with cluster autoscaling than manual node configuration.
- Or, on a cluster where changing the node configuration is inconvenient, use `imagePullSecrets`.
1. Cluster with proprietary images, a few of which require stricter access control.
- - Ensure [AlwaysPullImages admission controller](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) is active. Otherwise, all Pods potentially have access to all images.
+ - Ensure [AlwaysPullImages admission controller](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)
+ is active. Otherwise, all Pods potentially have access to all images.
- Move sensitive data into a "Secret" resource, instead of packaging it in an image.
1. A multi-tenant cluster where each tenant needs own private registry.
- - Ensure [AlwaysPullImages admission controller](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) is active. Otherwise, all Pods of all tenants potentially have access to all images.
+ - Ensure [AlwaysPullImages admission controller](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)
+ is active. Otherwise, all Pods of all tenants potentially have access to all images.
- Run a private registry with authorization required.
- - Generate registry credential for each tenant, put into secret, and populate secret to each tenant namespace.
+   - Generate a registry credential for each tenant, put it into a Secret, and populate that
+     Secret in each tenant's namespace, as sketched after this list.
- The tenant adds that secret to imagePullSecrets of each namespace.
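+
+As a minimal sketch of that per-tenant step (the Secret name `regcred` and the namespace
+`tenant-a` are hypothetical), the cluster operator could run:
+
+```shell
+kubectl create secret docker-registry regcred \
+  --namespace=tenant-a \
+  --docker-server=DOCKER_REGISTRY_SERVER \
+  --docker-username=DOCKER_USER \
+  --docker-password=DOCKER_PASSWORD
+```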
-
If you need access to multiple registries, you can create one secret for each registry.
## {{% heading "whatsnext" %}}
@@ -398,3 +421,4 @@ If you need access to multiple registries, you can create one secret for each re
* Read the [OCI Image Manifest Specification](https://github.com/opencontainers/image-spec/blob/master/manifest.md).
* Learn about [container image garbage collection](/docs/concepts/architecture/garbage-collection/#container-image-garbage-collection).
* Learn more about [pulling an Image from a Private Registry](/docs/tasks/configure-pod-container/pull-image-private-registry).
diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md
index 6645984559bb1..b26a25af04d9e 100644
--- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md
+++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md
@@ -87,60 +87,65 @@ spec:
The general workflow of a device plugin includes the following steps:
-* Initialization. During this phase, the device plugin performs vendor specific
+1. Initialization. During this phase, the device plugin performs vendor-specific
initialization and setup to make sure the devices are in a ready state.
-* The plugin starts a gRPC service, with a Unix socket under host path
+1. The plugin starts a gRPC service, with a Unix socket under the host path
`/var/lib/kubelet/device-plugins/`, that implements the following interfaces:
- ```gRPC
- service DevicePlugin {
- // GetDevicePluginOptions returns options to be communicated with Device Manager.
- rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {}
-
- // ListAndWatch returns a stream of List of Devices
- // Whenever a Device state change or a Device disappears, ListAndWatch
- // returns the new list
- rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}
-
- // Allocate is called during container creation so that the Device
- // Plugin can run device specific operations and instruct Kubelet
- // of the steps to make the Device available in the container
- rpc Allocate(AllocateRequest) returns (AllocateResponse) {}
-
- // GetPreferredAllocation returns a preferred set of devices to allocate
- // from a list of available ones. The resulting preferred allocation is not
- // guaranteed to be the allocation ultimately performed by the
- // devicemanager. It is only designed to help the devicemanager make a more
- // informed allocation decision when possible.
- rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {}
-
- // PreStartContainer is called, if indicated by Device Plugin during registeration phase,
- // before each container start. Device plugin can run device specific operations
- // such as resetting the device before making devices available to the container.
- rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {}
- }
- ```
-
- {{< note >}}
- Plugins are not required to provide useful implementations for
- `GetPreferredAllocation()` or `PreStartContainer()`. Flags indicating which
- (if any) of these calls are available should be set in the `DevicePluginOptions`
- message sent back by a call to `GetDevicePluginOptions()`. The `kubelet` will
- always call `GetDevicePluginOptions()` to see which optional functions are
- available, before calling any of them directly.
- {{< /note >}}
-
-* The plugin registers itself with the kubelet through the Unix socket at host
+ ```gRPC
+ service DevicePlugin {
+ // GetDevicePluginOptions returns options to be communicated with Device Manager.
+ rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {}
+
+ // ListAndWatch returns a stream of List of Devices
+ // Whenever a Device state change or a Device disappears, ListAndWatch
+ // returns the new list
+ rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}
+
+ // Allocate is called during container creation so that the Device
+ // Plugin can run device specific operations and instruct Kubelet
+ // of the steps to make the Device available in the container
+ rpc Allocate(AllocateRequest) returns (AllocateResponse) {}
+
+ // GetPreferredAllocation returns a preferred set of devices to allocate
+ // from a list of available ones. The resulting preferred allocation is not
+ // guaranteed to be the allocation ultimately performed by the
+ // devicemanager. It is only designed to help the devicemanager make a more
+ // informed allocation decision when possible.
+ rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {}
+
+     // PreStartContainer is called, if indicated by Device Plugin during registration phase,
+ // before each container start. Device plugin can run device specific operations
+ // such as resetting the device before making devices available to the container.
+ rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {}
+ }
+ ```
+
+ {{< note >}}
+ Plugins are not required to provide useful implementations for
+ `GetPreferredAllocation()` or `PreStartContainer()`. Flags indicating
+ the availability of these calls, if any, should be set in the `DevicePluginOptions`
+ message sent back by a call to `GetDevicePluginOptions()`. The `kubelet` will
+ always call `GetDevicePluginOptions()` to see which optional functions are
+ available, before calling any of them directly.
+ {{< /note >}}
+
+1. The plugin registers itself with the kubelet through the Unix socket at host
path `/var/lib/kubelet/device-plugins/kubelet.sock`.
-* After successfully registering itself, the device plugin runs in serving mode, during which it keeps
- monitoring device health and reports back to the kubelet upon any device state changes.
- It is also responsible for serving `Allocate` gRPC requests. During `Allocate`, the device plugin may
- do device-specific preparation; for example, GPU cleanup or QRNG initialization.
- If the operations succeed, the device plugin returns an `AllocateResponse` that contains container
- runtime configurations for accessing the allocated devices. The kubelet passes this information
- to the container runtime.
+ {{< note >}}
+   The ordering of the workflow is important. A plugin MUST start serving its gRPC
+   service before registering itself with the kubelet, for registration to succeed.
+ {{< /note >}}
+
+1. After successfully registering itself, the device plugin runs in serving mode, during which it keeps
+ monitoring device health and reports back to the kubelet upon any device state changes.
+ It is also responsible for serving `Allocate` gRPC requests. During `Allocate`, the device plugin may
+ do device-specific preparation; for example, GPU cleanup or QRNG initialization.
+ If the operations succeed, the device plugin returns an `AllocateResponse` that contains container
+ runtime configurations for accessing the allocated devices. The kubelet passes this information
+ to the container runtime.
### Handling kubelet restarts
@@ -172,11 +177,11 @@ Beta graduation of this feature. Because of this, kubelet upgrades should be sea
but there still may be changes in the API before stabilization making upgrades not
guaranteed to be non-breaking.
-{{< caution >}}
+{{< note >}}
Although the Device Manager component of Kubernetes is a generally available feature,
the _device plugin API_ is not stable. For information on the device plugin API and
version compatibility, read [Device Plugin API versions](/docs/reference/node/device-plugin-api-versions/).
-{{< caution >}}
+{{< /note >}}
As a project, Kubernetes recommends that device plugin developers:
diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
index 4fc30f96b1490..5c6cfa7fc5842 100644
--- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
+++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
@@ -109,7 +109,8 @@ If you want to enable `hostPort` support, you must specify `portMappings capabil
},
{
"type": "portmap",
- "capabilities": {"portMappings": true}
+ "capabilities": {"portMappings": true},
+ "externalSetMarkChain": "KUBE-MARK-MASQ"
}
]
}
diff --git a/content/en/docs/concepts/extend-kubernetes/operator.md b/content/en/docs/concepts/extend-kubernetes/operator.md
index d740d481dd6a7..69b13915603a3 100644
--- a/content/en/docs/concepts/extend-kubernetes/operator.md
+++ b/content/en/docs/concepts/extend-kubernetes/operator.md
@@ -119,6 +119,7 @@ operator.
* [kubebuilder](https://book.kubebuilder.io/)
* [KubeOps](https://buehler.github.io/dotnet-operator-sdk/) (.NET operator SDK)
* [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator)
+* [Mast](https://docs.ansi.services/mast/user_guide/operator/)
* [Metacontroller](https://metacontroller.github.io/metacontroller/intro.html) along with WebHooks that
you implement yourself
* [Operator Framework](https://operatorframework.io)
diff --git a/content/en/docs/concepts/overview/_index.md b/content/en/docs/concepts/overview/_index.md
index 72abe7298a0fd..f5952d39b7482 100644
--- a/content/en/docs/concepts/overview/_index.md
+++ b/content/en/docs/concepts/overview/_index.md
@@ -76,11 +76,11 @@ Containers have become popular because they provide extra benefits, such as:
applications from infrastructure.
* Observability: not only surfaces OS-level information and metrics, but also
application health and other signals.
-* Environmental consistency across development, testing, and production: Runs
+* Environmental consistency across development, testing, and production: runs
the same on a laptop as it does in the cloud.
-* Cloud and OS distribution portability: Runs on Ubuntu, RHEL, CoreOS, on-premises,
+* Cloud and OS distribution portability: runs on Ubuntu, RHEL, CoreOS, on-premises,
on major public clouds, and anywhere else.
-* Application-centric management: Raises the level of abstraction from running an
+* Application-centric management: raises the level of abstraction from running an
OS on virtual hardware to running an application on an OS using logical resources.
* Loosely coupled, distributed, elastic, liberated micro-services: applications are
broken into smaller, independent pieces and can be deployed and managed dynamically –
diff --git a/content/en/docs/concepts/overview/working-with-objects/common-labels.md b/content/en/docs/concepts/overview/working-with-objects/common-labels.md
index b4ccb7a652c6a..c6bda86afefcd 100644
--- a/content/en/docs/concepts/overview/working-with-objects/common-labels.md
+++ b/content/en/docs/concepts/overview/working-with-objects/common-labels.md
@@ -37,7 +37,7 @@ on every resource object.
| ----------------------------------- | --------------------- | -------- | ---- |
| `app.kubernetes.io/name` | The name of the application | `mysql` | string |
| `app.kubernetes.io/instance` | A unique name identifying the instance of an application | `mysql-abcxzy` | string |
-| `app.kubernetes.io/version` | The current version of the application (e.g., a semantic version, revision hash, etc.) | `5.7.21` | string |
+| `app.kubernetes.io/version` | The current version of the application (e.g., a [SemVer 1.0](https://semver.org/spec/v1.0.0.html), revision hash, etc.) | `5.7.21` | string |
| `app.kubernetes.io/component` | The component within the architecture | `database` | string |
| `app.kubernetes.io/part-of` | The name of a higher level application this one is part of | `wordpress` | string |
| `app.kubernetes.io/managed-by` | The tool being used to manage the operation of an application | `helm` | string |
@@ -171,4 +171,3 @@ metadata:
```
With the MySQL `StatefulSet` and `Service` you'll notice information about both MySQL and WordPress, the broader application, are included.
-
diff --git a/content/en/docs/concepts/overview/working-with-objects/labels.md b/content/en/docs/concepts/overview/working-with-objects/labels.md
index 55b1a5d032ad7..477ce6f2f5ca5 100644
--- a/content/en/docs/concepts/overview/working-with-objects/labels.md
+++ b/content/en/docs/concepts/overview/working-with-objects/labels.md
@@ -9,9 +9,12 @@ weight: 40
_Labels_ are key/value pairs that are attached to objects, such as pods.
-Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system.
-Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time.
-Each object can have a set of key/value labels defined. Each Key must be unique for a given object.
+Labels are intended to be used to specify identifying attributes of objects
+that are meaningful and relevant to users, but do not directly imply semantics
+to the core system. Labels can be used to organize and to select subsets of
+objects. Labels can be attached to objects at creation time and subsequently
+added and modified at any time. Each object can have a set of key/value labels
+defined. Each Key must be unique for a given object.
```json
"metadata": {
@@ -30,37 +33,56 @@ and CLIs. Non-identifying information should be recorded using
## Motivation
-Labels enable users to map their own organizational structures onto system objects in a loosely coupled fashion, without requiring clients to store these mappings.
+Labels enable users to map their own organizational structures onto system objects
+in a loosely coupled fashion, without requiring clients to store these mappings.
-Service deployments and batch processing pipelines are often multi-dimensional entities (e.g., multiple partitions or deployments, multiple release tracks, multiple tiers, multiple micro-services per tier). Management often requires cross-cutting operations, which breaks encapsulation of strictly hierarchical representations, especially rigid hierarchies determined by the infrastructure rather than by users.
+Service deployments and batch processing pipelines are often multi-dimensional entities
+(e.g., multiple partitions or deployments, multiple release tracks, multiple tiers,
+multiple micro-services per tier). Management often requires cross-cutting operations,
+which breaks encapsulation of strictly hierarchical representations, especially rigid
+hierarchies determined by the infrastructure rather than by users.
Example labels:
- * `"release" : "stable"`, `"release" : "canary"`
- * `"environment" : "dev"`, `"environment" : "qa"`, `"environment" : "production"`
- * `"tier" : "frontend"`, `"tier" : "backend"`, `"tier" : "cache"`
- * `"partition" : "customerA"`, `"partition" : "customerB"`
- * `"track" : "daily"`, `"track" : "weekly"`
+* `"release" : "stable"`, `"release" : "canary"`
+* `"environment" : "dev"`, `"environment" : "qa"`, `"environment" : "production"`
+* `"tier" : "frontend"`, `"tier" : "backend"`, `"tier" : "cache"`
+* `"partition" : "customerA"`, `"partition" : "customerB"`
+* `"track" : "daily"`, `"track" : "weekly"`
-These are examples of [commonly used labels](/docs/concepts/overview/working-with-objects/common-labels/); you are free to develop your own conventions. Keep in mind that label Key must be unique for a given object.
+These are examples of
+[commonly used labels](/docs/concepts/overview/working-with-objects/common-labels/);
+you are free to develop your own conventions.
+Keep in mind that label Key must be unique for a given object.
## Syntax and character set
-_Labels_ are key/value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash (`/`). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (`.`), not longer than 253 characters in total, followed by a slash (`/`).
+_Labels_ are key/value pairs. Valid label keys have two segments: an optional
+prefix and name, separated by a slash (`/`). The name segment is required and
+must be 63 characters or less, beginning and ending with an alphanumeric
+character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`),
+and alphanumerics between. The prefix is optional. If specified, the prefix
+must be a DNS subdomain: a series of DNS labels separated by dots (`.`),
+not longer than 253 characters in total, followed by a slash (`/`).
-If the prefix is omitted, the label Key is presumed to be private to the user. Automated system components (e.g. `kube-scheduler`, `kube-controller-manager`, `kube-apiserver`, `kubectl`, or other third-party automation) which add labels to end-user objects must specify a prefix.
+If the prefix is omitted, the label Key is presumed to be private to the user.
+Automated system components (e.g. `kube-scheduler`, `kube-controller-manager`,
+`kube-apiserver`, `kubectl`, or other third-party automation) which add labels
+to end-user objects must specify a prefix.
-The `kubernetes.io/` and `k8s.io/` prefixes are [reserved](/docs/reference/labels-annotations-taints/) for Kubernetes core components.
+The `kubernetes.io/` and `k8s.io/` prefixes are
+[reserved](/docs/reference/labels-annotations-taints/) for Kubernetes core components.
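+
+For example (the prefix, key, and value below are purely illustrative), an automated
+component could attach a prefixed label such as:
+
+```yaml
+metadata:
+  labels:
+    # "example.com" is the DNS-subdomain prefix; "managed-by" is the name segment
+    example.com/managed-by: my-controller
+```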
Valid label value:
+
* must be 63 characters or less (can be empty),
* unless empty, must begin and end with an alphanumeric character (`[a-z0-9A-Z]`),
* could contain dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between.
-For example, here's the configuration file for a Pod that has two labels `environment: production` and `app: nginx` :
+For example, here's the configuration file for a Pod that has two labels
+`environment: production` and `app: nginx`:
```yaml
-
apiVersion: v1
kind: Pod
metadata:
@@ -74,34 +96,43 @@ spec:
image: nginx:1.14.2
ports:
- containerPort: 80
-
```
## Label selectors
-Unlike [names and UIDs](/docs/concepts/overview/working-with-objects/names/), labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).
+Unlike [names and UIDs](/docs/concepts/overview/working-with-objects/names/), labels
+do not provide uniqueness. In general, we expect many objects to carry the same label(s).
-Via a _label selector_, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes.
+Via a _label selector_, the client/user can identify a set of objects.
+The label selector is the core grouping primitive in Kubernetes.
The API currently supports two types of selectors: _equality-based_ and _set-based_.
-A label selector can be made of multiple _requirements_ which are comma-separated. In the case of multiple requirements, all must be satisfied so the comma separator acts as a logical _AND_ (`&&`) operator.
+A label selector can be made of multiple _requirements_ which are comma-separated.
+In the case of multiple requirements, all must be satisfied so the comma separator
+acts as a logical _AND_ (`&&`) operator.
The semantics of empty or non-specified selectors are dependent on the context,
and API types that use selectors should document the validity and meaning of
them.
{{< note >}}
-For some API types, such as ReplicaSets, the label selectors of two instances must not overlap within a namespace, or the controller can see that as conflicting instructions and fail to determine how many replicas should be present.
+For some API types, such as ReplicaSets, the label selectors of two instances must
+not overlap within a namespace, or the controller can see that as conflicting
+instructions and fail to determine how many replicas should be present.
{{< /note >}}
{{< caution >}}
-For both equality-based and set-based conditions there is no logical _OR_ (`||`) operator. Ensure your filter statements are structured accordingly.
+For both equality-based and set-based conditions there is no logical _OR_ (`||`) operator.
+Ensure your filter statements are structured accordingly.
{{< /caution >}}
### _Equality-based_ requirement
-_Equality-_ or _inequality-based_ requirements allow filtering by label keys and values. Matching objects must satisfy all of the specified label constraints, though they may have additional labels as well.
-Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _equality_ (and are synonyms), while the latter represents _inequality_. For example:
+_Equality-_ or _inequality-based_ requirements allow filtering by label keys and values.
+Matching objects must satisfy all of the specified label constraints, though they may
+have additional labels as well. Three kinds of operators are admitted: `=`, `==`, `!=`.
+The first two represent _equality_ (and are synonyms), while the latter represents _inequality_.
+For example:
```
environment = production
@@ -109,8 +140,9 @@ tier != frontend
```
The former selects all resources with key equal to `environment` and value equal to `production`.
-The latter selects all resources with key equal to `tier` and value distinct from `frontend`, and all resources with no labels with the `tier` key.
-One could filter for resources in `production` excluding `frontend` using the comma operator: `environment=production,tier!=frontend`
+The latter selects all resources with key equal to `tier` and value distinct from `frontend`,
+and all resources with no labels with the `tier` key. One could filter for resources in `production`
+excluding `frontend` using the comma operator: `environment=production,tier!=frontend`
One usage scenario for equality-based label requirement is for Pods to specify
node selection criteria. For example, the sample Pod below selects nodes with
@@ -134,7 +166,9 @@ spec:
### _Set-based_ requirement
-_Set-based_ label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: `in`,`notin` and `exists` (only the key identifier). For example:
+_Set-based_ label requirements allow filtering keys according to a set of values.
+Three kinds of operators are supported: `in`, `notin` and `exists` (only the key identifier).
+For example:
```
environment in (production, qa)
@@ -143,27 +177,38 @@ partition
!partition
```
-* The first example selects all resources with key equal to `environment` and value equal to `production` or `qa`.
-* The second example selects all resources with key equal to `tier` and values other than `frontend` and `backend`, and all resources with no labels with the `tier` key.
-* The third example selects all resources including a label with key `partition`; no values are checked.
-* The fourth example selects all resources without a label with key `partition`; no values are checked.
-
-Similarly the comma separator acts as an _AND_ operator. So filtering resources with a `partition` key (no matter the value) and with `environment` different than `qa` can be achieved using `partition,environment notin (qa)`.
-The _set-based_ label selector is a general form of equality since `environment=production` is equivalent to `environment in (production)`; similarly for `!=` and `notin`.
-
-_Set-based_ requirements can be mixed with _equality-based_ requirements. For example: `partition in (customerA, customerB),environment!=qa`.
-
+- The first example selects all resources with key equal to `environment` and value
+ equal to `production` or `qa`.
+- The second example selects all resources with key equal to `tier` and values other
+ than `frontend` and `backend`, and all resources with no labels with the `tier` key.
+- The third example selects all resources including a label with key `partition`;
+ no values are checked.
+- The fourth example selects all resources without a label with key `partition`;
+ no values are checked.
+
+Similarly, the comma separator acts as an _AND_ operator. So filtering resources
+with a `partition` key (no matter the value) and with `environment` different
+than `qa` can be achieved using `partition,environment notin (qa)`.
+The _set-based_ label selector is a general form of equality since
+`environment=production` is equivalent to `environment in (production)`;
+similarly for `!=` and `notin`.
+
+_Set-based_ requirements can be mixed with _equality-based_ requirements.
+For example: `partition in (customerA, customerB),environment!=qa`.
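+
+As a sketch (assuming your Pods carry `partition` and `environment` labels), such a mixed
+selector can be used directly with `kubectl`:
+
+```shell
+kubectl get pods -l 'partition in (customerA, customerB),environment!=qa'
+```
+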
## API
### LIST and WATCH filtering
-LIST and WATCH operations may specify label selectors to filter the sets of objects returned using a query parameter. Both requirements are permitted (presented here as they would appear in a URL query string):
+LIST and WATCH operations may specify label selectors to filter the sets of objects
+returned using a query parameter. Both requirements are permitted
+(presented here as they would appear in a URL query string):
- * _equality-based_ requirements: `?labelSelector=environment%3Dproduction,tier%3Dfrontend`
- * _set-based_ requirements: `?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29`
+* _equality-based_ requirements: `?labelSelector=environment%3Dproduction,tier%3Dfrontend`
+* _set-based_ requirements: `?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29`
-Both label selector styles can be used to list or watch resources via a REST client. For example, targeting `apiserver` with `kubectl` and using _equality-based_ one may write:
+Both label selector styles can be used to list or watch resources via a REST client.
+For example, targeting `apiserver` with `kubectl` and using _equality-based_ requirements, one may write:
```shell
kubectl get pods -l environment=production,tier=frontend
@@ -175,7 +220,8 @@ or using _set-based_ requirements:
kubectl get pods -l 'environment in (production),tier in (frontend)'
```
-As already mentioned _set-based_ requirements are more expressive. For instance, they can implement the _OR_ operator on values:
+As already mentioned, _set-based_ requirements are more expressive.
+For instance, they can implement the _OR_ operator on values:
```shell
kubectl get pods -l 'environment in (production, qa)'
@@ -196,15 +242,19 @@ also use label selectors to specify sets of other resources, such as
#### Service and ReplicationController
-The set of pods that a `service` targets is defined with a label selector. Similarly, the population of pods that a `replicationcontroller` should manage is also defined with a label selector.
+The set of pods that a `service` targets is defined with a label selector.
+Similarly, the population of pods that a `replicationcontroller` should
+manage is also defined with a label selector.
-Labels selectors for both objects are defined in `json` or `yaml` files using maps, and only _equality-based_ requirement selectors are supported:
+Label selectors for both objects are defined in `json` or `yaml` files using maps,
+and only _equality-based_ requirement selectors are supported:
```json
"selector": {
"component" : "redis",
}
```
+
or
```yaml
@@ -212,7 +262,8 @@ selector:
component: redis
```
-this selector (respectively in `json` or `yaml` format) is equivalent to `component=redis` or `component in (redis)`.
+This selector (respectively in `json` or `yaml` format) is equivalent to
+`component=redis` or `component in (redis)`.
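+
+For instance, embedded in a complete Service manifest (the name and port numbers are
+illustrative), that selector looks like:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: redis            # illustrative name
+spec:
+  selector:
+    component: redis     # equality-based match only
+  ports:
+    - protocol: TCP
+      port: 6379
+      targetPort: 6379
+```
+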
#### Resources that support set-based requirements
@@ -231,9 +282,25 @@ selector:
- {key: environment, operator: NotIn, values: [dev]}
```
-`matchLabels` is a map of `{key,value}` pairs. A single `{key,value}` in the `matchLabels` map is equivalent to an element of `matchExpressions`, whose `key` field is "key", the `operator` is "In", and the `values` array contains only "value". `matchExpressions` is a list of pod selector requirements. Valid operators include In, NotIn, Exists, and DoesNotExist. The values set must be non-empty in the case of In and NotIn. All of the requirements, from both `matchLabels` and `matchExpressions` are ANDed together -- they must all be satisfied in order to match.
+`matchLabels` is a map of `{key,value}` pairs. A single `{key,value}` in the
+`matchLabels` map is equivalent to an element of `matchExpressions`, whose `key`
+field is "key", the `operator` is "In", and the `values` array contains only "value".
+`matchExpressions` is a list of pod selector requirements. Valid operators include
+In, NotIn, Exists, and DoesNotExist. The values set must be non-empty in the case of
+In and NotIn. All of the requirements, from both `matchLabels` and `matchExpressions`,
+are ANDed together -- they must all be satisfied in order to match.
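+
+As an illustrative sketch (the label key and value are hypothetical), the two selector
+styles below match exactly the same set of Pods:
+
+```yaml
+# Using matchLabels
+selector:
+  matchLabels:
+    tier: frontend
+```
+
+```yaml
+# The equivalent selector written with matchExpressions
+selector:
+  matchExpressions:
+    - {key: tier, operator: In, values: [frontend]}
+```
+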
#### Selecting sets of nodes
-One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule.
-See the documentation on [node selection](/docs/concepts/scheduling-eviction/assign-pod-node/) for more information.
+One use case for selecting over labels is to constrain the set of nodes onto which
+a pod can schedule. See the documentation on
+[node selection](/docs/concepts/scheduling-eviction/assign-pod-node/) for more information.
+
+## {{% heading "whatsnext" %}}
+
+- Learn how to [add a label to a node](/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node)
+- Find [Well-known labels, Annotations and Taints](/docs/reference/labels-annotations-taints/)
+- See [Recommended labels](/docs/concepts/overview/working-with-objects/common-labels/)
+- [Enforce Pod Security Standards with Namespace Labels](/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/)
+- [Use Labels effectively](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively) to manage deployments.
+- Read a blog on [Writing a Controller for Pod Labels](/blog/2021/06/21/writing-a-controller-for-pod-labels/)
diff --git a/content/en/docs/concepts/overview/working-with-objects/namespaces.md b/content/en/docs/concepts/overview/working-with-objects/namespaces.md
index 1eb96fe4a632f..9ec8edaffda1f 100644
--- a/content/en/docs/concepts/overview/working-with-objects/namespaces.md
+++ b/content/en/docs/concepts/overview/working-with-objects/namespaces.md
@@ -147,7 +147,7 @@ kubectl api-resources --namespaced=false
## Automatic labelling
-{{< feature-state state="beta" for_k8s_version="1.21" >}}
+{{< feature-state for_k8s_version="1.22" state="stable" >}}
The Kubernetes control plane sets an immutable {{< glossary_tooltip text="label" term_id="label" >}}
`kubernetes.io/metadata.name` on all namespaces, provided that the `NamespaceDefaultLabelName`
diff --git a/content/en/docs/concepts/policy/limit-range.md b/content/en/docs/concepts/policy/limit-range.md
index a1428b0e4b32d..e1e0d0b93a0bf 100644
--- a/content/en/docs/concepts/policy/limit-range.md
+++ b/content/en/docs/concepts/policy/limit-range.md
@@ -50,7 +50,7 @@ The name of a LimitRange object must be a valid
## LimitRange and admission checks for Pods
-A `LimitRange` does **not** check the consistency of the default values it applies. This means that a default value for the _limit_ that is set by `LimitRange` may be less than the _request_ value specified for the container in the spec that a client submits to the API server. If that happens, the final Pod will not be scheduleable.
+A `LimitRange` does **not** check the consistency of the default values it applies. This means that a default value for the _limit_ that is set by `LimitRange` may be less than the _request_ value specified for the container in the spec that a client submits to the API server. If that happens, the final Pod will not be schedulable.
For example, you define a `LimitRange` with this manifest:
diff --git a/content/en/docs/concepts/policy/pid-limiting.md b/content/en/docs/concepts/policy/pid-limiting.md
index 1e03ccf375cd8..54e1b324f9d9b 100644
--- a/content/en/docs/concepts/policy/pid-limiting.md
+++ b/content/en/docs/concepts/policy/pid-limiting.md
@@ -73,13 +73,6 @@ The value you specified declares that the specified number of process IDs will
be reserved for the system as a whole and for Kubernetes system daemons
respectively.
-{{< note >}}
-Before Kubernetes version 1.20, PID resource limiting with Node-level
-reservations required enabling the [feature
-gate](/docs/reference/command-line-tools-reference/feature-gates/)
-`SupportNodePidsLimit` to work.
-{{< /note >}}
-
## Pod PID limits
Kubernetes allows you to limit the number of processes running in a Pod. You
@@ -89,12 +82,6 @@ To configure the limit, you can specify the command line parameter `--pod-max-pi
to the kubelet, or set `PodPidsLimit` in the kubelet
[configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
-{{< note >}}
-Before Kubernetes version 1.20, PID resource limiting for Pods required enabling
-the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-`SupportPodPidsLimit` to work.
-{{< /note >}}
-
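+
+As a minimal sketch of the configuration file approach (the limit value is illustrative),
+the corresponding `KubeletConfiguration` field is `podPidsLimit`:
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+podPidsLimit: 1024   # maximum number of process IDs per Pod on this node
+```
+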
## PID based eviction
You can configure kubelet to start terminating a Pod when it is misbehaving and consuming abnormal amount of resources.
diff --git a/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md b/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md
index 5a5647985d589..be9994631559e 100644
--- a/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md
+++ b/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md
@@ -26,22 +26,7 @@ criteria that Pod should be satisfied before considered schedulable. This field
only when a Pod is created (either by the client, or mutated during admission). After creation,
each schedulingGate can be removed in arbitrary order, but addition of a new scheduling gate is disallowed.
-{{< mermaid >}}
-stateDiagram-v2
- s1: pod created
- s2: pod scheduling gated
- s3: pod scheduling ready
- s4: pod running
- if: empty scheduling gates?
- [*] --> s1
- s1 --> if
- s2 --> if: scheduling gate removed
- if --> s2: no
- if --> s3: yes
- s3 --> s4
- s4 --> [*]
-{{< /mermaid >}}
-
+{{< figure src="/docs/images/podSchedulingGates.svg" alt="pod-scheduling-gates-diagram" caption="Figure. Pod SchedulingGates" class="diagram-large" link="https://mermaid.live/edit#pako:eNplkktTwyAUhf8KgzuHWpukaYszutGlK3caFxQuCVMCGSDVTKf_XfKyPlhxz4HDB9wT5lYAptgHFuBRsdKxenFMClMYFIdfUdRYgbiD6ItJTEbR8wpEq5UpUfnDTf-5cbPoJjcbXdcaE61RVJIiqJvQ_Y30D-OCt-t3tFjcR5wZayiVnIGmkv4NiEfX9jijKTmmRH5jf0sRugOP0HyHUc1m6KGMFP27cM28fwSJDluPpNKaXqVJzmFNfHD2APRKSjnNFx9KhIpmzSfhVls3eHdTRrwG8QnxKfEZUUNeYTDBNbiaKRF_5dSfX-BQQQ0FpnEqQLJWhwIX5hyXsjbYl85wTINrgeC2EZd_xFQy7b_VJ6GCdd-itkxALE84dE3fAqXyIUZya6Qqe711OspVCI2ny2Vv35QqVO3-htt66ZWomAvVcZcv8yTfsiSFfJOydZoKvl_ttjLJVlJsblcJw-czwQ0zr9ZeqGDgeR77b2jD8xdtjtDn" >}}
## Usage example
To mark a Pod not-ready for scheduling, you can create it with one or more scheduling gates like this:
diff --git a/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md
index 62098a0928f42..76855f5a5e57c 100644
--- a/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md
+++ b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md
@@ -97,7 +97,7 @@ your cluster. Those fields are:
nodes match the node selector.
{{< note >}}
- The `minDomains` field is a beta field and enabled by default in 1.25. You can disable it by disabling the
+ The `minDomains` field is a beta field and disabled by default in 1.25. You can enable it by enabling the
`MinDomainsInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
{{< /note >}}
diff --git a/content/en/docs/concepts/security/api-server-bypass-risks.md b/content/en/docs/concepts/security/api-server-bypass-risks.md
index 7c906a203172e..90435b1eeeec2 100644
--- a/content/en/docs/concepts/security/api-server-bypass-risks.md
+++ b/content/en/docs/concepts/security/api-server-bypass-risks.md
@@ -12,7 +12,8 @@ The Kubernetes API server is the main point of entry to a cluster for external p
(users and services) interacting with it.
As part of this role, the API server has several key built-in security controls, such as
-audit logging and {{< glossary_tooltip text="admission controllers" term_id="admission-controller" >}}. However, there are ways to modify the configuration
+audit logging and {{< glossary_tooltip text="admission controllers" term_id="admission-controller" >}}.
+However, there are ways to modify the configuration
or content of the cluster that bypass these controls.
This page describes the ways in which the security controls built into the
@@ -65,7 +66,8 @@ every container running on the node.
When Kubernetes cluster users have RBAC access to `Node` object sub-resources, that access
serves as authorization to interact with the kubelet API. The exact access depends on
-which sub-resource access has been granted, as detailed in [kubelet authorization](https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authorization).
+which sub-resource access has been granted, as detailed in
+[kubelet authorization](/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authorization).
Direct access to the kubelet API is not subject to admission control and is not logged
by Kubernetes audit logging. An attacker with direct access to this API may be able to
@@ -80,11 +82,12 @@ The default anonymous access doesn't make this assertion with the control plane.
### Mitigations
- Restrict access to sub-resources of the `nodes` API object using mechanisms such as
- [RBAC](/docs/reference/access-authn-authz/rbac/). Only grant this access when required,
- such as by monitoring services.
+ [RBAC](/docs/reference/access-authn-authz/rbac/). Only grant this access when required,
+ such as by monitoring services.
- Restrict access to the kubelet port. Only allow specified and trusted IP address
- ranges to access the port.
-- [Ensure that kubelet authentication is set to webhook or certificate mode](/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authentication).
+ ranges to access the port.
+- Ensure that [kubelet authentication](/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authentication)
+  is set to webhook or certificate mode (see the configuration sketch after this list).
- Ensure that the unauthenticated "read-only" Kubelet port is not enabled on the cluster.
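+
+As a minimal sketch (the exact settings depend on your cluster), switching the kubelet to
+webhook authentication and authorization can be done in the kubelet configuration file:
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+authentication:
+  anonymous:
+    enabled: false   # reject unauthenticated requests to the kubelet API
+  webhook:
+    enabled: true    # delegate authentication to the API server
+authorization:
+  mode: Webhook      # delegate authorization decisions to the API server
+```
+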
## The etcd API
diff --git a/content/en/docs/concepts/security/multi-tenancy.md b/content/en/docs/concepts/security/multi-tenancy.md
index 8393b3a0f2d29..49355d08a6ac4 100755
--- a/content/en/docs/concepts/security/multi-tenancy.md
+++ b/content/en/docs/concepts/security/multi-tenancy.md
@@ -44,7 +44,7 @@ share clusters.
The other major form of multi-tenancy frequently involves a Software-as-a-Service (SaaS) vendor
running multiple instances of a workload for customers. This business model is so strongly
associated with this deployment style that many people call it "SaaS tenancy." However, a better
-term might be "multi-customer tenancy,” since SaaS vendors may also use other deployment models,
+term might be "multi-customer tenancy," since SaaS vendors may also use other deployment models,
and this deployment model can also be used outside of SaaS.
In this scenario, the customers do not have access to the cluster; Kubernetes is invisible from
diff --git a/content/en/docs/concepts/security/rbac-good-practices.md b/content/en/docs/concepts/security/rbac-good-practices.md
index 8b883bba9a3db..b6abde0d7494e 100644
--- a/content/en/docs/concepts/security/rbac-good-practices.md
+++ b/content/en/docs/concepts/security/rbac-good-practices.md
@@ -121,8 +121,20 @@ considered weak.
### Persistent volume creation
-As noted in the [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/#volumes-and-file-systems)
-documentation, access to create PersistentVolumes can allow for escalation of access to the underlying host.
+If someone - or some application - is allowed to create arbitrary PersistentVolumes, that access
+includes the creation of `hostPath` volumes, which then means that a Pod would get access
+to the underlying host filesystem(s) on the associated node. Granting that ability is a security risk.
+
+There are many ways a container with unrestricted access to the host filesystem can escalate privileges, including
+reading data from other containers, and abusing the credentials of system services, such as Kubelet.
+
+You should only allow access to create PersistentVolume objects for:
+
+- users (cluster operators) that need this access for their work, and who you trust,
+- the Kubernetes control plane components which create PersistentVolumes based on PersistentVolumeClaims
+  that are configured for automatic provisioning.
+  This is usually set up by the Kubernetes provider or by the operator when installing a CSI driver.
+
Where access to persistent storage is required trusted administrators should create
PersistentVolumes, and constrained users should use PersistentVolumeClaims to access that storage.
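+
+As a sketch of that pattern (the name and size are illustrative), a constrained user would
+request storage with a PersistentVolumeClaim instead of creating a PersistentVolume directly:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: app-data        # illustrative name
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1Gi      # a provisioner or administrator supplies the matching PersistentVolume
+```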
diff --git a/content/en/docs/concepts/services-networking/ingress-controllers.md b/content/en/docs/concepts/services-networking/ingress-controllers.md
index 85680c91458c4..d67e1e3e98e9b 100644
--- a/content/en/docs/concepts/services-networking/ingress-controllers.md
+++ b/content/en/docs/concepts/services-networking/ingress-controllers.md
@@ -60,6 +60,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
* [Tyk Operator](https://github.com/TykTechnologies/tyk-operator) extends Ingress with Custom Resources to bring API Management capabilities to Ingress. Tyk Operator works with the Open Source Tyk Gateway & Tyk Cloud control plane.
* [Voyager](https://appscode.com/products/voyager) is an ingress controller for
[HAProxy](https://www.haproxy.org/#desc).
+* [Wallarm Ingress Controller](https://www.wallarm.com/solutions/waf-for-kubernetes) is an Ingress Controller that provides WAAP (WAF) and API Security capabilities.
## Using multiple Ingress controllers
diff --git a/content/en/docs/concepts/services-networking/network-policies.md b/content/en/docs/concepts/services-networking/network-policies.md
index 56d3566c2ecbf..a1797f20b877f 100644
--- a/content/en/docs/concepts/services-networking/network-policies.md
+++ b/content/en/docs/concepts/services-networking/network-policies.md
@@ -11,88 +11,144 @@ description: >-
NetworkPolicies allow you to specify rules for traffic flow within your cluster, and
also between Pods and the outside world.
Your cluster must use a network plugin that supports NetworkPolicy enforcement.
+
---
-If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster. NetworkPolicies are an application-centric construct which allow you to specify how a {{< glossary_tooltip text="pod" term_id="pod">}} is allowed to communicate with various network "entities" (we use the word "entity" here to avoid overloading the more common terms such as "endpoints" and "services", which have specific Kubernetes connotations) over the network. NetworkPolicies apply to a connection with a pod on one or both ends, and are not relevant to other connections.
+If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you
+might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
+NetworkPolicies are an application-centric construct which allow you to specify how a {{<
+glossary_tooltip text="pod" term_id="pod">}} is allowed to communicate with various network
+"entities" (we use the word "entity" here to avoid overloading the more common terms such as
+"endpoints" and "services", which have specific Kubernetes connotations) over the network.
+NetworkPolicies apply to a connection with a pod on one or both ends, and are not relevant to
+other connections.
-The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers:
+The entities that a Pod can communicate with are identified through a combination of the following
+3 identifiers:
1. Other pods that are allowed (exception: a pod cannot block access to itself)
2. Namespaces that are allowed
-3. IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)
+3. IP blocks (exception: traffic to and from the node where a Pod is running is always allowed,
+ regardless of the IP address of the Pod or the node)
-When defining a pod- or namespace- based NetworkPolicy, you use a {{< glossary_tooltip text="selector" term_id="selector">}} to specify what traffic is allowed to and from the Pod(s) that match the selector.
+When defining a pod- or namespace-based NetworkPolicy, you use a
+{{< glossary_tooltip text="selector" term_id="selector">}} to specify what traffic is allowed to
+and from the Pod(s) that match the selector.
Meanwhile, when IP based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).
## Prerequisites
-Network policies are implemented by the [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect.
+Network policies are implemented by the [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/).
+To use network policies, you must be using a networking solution which supports NetworkPolicy.
+Creating a NetworkPolicy resource without a controller that implements it will have no effect.
## The Two Sorts of Pod Isolation
-There are two sorts of isolation for a pod: isolation for egress, and isolation for ingress. They concern what connections may be established. "Isolation" here is not absolute, rather it means "some restrictions apply". The alternative, "non-isolated for $direction", means that no restrictions apply in the stated direction. The two sorts of isolation (or not) are declared independently, and are both relevant for a connection from one pod to another.
-
-By default, a pod is non-isolated for egress; all outbound connections are allowed. A pod is isolated for egress if there is any NetworkPolicy that both selects the pod and has "Egress" in its `policyTypes`; we say that such a policy applies to the pod for egress. When a pod is isolated for egress, the only allowed connections from the pod are those allowed by the `egress` list of some NetworkPolicy that applies to the pod for egress. The effects of those `egress` lists combine additively.
-
-By default, a pod is non-isolated for ingress; all inbound connections are allowed. A pod is isolated for ingress if there is any NetworkPolicy that both selects the pod and has "Ingress" in its `policyTypes`; we say that such a policy applies to the pod for ingress. When a pod is isolated for ingress, the only allowed connections into the pod are those from the pod's node and those allowed by the `ingress` list of some NetworkPolicy that applies to the pod for ingress. The effects of those `ingress` lists combine additively.
-
-Network policies do not conflict; they are additive. If any policy or policies apply to a given pod for a given direction, the connections allowed in that direction from that pod is the union of what the applicable policies allow. Thus, order of evaluation does not affect the policy result.
-
-For a connection from a source pod to a destination pod to be allowed, both the egress policy on the source pod and the ingress policy on the destination pod need to allow the connection. If either side does not allow the connection, it will not happen.
+There are two sorts of isolation for a pod: isolation for egress, and isolation for ingress.
+They concern what connections may be established. "Isolation" here is not absolute, rather it
+means "some restrictions apply". The alternative, "non-isolated for $direction", means that no
+restrictions apply in the stated direction. The two sorts of isolation (or not) are declared
+independently, and are both relevant for a connection from one pod to another.
+
+By default, a pod is non-isolated for egress; all outbound connections are allowed.
+A pod is isolated for egress if there is any NetworkPolicy that both selects the pod and has
+"Egress" in its `policyTypes`; we say that such a policy applies to the pod for egress.
+When a pod is isolated for egress, the only allowed connections from the pod are those allowed by
+the `egress` list of some NetworkPolicy that applies to the pod for egress.
+The effects of those `egress` lists combine additively.
+
+By default, a pod is non-isolated for ingress; all inbound connections are allowed.
+A pod is isolated for ingress if there is any NetworkPolicy that both selects the pod and
+has "Ingress" in its `policyTypes`; we say that such a policy applies to the pod for ingress.
+When a pod is isolated for ingress, the only allowed connections into the pod are those from
+the pod's node and those allowed by the `ingress` list of some NetworkPolicy that applies to
+the pod for ingress. The effects of those `ingress` lists combine additively.
+
+Network policies do not conflict; they are additive. If any policy or policies apply to a given
+pod for a given direction, the connections allowed in that direction from that pod are the union of
+what the applicable policies allow. Thus, order of evaluation does not affect the policy result.
+
+For a connection from a source pod to a destination pod to be allowed, both the egress policy on
+the source pod and the ingress policy on the destination pod need to allow the connection. If
+either side does not allow the connection, it will not happen.
## The NetworkPolicy resource {#networkpolicy-resource}
-See the [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#networkpolicy-v1-networking-k8s-io) reference for a full definition of the resource.
+See the [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#networkpolicy-v1-networking-k8s-io)
+reference for a full definition of the resource.
An example NetworkPolicy might look like this:
{{< codenew file="service/networking/networkpolicy.yaml" >}}
{{< note >}}
-POSTing this to the API server for your cluster will have no effect unless your chosen networking solution supports network policy.
+POSTing this to the API server for your cluster will have no effect unless your chosen networking
+solution supports network policy.
{{< /note >}}
-__Mandatory Fields__: As with all other Kubernetes config, a NetworkPolicy
-needs `apiVersion`, `kind`, and `metadata` fields. For general information
-about working with config files, see
+__Mandatory Fields__: As with all other Kubernetes config, a NetworkPolicy needs `apiVersion`,
+`kind`, and `metadata` fields. For general information about working with config files, see
[Configure a Pod to Use a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/),
and [Object Management](/docs/concepts/overview/working-with-objects/object-management).
-__spec__: NetworkPolicy [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) has all the information needed to define a particular network policy in the given namespace.
+**spec**: NetworkPolicy [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)
+has all the information needed to define a particular network policy in the given namespace.
-__podSelector__: Each NetworkPolicy includes a `podSelector` which selects the grouping of pods to which the policy applies. The example policy selects pods with the label "role=db". An empty `podSelector` selects all pods in the namespace.
+**podSelector**: Each NetworkPolicy includes a `podSelector` which selects the grouping of pods to
+which the policy applies. The example policy selects pods with the label "role=db". An empty
+`podSelector` selects all pods in the namespace.
-__policyTypes__: Each NetworkPolicy includes a `policyTypes` list which may include either `Ingress`, `Egress`, or both. The `policyTypes` field indicates whether or not the given policy applies to ingress traffic to selected pod, egress traffic from selected pods, or both. If no `policyTypes` are specified on a NetworkPolicy then by default `Ingress` will always be set and `Egress` will be set if the NetworkPolicy has any egress rules.
+**policyTypes**: Each NetworkPolicy includes a `policyTypes` list which may include either
+`Ingress`, `Egress`, or both. The `policyTypes` field indicates whether or not the given policy
+applies to ingress traffic to selected pods, egress traffic from selected pods, or both. If no
+`policyTypes` are specified on a NetworkPolicy then by default `Ingress` will always be set and
+`Egress` will be set if the NetworkPolicy has any egress rules.
-__ingress__: Each NetworkPolicy may include a list of allowed `ingress` rules. Each rule allows traffic which matches both the `from` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port, from one of three sources, the first specified via an `ipBlock`, the second via a `namespaceSelector` and the third via a `podSelector`.
+**ingress**: Each NetworkPolicy may include a list of allowed `ingress` rules. Each rule allows
+traffic which matches both the `from` and `ports` sections. The example policy contains a single
+rule, which matches traffic on a single port, from one of three sources, the first specified via
+an `ipBlock`, the second via a `namespaceSelector` and the third via a `podSelector`.
-__egress__: Each NetworkPolicy may include a list of allowed `egress` rules. Each rule allows traffic which matches both the `to` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port to any destination in `10.0.0.0/24`.
+**egress**: Each NetworkPolicy may include a list of allowed `egress` rules. Each rule allows
+traffic which matches both the `to` and `ports` sections. The example policy contains a single
+rule, which matches traffic on a single port to any destination in `10.0.0.0/24`.
So, the example NetworkPolicy:
-1. isolates "role=db" pods in the "default" namespace for both ingress and egress traffic (if they weren't already isolated)
-2. (Ingress rules) allows connections to all pods in the "default" namespace with the label "role=db" on TCP port 6379 from:
+1. isolates `role=db` pods in the `default` namespace for both ingress and egress traffic
+ (if they weren't already isolated)
+1. (Ingress rules) allows connections to all pods in the `default` namespace with the label
+ `role=db` on TCP port 6379 from:
+
+ * any pod in the `default` namespace with the label `role=frontend`
+ * any pod in a namespace with the label `project=myproject`
+ * IP addresses in the ranges `172.17.0.0`–`172.17.0.255` and `172.17.2.0`–`172.17.255.255`
+    (i.e., all of `172.17.0.0/16` except `172.17.1.0/24`)
- * any pod in the "default" namespace with the label "role=frontend"
- * any pod in a namespace with the label "project=myproject"
- * IP addresses in the ranges 172.17.0.0–172.17.0.255 and 172.17.2.0–172.17.255.255 (ie, all of 172.17.0.0/16 except 172.17.1.0/24)
-3. (Egress rules) allows connections from any pod in the "default" namespace with the label "role=db" to CIDR 10.0.0.0/24 on TCP port 5978
+1. (Egress rules) allows connections from any pod in the `default` namespace with the label
+ `role=db` to CIDR `10.0.0.0/24` on TCP port 5978
-See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) walkthrough for further examples.
+See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/)
+walkthrough for further examples.
## Behavior of `to` and `from` selectors
-There are four kinds of selectors that can be specified in an `ingress` `from` section or `egress` `to` section:
+There are four kinds of selectors that can be specified in an `ingress` `from` section or `egress`
+`to` section:
-__podSelector__: This selects particular Pods in the same namespace as the NetworkPolicy which should be allowed as ingress sources or egress destinations.
+**podSelector**: This selects particular Pods in the same namespace as the NetworkPolicy which
+should be allowed as ingress sources or egress destinations.
-__namespaceSelector__: This selects particular namespaces for which all Pods should be allowed as ingress sources or egress destinations.
+**namespaceSelector**: This selects particular namespaces for which all Pods should be allowed as
+ingress sources or egress destinations.
-__namespaceSelector__ *and* __podSelector__: A single `to`/`from` entry that specifies both `namespaceSelector` and `podSelector` selects particular Pods within particular namespaces. Be careful to use correct YAML syntax; this policy:
+**namespaceSelector** *and* **podSelector**: A single `to`/`from` entry that specifies both
+`namespaceSelector` and `podSelector` selects particular Pods within particular namespaces. Be
+careful to use correct YAML syntax. For example:
```yaml
...
@@ -107,7 +163,8 @@ __namespaceSelector__ *and* __podSelector__: A single `to`/`from` entry that spe
...
```
-contains a single `from` element allowing connections from Pods with the label `role=client` in namespaces with the label `user=alice`. But *this* policy:
+This policy contains a single `from` element allowing connections from Pods with the label
+`role=client` in namespaces with the label `user=alice`. But the following policy is different:
```yaml
...
@@ -122,12 +179,15 @@ contains a single `from` element allowing connections from Pods with the label `
...
```
-contains two elements in the `from` array, and allows connections from Pods in the local Namespace with the label `role=client`, *or* from any Pod in any namespace with the label `user=alice`.
+It contains two elements in the `from` array, and allows connections from Pods in the local
+Namespace with the label `role=client`, *or* from any Pod in any namespace with the label
+`user=alice`.
When in doubt, use `kubectl describe` to see how Kubernetes has interpreted the policy.
-__ipBlock__: This selects particular IP CIDR ranges to allow as ingress sources or egress destinations. These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable.
+**ipBlock**: This selects particular IP CIDR ranges to allow as ingress sources or egress
+destinations. These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable.
Cluster ingress and egress mechanisms often require rewriting the source or destination IP
of packets. In cases where this happens, it is not defined whether this happens before or
@@ -143,59 +203,73 @@ cluster-external IPs may or may not be subject to `ipBlock`-based policies.
## Default policies
-By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace. The following examples let you change the default behavior
+By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to
+and from pods in that namespace. The following examples let you change the default behavior
in that namespace.
### Default deny all ingress traffic
-You can create a "default" ingress isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods.
+You can create a "default" ingress isolation policy for a namespace by creating a NetworkPolicy
+that selects all pods but does not allow any ingress traffic to those pods.
{{< codenew file="service/networking/network-policy-default-deny-ingress.yaml" >}}
-This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.
+This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated
+for ingress. This policy does not affect isolation for egress from any pod.
### Allow all ingress traffic
-If you want to allow all incoming connections to all pods in a namespace, you can create a policy that explicitly allows that.
+If you want to allow all incoming connections to all pods in a namespace, you can create a policy
+that explicitly allows that.
{{< codenew file="service/networking/network-policy-allow-all-ingress.yaml" >}}
-With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.
+With this policy in place, no additional policy or policies can cause any incoming connection to
+those pods to be denied. This policy has no effect on isolation for egress from any pod.
### Default deny all egress traffic
-You can create a "default" egress isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any egress traffic from those pods.
+You can create a "default" egress isolation policy for a namespace by creating a NetworkPolicy
+that selects all pods but does not allow any egress traffic from those pods.
{{< codenew file="service/networking/network-policy-default-deny-egress.yaml" >}}
-This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not
-change the ingress isolation behavior of any pod.
+This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed
+egress traffic. This policy does not change the ingress isolation behavior of any pod.
### Allow all egress traffic
-If you want to allow all connections from all pods in a namespace, you can create a policy that explicitly allows all outgoing connections from pods in that namespace.
+If you want to allow all connections from all pods in a namespace, you can create a policy that
+explicitly allows all outgoing connections from pods in that namespace.
{{< codenew file="service/networking/network-policy-allow-all-egress.yaml" >}}
-With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.
+With this policy in place, no additional policy or policies can cause any outgoing connection from
+those pods to be denied. This policy has no effect on isolation for ingress to any pod.
### Default deny all ingress and all egress traffic
-You can create a "default" policy for a namespace which prevents all ingress AND egress traffic by creating the following NetworkPolicy in that namespace.
+You can create a "default" policy for a namespace which prevents all ingress AND egress traffic by
+creating the following NetworkPolicy in that namespace.
{{< codenew file="service/networking/network-policy-default-deny-all.yaml" >}}
-This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.
+This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed
+ingress or egress traffic.
## SCTP support
{{< feature-state for_k8s_version="v1.20" state="stable" >}}
-As a stable feature, this is enabled by default. To disable SCTP at a cluster level, you (or your cluster administrator) will need to disable the `SCTPSupport` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the API server with `--feature-gates=SCTPSupport=false,…`.
+As a stable feature, this is enabled by default. To disable SCTP at a cluster level, you (or your
+cluster administrator) will need to disable the `SCTPSupport`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+for the API server with `--feature-gates=SCTPSupport=false,…`.
When the feature gate is enabled, you can set the `protocol` field of a NetworkPolicy to `SCTP`.
{{< note >}}
-You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin that supports SCTP protocol NetworkPolicies.
+You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin that supports SCTP
+protocol NetworkPolicies.
{{< /note >}}
## Targeting a range of ports
@@ -206,33 +280,14 @@ When writing a NetworkPolicy, you can target a range of ports instead of a singl
This is achievable with the usage of the `endPort` field, as the following example:
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
- name: multi-port-egress
- namespace: default
-spec:
- podSelector:
- matchLabels:
- role: db
- policyTypes:
- - Egress
- egress:
- - to:
- - ipBlock:
- cidr: 10.0.0.0/24
- ports:
- - protocol: TCP
- port: 32000
- endPort: 32768
-```
+{{< codenew file="service/networking/networkpolicy-multiport-egress.yaml" >}}
The above rule allows any Pod with label `role=db` on the namespace `default` to communicate
with any IP within the range `10.0.0.0/24` over TCP, provided that the target
port is between the range 32000 and 32768.
The following restrictions apply when using this field:
+
* The `endPort` field must be equal to or greater than the `port` field.
* `endPort` can only be defined if `port` is also defined.
* Both ports must be numeric.
@@ -259,22 +314,34 @@ standardized label to target a specific namespace.
## What you can't do with network policies (at least, not yet)
-As of Kubernetes {{< skew currentVersion >}}, the following functionality does not exist in the NetworkPolicy API, but you might be able to implement workarounds using Operating System components (such as SELinux, OpenVSwitch, IPTables, and so on) or Layer 7 technologies (Ingress controllers, Service Mesh implementations) or admission controllers. In case you are new to network security in Kubernetes, its worth noting that the following User Stories cannot (yet) be implemented using the NetworkPolicy API.
+As of Kubernetes {{< skew currentVersion >}}, the following functionality does not exist in the
+NetworkPolicy API, but you might be able to implement workarounds using Operating System
+components (such as SELinux, OpenVSwitch, IPTables, and so on) or Layer 7 technologies (Ingress
+controllers, Service Mesh implementations) or admission controllers. In case you are new to
+network security in Kubernetes, it's worth noting that the following User Stories cannot (yet) be
+implemented using the NetworkPolicy API.
-- Forcing internal cluster traffic to go through a common gateway (this might be best served with a service mesh or other proxy).
+- Forcing internal cluster traffic to go through a common gateway (this might be best served with
+ a service mesh or other proxy).
- Anything TLS related (use a service mesh or ingress controller for this).
-- Node specific policies (you can use CIDR notation for these, but you cannot target nodes by their Kubernetes identities specifically).
-- Targeting of services by name (you can, however, target pods or namespaces by their {{< glossary_tooltip text="labels" term_id="label" >}}, which is often a viable workaround).
+- Node specific policies (you can use CIDR notation for these, but you cannot target nodes by
+ their Kubernetes identities specifically).
+- Targeting of services by name (you can, however, target pods or namespaces by their
+ {{< glossary_tooltip text="labels" term_id="label" >}}, which is often a viable workaround).
- Creation or management of "Policy requests" that are fulfilled by a third party.
-- Default policies which are applied to all namespaces or pods (there are some third party Kubernetes distributions and projects which can do this).
+- Default policies which are applied to all namespaces or pods (there are some third party
+ Kubernetes distributions and projects which can do this).
- Advanced policy querying and reachability tooling.
- The ability to log network security events (for example connections that are blocked or accepted).
-- The ability to explicitly deny policies (currently the model for NetworkPolicies are deny by default, with only the ability to add allow rules).
-- The ability to prevent loopback or incoming host traffic (Pods cannot currently block localhost access, nor do they have the ability to block access from their resident node).
+- The ability to explicitly deny policies (currently the model for NetworkPolicies is deny by
+ default, with only the ability to add allow rules).
+- The ability to prevent loopback or incoming host traffic (Pods cannot currently block localhost
+ access, nor do they have the ability to block access from their resident node).
## {{% heading "whatsnext" %}}
-
- See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/)
walkthrough for further examples.
-- See more [recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common scenarios enabled by the NetworkPolicy resource.
+- See more [recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common
+ scenarios enabled by the NetworkPolicy resource.
+
diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md
index c98d344b8dedd..b761e056018da 100644
--- a/content/en/docs/concepts/services-networking/service.md
+++ b/content/en/docs/concepts/services-networking/service.md
@@ -193,7 +193,7 @@ spec:
```
Because this Service has no selector, the corresponding EndpointSlice (and
-legacy Endpoints) objects are not created automatically. You can manually map the Service
+legacy Endpoints) objects are not created automatically. You can map the Service
to the network address and port where it's running, by adding an EndpointSlice
object manually. For example:
@@ -255,6 +255,13 @@ Accessing a Service without a selector works the same as if it had a selector.
In the [example](#services-without-selectors) for a Service without a selector, traffic is routed to one of the two endpoints defined in
the EndpointSlice manifest: a TCP connection to 10.1.2.3 or 10.4.5.6, on port 9376.
+{{< note >}}
+The Kubernetes API server does not allow proxying to endpoints that are not mapped to
+pods. Actions such as `kubectl proxy` where the Service has no
+selector will fail due to this constraint. This prevents the Kubernetes API server
+from being used as a proxy to endpoints the caller may not be authorized to access.
+{{< /note >}}
+
An ExternalName Service is a special case of Service that does not have
selectors and uses DNS names instead. For more information, see the
[ExternalName](#externalname) section later in this document.
@@ -476,6 +483,8 @@ Kubernetes `ServiceTypes` allow you to specify what kind of Service you want.
* `ClusterIP`: Exposes the Service on a cluster-internal IP. Choosing this value
makes the Service only reachable from within the cluster. This is the
default that is used if you don't explicitly specify a `type` for a Service.
+  You can expose the Service to the public with an
+  [Ingress](/docs/reference/kubernetes-api/service-resources/ingress-v1/) or the
+  [Gateway API](https://gateway-api.sigs.k8s.io/).
* [`NodePort`](#type-nodeport): Exposes the Service on each Node's IP at a static port
(the `NodePort`).
To make the node port available, Kubernetes sets up a cluster IP address,
@@ -1071,42 +1080,6 @@ in those modified security groups.
Further documentation on annotations for Elastic IPs and other common use-cases may be found
in the [AWS Load Balancer Controller documentation](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/).
-#### Other CLB annotations on Tencent Kubernetes Engine (TKE)
-
-There are other annotations for managing Cloud Load Balancers on TKE as shown below.
-
-```yaml
- metadata:
- name: my-service
- annotations:
- # Bind Loadbalancers with specified nodes
- service.kubernetes.io/qcloud-loadbalancer-backends-label: key in (value1, value2)
-
- # ID of an existing load balancer
- service.kubernetes.io/tke-existed-lbid:lb-6swtxxxx
-
- # Custom parameters for the load balancer (LB), does not support modification of LB type yet
- service.kubernetes.io/service.extensiveParameters: ""
-
- # Custom parameters for the LB listener
- service.kubernetes.io/service.listenerParameters: ""
-
- # Specifies the type of Load balancer;
- # valid values: classic (Classic Cloud Load Balancer) or application (Application Cloud Load Balancer)
- service.kubernetes.io/loadbalance-type: xxxxx
-
- # Specifies the public network bandwidth billing method;
- # valid values: TRAFFIC_POSTPAID_BY_HOUR(bill-by-traffic) and BANDWIDTH_POSTPAID_BY_HOUR (bill-by-bandwidth).
- service.kubernetes.io/qcloud-loadbalancer-internet-charge-type: xxxxxx
-
- # Specifies the bandwidth value (value range: [1,2000] Mbps).
- service.kubernetes.io/qcloud-loadbalancer-internet-max-bandwidth-out: "10"
-
- # When this annotation is set,the loadbalancers will only register nodes
- # with pod running on it, otherwise all nodes will be registered.
- service.kubernetes.io/local-svc-only-bind-node-with-pod: true
-```
-
### Type ExternalName {#externalname}
Services of type ExternalName map a Service to a DNS name, not to a typical selector such as
diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md
index 4b265eceb7820..9ae9febe3bb83 100644
--- a/content/en/docs/concepts/storage/volumes.md
+++ b/content/en/docs/concepts/storage/volumes.md
@@ -388,7 +388,8 @@ You must configure FC SAN Zoning to allocate and mask those LUNs (volumes) to th
beforehand so that Kubernetes hosts can access them.
{{< /note >}}
-See the [fibre channel example](https://github.com/kubernetes/examples/tree/master/staging/volumes/fibre_channel) for more details.
+See the [fibre channel example](https://github.com/kubernetes/examples/tree/master/staging/volumes/fibre_channel)
+for more details.
### gcePersistentDisk (deprecated) {#gcepersistentdisk}
@@ -515,7 +516,9 @@ and the kubelet, set the `InTreePluginGCEUnregister` flag to `true`.
### gitRepo (deprecated) {#gitrepo}
{{< warning >}}
-The `gitRepo` volume type is deprecated. To provision a container with a git repo, mount an [EmptyDir](#emptydir) into an InitContainer that clones the repo using git, then mount the [EmptyDir](#emptydir) into the Pod's container.
+The `gitRepo` volume type is deprecated. To provision a container with a git repo, mount an
+[EmptyDir](#emptydir) into an InitContainer that clones the repo using git, then mount the
+[EmptyDir](#emptydir) into the Pod's container.
{{< /warning >}}
A `gitRepo` volume is an example of a volume plugin. This plugin
@@ -546,7 +549,7 @@ spec:
--
+
Kubernetes {{< skew currentVersion >}} does not include a `glusterfs` volume type.
The GlusterFS in-tree storage driver was deprecated in the Kubernetes v1.25 release
@@ -785,10 +788,13 @@ spec:
{{< note >}}
You must have your own NFS server running with the share exported before you can use it.
-Also note that you can't specify NFS mount options in a Pod spec. You can either set mount options server-side or use [/etc/nfsmount.conf](https://man7.org/linux/man-pages/man5/nfsmount.conf.5.html). You can also mount NFS volumes via PersistentVolumes which do allow you to set mount options.
+Also note that you can't specify NFS mount options in a Pod spec. You can either set mount options server-side or
+use [/etc/nfsmount.conf](https://man7.org/linux/man-pages/man5/nfsmount.conf.5.html).
+You can also mount NFS volumes via PersistentVolumes which do allow you to set mount options.
{{< /note >}}
-See the [NFS example](https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs) for an example of mounting NFS volumes with PersistentVolumes.
+See the [NFS example](https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs)
+for an example of mounting NFS volumes with PersistentVolumes.
### persistentVolumeClaim {#persistentvolumeclaim}
@@ -1163,7 +1169,7 @@ persistent volume:
volume expansion, the kubelet passes that data via the `NodeExpandVolume()`
call to the CSI driver. In order to use the `nodeExpandSecretRef` field, your
cluster should be running Kubernetes version 1.25 or later and you must enable
- the [feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/)
+ the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
named `CSINodeExpandSecret` for each kube-apiserver and for the kubelet on every
node. You must also be using a CSI driver that supports or requires secret data during
node-initiated storage resize operations.
diff --git a/content/en/docs/concepts/windows/intro.md b/content/en/docs/concepts/windows/intro.md
index 2d0fa8d0a7856..e054688f4d52c 100644
--- a/content/en/docs/concepts/windows/intro.md
+++ b/content/en/docs/concepts/windows/intro.md
@@ -382,8 +382,6 @@ troubleshooting ideas prior to creating a ticket.
The kubeadm tool helps you to deploy a Kubernetes cluster, providing the control
plane to manage the cluster, and nodes to run your workloads.
-[Adding Windows nodes](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/)
-explains how to deploy Windows nodes to your cluster using kubeadm.
The Kubernetes [cluster API](https://cluster-api.sigs.k8s.io/) project also provides means to automate deployment of Windows nodes.
diff --git a/content/en/docs/concepts/windows/user-guide.md b/content/en/docs/concepts/windows/user-guide.md
index ab648e9b6ff68..df3306f01ab4d 100644
--- a/content/en/docs/concepts/windows/user-guide.md
+++ b/content/en/docs/concepts/windows/user-guide.md
@@ -22,12 +22,11 @@ This guide walks you through the steps to configure and deploy Windows container
## Before you begin
-* Create a Kubernetes cluster that includes a
-control plane and a [worker node running Windows Server](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/)
+* Create a Kubernetes cluster that includes a control plane and a worker node running Windows Server
* It is important to note that creating and deploying services and workloads on Kubernetes
-behaves in much the same way for Linux and Windows containers.
-[Kubectl commands](/docs/reference/kubectl/) to interface with the cluster are identical.
-The example in the section below is provided to jumpstart your experience with Windows containers.
+ behaves in much the same way for Linux and Windows containers.
+ [Kubectl commands](/docs/reference/kubectl/) to interface with the cluster are identical.
+ The example in the section below is provided to jumpstart your experience with Windows containers.
## Getting Started: Deploying a Windows container
diff --git a/content/en/docs/concepts/workloads/controllers/cron-jobs.md b/content/en/docs/concepts/workloads/controllers/cron-jobs.md
index d2795e0efbb32..dd327758e332b 100644
--- a/content/en/docs/concepts/workloads/controllers/cron-jobs.md
+++ b/content/en/docs/concepts/workloads/controllers/cron-jobs.md
@@ -14,44 +14,27 @@ weight: 80
A _CronJob_ creates {{< glossary_tooltip term_id="job" text="Jobs" >}} on a repeating schedule.
-One CronJob object is like one line of a _crontab_ (cron table) file. It runs a job periodically
-on a given schedule, written in [Cron](https://en.wikipedia.org/wiki/Cron) format.
-
-{{< caution >}}
-All **CronJob** `schedule:` times are based on the timezone of the
-{{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}.
-
-If your control plane runs the kube-controller-manager in Pods or bare
-containers, the timezone set for the kube-controller-manager container determines the timezone
-that the cron job controller uses.
-{{< /caution >}}
-
-{{< caution >}}
-The [v1 CronJob API](/docs/reference/kubernetes-api/workload-resources/cron-job-v1/)
-does not officially support setting timezone as explained above.
-
-Setting variables such as `CRON_TZ` or `TZ` is not officially supported by the Kubernetes project.
-`CRON_TZ` or `TZ` is an implementation detail of the internal library being used
-for parsing and calculating the next Job creation time. Any usage of it is not
-recommended in a production cluster.
-{{< /caution >}}
-
-When creating the manifest for a CronJob resource, make sure the name you provide
-is a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
-The name must be no longer than 52 characters. This is because the CronJob controller will automatically
-append 11 characters to the job name provided and there is a constraint that the
-maximum length of a Job name is no more than 63 characters.
+CronJob is meant for performing regular scheduled actions such as backups, report generation,
+and so on. One CronJob object is like one line of a _crontab_ (cron table) file on a
+Unix system. It runs a job periodically on a given schedule, written in
+[Cron](https://en.wikipedia.org/wiki/Cron) format.
+
+CronJobs have limitations and idiosyncrasies.
+For example, in certain circumstances, a single CronJob can create multiple concurrent Jobs. See the [limitations](#cron-job-limitations) below.
+
+When the control plane creates new Jobs and (indirectly) Pods for a CronJob, the `.metadata.name`
+of the CronJob is part of the basis for naming those Pods. The name of a CronJob must be a valid
+[DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)
+value, but this can produce unexpected results for the Pod hostnames. For best compatibility,
+the name should follow the more restrictive rules for a
+[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names).
+Even when the name is a DNS subdomain, the name must be no longer than 52
+characters. This is because the CronJob controller will automatically append
+11 characters to the name you provide and there is a constraint that the
+length of a Job name is no more than 63 characters.
-
-## CronJob
-
-CronJobs are meant for performing regular scheduled actions such as backups,
-report generation, and so on. Each of those tasks should be configured to recur
-indefinitely (for example: once a day / week / month); you can define the point
-in time within that interval when the job should start.
-
-### Example
+## Example
This example CronJob manifest prints the current time and a hello message every minute:
@@ -60,7 +43,9 @@ This example CronJob manifest prints the current time and a hello message every
([Running Automated Tasks with a CronJob](/docs/tasks/job/automated-tasks-with-cron-jobs/)
takes you through this example in more detail).
-### Cron schedule syntax
+## Writing a CronJob spec
+### Schedule syntax
+The `.spec.schedule` field is required. The value of that field follows the [Cron](https://en.wikipedia.org/wiki/Cron) syntax:
```
# ┌───────────── minute (0 - 59)
@@ -74,6 +59,24 @@ takes you through this example in more detail).
# * * * * *
```
+For example, `0 0 13 * 5` states that the task must be started every Friday at midnight, as well as on the 13th of each month at midnight.
+
+The format also includes extended "Vixie cron" step values. As explained in the
+[FreeBSD manual](https://www.freebsd.org/cgi/man.cgi?crontab%285%29):
+
+> Step values can be used in conjunction with ranges. Following a range
+> with `/` specifies skips of the number's value through the
+> range. For example, `0-23/2` can be used in the hours field to specify
+> command execution every other hour (the alternative in the V7 standard is
+> `0,2,4,6,8,10,12,14,16,18,20,22`). Steps are also permitted after an
+> asterisk, so if you want to say "every two hours", just use `*/2`.
+
+{{< note >}}
+A question mark (`?`) in the schedule has the same meaning as an asterisk `*`, that is,
+it stands for any available value in a given field.
+{{< /note >}}
+
+Other than the standard syntax, some macros like `@monthly` can also be used:
| Entry | Description | Equivalent to |
| ------------- | ------------- |------------- |
@@ -83,17 +86,83 @@ takes you through this example in more detail).
| @daily (or @midnight) | Run once a day at midnight | 0 0 * * * |
| @hourly | Run once an hour at the beginning of the hour | 0 * * * * |
+To generate CronJob schedule expressions, you can also use web tools like [crontab.guru](https://crontab.guru/).
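+
+For illustration, here is a sketch of how a schedule that uses a step value might look in the
+`.spec` of a CronJob (the expressions below are placeholders, not recommendations):
+
+```yaml
+spec:
+  # Run every other hour, using a Vixie cron step value over a range.
+  schedule: "0 0-23/2 * * *"
+  # A macro form would also be accepted here, for example:
+  # schedule: "@hourly"
+```
+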
+### Job template
-For example, the line below states that the task must be started every Friday at midnight, as well as on the 13th of each month at midnight:
+The `.spec.jobTemplate` defines a template for the Jobs that the CronJob creates, and it is required.
+It has exactly the same schema as a [Job](/docs/concepts/workloads/controllers/job/), except that
+it is nested and does not have an `apiVersion` or `kind`.
+You can specify common metadata for the templated Jobs, such as
+{{< glossary_tooltip text="labels" term_id="label" >}} or
+{{< glossary_tooltip text="annotations" term_id="annotation" >}}.
+For information about writing a Job `.spec`, see [Writing a Job Spec](/docs/concepts/workloads/controllers/job/#writing-a-job-spec).
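+
+As an illustrative sketch (distinct from the manifest linked above), a CronJob whose
+`jobTemplate` adds a label to the Jobs it creates might look like this; the name, label,
+and image are placeholders:
+
+```yaml
+apiVersion: batch/v1
+kind: CronJob
+metadata:
+  name: nightly-report              # placeholder name (52 characters or fewer)
+spec:
+  schedule: "30 2 * * *"            # 02:30 every day
+  jobTemplate:
+    metadata:
+      labels:
+        app.kubernetes.io/name: nightly-report   # common metadata for the created Jobs
+    spec:                           # same schema as a Job .spec
+      template:
+        spec:
+          containers:
+          - name: report
+            image: busybox:1.36     # placeholder image
+            command: ["sh", "-c", "date; echo generating report"]
+          restartPolicy: OnFailure
+```
+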
-`0 0 13 * 5`
+### Deadline for delayed job start {#starting-deadline}
-To generate CronJob schedule expressions, you can also use web tools like [crontab.guru](https://crontab.guru/).
+The `.spec.startingDeadlineSeconds` field is optional.
+This field defines a deadline (in whole seconds) for starting the Job, if that Job misses its scheduled time
+for any reason.
+
+After missing the deadline, the CronJob skips that instance of the Job (future occurrences are still scheduled).
+For example, if you have a backup job that runs twice a day, you might allow it to start up to 8 hours late,
+but no later, because a backup taken any later wouldn't be useful: you would instead prefer to wait for
+the next scheduled run.
+
+For Jobs that miss their configured deadline, Kubernetes treats them as failed Jobs.
+If you don't specify `startingDeadlineSeconds` for a CronJob, the Job occurrences have no deadline.
+
+If the `.spec.startingDeadlineSeconds` field is set (not null), the CronJob
+controller measures the time between when a job is expected to be created and
+now. If the difference is higher than that limit, it will skip this execution.
+
+For example, if it is set to `200`, it allows a job to be created for up to 200
+seconds after the actual schedule.
+
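+For illustration, a fragment of a CronJob `.spec` that tolerates a start delay of up to
+200 seconds might look like this (the schedule is a placeholder):
+
+```yaml
+spec:
+  schedule: "0 */12 * * *"          # twice a day
+  startingDeadlineSeconds: 200      # skip a run that cannot start within 200 seconds
+```
+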
+### Concurrency policy
+
+The `.spec.concurrencyPolicy` field is also optional.
+It specifies how to treat concurrent executions of a job that is created by this CronJob.
+The spec may specify only one of the following concurrency policies:
+
+* `Allow` (default): The CronJob allows concurrently running jobs
+* `Forbid`: The CronJob does not allow concurrent runs; if it is time for a new job run and the
+ previous job run hasn't finished yet, the CronJob skips the new job run
+* `Replace`: If it is time for a new job run and the previous job run hasn't finished yet, the
+ CronJob replaces the currently running job run with a new job run
-## Time zones
+Note that concurrency policy only applies to the jobs created by the same cron job.
+If there are multiple CronJobs, their respective jobs are always allowed to run concurrently.
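+
+For example, a sketch of a `.spec` fragment that skips a new run while the previous one
+is still active might look like:
+
+```yaml
+spec:
+  schedule: "*/5 * * * *"           # placeholder schedule
+  concurrencyPolicy: Forbid         # do not start a new Job while the previous run is active
+```
+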
-For CronJobs with no time zone specified, the kube-controller-manager interprets schedules relative to its local time zone.
+### Schedule suspension
+
+You can suspend execution of Jobs for a CronJob, by setting the optional `.spec.suspend` field
+to true. The field defaults to false.
+
+This setting does _not_ affect Jobs that the CronJob has already started.
+
+If you do set that field to true, all subsequent executions are suspended (they remain
+scheduled, but the CronJob controller does not start the Jobs to run the tasks) until
+you unsuspend the CronJob.
+
+{{< caution >}}
+Executions that are suspended during their scheduled time count as missed jobs.
+When `.spec.suspend` changes from `true` to `false` on an existing CronJob without a
+[starting deadline](#starting-deadline), the missed jobs are scheduled immediately.
+{{< /caution >}}
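+
+As an illustrative fragment (for example applied with `kubectl edit` or a merge patch),
+pausing a CronJob comes down to setting a single field:
+
+```yaml
+spec:
+  suspend: true                     # already-started Jobs keep running; new runs are not created
+```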
+
+### Jobs history limits
+
+The `.spec.successfulJobsHistoryLimit` and `.spec.failedJobsHistoryLimit` fields are optional.
+These fields specify how many completed and failed jobs should be kept.
+By default, they are set to 3 and 1 respectively. Setting a limit to `0` corresponds to keeping
+none of the corresponding kind of jobs after they finish.
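+
+For illustration, a `.spec` fragment that keeps a longer success history and discards
+failed Jobs immediately might look like:
+
+```yaml
+spec:
+  successfulJobsHistoryLimit: 5     # keep the five most recent successful Jobs
+  failedJobsHistoryLimit: 0         # delete failed Jobs as soon as they finish
+```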
+
+For another way to clean up jobs automatically, see [Clean up finished jobs automatically](/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically).
+
+### Time zones
+
+For CronJobs with no time zone specified, the {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}} interprets schedules relative to its local time zone.
{{< feature-state for_k8s_version="v1.25" state="beta" >}}
@@ -102,16 +171,39 @@ you can specify a time zone for a CronJob (if you don't enable that feature gate
Kubernetes that does not have experimental time zone support, all CronJobs in your cluster have an unspecified
timezone).
-When you have the feature enabled, you can set `spec.timeZone` to the name of a valid [time zone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). For example, setting
-`spec.timeZone: "Etc/UTC"` instructs Kubernetes to interpret the schedule relative to Coordinated Universal Time.
+When you have the feature enabled, you can set `.spec.timeZone` to the name of a valid [time zone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). For example, setting
+`.spec.timeZone: "Etc/UTC"` instructs Kubernetes to interpret the schedule relative to Coordinated Universal Time.
+
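+For illustration, with the feature enabled, a `.spec` fragment that pins the schedule
+to UTC might look like this (the schedule is a placeholder):
+
+```yaml
+spec:
+  timeZone: "Etc/UTC"               # relies on the CronJobTimeZone feature gate described above
+  schedule: "0 9 * * 1-5"           # 09:00 UTC on weekdays
+```
+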
+{{< caution >}}
+The implementation of the CronJob API in Kubernetes {{< skew currentVersion >}} lets you set
+the `.spec.schedule` field to include a timezone; for example: `CRON_TZ=UTC * * * * *`
+or `TZ=UTC * * * * *`.
+
+Specifying a timezone that way is **not officially supported** (and never has been).
+
+If you try to set a schedule that includes `TZ` or `CRON_TZ` timezone specification,
+Kubernetes reports a [warning](/blog/2020/09/03/warnings/) to the client.
+Future versions of Kubernetes might not implement that unofficial timezone mechanism at all.
+{{< /caution >}}
A time zone database from the Go standard library is included in the binaries and used as a fallback in case an external database is not available on the system.
## CronJob limitations {#cron-job-limitations}
-A cron job creates a job object _about_ once per execution time of its schedule. We say "about" because there
-are certain circumstances where two jobs might be created, or no job might be created. We attempt to make these rare,
-but do not completely prevent them. Therefore, jobs should be _idempotent_.
+### Modifying a CronJob
+By design, a CronJob contains a template for _new_ Jobs.
+If you modify an existing CronJob, the changes you make will apply to new Jobs that
+start to run after your modification is complete. Jobs (and their Pods) that have already
+started continue to run without changes.
+That is, the CronJob does _not_ update existing Jobs, even if those remain running.
+
+### Job creation
+
+A CronJob creates a Job object approximately once per execution time of its schedule.
+The scheduling is approximate because there
+are certain circumstances where two Jobs might be created, or no Job might be created.
+Kubernetes tries to avoid those situations, but cannot completely prevent them. Therefore,
+the Jobs that you define should be _idempotent_.
If `startingDeadlineSeconds` is set to a large value or left unset (the default)
and if `concurrencyPolicy` is set to `Allow`, the jobs will always run
@@ -143,32 +235,16 @@ be down for the same period as the previous example (`08:29:00` to `10:21:00`,)
The CronJob is only responsible for creating Jobs that match its schedule, and
the Job in turn is responsible for the management of the Pods it represents.
-## Controller version {#new-controller}
-
-Starting with Kubernetes v1.21 the second version of the CronJob controller
-is the default implementation. To disable the default CronJob controller
-and use the original CronJob controller instead, pass the `CronJobControllerV2`
-[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-flag to the {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}},
-and set this flag to `false`. For example:
-
-```
---feature-gates="CronJobControllerV2=false"
-```
-
-
## {{% heading "whatsnext" %}}
* Learn about [Pods](/docs/concepts/workloads/pods/) and
[Jobs](/docs/concepts/workloads/controllers/job/), two concepts
that CronJobs rely upon.
-* Read about the [format](https://pkg.go.dev/github.com/robfig/cron/v3#hdr-CRON_Expression_Format)
+* Read about the detailed [format](https://pkg.go.dev/github.com/robfig/cron/v3#hdr-CRON_Expression_Format)
of CronJob `.spec.schedule` fields.
* For instructions on creating and working with CronJobs, and for an example
of a CronJob manifest,
see [Running automated tasks with CronJobs](/docs/tasks/job/automated-tasks-with-cron-jobs/).
-* For instructions to clean up failed or completed jobs automatically,
- see [Clean up Jobs automatically](/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically)
* `CronJob` is part of the Kubernetes REST API.
Read the {{< api-reference page="workload-resources/cron-job-v1" >}}
- object definition to understand the API for Kubernetes cron jobs.
+ API reference for more details.
diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md
index 9a69ed9b8aeee..e5fc14f64d732 100644
--- a/content/en/docs/concepts/workloads/controllers/deployment.md
+++ b/content/en/docs/concepts/workloads/controllers/deployment.md
@@ -44,7 +44,10 @@ The following is an example of a Deployment. It creates a ReplicaSet to bring up
In this example:
-* A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field.
+* A Deployment named `nginx-deployment` is created, indicated by the
+ `.metadata.name` field. This name will become the basis for the ReplicaSets
+ and Pods which are created later. See [Writing a Deployment Spec](#writing-a-deployment-spec)
+ for more details.
* The Deployment creates a ReplicaSet that creates three replicated Pods, indicated by the `.spec.replicas` field.
* The `.spec.selector` field defines how the created ReplicaSet finds which Pods to manage.
In this case, you select a label that is defined in the Pod template (`app: nginx`).
@@ -120,8 +123,11 @@ Follow the steps given below to create the above Deployment:
* `CURRENT` displays how many replicas are currently running.
* `READY` displays how many replicas of the application are available to your users.
* `AGE` displays the amount of time that the application has been running.
-
- Notice that the name of the ReplicaSet is always formatted as `[DEPLOYMENT-NAME]-[HASH]`.
+
+ Notice that the name of the ReplicaSet is always formatted as
+ `[DEPLOYMENT-NAME]-[HASH]`. This name will become the basis for the Pods
+ which are created.
+
The `HASH` string is the same as the `pod-template-hash` label on the ReplicaSet.
6. To see the labels automatically generated for each Pod, run `kubectl get pods --show-labels`.
@@ -1076,8 +1082,13 @@ As with all other Kubernetes configs, a Deployment needs `.apiVersion`, `.kind`,
For general information about working with config files, see
[deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/),
configuring containers, and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents.
-The name of a Deployment object must be a valid
-[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+
+When the control plane creates new Pods for a Deployment, the `.metadata.name` of the
+Deployment is part of the basis for naming those Pods. The name of a Deployment must be a valid
+[DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)
+value, but this can produce unexpected results for the Pod hostnames. For best compatibility,
+the name should follow the more restrictive rules for a
+[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names).
A Deployment also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
diff --git a/content/en/docs/concepts/workloads/controllers/job.md b/content/en/docs/concepts/workloads/controllers/job.md
index 08855b8b08eed..a05ad752c3d8d 100644
--- a/content/en/docs/concepts/workloads/controllers/job.md
+++ b/content/en/docs/concepts/workloads/controllers/job.md
@@ -54,21 +54,21 @@ Check on the status of the Job with `kubectl`:
{{< tabs name="Check status of Job" >}}
{{< tab name="kubectl describe job pi" codelang="bash" >}}
-Name: pi
-Namespace: default
-Selector: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
-Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
- job-name=pi
-Annotations: kubectl.kubernetes.io/last-applied-configuration:
- {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":...
-Parallelism: 1
-Completions: 1
-Start Time: Mon, 02 Dec 2019 15:20:11 +0200
-Completed At: Mon, 02 Dec 2019 15:21:16 +0200
-Duration: 65s
-Pods Statuses: 0 Running / 1 Succeeded / 0 Failed
+Name: pi
+Namespace: default
+Selector: controller-uid=0cd26dd5-88a2-4a5f-a203-ea19a1d5d578
+Labels: controller-uid=0cd26dd5-88a2-4a5f-a203-ea19a1d5d578
+ job-name=pi
+Annotations: batch.kubernetes.io/job-tracking:
+Parallelism: 1
+Completions: 1
+Completion Mode: NonIndexed
+Start Time: Fri, 28 Oct 2022 13:05:18 +0530
+Completed At: Fri, 28 Oct 2022 13:05:21 +0530
+Duration: 3s
+Pods Statuses: 0 Active / 1 Succeeded / 0 Failed
Pod Template:
- Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
+ Labels: controller-uid=0cd26dd5-88a2-4a5f-a203-ea19a1d5d578
job-name=pi
Containers:
pi:
@@ -86,24 +86,26 @@ Pod Template:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
- Normal SuccessfulCreate 14m job-controller Created pod: pi-5rwd7
+ Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4
+ Normal Completed 18s job-controller Job completed
{{< /tab >}}
{{< tab name="kubectl get job pi -o yaml" codelang="bash" >}}
apiVersion: batch/v1
kind: Job
metadata:
annotations:
+ batch.kubernetes.io/job-tracking: ""
kubectl.kubernetes.io/last-applied-configuration: |
- {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":{"spec":{"containers":[{"command":["perl","-Mbignum=bpi","-wle","print bpi(2000)"],"image":"perl","name":"pi"}],"restartPolicy":"Never"}}}}
- creationTimestamp: "2022-06-15T08:40:15Z"
+ {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":{"spec":{"containers":[{"command":["perl","-Mbignum=bpi","-wle","print bpi(2000)"],"image":"perl:5.34.0","name":"pi"}],"restartPolicy":"Never"}}}}
+ creationTimestamp: "2022-11-10T17:53:53Z"
generation: 1
labels:
- controller-uid: 863452e6-270d-420e-9b94-53a54146c223
+ controller-uid: 204fb678-040b-497f-9266-35ffa8716d14
job-name: pi
name: pi
namespace: default
- resourceVersion: "987"
- uid: 863452e6-270d-420e-9b94-53a54146c223
+ resourceVersion: "4751"
+ uid: 204fb678-040b-497f-9266-35ffa8716d14
spec:
backoffLimit: 4
completionMode: NonIndexed
@@ -111,13 +113,13 @@ spec:
parallelism: 1
selector:
matchLabels:
- controller-uid: 863452e6-270d-420e-9b94-53a54146c223
+ controller-uid: 204fb678-040b-497f-9266-35ffa8716d14
suspend: false
template:
metadata:
creationTimestamp: null
labels:
- controller-uid: 863452e6-270d-420e-9b94-53a54146c223
+ controller-uid: 204fb678-040b-497f-9266-35ffa8716d14
job-name: pi
spec:
containers:
@@ -127,7 +129,7 @@ spec:
- -wle
- print bpi(2000)
image: perl:5.34.0
- imagePullPolicy: Always
+ imagePullPolicy: IfNotPresent
name: pi
resources: {}
terminationMessagePath: /dev/termination-log
@@ -139,8 +141,9 @@ spec:
terminationGracePeriodSeconds: 30
status:
active: 1
- ready: 1
- startTime: "2022-06-15T08:40:15Z"
+ ready: 0
+ startTime: "2022-11-10T17:53:57Z"
+ uncountedTerminatedPods: {}
{{< /tab >}}
{{< /tabs >}}
@@ -177,7 +180,15 @@ The output is similar to this:
## Writing a Job spec
As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields.
-Its name must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+
+When the control plane creates new Pods for a Job, the `.metadata.name` of the
+Job is part of the basis for naming those Pods. The name of a Job must be a valid
+[DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)
+value, but this can produce unexpected results for the Pod hostnames. For best compatibility,
+the name should follow the more restrictive rules for a
+[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names).
+Even when the name is a DNS subdomain, the name must be no longer than 63
+characters.
A Job also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md
index da0aa76ddc420..35162c8dbc1c0 100644
--- a/content/en/docs/concepts/workloads/controllers/replicaset.md
+++ b/content/en/docs/concepts/workloads/controllers/replicaset.md
@@ -228,8 +228,12 @@ In this manner, a ReplicaSet can own a non-homogenous set of Pods
As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields.
For ReplicaSets, the `kind` is always a ReplicaSet.
-The name of a ReplicaSet object must be a valid
-[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+When the control plane creates new Pods for a ReplicaSet, the `.metadata.name` of the
+ReplicaSet is part of the basis for naming those Pods. The name of a ReplicaSet must be a valid
+[DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)
+value, but this can produce unexpected results for the Pod hostnames. For best compatibility,
+the name should follow the more restrictive rules for a
+[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names).
A ReplicaSet also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
diff --git a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
index 1360bd69f0a3c..2e658c7cba484 100644
--- a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
+++ b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
@@ -112,11 +112,17 @@ Here, the selector is the same as the selector for the ReplicationController (se
`kubectl describe` output), and in a different form in `replication.yaml`. The `--output=jsonpath` option
specifies an expression with the name from each pod in the returned list.
-## Writing a ReplicationController Spec
+## Writing a ReplicationController Manifest
As with all other Kubernetes config, a ReplicationController needs `apiVersion`, `kind`, and `metadata` fields.
-The name of a ReplicationController object must be a valid
-[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+
+When the control plane creates new Pods for a ReplicationController, the `.metadata.name` of the
+ReplicationController is part of the basis for naming those Pods. The name of a ReplicationController must be a valid
+[DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)
+value, but this can produce unexpected results for the Pod hostnames. For best compatibility,
+the name should follow the more restrictive rules for a
+[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names).
+
For general information about working with configuration files, see [object management](/docs/concepts/overview/working-with-objects/object-management/).
A ReplicationController also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
diff --git a/content/en/docs/concepts/workloads/controllers/statefulset.md b/content/en/docs/concepts/workloads/controllers/statefulset.md
index bfe29e81a84ff..cfa65e285ad99 100644
--- a/content/en/docs/concepts/workloads/controllers/statefulset.md
+++ b/content/en/docs/concepts/workloads/controllers/statefulset.md
@@ -121,7 +121,7 @@ In the above example:
PersistentVolume Provisioner.
The name of a StatefulSet object must be a valid
-[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names).
### Pod Selector
diff --git a/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md b/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md
index a51c88602fcf6..aca3c090ebcd0 100644
--- a/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md
+++ b/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md
@@ -1,75 +1,87 @@
---
reviewers:
- janetkuo
-title: Automatic Clean-up for Finished Jobs
+title: Automatic Cleanup for Finished Jobs
content_type: concept
weight: 70
+description: >-
+ A time-to-live mechanism to clean up old Jobs that have finished execution.
---
{{< feature-state for_k8s_version="v1.23" state="stable" >}}
-TTL-after-finished {{}} provides a
-TTL (time to live) mechanism to limit the lifetime of resource objects that
-have finished execution. TTL controller only handles
-{{< glossary_tooltip text="Jobs" term_id="job" >}}.
+When your Job has finished, it's useful to keep that Job in the API (and not immediately delete the Job)
+so that you can tell whether the Job succeeded or failed.
+
+Kubernetes' TTL-after-finished {{< glossary_tooltip text="controller" term_id="controller" >}} provides a
+TTL (time to live) mechanism to limit the lifetime of Job objects that
+have finished execution.
-## TTL-after-finished Controller
+## Cleanup for finished Jobs
-The TTL-after-finished controller is only supported for Jobs. A cluster operator can use this feature to clean
+The TTL-after-finished controller is only supported for Jobs. You can use this mechanism to clean
up finished Jobs (either `Complete` or `Failed`) automatically by specifying the
`.spec.ttlSecondsAfterFinished` field of a Job, as in this
[example](/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically).
-The TTL-after-finished controller will assume that a job is eligible to be cleaned up
-TTL seconds after the job has finished, in other words, when the TTL has expired. When the
+
+The TTL-after-finished controller assumes that a Job is eligible to be cleaned up
+TTL seconds after the Job has finished. The timer starts once the
+status condition of the Job changes to show that the Job is either `Complete` or `Failed`; once the TTL has
+expired, that Job becomes eligible for
+[cascading](/docs/concepts/architecture/garbage-collection/#cascading-deletion) removal. When the
TTL-after-finished controller cleans up a job, it will delete it cascadingly, that is to say it will delete
-its dependent objects together with it. Note that when the job is deleted,
-its lifecycle guarantees, such as finalizers, will be honored.
+its dependent objects together with it.
+
+Kubernetes honors object lifecycle guarantees on the Job, such as waiting for
+[finalizers](/docs/concepts/overview/working-with-objects/finalizers/).
-The TTL seconds can be set at any time. Here are some examples for setting the
+You can set the TTL seconds at any time. Here are some examples for setting the
`.spec.ttlSecondsAfterFinished` field of a Job:
-* Specify this field in the job manifest, so that a Job can be cleaned up
+* Specify this field in the Job manifest, so that a Job can be cleaned up
automatically some time after it finishes.
-* Set this field of existing, already finished jobs, to adopt this new
- feature.
+* Manually set this field of existing, already finished Jobs, so that they become eligible
+ for cleanup.
* Use a
- [mutating admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)
- to set this field dynamically at job creation time. Cluster administrators can
+ [mutating admission webhook](/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook)
+ to set this field dynamically at Job creation time. Cluster administrators can
use this to enforce a TTL policy for finished jobs.
* Use a
- [mutating admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)
- to set this field dynamically after the job has finished, and choose
- different TTL values based on job status, labels, etc.
+ [mutating admission webhook](/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook)
+ to set this field dynamically after the Job has finished, and choose
+  different TTL values based on Job status, labels, and so on. For this case, the webhook needs
+ to detect changes to the `.status` of the Job and only set a TTL when the Job
+ is being marked as completed.
+* Write your own controller to manage the cleanup TTL for Jobs that match a particular
+  {{< glossary_tooltip term_id="selector" text="selector" >}}.
-## Caveat
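+
+For illustration, a minimal sketch of a Job manifest that becomes eligible for cleanup
+100 seconds after it finishes (the name and image are placeholders):
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: ttl-demo                    # placeholder name
+spec:
+  ttlSecondsAfterFinished: 100      # delete this Job, and its Pods, 100 seconds after it finishes
+  template:
+    spec:
+      containers:
+      - name: demo
+        image: busybox:1.36         # placeholder image
+        command: ["sh", "-c", "echo done"]
+      restartPolicy: Never
+```
+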
+## Caveats
-### Updating TTL Seconds
+### Updating TTL for finished Jobs
-Note that the TTL period, e.g. `.spec.ttlSecondsAfterFinished` field of Jobs,
-can be modified after the job is created or has finished. However, once the
-Job becomes eligible to be deleted (when the TTL has expired), the system won't
-guarantee that the Jobs will be kept, even if an update to extend the TTL
-returns a successful API response.
+You can modify the TTL period, e.g. `.spec.ttlSecondsAfterFinished` field of Jobs,
+after the job is created or has finished. If you extend the TTL period after the
+existing `ttlSecondsAfterFinished` period has expired, Kubernetes doesn't guarantee
+to retain that Job, even if an update to extend the TTL returns a successful API
+response.
-### Time Skew
+### Time skew
-Because TTL-after-finished controller uses timestamps stored in the Kubernetes jobs to
+Because the TTL-after-finished controller uses timestamps stored in the Kubernetes jobs to
determine whether the TTL has expired or not, this feature is sensitive to time
-skew in the cluster, which may cause TTL-after-finish controller to clean up job objects
+skew in your cluster, which may cause the control plane to clean up Job objects
at the wrong time.
Clocks aren't always correct, but the difference should be
very small. Please be aware of this risk when setting a non-zero TTL.
-
-
## {{% heading "whatsnext" %}}
-* [Clean up Jobs automatically](/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically)
-
-* [Design doc](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/592-ttl-after-finish/README.md)
+* Read [Clean up Jobs automatically](/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically)
+* Refer to the [Kubernetes Enhancement Proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/592-ttl-after-finish/README.md)
+ (KEP) for adding this mechanism.
diff --git a/content/en/docs/concepts/workloads/pods/_index.md b/content/en/docs/concepts/workloads/pods/_index.md
index e49bbefc0725e..76c966757a38f 100644
--- a/content/en/docs/concepts/workloads/pods/_index.md
+++ b/content/en/docs/concepts/workloads/pods/_index.md
@@ -133,8 +133,11 @@ is not a process, but an environment for running container(s). A Pod persists un
it is deleted.
{{< /note >}}
-When you create the manifest for a Pod object, make sure the name specified is a valid
-[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+The name of a Pod must be a valid
+[DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)
+value, but this can produce unexpected results for the Pod hostname. For best compatibility,
+the name should follow the more restrictive rules for a
+[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names).
### Pod OS
diff --git a/content/en/docs/contribute/advanced.md b/content/en/docs/contribute/advanced.md
index c48712729cfbe..6bb0daa39c413 100644
--- a/content/en/docs/contribute/advanced.md
+++ b/content/en/docs/contribute/advanced.md
@@ -190,3 +190,7 @@ When you're ready to start the recording, click Record to Cloud.
When you're ready to stop recording, click Stop.
The video uploads automatically to YouTube.
+
+### Offboarding a SIG Co-chair (Emeritus)
+
+See: [k/community/sig-docs/offboarding.md](https://github.com/kubernetes/community/blob/master/sig-docs/offboarding.md)
\ No newline at end of file
diff --git a/content/en/docs/contribute/new-content/open-a-pr.md b/content/en/docs/contribute/new-content/open-a-pr.md
index 50b5a00608e23..589644223a99d 100644
--- a/content/en/docs/contribute/new-content/open-a-pr.md
+++ b/content/en/docs/contribute/new-content/open-a-pr.md
@@ -65,8 +65,7 @@ class id1 k8s
Figure 1. Steps for opening a PR using GitHub.
-1. On the page where you see the issue, select the pencil icon at the top right.
- You can also scroll to the bottom of the page and select **Edit this page**.
+1. On the page where you see the issue, select the **Edit this page** option in the right-hand side navigation panel.
1. Make your changes in the GitHub markdown editor.
diff --git a/content/en/docs/home/_index.md b/content/en/docs/home/_index.md
index 7297da2806811..a580c9aeadb09 100644
--- a/content/en/docs/home/_index.md
+++ b/content/en/docs/home/_index.md
@@ -56,9 +56,9 @@ cards:
description: Anyone can contribute, whether you're new to the project or you've been around a long time.
button: Contribute to the docs
button_path: /docs/contribute
-- name: release-notes
- title: K8s Release Notes
- description: If you are installing Kubernetes or upgrading to the newest version, refer to the current release notes.
+- name: Download
+ title: Download Kubernetes
+ description: Install Kubernetes or upgrade to the newest version.
button: "Download Kubernetes"
button_path: "/releases/download"
- name: about
diff --git a/content/en/docs/images/podSchedulingGates.svg b/content/en/docs/images/podSchedulingGates.svg
new file mode 100644
index 0000000000000..c87b23a7c6ef4
--- /dev/null
+++ b/content/en/docs/images/podSchedulingGates.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/en/docs/reference/_index.md b/content/en/docs/reference/_index.md
index 9e9fa6bcd9fff..05db47b7b46e3 100644
--- a/content/en/docs/reference/_index.md
+++ b/content/en/docs/reference/_index.md
@@ -9,13 +9,10 @@ content_type: concept
no_list: true
---
-
This section of the Kubernetes documentation contains references.
-
-
## API Reference
@@ -44,7 +41,7 @@ client libraries:
## CLI
* [kubectl](/docs/reference/kubectl/) - Main CLI tool for running commands and managing Kubernetes clusters.
- * [JSONPath](/docs/reference/kubectl/jsonpath/) - Syntax guide for using [JSONPath expressions](https://goessner.net/articles/JsonPath/) with kubectl.
+ * [JSONPath](/docs/reference/kubectl/jsonpath/) - Syntax guide for using [JSONPath expressions](https://goessner.net/articles/JsonPath/) with kubectl.
* [kubeadm](/docs/reference/setup-tools/kubeadm/) - CLI tool to easily provision a secure Kubernetes cluster.
## Components
@@ -55,16 +52,18 @@ client libraries:
* [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) -
REST API that validates and configures data for API objects such as pods,
services, replication controllers.
-* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - Daemon that embeds the core control loops shipped with Kubernetes.
+* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) -
+ Daemon that embeds the core control loops shipped with Kubernetes.
* [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - Can
do simple TCP/UDP stream forwarding or round-robin TCP/UDP forwarding across
a set of back-ends.
-* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - Scheduler that manages availability, performance, and capacity.
+* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) -
+ Scheduler that manages availability, performance, and capacity.
* [Scheduler Policies](/docs/reference/scheduling/policies)
* [Scheduler Profiles](/docs/reference/scheduling/config#profiles)
-* List of [ports and protocols](/docs/reference/ports-and-protocols/) that
+* List of [ports and protocols](/docs/reference/networking/ports-and-protocols/) that
should be open on control plane and worker nodes
## Config APIs
@@ -74,14 +73,19 @@ configure kubernetes components or tools. Most of these APIs are not exposed
by the API server in a RESTful way though they are essential for a user or an
operator to use or manage a cluster.
-* [kube-apiserver configuration (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/)
-* [kube-apiserver configuration (v1)](/docs/reference/config-api/apiserver-config.v1/)
+
+* [kubeconfig (v1)](/docs/reference/config-api/kubeconfig.v1/)
+* [kube-apiserver admission (v1)](/docs/reference/config-api/apiserver-admission.v1/)
+* [kube-apiserver configuration (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/) and
+ [kube-apiserver configuration (v1)](/docs/reference/config-api/apiserver-config.v1/)
* [kube-apiserver encryption (v1)](/docs/reference/config-api/apiserver-encryption.v1/)
* [kube-apiserver event rate limit (v1alpha1)](/docs/reference/config-api/apiserver-eventratelimit.v1alpha1/)
* [kubelet configuration (v1alpha1)](/docs/reference/config-api/kubelet-config.v1alpha1/) and
[kubelet configuration (v1beta1)](/docs/reference/config-api/kubelet-config.v1beta1/)
-* [kubelet credential providers (v1alpha1)](/docs/reference/config-api/kubelet-credentialprovider.v1alpha1/)
-* [kubelet credential providers (v1beta1)](/docs/reference/config-api/kubelet-credentialprovider.v1beta1/)
+ [kubelet configuration (v1)](/docs/reference/config-api/kubelet-config.v1/)
+* [kubelet credential providers (v1alpha1)](/docs/reference/config-api/kubelet-credentialprovider.v1alpha1/),
+ [kubelet credential providers (v1beta1)](/docs/reference/config-api/kubelet-credentialprovider.v1beta1/) and
+ [kubelet credential providers (v1)](/docs/reference/config-api/kubelet-credentialprovider.v1/)
* [kube-scheduler configuration (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/),
[kube-scheduler configuration (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) and
[kube-scheduler configuration (v1)](/docs/reference/config-api/kube-scheduler-config.v1/)
diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md
index f819c237b1684..734a0a333b4d9 100644
--- a/content/en/docs/reference/access-authn-authz/admission-controllers.md
+++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md
@@ -110,7 +110,7 @@ The [`ValidatingAdmissionPolicy`](#validatingadmissionpolicy) admission plugin i
by default, but is only active if you enable the `ValidatingAdmissionPolicy`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) **and**
the `admissionregistration.k8s.io/v1alpha1` API.
-{{< note >}}
+{{< /note >}}
## What does each admission controller do?
@@ -373,21 +373,21 @@ An example request body:
```json
{
- "apiVersion":"imagepolicy.k8s.io/v1alpha1",
- "kind":"ImageReview",
- "spec":{
- "containers":[
+ "apiVersion": "imagepolicy.k8s.io/v1alpha1",
+ "kind": "ImageReview",
+ "spec": {
+ "containers": [
{
- "image":"myrepo/myimage:v1"
+ "image": "myrepo/myimage:v1"
},
{
- "image":"myrepo/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a9014972bec5e4d9171d90ed"
+ "image": "myrepo/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a9014972bec5e4d9171d90ed"
}
],
- "annotations":{
+ "annotations": {
"mycluster.image-policy.k8s.io/ticket-1234": "break-glass"
},
- "namespace":"mynamespace"
+ "namespace": "mynamespace"
}
}
```
@@ -610,9 +610,9 @@ This file may be json or yaml and has the following format:
```yaml
podNodeSelectorPluginConfig:
- clusterDefaultNodeSelector: name-of-node-selector
- namespace1: name-of-node-selector
- namespace2: name-of-node-selector
+ clusterDefaultNodeSelector: name-of-node-selector
+ namespace1: name-of-node-selector
+ namespace2: name-of-node-selector
```
Reference the `PodNodeSelector` configuration file from the file provided to the API server's
@@ -744,17 +744,37 @@ for more information.
### SecurityContextDeny {#securitycontextdeny}
-This admission controller will deny any Pod that attempts to set certain escalating
-[SecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#securitycontext-v1-core)
-fields, as shown in the
-[Configure a Security Context for a Pod or Container](/docs/tasks/configure-pod-container/security-context/)
-task.
-If you don't use [Pod Security admission](/docs/concepts/security/pod-security-admission/),
-[PodSecurityPolicies](/docs/concepts/security/pod-security-policy/), nor any external enforcement mechanism,
-then you could use this admission controller to restrict the set of values a security context can take.
-
-See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for more context on restricting
-pod privileges.
+{{< feature-state for_k8s_version="v1.0" state="alpha" >}}
+
+{{< caution >}}
+This admission controller plugin is **outdated** and **incomplete**; it may be
+unusable or may not do what you would expect. It was originally designed to prevent
+the use of some, but not all, security-sensitive fields. For example, fields like
+`privileged` were not filtered at creation, and the plugin was never updated to cover
+newer fields and APIs, such as the `ephemeralContainers` field for a Pod.
+
+The [Pod Security Admission](/docs/concepts/security/pod-security-admission/)
+plugin enforcing the [Pod Security Standards](/docs/concepts/security/pod-security-standards/)
+`Restricted` profile captures what this plugin was trying to achieve in a better
+and up-to-date way.
+{{< /caution >}}
+
+This admission controller will deny any Pod that attempts to set the following
+[SecurityContext](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context)
+fields:
+- `.spec.securityContext.supplementalGroups`
+- `.spec.securityContext.seLinuxOptions`
+- `.spec.securityContext.runAsUser`
+- `.spec.securityContext.fsGroup`
+- `.spec.(init)Containers[*].securityContext.seLinuxOptions`
+- `.spec.(init)Containers[*].securityContext.runAsUser`
+
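+For illustration, a Pod like the following sketch would be rejected while this plugin is
+enabled, because it sets one of the fields listed above (the name and image are placeholders):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: denied-example              # placeholder name
+spec:
+  securityContext:
+    runAsUser: 1000                 # setting this field causes SecurityContextDeny to reject the Pod
+  containers:
+  - name: app
+    image: registry.example/app:1.0 # placeholder image
+```
+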
+For more historical context on this plugin, see
+[The birth of PodSecurityPolicy](/blog/2022/08/23/podsecuritypolicy-the-historical-context/#the-birth-of-podsecuritypolicy)
+from the Kubernetes blog article about PodSecurityPolicy and its removal. The
+article details the PodSecurityPolicy historical context and the birth of the
+`securityContext` field for Pods.
### ServiceAccount {#serviceaccount}
diff --git a/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md b/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md
index 71beccb53f5b0..31ab932589918 100644
--- a/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md
+++ b/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md
@@ -404,23 +404,25 @@ However, you _can_ enable its server certificate, at least partially, via certif
### Certificate Rotation
-Kubernetes v1.8 and higher kubelet implements __beta__ features for enabling
-rotation of its client and/or serving certificates. These can be enabled through
-the respective `RotateKubeletClientCertificate` and
-`RotateKubeletServerCertificate` feature flags on the kubelet and are enabled by
-default.
+The kubelet in Kubernetes v1.8 and higher implements features for enabling
+rotation of its client and/or serving certificates. Note that rotation of the serving
+certificate is a __beta__ feature and requires the `RotateKubeletServerCertificate`
+feature flag on the kubelet (enabled by default).
-`RotateKubeletClientCertificate` causes the kubelet to rotate its client
-certificates by creating new CSRs as its existing credentials expire. To enable
-this feature pass the following flag to the kubelet:
+You can configure the kubelet to rotate its client certificates by creating new CSRs
+as its existing credentials expire. To enable this feature, use the `rotateCertificates`
+field of [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/)
+or pass the following command line argument to the kubelet (deprecated):
```
--rotate-certificates
```
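+
+For reference, a minimal sketch of the relevant part of a
+[kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/), covering both
+this setting and the serving-certificate behavior described below (all other fields omitted):
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+# Rotate the client certificate by requesting a new CSR as the current certificate nears expiry.
+rotateCertificates: true
+# Also request, and keep rotating, a serving certificate (see below).
+serverTLSBootstrap: true
+```
+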
-`RotateKubeletServerCertificate` causes the kubelet **both** to request a serving
+Enabling `RotateKubeletServerCertificate` causes the kubelet **both** to request a serving
certificate after bootstrapping its client credentials **and** to rotate that
-certificate. To enable this feature pass the following flag to the kubelet:
+certificate. To enable this behavior, use the field `serverTLSBootstrap` of
+the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/)
+or pass the following command line argument to the kubelet (deprecated):
```
--rotate-server-certificates
diff --git a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md
index 332e757313e8a..f78f0f81fb714 100644
--- a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md
+++ b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md
@@ -184,7 +184,7 @@ it does the following when a Pod is created:
`/var/run/secrets/kubernetes.io/serviceaccount`.
For Linux containers, that volume is mounted at `/var/run/secrets/kubernetes.io/serviceaccount`;
on Windows nodes, the mount is at the equivalent path.
-1. If the spec of the incoming Pod does already contain any `imagePullSecrets`, then the
+1. If the spec of the incoming Pod doesn't already contain any `imagePullSecrets`, then the
admission controller adds `imagePullSecrets`, copying them from the `ServiceAccount`.
### TokenRequest API
diff --git a/content/en/docs/reference/access-authn-authz/validating-admission-policy.md b/content/en/docs/reference/access-authn-authz/validating-admission-policy.md
index 1cb2e0a2f579d..2bf6610eebd27 100644
--- a/content/en/docs/reference/access-authn-authz/validating-admission-policy.md
+++ b/content/en/docs/reference/access-authn-authz/validating-admission-policy.md
@@ -20,23 +20,32 @@ This page provides an overview of Validating Admission Policy.
Validating admission policies offer a declarative, in-process alternative to validating admission webhooks.
-Validating admission policies use the Common Expression Language (CEL) to declare the validation rules of a policy.
-Validation admission policies are highly configurable, enabling policy authors to define policies that can be parameterized and scoped to resources as needed by cluster administrators.
+Validating admission policies use the Common Expression Language (CEL) to declare the validation
+rules of a policy.
+Validating admission policies are highly configurable, enabling policy authors to define policies
+that can be parameterized and scoped to resources as needed by cluster administrators.
## What Resources Make a Policy
A policy is generally made up of three resources:
-- The `ValidatingAdmissionPolicy` describes the abstract logic of a policy (think: "this policy makes sure a particular label is set to a particular value").
+- The `ValidatingAdmissionPolicy` describes the abstract logic of a policy
+ (think: "this policy makes sure a particular label is set to a particular value").
-- A `ValidatingAdmissionPolicyBinding` links the above resources together and provides scoping. If you only want to require an `owner` label to be set for `Pods`, the binding is where you would specify this restriction.
+- A `ValidatingAdmissionPolicyBinding` links the above resources together and provides scoping.
+ If you only want to require an `owner` label to be set for `Pods`, the binding is where you would
+ specify this restriction.
-- A parameter resource provides information to a ValidatingAdmissionPolicy to make it a concrete statement (think "the `owner` label must be set to something that ends in `.company.com`"). A native type such as ConfigMap or a CRD defines the schema of a parameter resource. `ValidatingAdmissionPolicy` objects specify what Kind they are expecting for their parameter resource.
+- A parameter resource provides information to a ValidatingAdmissionPolicy to make it a concrete
+ statement (think "the `owner` label must be set to something that ends in `.company.com`").
+ A native type such as ConfigMap or a CRD defines the schema of a parameter resource.
+ `ValidatingAdmissionPolicy` objects specify what Kind they are expecting for their parameter resource.
+At least a `ValidatingAdmissionPolicy` and a corresponding `ValidatingAdmissionPolicyBinding`
+must be defined for a policy to have an effect.
-At least a `ValidatingAdmissionPolicy` and a corresponding `ValidatingAdmissionPolicyBinding` must be defined for a policy to have an effect.
-
-If a `ValidatingAdmissionPolicy` does not need to be configured via parameters, simply leave `spec.paramKind` in `ValidatingAdmissionPolicy` unset.
+If a `ValidatingAdmissionPolicy` does not need to be configured via parameters, simply leave
+`spec.paramKind` in `ValidatingAdmissionPolicy` unset.
## {{% heading "prerequisites" %}}
@@ -45,11 +54,13 @@ If a `ValidatingAdmissionPolicy` does not need to be configured via parameters,
## Getting Started with Validating Admission Policy
-Validating Admission Policy is part of the cluster control-plane. You should write and deploy them with great caution. The following describes how to quickly experiment with Validating Admission Policy.
+Validating admission policies are part of the cluster control plane. You should write and deploy
+them with great caution. The following describes how to quickly experiment with Validating Admission Policy.
### Creating a ValidatingAdmissionPolicy
The following is an example of a ValidatingAdmissionPolicy.
+
```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
@@ -66,26 +77,31 @@ spec:
validations:
- expression: "object.spec.replicas <= 5"
```
+
`spec.validations` contains CEL expressions which use the [Common Expression Language (CEL)](https://github.com/google/cel-spec)
-to validate the request. If an expression evaluates to false, the validation check is enforced according to the `spec.failurePolicy` field.
+to validate the request. If an expression evaluates to false, the validation check is enforced
+according to the `spec.failurePolicy` field.
+
+To configure a validating admission policy for use in a cluster, a binding is required.
+The following is an example of a ValidatingAdmissionPolicyBinding:
-To configure a validating admission policy for use in a cluster, a binding is required. The following is an example of a ValidatingAdmissionPolicyBinding.:
```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
name: "demo-binding-test.example.com"
spec:
- policy: "replicalimit-policy.example.com"
+ policyName: "demo-policy.example.com"
matchResources:
- namespaceSelectors:
- - key: environment,
- operator: In,
- values: ["test"]
+ namespaceSelector:
+ matchLabels:
+ environment: test
```
-When trying to create a deployment with replicas set not satisfying the validation expression, an error will return containing message:
-```
+When you try to create a deployment with a replica count that does not satisfy the validation
+expression, an error is returned containing the message:
+
+```none
ValidatingAdmissionPolicy 'demo-policy.example.com' with binding 'demo-binding-test.example.com' denied request: failed expression: object.spec.replicas <= 5
```
@@ -97,13 +113,15 @@ Parameter resources allow a policy configuration to be separate from its definit
A policy can define paramKind, which outlines GVK of the parameter resource,
and then a policy binding ties a policy by name (via policyName) to a particular parameter resource via paramRef.
-If parameter configuration is needed, the following is an example of a ValidatingAdmissionPolicy with parameter configuration.
+If parameter configuration is needed, the following is an example of a ValidatingAdmissionPolicy
+with parameter configuration.
+
```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
name: "replicalimit-policy.example.com"
-Spec:
+spec:
failurePolicy: Fail
paramKind:
apiVersion: rules.example.com/v1
@@ -118,32 +136,39 @@ Spec:
- expression: "object.spec.replicas <= params.maxReplicas"
reason: Invalid
```
-The `spec.paramKind` field of the ValidatingAdmissionPolicy specifies the kind of resources used to parameterize this policy. For this example, it is configured by ReplicaLimit custom resources.
-Note in this example how the CEL expression references the parameters via the CEL params variable, e.g. `params.maxReplicas`.
-spec.matchConstraints specifies what resources this policy is designed to validate.
-Note that the native types such like `ConfigMap` could also be used as parameter reference.
-The `spec.validations` fields contain CEL expressions. If an expression evaluates to false, the validation check is enforced according to the `spec.failurePolicy` field.
+The `spec.paramKind` field of the ValidatingAdmissionPolicy specifies the kind of resources used
+to parameterize this policy. For this example, it is configured by ReplicaLimit custom resources.
+Note in this example how the CEL expression references the parameters via the CEL params variable,
+e.g. `params.maxReplicas`. `spec.matchConstraints` specifies what resources this policy is
+designed to validate. Note that native types such as `ConfigMap` could also be used as
+a parameter reference.
+
+The `spec.validations` fields contain CEL expressions. If an expression evaluates to false, the
+validation check is enforced according to the `spec.failurePolicy` field.
The validating admission policy author is responsible for providing the ReplicaLimit parameter CRD.
-To configure an validating admission policy for use in a cluster, a binding and parameter resource are created. The following is an example of a ValidatingAdmissionPolicyBinding.
+To configure a validating admission policy for use in a cluster, a binding and parameter resource
+are created. The following is an example of a ValidatingAdmissionPolicyBinding.
+
```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
name: "replicalimit-binding-test.example.com"
spec:
- policy: "replicalimit-policy.example.com"
- paramsRef:
+ policyName: "replicalimit-policy.example.com"
+ paramRef:
name: "replica-limit-test.example.com"
matchResources:
- namespaceSelectors:
- - key: environment,
- operator: In,
- values: ["test"]
+ namespaceSelector:
+ matchLabels:
+ environment: test
```
+
The parameter resource could be as following:
+
```yaml
apiVersion: rules.example.com/v1
kind: ReplicaLimit
@@ -151,24 +176,31 @@ metadata:
name: "replica-limit-test.example.com"
maxReplicas: 3
```
-This policy parameter resource limits deployments to a max of 3 replicas in all namespaces in the test environment.
-An admission policy may have multiple bindings. To bind all other environments environment to have a maxReplicas limit of 100, create another ValidatingAdmissionPolicyBinding:
+
+This policy parameter resource limits deployments to a maximum of 3 replicas in all namespaces in
+the test environment. An admission policy may have multiple bindings. To bind all other
+environments to have a maxReplicas limit of 100, create another ValidatingAdmissionPolicyBinding:
+
```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
name: "replicalimit-binding-nontest"
spec:
- policy: "replicalimit-policy.example.com"
- paramsRef:
+ policyName: "replicalimit-policy.example.com"
+ paramRef:
name: "replica-limit-clusterwide.example.com"
matchResources:
- namespaceSelectors:
- - key: environment,
- operator: NotIn,
- values: ["test"]
+ namespaceSelector:
+ matchExpressions:
+ - key: environment
+ operator: NotIn
+ values:
+ - test
```
+
And have a parameter resource like:
+
```yaml
apiVersion: rules.example.com/v1
kind: ReplicaLimit
@@ -176,57 +208,75 @@ metadata:
name: "replica-limit-clusterwide.example.com"
maxReplicas: 100
```
-Bindings can have overlapping match criteria. The policy is evaluated for each matching binding. In the above example, the "nontest" policy binding could instead have been defined as a global policy:
+
+Bindings can have overlapping match criteria. The policy is evaluated for each matching binding.
+In the above example, the "nontest" policy binding could instead have been defined as a global policy:
+
```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
name: "replicalimit-binding-global"
spec:
- policy: "replicalimit-policy.example.com"
+ policyName: "replicalimit-policy.example.com"
params: "replica-limit-clusterwide.example.com"
matchResources:
- namespaceSelectors:
- - key: environment,
- operator: Exists
+ namespaceSelector:
+ matchExpressions:
+ - key: environment
+ operator: Exists
```
-The params object representing a parameter resource will not be set if a parameter resource has not been bound,
-so for policies requiring a parameter resource,
-it can be useful to add a check to ensure one has been bound.
+The params object representing a parameter resource will not be set if a parameter resource has
+not been bound, so for policies requiring a parameter resource, it can be useful to add a check to
+ensure one has been bound.
+
+For use cases that require parameter configuration, we recommend adding a param check in
+`spec.validations[0].expression`:
-For the use cases require parameter configuration,
-we recommend to add a param check in `spec.validations[0].expression`:
```
- expression: "params != null"
message: "params missing but required to bind to this policy"
```
-It can be convenient to be able to have optional parameters as part of a parameter resource, and only validate them if present.
-CEL provides has(), which checks if the key passed to it exists. CEL also implements Boolean short-circuiting:
-If the first half of a logical OR evaluates to true, it won’t evaluate the other half (since the result of the entire OR will be true regardless).
+It can be convenient to have optional parameters as part of a parameter resource, and only
+validate them if present. CEL provides `has()`, which checks if the key passed to it exists.
+CEL also implements Boolean short-circuiting. If the first half of a logical OR evaluates to true,
+it won’t evaluate the other half (since the result of the entire OR will be true regardless).
+
Combining the two, we can provide a way to validate optional parameters:
+
`!has(params.optionalNumber) || (params.optionalNumber >= 5 && params.optionalNumber <= 10)`
+
Here, we first check that the optional parameter is present with `!has(params.optionalNumber)`.
-If `optionalNumber` hasn’t been defined, then the expression short-circuits since `!has(params.optionalNumber)` will evaluate to true.
-If `optionalNumber` has been defined, then the latter half of the CEL expression will be evaluated, and optionalNumber will be checked to ensure that it contains a value between 5 and 10 inclusive.
+
+- If `optionalNumber` hasn’t been defined, then the expression short-circuits since
+ `!has(params.optionalNumber)` will evaluate to true.
+- If `optionalNumber` has been defined, then the latter half of the CEL expression will be
+ evaluated, and optionalNumber will be checked to ensure that it contains a value between 5 and
+ 10 inclusive.
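+
+Putting this together, a sketch of how such a check might appear under `spec.validations`
+(reusing the hypothetical `optionalNumber` parameter field from above) is:
+
+```yaml
+  validations:
+    - expression: "!has(params.optionalNumber) || (params.optionalNumber >= 5 && params.optionalNumber <= 10)"
+      message: "optionalNumber must be between 5 and 10 inclusive when specified"
+```
+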
#### Authorization Check
We introduced the authorization check for parameter resources.
-User is expected to have `read` access to the resources referenced by `paramKind` in `ValidatingAdmissionPolicy` and `paramRef` in `ValidatingAdmissionPolicyBinding`.
+Users are expected to have `read` access to the resources referenced by `paramKind` in
+`ValidatingAdmissionPolicy` and `paramRef` in `ValidatingAdmissionPolicyBinding`.
-Note that if a resource in `paramKind` fails resolving via the restmapper, `read` access to all resources of groups is required.
+Note that if a resource in `paramKind` fails to resolve via the restmapper, `read` access to all
+resources of groups is required.
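+
+As a sketch, granting that access for the ReplicaLimit parameter resource used in the
+examples above could look like the following RBAC ClusterRole (the `replicalimits`
+resource plural is an assumption about how that hypothetical CRD is registered):
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: replicalimit-param-reader
+rules:
+  - apiGroups: ["rules.example.com"]
+    resources: ["replicalimits"]
+    verbs: ["get", "list", "watch"]
+```
+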
### Failure Policy
-`failurePolicy` defines how mis-configurations and CEL expressions evaluating to error from the admission policy are handled.
-Allowed values are `Ignore` or `Fail`.
+`failurePolicy` defines how misconfigurations and CEL expressions evaluating to error from the
+admission policy are handled. Allowed values are `Ignore` or `Fail`.
-- `Ignore` means that an error calling the ValidatingAdmissionPolicy is ignored and the API request is allowed to continue.
-- `Fail` means that an error calling the ValidatingAdmissionPolicy causes the admission to fail and the API request to be rejected.
+- `Ignore` means that an error calling the ValidatingAdmissionPolicy is ignored and the API
+ request is allowed to continue.
+- `Fail` means that an error calling the ValidatingAdmissionPolicy causes the admission to fail
+ and the API request to be rejected.
Note that the `failurePolicy` is defined inside `ValidatingAdmissionPolicy`:
+
```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
@@ -241,18 +291,21 @@ validations:
`spec.validations[i].expression` represents the expression which will be evaluated by CEL.
To learn more, see the [CEL language specification](https://github.com/google/cel-spec)
-CEL expressions have access to the contents of the Admission request/response, organized into CEL variables as well as some other useful variables:
+CEL expressions have access to the contents of the Admission request/response, organized into CEL
+variables as well as some other useful variables:
+
- 'object' - The object from the incoming request. The value is null for DELETE requests.
- 'oldObject' - The existing object. The value is null for CREATE requests.
-- 'request' - Attributes of the [admission request](/pkg/apis/admission/types.go#AdmissionRequest).
-- 'params' - Parameter resource referred to by the policy binding being evaluated. The value is null if `ParamKind` is unset.
+- 'request' - Attributes of the [admission request](/docs/reference/config-api/apiserver-admission.v1/#admission-k8s-io-v1-AdmissionRequest).
+- 'params' - Parameter resource referred to by the policy binding being evaluated. The value is
+ null if `ParamKind` is unset.
-The `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from the root of the
-object. No other metadata properties are accessible.
+The `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from
+the root of the object. No other metadata properties are accessible.
Only property names of the form `[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` are accessible.
-Only property names of the form `[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` are accessible.
-Accessible property names are escaped according to the following rules when accessed in the expression:
+Accessible property names are escaped according to the following rules when accessed in the
+expression:
| escape sequence | property name equivalent |
| ----------------------- | -----------------------|
@@ -303,10 +356,12 @@ Concatenation on arrays with x-kubernetes-list-type use the semantics of the lis
| `size(object.names) == size(object.details) && object.names.all(n, n in object.details)` | Validate the 'details' map is keyed by the items in the 'names' listSet |
| `size(object.clusters.filter(c, c.name == object.primary)) == 1` | Validate that the 'primary' property has one and only one occurrence in the 'clusters' listMap |
-Read [Supported evaluation on CEL](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#evaluation) for more information about CEL rules.
+Read [Supported evaluation on CEL](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#evaluation)
+for more information about CEL rules.
`spec.validation[i].reason` represents a machine-readable description of why this validation failed.
-If this is the first validation in the list to fail, this reason, as well as the corresponding HTTP response code, are used in the
-HTTP response to the client.
+If this is the first validation in the list to fail, this reason, as well as the corresponding
+HTTP response code, are used in the HTTP response to the client.
The currently supported reasons are: `Unauthorized`, `Forbidden`, `Invalid`, `RequestEntityTooLarge`.
If not set, `StatusReasonInvalid` is used in the response to the client.
+
diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates-removed.md b/content/en/docs/reference/command-line-tools-reference/feature-gates-removed.md
index 26f6663e90291..a3b704d891e5b 100644
--- a/content/en/docs/reference/command-line-tools-reference/feature-gates-removed.md
+++ b/content/en/docs/reference/command-line-tools-reference/feature-gates-removed.md
@@ -13,8 +13,8 @@ However, a GA'ed or a deprecated feature gate is still recognized by the corresp
components although they are unable to cause any behavior differences in a cluster.
For feature gates that are still recognized by the Kubernetes components, please refer to
-the [Alpha/Beta feature gate table](/docs/reference/command-line-tools/reference/feature-gates/#feature-gates-for-alpha-or-beta-features)
-or the [Graduated/Deprecated feature gate table](/docs/reference/command-line-tools/reference/feature-gates/#feature-gates-for-graduated-or-deprecated-features)
+the [Alpha/Beta feature gate table](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features)
+or the [Graduated/Deprecated feature gate table](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-graduated-or-deprecated-features)
### Feature gates that are removed
@@ -36,6 +36,8 @@ In the following table:
| `AffinityInAnnotations` | - | Deprecated | 1.8 | 1.8 |
| `AllowExtTrafficLocalEndpoints` | `false` | Beta | 1.4 | 1.6 |
| `AllowExtTrafficLocalEndpoints` | `true` | GA | 1.7 | 1.9 |
+| `AllowInsecureBackendProxy` | `true` | Beta | 1.17 | 1.20 |
+| `AllowInsecureBackendProxy` | `true` | GA | 1.21 | 1.25 |
| `AttachVolumeLimit` | `false` | Alpha | 1.11 | 1.11 |
| `AttachVolumeLimit` | `true` | Beta | 1.12 | 1.16 |
| `AttachVolumeLimit` | `true` | GA | 1.17 | 1.21 |
@@ -64,6 +66,9 @@ In the following table:
| `CSIMigrationAzureFileComplete` | - | Deprecated | 1.21 | 1.21 |
| `CSIMigrationGCEComplete` | `false` | Alpha | 1.17 | 1.20 |
| `CSIMigrationGCEComplete` | - | Deprecated | 1.21 | 1.21 |
+| `CSIMigrationOpenStack` | `false` | Alpha | 1.14 | 1.17 |
+| `CSIMigrationOpenStack` | `true` | Beta | 1.18 | 1.23 |
+| `CSIMigrationOpenStack` | `true` | GA | 1.24 | 1.25 |
| `CSIMigrationOpenStackComplete` | `false` | Alpha | 1.17 | 1.20 |
| `CSIMigrationOpenStackComplete` | - | Deprecated | 1.21 | 1.21 |
| `CSIMigrationvSphereComplete` | `false` | Beta | 1.19 | 1.21 |
@@ -106,8 +111,14 @@ In the following table:
| `CustomResourceWebhookConversion` | `false` | Alpha | 1.13 | 1.14 |
| `CustomResourceWebhookConversion` | `true` | Beta | 1.15 | 1.15 |
| `CustomResourceWebhookConversion` | `true` | GA | 1.16 | 1.18 |
+| `DefaultPodTopologySpread` | `false` | Alpha | 1.19 | 1.19 |
+| `DefaultPodTopologySpread` | `true` | Beta | 1.20 | 1.23 |
+| `DefaultPodTopologySpread` | `true` | GA | 1.24 | 1.25 |
| `DynamicAuditing` | `false` | Alpha | 1.13 | 1.18 |
| `DynamicAuditing` | - | Deprecated | 1.19 | 1.19 |
+| `DynamicKubeletConfig` | `false` | Alpha | 1.4 | 1.10 |
+| `DynamicKubeletConfig` | `true` | Beta | 1.11 | 1.21 |
+| `DynamicKubeletConfig` | `false` | Deprecated | 1.22 | 1.25 |
| `DynamicProvisioningScheduling` | `false` | Alpha | 1.11 | 1.11 |
| `DynamicProvisioningScheduling` | - | Deprecated| 1.12 | - |
| `DynamicVolumeProvisioning` | `true` | Alpha | 1.3 | 1.7 |
@@ -149,6 +160,9 @@ In the following table:
| `ImmutableEphemeralVolumes` | `false` | Alpha | 1.18 | 1.18 |
| `ImmutableEphemeralVolumes` | `true` | Beta | 1.19 | 1.20 |
| `ImmutableEphemeralVolumes` | `true` | GA | 1.21 | 1.24 |
+| `IndexedJob` | `false` | Alpha | 1.21 | 1.21 |
+| `IndexedJob` | `true` | Beta | 1.22 | 1.23 |
+| `IndexedJob` | `true` | GA | 1.24 | 1.25 |
| `IngressClassNamespacedParams` | `false` | Alpha | 1.21 | 1.21 |
| `IngressClassNamespacedParams` | `true` | Beta | 1.22 | 1.22 |
| `IngressClassNamespacedParams` | `true` | GA | 1.23 | 1.24 |
@@ -175,11 +189,17 @@ In the following table:
| `NodeLease` | `false` | Alpha | 1.12 | 1.13 |
| `NodeLease` | `true` | Beta | 1.14 | 1.16 |
| `NodeLease` | `true` | GA | 1.17 | 1.23 |
+| `NonPreemptingPriority` | `false` | Alpha | 1.15 | 1.18 |
+| `NonPreemptingPriority` | `true` | Beta | 1.19 | 1.23 |
+| `NonPreemptingPriority` | `true` | GA | 1.24 | 1.25 |
| `PVCProtection` | `false` | Alpha | 1.9 | 1.9 |
| `PVCProtection` | - | Deprecated | 1.10 | 1.10 |
| `PersistentLocalVolumes` | `false` | Alpha | 1.7 | 1.9 |
| `PersistentLocalVolumes` | `true` | Beta | 1.10 | 1.13 |
| `PersistentLocalVolumes` | `true` | GA | 1.14 | 1.16 |
+| `PodAffinityNamespaceSelector` | `false` | Alpha | 1.21 | 1.21 |
+| `PodAffinityNamespaceSelector` | `true` | Beta | 1.22 | 1.23 |
+| `PodAffinityNamespaceSelector` | `true` | GA | 1.24 | 1.25 |
| `PodDisruptionBudget` | `false` | Alpha | 1.3 | 1.4 |
| `PodDisruptionBudget` | `true` | Beta | 1.5 | 1.20 |
| `PodDisruptionBudget` | `true` | GA | 1.21 | 1.25 |
@@ -195,6 +215,9 @@ In the following table:
| `PodShareProcessNamespace` | `false` | Alpha | 1.10 | 1.11 |
| `PodShareProcessNamespace` | `true` | Beta | 1.12 | 1.16 |
| `PodShareProcessNamespace` | `true` | GA | 1.17 | 1.19 |
+| `PreferNominatedNode` | `false` | Alpha | 1.21 | 1.21 |
+| `PreferNominatedNode` | `true` | Beta | 1.22 | 1.23 |
+| `PreferNominatedNode` | `true` | GA | 1.24 | 1.25 |
| `RequestManagement` | `false` | Alpha | 1.15 | 1.16 |
| `RequestManagement` | - | Deprecated | 1.17 | 1.17 |
| `ResourceLimitsPriorityFunction` | `false` | Alpha | 1.9 | 1.18 |
@@ -227,6 +250,12 @@ In the following table:
| `ServiceAppProtocol` | `false` | Alpha | 1.18 | 1.18 |
| `ServiceAppProtocol` | `true` | Beta | 1.19 | 1.19 |
| `ServiceAppProtocol` | `true` | GA | 1.20 | 1.22 |
+| `ServiceLBNodePortControl` | `false` | Alpha | 1.20 | 1.21 |
+| `ServiceLBNodePortControl` | `true` | Beta | 1.22 | 1.23 |
+| `ServiceLBNodePortControl` | `true` | GA | 1.24 | 1.25 |
+| `ServiceLoadBalancerClass` | `false` | Alpha | 1.21 | 1.21 |
+| `ServiceLoadBalancerClass` | `true` | Beta | 1.22 | 1.23 |
+| `ServiceLoadBalancerClass` | `true` | GA | 1.24 | 1.25 |
| `ServiceLoadBalancerFinalizer` | `false` | Alpha | 1.15 | 1.15 |
| `ServiceLoadBalancerFinalizer` | `true` | Beta | 1.16 | 1.16 |
| `ServiceLoadBalancerFinalizer` | `true` | GA | 1.17 | 1.20 |
@@ -257,6 +286,9 @@ In the following table:
| `SupportPodPidsLimit` | `false` | Alpha | 1.10 | 1.13 |
| `SupportPodPidsLimit` | `true` | Beta | 1.14 | 1.19 |
| `SupportPodPidsLimit` | `true` | GA | 1.20 | 1.23 |
+| `SuspendJob` | `false` | Alpha | 1.21 | 1.21 |
+| `SuspendJob` | `true` | Beta | 1.22 | 1.23 |
+| `SuspendJob` | `true` | GA | 1.24 | 1.25 |
| `Sysctls` | `true` | Beta | 1.11 | 1.20 |
| `Sysctls` | `true` | GA | 1.21 | 1.22 |
| `TTLAfterFinished` | `false` | Alpha | 1.12 | 1.20 |
@@ -314,6 +346,9 @@ In the following table:
- `AllowExtTrafficLocalEndpoints`: Enable a service to route external requests to node local endpoints.
+- `AllowInsecureBackendProxy`: Enable users to skip TLS verification of
+ kubelets on Pod log requests.
+
- `AttachVolumeLimit`: Enable volume plugins to report limits on number of volumes
that can be attached to a node.
See [dynamic volume limits](/docs/concepts/storage/storage-limits/#dynamic-volume-limits)
@@ -383,6 +418,14 @@ In the following table:
been deprecated in favor of the `InTreePluginGCEUnregister` feature flag which
prevents the registration of in-tree GCE PD plugin.
+- `CSIMigrationOpenStack`: Enables shims and translation logic to route volume
+ operations from the Cinder in-tree plugin to Cinder CSI plugin. Supports
+ falling back to in-tree Cinder plugin for mount operations to nodes that have
+ the feature disabled or that do not have Cinder CSI plugin installed and
+ configured. Does not support falling back for provision operations, for those
+ the CSI plugin must be installed and configured. Requires CSIMigration
+ feature flag enabled.
+
- `CSIMigrationOpenStackComplete`: Stops registering the Cinder in-tree plugin in
kubelet and volume controllers and enables shims and translation logic to route
volume operations from the Cinder in-tree plugin to Cinder CSI plugin.
@@ -442,8 +485,15 @@ In the following table:
- `CustomResourceWebhookConversion`: Enable webhook-based conversion
on resources created from [CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
+- `DefaultPodTopologySpread`: Enables the use of `PodTopologySpread` scheduling plugin to do
+ [default spreading](/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints).
+
- `DynamicAuditing`: Used to enable dynamic auditing before v1.19.
+- `DynamicKubeletConfig`: Enable the dynamic configuration of kubelet. The
+ feature is no longer supported outside of supported skew policy. The feature
+ gate was removed from kubelet in 1.24. See [Reconfigure kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/).
+
- `DynamicProvisioningScheduling`: Extend the default scheduler to be aware of
volume topology and handle PV provisioning.
This feature was superseded by the `VolumeScheduling` feature in v1.12.
@@ -500,6 +550,9 @@ In the following table:
- `ImmutableEphemeralVolumes`: Allows for marking individual Secrets and ConfigMaps as
immutable for better safety and performance.
+- `IndexedJob`: Allows the [Job](/docs/concepts/workloads/controllers/job/)
+ controller to manage Pod completions per completion index.
+
- `IngressClassNamespacedParams`: Allow namespace-scoped parameters reference in
`IngressClass` resource. This feature adds two fields - `Scope` and `Namespace`
to `IngressClass.spec.parameters`.
@@ -533,12 +586,19 @@ In the following table:
- `NodeLease`: Enable the new Lease API to report node heartbeats, which could be used as a node health signal.
+- `NonPreemptingPriority`: Enable `preemptionPolicy` field for PriorityClass and Pod.
+
- `PVCProtection`: Enable the prevention of a PersistentVolumeClaim (PVC) from
being deleted when it is still used by any Pod.
- `PersistentLocalVolumes`: Enable the usage of `local` volume type in Pods.
Pod affinity has to be specified if requesting a `local` volume.
+- `PodAffinityNamespaceSelector`: Enable the
+ [Pod Affinity Namespace Selector](/docs/concepts/scheduling-eviction/assign-pod-node/#namespace-selector)
+ and [CrossNamespacePodAffinity](/docs/concepts/policy/resource-quotas/#cross-namespace-pod-affinity-quota)
+ quota scope features.
+
- `PodDisruptionBudget`: Enable the [PodDisruptionBudget](/docs/tasks/run-application/configure-pdb/) feature.
- `PodOverhead`: Enable the [PodOverhead](/docs/concepts/scheduling-eviction/pod-overhead/)
@@ -555,6 +615,10 @@ In the following table:
a single process namespace between containers running in a pod. More details can be found in
[Share Process Namespace between Containers in a Pod](/docs/tasks/configure-pod-container/share-process-namespace/).
+- `PreferNominatedNode`: This flag tells the scheduler whether the nominated
+ nodes will be checked first before looping through all the other nodes in
+ the cluster.
+
- `RequestManagement`: Enables managing request concurrency with prioritization and fairness
at each API server. Deprecated by `APIPriorityAndFairness` since 1.17.
@@ -597,8 +661,14 @@ In the following table:
- `ServiceAppProtocol`: Enables the `appProtocol` field on Services and Endpoints.
+- `ServiceLoadBalancerClass`: Enables the `loadBalancerClass` field on Services. See
+ [Specifying class of load balancer implementation](/docs/concepts/services-networking/service/#load-balancer-class)
+ for more details.
+
- `ServiceLoadBalancerFinalizer`: Enable finalizer protection for Service load balancers.
+- `ServiceLBNodePortControl`: Enables the `allocateLoadBalancerNodePorts` field on Services.
+
- `ServiceNodeExclusion`: Enable the exclusion of nodes from load balancers created by a cloud provider.
A node is eligible for exclusion if labelled with "`node.kubernetes.io/exclude-from-external-load-balancers`".
@@ -629,6 +699,9 @@ In the following table:
- `SupportPodPidsLimit`: Enable the support to limiting PIDs in Pods.
+- `SuspendJob`: Enable support to suspend and resume Jobs. For more details, see
+ [the Jobs docs](/docs/concepts/workloads/controllers/job/).
+
- `Sysctls`: Enable support for namespaced kernel parameters (sysctls) that can be set for each
pod. See [sysctls](/docs/tasks/administer-cluster/sysctl-cluster/) for more details.
diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
index 48de9a13dd29c..a7b1058b10b6d 100644
--- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md
+++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
@@ -62,11 +62,11 @@ For a reference to old feature gates that are removed, please refer to
| `APIPriorityAndFairness` | `true` | Beta | 1.20 | |
| `APIResponseCompression` | `false` | Alpha | 1.7 | 1.15 |
| `APIResponseCompression` | `true` | Beta | 1.16 | |
-| `APISelfSubjectAttributesReview` | `false` | Alpha | 1.26 | |
+| `APISelfSubjectReview` | `false` | Alpha | 1.26 | |
| `APIServerIdentity` | `false` | Alpha | 1.20 | 1.25 |
| `APIServerIdentity` | `true` | Beta | 1.26 | |
| `APIServerTracing` | `false` | Alpha | 1.22 | |
-| `AllowInsecureBackendProxy` | `true` | Beta | 1.17 | |
+| `AggregatedDiscoveryEndpoint` | `false` | Alpha | 1.26 | |
| `AnyVolumeDataSource` | `false` | Alpha | 1.18 | 1.23 |
| `AnyVolumeDataSource` | `true` | Beta | 1.24 | |
| `AppArmor` | `true` | Beta | 1.4 | |
@@ -79,9 +79,12 @@ For a reference to old feature gates that are removed, please refer to
| `CSIMigrationRBD` | `false` | Alpha | 1.23 | |
| `CSINodeExpandSecret` | `false` | Alpha | 1.25 | |
| `CSIVolumeHealth` | `false` | Alpha | 1.21 | |
-| `CrossNamespaceVolumeDataSource` | `false` | Alpha| 1.26 | |
+| `ComponentSLIs` | `false` | Alpha | 1.26 | |
| `ContainerCheckpoint` | `false` | Alpha | 1.25 | |
| `ContextualLogging` | `false` | Alpha | 1.24 | |
+| `CronJobTimeZone` | `false` | Alpha | 1.24 | 1.24 |
+| `CronJobTimeZone` | `true` | Beta | 1.25 | |
+| `CrossNamespaceVolumeDataSource` | `false` | Alpha| 1.26 | |
| `CustomCPUCFSQuotaPeriod` | `false` | Alpha | 1.12 | |
| `CustomResourceValidationExpressions` | `false` | Alpha | 1.23 | 1.24 |
| `CustomResourceValidationExpressions` | `true` | Beta | 1.25 | |
@@ -91,9 +94,9 @@ For a reference to old feature gates that are removed, please refer to
| `DownwardAPIHugePages` | `false` | Beta | 1.21 | 1.21 |
| `DownwardAPIHugePages` | `true` | Beta | 1.22 | |
| `DynamicResourceAllocation` | `false` | Alpha | 1.26 | |
-| `EndpointSliceTerminatingCondition` | `false` | Alpha | 1.20 | 1.21 |
-| `EndpointSliceTerminatingCondition` | `true` | Beta | 1.22 | |
-| `ExpandedDNSConfig` | `false` | Alpha | 1.22 | |
+| `EventedPLEG` | `false` | Alpha | 1.26 | - |
+| `ExpandedDNSConfig` | `false` | Alpha | 1.22 | 1.25 |
+| `ExpandedDNSConfig` | `true` | Beta | 1.26 | |
| `ExperimentalHostUserNamespaceDefaulting` | `false` | Beta | 1.5 | |
| `GRPCContainerProbe` | `false` | Alpha | 1.23 | 1.23 |
| `GRPCContainerProbe` | `true` | Beta | 1.24 | |
@@ -104,6 +107,7 @@ For a reference to old feature gates that are removed, please refer to
| `HPAContainerMetrics` | `false` | Alpha | 1.20 | |
| `HPAScaleToZero` | `false` | Alpha | 1.16 | |
| `HonorPVReclaimPolicy` | `false` | Alpha | 1.23 | |
+| `IPTablesOwnershipCleanup` | `false` | Alpha | 1.25 | |
| `InTreePluginAWSUnregister` | `false` | Alpha | 1.21 | |
| `InTreePluginAzureDiskUnregister` | `false` | Alpha | 1.21 | |
| `InTreePluginAzureFileUnregister` | `false` | Alpha | 1.21 | |
@@ -112,15 +116,11 @@ For a reference to old feature gates that are removed, please refer to
| `InTreePluginPortworxUnregister` | `false` | Alpha | 1.23 | |
| `InTreePluginRBDUnregister` | `false` | Alpha | 1.23 | |
| `InTreePluginvSphereUnregister` | `false` | Alpha | 1.21 | |
-| `IPTablesOwnershipCleanup` | `false` | Alpha | 1.25 | |
| `JobMutableNodeSchedulingDirectives` | `true` | Beta | 1.23 | |
| `JobPodFailurePolicy` | `false` | Alpha | 1.25 | 1.25 |
| `JobPodFailurePolicy` | `true` | Beta | 1.26 | |
| `JobReadyPods` | `false` | Alpha | 1.23 | 1.23 |
| `JobReadyPods` | `true` | Beta | 1.24 | |
-| `JobTrackingWithFinalizers` | `false` | Alpha | 1.22 | 1.22 |
-| `JobTrackingWithFinalizers` | `false` | Beta | 1.23 | 1.24 |
-| `JobTrackingWithFinalizers` | `true` | Beta | 1.25 | |
| `KMSv2` | `false` | Alpha | 1.25 | |
| `KubeletInUserNamespace` | `false` | Alpha | 1.22 | |
| `KubeletPodResources` | `false` | Alpha | 1.13 | 1.14 |
@@ -128,11 +128,12 @@ For a reference to old feature gates that are removed, please refer to
| `KubeletPodResourcesGetAllocatable` | `false` | Alpha | 1.21 | 1.22 |
| `KubeletPodResourcesGetAllocatable` | `true` | Beta | 1.23 | |
| `KubeletTracing` | `false` | Alpha | 1.25 | |
-| `LegacyServiceAccountTokenTracking` | `false` | Alpha | 1.26 | |
-| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha | 1.15 | 1.24 |
-| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `true` | Beta | 1.25 | |
+| `LegacyServiceAccountTokenTracking` | `false` | Alpha | 1.25 | |
+| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha | 1.15 | - |
| `LogarithmicScaleDown` | `false` | Alpha | 1.21 | 1.21 |
| `LogarithmicScaleDown` | `true` | Beta | 1.22 | |
+| `LoggingAlphaOptions` | `false` | Alpha | 1.24 | - |
+| `LoggingBetaOptions` | `true` | Beta | 1.24 | - |
| `MatchLabelKeysInPodTopologySpread` | `false` | Alpha | 1.25 | |
| `MaxUnavailableStatefulSet` | `false` | Alpha | 1.24 | |
| `MemoryManager` | `false` | Alpha | 1.21 | 1.21 |
@@ -140,11 +141,11 @@ For a reference to old feature gates that are removed, please refer to
| `MemoryQoS` | `false` | Alpha | 1.22 | |
| `MinDomainsInPodTopologySpread` | `false` | Alpha | 1.24 | 1.24 |
| `MinDomainsInPodTopologySpread` | `false` | Beta | 1.25 | |
-| `MixedProtocolLBService` | `false` | Alpha | 1.20 | 1.23 |
-| `MixedProtocolLBService` | `true` | Beta | 1.24 | |
+| `MinimizeIPTablesRestore` | `false` | Alpha | 1.26 | - |
| `MultiCIDRRangeAllocator` | `false` | Alpha | 1.25 | |
| `NetworkPolicyStatus` | `false` | Alpha | 1.24 | |
-| `NodeInclusionPolicyInPodTopologySpread` | `false` | Alpha | 1.25 | |
+| `NodeInclusionPolicyInPodTopologySpread` | `false` | Alpha | 1.25 | 1.25 |
+| `NodeInclusionPolicyInPodTopologySpread` | `true` | Beta | 1.26 | |
| `NodeOutOfServiceVolumeDetach` | `false` | Alpha | 1.24 | 1.25 |
| `NodeOutOfServiceVolumeDetach` | `true` | Beta | 1.26 | |
| `NodeSwap` | `false` | Alpha | 1.22 | |
@@ -196,7 +197,7 @@ For a reference to old feature gates that are removed, please refer to
| `TopologyManagerPolicyBetaOptions` | `false` | Beta | 1.26 | |
| `TopologyManagerPolicyOptions` | `false` | Alpha | 1.26 | |
| `UserNamespacesStatelessPodsSupport` | `false` | Alpha | 1.25 | |
-| `ValidatingAdmissionPolicy` | `false` | Alpha | 1.26 | |
+| `ValidatingAdmissionPolicy` | `false` | Alpha | 1.26 | |
| `VolumeCapacityPriority` | `false` | Alpha | 1.21 | - |
| `WinDSR` | `false` | Alpha | 1.14 | |
| `WinOverlay` | `false` | Alpha | 1.14 | 1.19 |
@@ -242,45 +243,37 @@ For a reference to old feature gates that are removed, please refer to
| `CSIMigrationvSphere` | `false` | Beta | 1.19 | 1.24 |
| `CSIMigrationvSphere` | `true` | Beta | 1.25 | 1.25 |
| `CSIMigrationvSphere` | `true` | GA | 1.26 | - |
-| `CSIMigrationOpenStack` | `false` | Alpha | 1.14 | 1.17 |
-| `CSIMigrationOpenStack` | `true` | Beta | 1.18 | 1.23 |
-| `CSIMigrationOpenStack` | `true` | GA | 1.24 | |
| `CSIStorageCapacity` | `false` | Alpha | 1.19 | 1.20 |
| `CSIStorageCapacity` | `true` | Beta | 1.21 | 1.23 |
| `CSIStorageCapacity` | `true` | GA | 1.24 | - |
+| `ConsistentHTTPGetHandlers` | `true` | GA | 1.25 | - |
| `ControllerManagerLeaderMigration` | `false` | Alpha | 1.21 | 1.21 |
| `ControllerManagerLeaderMigration` | `true` | Beta | 1.22 | 1.23 |
| `ControllerManagerLeaderMigration` | `true` | GA | 1.24 | - |
-| `CronJobTimeZone` | `false` | Alpha | 1.24 | 1.24 |
-| `CronJobTimeZone` | `true` | Beta | 1.25 | |
| `DaemonSetUpdateSurge` | `false` | Alpha | 1.21 | 1.21 |
| `DaemonSetUpdateSurge` | `true` | Beta | 1.22 | 1.24 |
| `DaemonSetUpdateSurge` | `true` | GA | 1.25 | - |
-| `DefaultPodTopologySpread` | `false` | Alpha | 1.19 | 1.19 |
-| `DefaultPodTopologySpread` | `true` | Beta | 1.20 | 1.23 |
-| `DefaultPodTopologySpread` | `true` | GA | 1.24 | - |
| `DelegateFSGroupToCSIDriver` | `false` | Alpha | 1.22 | 1.22 |
| `DelegateFSGroupToCSIDriver` | `true` | Beta | 1.23 | 1.25 |
| `DelegateFSGroupToCSIDriver` | `true` | GA | 1.26 |-|
-| `DisableAcceleratorUsageMetrics` | `false` | Alpha | 1.19 | 1.19 |
-| `DisableAcceleratorUsageMetrics` | `true` | Beta | 1.20 | 1.24 |
-| `DisableAcceleratorUsageMetrics` | `true` | GA | 1.25 |- |
| `DevicePlugins` | `false` | Alpha | 1.8 | 1.9 |
| `DevicePlugins` | `true` | Beta | 1.10 | 1.25 |
| `DevicePlugins` | `true` | GA | 1.26 | - |
+| `DisableAcceleratorUsageMetrics` | `false` | Alpha | 1.19 | 1.19 |
+| `DisableAcceleratorUsageMetrics` | `true` | Beta | 1.20 | 1.24 |
+| `DisableAcceleratorUsageMetrics` | `true` | GA | 1.25 |- |
| `DryRun` | `false` | Alpha | 1.12 | 1.12 |
| `DryRun` | `true` | Beta | 1.13 | 1.18 |
| `DryRun` | `true` | GA | 1.19 | - |
-| `DynamicKubeletConfig` | `false` | Alpha | 1.4 | 1.10 |
-| `DynamicKubeletConfig` | `true` | Beta | 1.11 | 1.21 |
-| `DynamicKubeletConfig` | `false` | Deprecated | 1.22 | - |
| `EfficientWatchResumption` | `false` | Alpha | 1.20 | 1.20 |
| `EfficientWatchResumption` | `true` | Beta | 1.21 | 1.23 |
| `EfficientWatchResumption` | `true` | GA | 1.24 | - |
+| `EndpointSliceTerminatingCondition` | `false` | Alpha | 1.20 | 1.21 |
+| `EndpointSliceTerminatingCondition` | `true` | Beta | 1.22 | 1.25 |
+| `EndpointSliceTerminatingCondition` | `true` | GA | 1.26 | |
| `EphemeralContainers` | `false` | Alpha | 1.16 | 1.22 |
| `EphemeralContainers` | `true` | Beta | 1.23 | 1.24 |
| `EphemeralContainers` | `true` | GA | 1.25 | - |
-| `EventedPLEG` | `false` | Alpha | 1.26 | - |
| `ExecProbeTimeout` | `true` | GA | 1.20 | - |
| `ExpandCSIVolumes` | `false` | Alpha | 1.14 | 1.15 |
| `ExpandCSIVolumes` | `true` | Beta | 1.16 | 1.23 |
@@ -294,9 +287,6 @@ For a reference to old feature gates that are removed, please refer to
| `IdentifyPodOS` | `false` | Alpha | 1.23 | 1.23 |
| `IdentifyPodOS` | `true` | Beta | 1.24 | 1.24 |
| `IdentifyPodOS` | `true` | GA | 1.25 | - |
-| `IndexedJob` | `false` | Alpha | 1.21 | 1.21 |
-| `IndexedJob` | `true` | Beta | 1.22 | 1.23 |
-| `IndexedJob` | `true` | GA | 1.24 | - |
| `JobTrackingWithFinalizers` | `false` | Alpha | 1.22 | 1.22 |
| `JobTrackingWithFinalizers` | `false` | Beta | 1.23 | 1.24 |
| `JobTrackingWithFinalizers` | `true` | Beta | 1.25 | 1.25 |
@@ -309,45 +299,30 @@ For a reference to old feature gates that are removed, please refer to
| `LocalStorageCapacityIsolation` | `false` | Alpha | 1.7 | 1.9 |
| `LocalStorageCapacityIsolation` | `true` | Beta | 1.10 | 1.24 |
| `LocalStorageCapacityIsolation` | `true` | GA | 1.25 | - |
+| `MixedProtocolLBService` | `false` | Alpha | 1.20 | 1.23 |
+| `MixedProtocolLBService` | `true` | Beta | 1.24 | 1.25 |
+| `MixedProtocolLBService` | `true` | GA | 1.26 | - |
| `NetworkPolicyEndPort` | `false` | Alpha | 1.21 | 1.21 |
| `NetworkPolicyEndPort` | `true` | Beta | 1.22 | 1.24 |
| `NetworkPolicyEndPort` | `true` | GA | 1.25 | - |
-| `NonPreemptingPriority` | `false` | Alpha | 1.15 | 1.18 |
-| `NonPreemptingPriority` | `true` | Beta | 1.19 | 1.23 |
-| `NonPreemptingPriority` | `true` | GA | 1.24 | - |
-| `PodAffinityNamespaceSelector` | `false` | Alpha | 1.21 | 1.21 |
-| `PodAffinityNamespaceSelector` | `true` | Beta | 1.22 | 1.23 |
-| `PodAffinityNamespaceSelector` | `true` | GA | 1.24 | - |
| `PodSecurity` | `false` | Alpha | 1.22 | 1.22 |
| `PodSecurity` | `true` | Beta | 1.23 | 1.24 |
| `PodSecurity` | `true` | GA | 1.25 | |
-| `PreferNominatedNode` | `false` | Alpha | 1.21 | 1.21 |
-| `PreferNominatedNode` | `true` | Beta | 1.22 | 1.23 |
-| `PreferNominatedNode` | `true` | GA | 1.24 | - |
| `RemoveSelfLink` | `false` | Alpha | 1.16 | 1.19 |
| `RemoveSelfLink` | `true` | Beta | 1.20 | 1.23 |
| `RemoveSelfLink` | `true` | GA | 1.24 | - |
| `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 |
| `ServerSideApply` | `true` | Beta | 1.16 | 1.21 |
| `ServerSideApply` | `true` | GA | 1.22 | - |
-| `ServiceInternalTrafficPolicy` | `false` | Alpha | 1.21 | 1.21 |
-| `ServiceInternalTrafficPolicy` | `true` | Beta | 1.22 | 1.25 |
-| `ServiceInternalTrafficPolicy` | `true` | GA | 1.26 | - |
| `ServiceIPStaticSubrange` | `false` | Alpha | 1.24 | 1.24 |
| `ServiceIPStaticSubrange` | `true` | Beta | 1.25 | 1.25 |
| `ServiceIPStaticSubrange` | `true` | GA | 1.26 | - |
-| `ServiceLBNodePortControl` | `false` | Alpha | 1.20 | 1.21 |
-| `ServiceLBNodePortControl` | `true` | Beta | 1.22 | 1.23 |
-| `ServiceLBNodePortControl` | `true` | GA | 1.24 | - |
-| `ServiceLoadBalancerClass` | `false` | Alpha | 1.21 | 1.21 |
-| `ServiceLoadBalancerClass` | `true` | Beta | 1.22 | 1.23 |
-| `ServiceLoadBalancerClass` | `true` | GA | 1.24 | - |
+| `ServiceInternalTrafficPolicy` | `false` | Alpha | 1.21 | 1.21 |
+| `ServiceInternalTrafficPolicy` | `true` | Beta | 1.22 | 1.25 |
+| `ServiceInternalTrafficPolicy` | `true` | GA | 1.26 | - |
| `StatefulSetMinReadySeconds` | `false` | Alpha | 1.22 | 1.22 |
| `StatefulSetMinReadySeconds` | `true` | Beta | 1.23 | 1.24 |
| `StatefulSetMinReadySeconds` | `true` | GA | 1.25 | - |
-| `SuspendJob` | `false` | Alpha | 1.21 | 1.21 |
-| `SuspendJob` | `true` | Beta | 1.22 | 1.23 |
-| `SuspendJob` | `true` | GA | 1.24 | - |
| `WatchBookmark` | `false` | Alpha | 1.15 | 1.15 |
| `WatchBookmark` | `true` | Beta | 1.16 | 1.16 |
| `WatchBookmark` | `true` | GA | 1.17 | - |
@@ -404,16 +379,16 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `APIPriorityAndFairness`: Enable managing request concurrency with
prioritization and fairness at each server. (Renamed from `RequestManagement`)
- `APIResponseCompression`: Compress the API responses for `LIST` or `GET` requests.
-- `APIServerIdentity`: Assign each API server an ID in a cluster.
-- `APIServerTracing`: Add support for distributed tracing in the API server.
- See [Traces for Kubernetes System Components](/docs/concepts/cluster-administration/system-traces) for more details.
-- `APISelfSubjectAttributesReview`: Activate the `SelfSubjectReview` API which allows users
+- `APISelfSubjectReview`: Activate the `SelfSubjectReview` API which allows users
to see the requesting subject's authentication information.
See [API access to authentication information for a client](/docs/reference/access-authn-authz/authentication/#self-subject-review)
for more details.
+- `APIServerIdentity`: Assign each API server an ID in a cluster.
+- `APIServerTracing`: Add support for distributed tracing in the API server.
+ See [Traces for Kubernetes System Components](/docs/concepts/cluster-administration/system-traces) for more details.
- `AdvancedAuditing`: Enable [advanced auditing](/docs/tasks/debug/debug-cluster/audit/#advanced-audit)
-- `AllowInsecureBackendProxy`: Enable the users to skip TLS verification of
- kubelets on Pod log requests.
+- `AggregatedDiscoveryEndpoint`: Enable a single HTTP endpoint `/discovery/` which
+ supports native HTTP caching with ETags containing all APIResources known to the API server.
- `AnyVolumeDataSource`: Enable use of any custom resource as the `DataSource` of a
{{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}.
- `AppArmor`: Enable use of AppArmor mandatory access control for Pods running on Linux nodes.
@@ -437,9 +412,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
This feature gate guards *a group* of CPUManager options whose quality level is beta.
This feature gate will never graduate to stable.
- `CPUManagerPolicyOptions`: Allow fine-tuning of CPUManager policies.
-- `CrossNamespaceVolumeDataSource`: Enable the usage of cross namespace volume data source
- to allow you to specify a source namespace in the `dataSourceRef` field of a
- PersistentVolumeClaim.
- `CSIInlineVolume`: Enable CSI Inline volumes support for pods.
- `CSIMigration`: Enables shims and translation logic to route volume
operations from in-tree plugins to corresponding pre-installed CSI plugins
@@ -470,14 +442,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
Does not support falling back for provision operations, for those the CSI
plugin must be installed and configured. Requires CSIMigration feature flag
enabled.
-- `CSIMigrationOpenStack`: Enables shims and translation logic to route volume
- operations from the Cinder in-tree plugin to Cinder CSI plugin. Supports
- falling back to in-tree Cinder plugin for mount operations to nodes that have
- the feature disabled or that do not have Cinder CSI plugin installed and
- configured. Does not support falling back for provision operations, for those
- the CSI plugin must be installed and configured. Requires CSIMigration
- feature flag enabled.
-- `csiMigrationRBD`: Enables shims and translation logic to route volume
+- `CSIMigrationRBD`: Enables shims and translation logic to route volume
operations from the RBD in-tree plugin to Ceph RBD CSI plugin. Requires
CSIMigration and csiMigrationRBD feature flags enabled and Ceph CSI plugin
installed and configured in the cluster. This flag has been deprecated in
@@ -500,11 +465,19 @@ Each feature gate is designed for enabling/disabling a specific feature:
[Storage Capacity](/docs/concepts/storage/storage-capacity/).
Check the [`csi` volume type](/docs/concepts/storage/volumes/#csi) documentation for more details.
- `CSIVolumeHealth`: Enable support for CSI volume health monitoring on node.
+- `ComponentSLIs`: Enable the `/metrics/slis` endpoint on Kubernetes components like
+ kubelet, kube-scheduler, kube-proxy, kube-controller-manager, and cloud-controller-manager,
+ allowing you to scrape health check metrics.
+- `ConsistentHTTPGetHandlers`: Normalize HTTP get URL and Header passing for lifecycle
+ handlers with probers.
- `ContextualLogging`: When you enable this feature gate, Kubernetes components that support
contextual logging add extra detail to log output.
- `ControllerManagerLeaderMigration`: Enables leader migration for
`kube-controller-manager` and `cloud-controller-manager`.
- `CronJobTimeZone`: Allow the use of the `timeZone` optional field in [CronJobs](/docs/concepts/workloads/controllers/cron-jobs/)
+- `CrossNamespaceVolumeDataSource`: Enable the usage of cross namespace volume data source
+ to allow you to specify a source namespace in the `dataSourceRef` field of a
+ PersistentVolumeClaim.
- `CustomCPUCFSQuotaPeriod`: Enable nodes to change `cpuCFSQuotaPeriod` in
[kubelet config](/docs/tasks/administer-cluster/kubelet-config-file/).
- `CustomResourceValidationExpressions`: Enable expression language validation in CRD
@@ -513,8 +486,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `DaemonSetUpdateSurge`: Enables the DaemonSet workloads to maintain
availability during update per node.
See [Perform a Rolling Update on a DaemonSet](/docs/tasks/manage-daemon/update-daemon-set/).
-- `DefaultPodTopologySpread`: Enables the use of `PodTopologySpread` scheduling plugin to do
- [default spreading](/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints).
- `DelegateFSGroupToCSIDriver`: If supported by the CSI driver, delegates the
role of applying `fsGroup` from a Pod's `securityContext` to the driver by
passing `fsGroup` through the NodeStageVolume and NodePublishVolume CSI calls.
@@ -531,9 +502,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
[downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information).
- `DryRun`: Enable server-side [dry run](/docs/reference/using-api/api-concepts/#dry-run) requests
so that validation, merging, and mutation can be tested without committing.
-- `DynamicKubeletConfig`: Enable the dynamic configuration of kubelet. The
- feature is no longer supported outside of supported skew policy. The feature
- gate was removed from kubelet in 1.24. See [Reconfigure kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/).
+- `DynamicResourceAllocation`: Enables support for resources with custom parameters and a lifecycle
+ that is independent of a Pod.
- `EndpointSliceTerminatingCondition`: Enables EndpointSlice `terminating` and `serving`
condition fields.
- `EfficientWatchResumption`: Allows for storage-originated bookmark (progress
@@ -584,13 +554,11 @@ Each feature gate is designed for enabling/disabling a specific feature:
metrics from individual containers in target pods.
- `HPAScaleToZero`: Enables setting `minReplicas` to 0 for `HorizontalPodAutoscaler`
resources when using custom or external metrics.
-- `IPTablesOwnershipCleanup`: This causes kubelet to no longer create legacy IPTables rules.
+- `IPTablesOwnershipCleanup`: This causes kubelet to no longer create legacy iptables rules.
- `IdentifyPodOS`: Allows the Pod OS field to be specified. This helps in identifying
the OS of the pod authoritatively during the API server admission time.
In Kubernetes {{< skew currentVersion >}}, the allowed values for the `pod.spec.os.name`
are `windows` and `linux`.
-- `IndexedJob`: Allows the [Job](/docs/concepts/workloads/controllers/job/)
- controller to manage Pod completions per completion index.
- `InTreePluginAWSUnregister`: Stops registering the aws-ebs in-tree plugin in kubelet
and volume controllers.
- `InTreePluginAzureDiskUnregister`: Stops registering the azuredisk in-tree plugin in kubelet
@@ -655,6 +623,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
filesystem walk for better performance and accuracy.
- `LogarithmicScaleDown`: Enable semi-random selection of pods to evict on controller scaledown
based on logarithmic bucketing of pod timestamps.
+- `LoggingAlphaOptions`: Allow fine-tuning of experimental, alpha-quality logging options.
+- `LoggingBetaOptions`: Allow fine-tuning of experimental, beta-quality logging options.
- `MatchLabelKeysInPodTopologySpread`: Enable the `matchLabelKeys` field for
[Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/).
- `MaxUnavailableStatefulSet`: Enables setting the `maxUnavailable` field for the
@@ -667,6 +637,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
cgroup v2 memory controller.
- `MinDomainsInPodTopologySpread`: Enable `minDomains` in
[Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/).
+- `MinimizeIPTablesRestore`: Enables new performance improvement logic
+ in the kube-proxy iptables mode.
- `MixedProtocolLBService`: Enable using different protocols in the same `LoadBalancer` type
Service instance.
- `MultiCIDRRangeAllocator`: Enables the MultiCIDR range allocator.
@@ -683,7 +655,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `NodeSwap`: Enable the kubelet to allocate swap memory for Kubernetes workloads on a node.
Must be used with `KubeletConfiguration.failSwapOn` set to false.
For more details, please see [swap memory](/docs/concepts/architecture/nodes/#swap-memory)
-- `NonPreemptingPriority`: Enable `preemptionPolicy` field for PriorityClass and Pod.
- `OpenAPIEnums`: Enables populating "enum" fields of OpenAPI schemas in the
spec returned from the API server.
- `OpenAPIV3`: Enables the API server to publish OpenAPI v3.
@@ -692,19 +663,12 @@ Each feature gate is designed for enabling/disabling a specific feature:
for more details.
- `PodDeletionCost`: Enable the [Pod Deletion Cost](/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost)
feature which allows users to influence ReplicaSet downscaling order.
-- `PodAffinityNamespaceSelector`: Enable the
- [Pod Affinity Namespace Selector](/docs/concepts/scheduling-eviction/assign-pod-node/#namespace-selector)
- and [CrossNamespacePodAffinity](/docs/concepts/policy/resource-quotas/#cross-namespace-pod-affinity-quota)
- quota scope features.
- `PodAndContainerStatsFromCRI`: Configure the kubelet to gather container and pod stats from the CRI container runtime rather than gathering them from cAdvisor.
As of 1.26, this also includes gathering metrics from CRI and emitting them over `/metrics/cadvisor` (rather than having cAdvisor emit them directly).
- `PodDisruptionConditions`: Enables support for appending a dedicated pod condition indicating that the pod is being deleted due to a disruption.
- `PodHasNetworkCondition`: Enable the kubelet to mark the [PodHasNetwork](/docs/concepts/workloads/pods/pod-lifecycle/#pod-has-network) condition on pods.
- `PodSchedulingReadiness`: Enable setting `schedulingGates` field to control a Pod's [scheduling readiness](/docs/concepts/scheduling-eviction/pod-scheduling-readiness).
- `PodSecurity`: Enables the `PodSecurity` admission plugin.
-- `PreferNominatedNode`: This flag tells the scheduler whether the nominated
- nodes will be checked first before looping through all the other nodes in
- the cluster.
- `ProbeTerminationGracePeriod`: Enable [setting probe-level
`terminationGracePeriodSeconds`](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#probe-level-terminationgraceperiodseconds)
on pods. See the [enhancement proposal](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2238-liveness-probe-grace-period)
@@ -733,25 +697,18 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `RotateKubeletServerCertificate`: Enable the rotation of the server TLS certificate on the kubelet.
See [kubelet configuration](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#kubelet-configuration)
for more details.
-- `SELinuxMountReadWriteOncePod`: Speed up container startup by mounting volumes with the correct
- SELinux label instead of changing each file on the volumes recursively. The initial implementation
- focused on ReadWriteOncePod volumes.
+- `SELinuxMountReadWriteOncePod`: Speeds up container startup by allowing kubelet to mount volumes
+ for a Pod directly with the correct SELinux label instead of changing each file on the volumes
+ recursively. The initial implementation focused on ReadWriteOncePod volumes.
- `SeccompDefault`: Enables the use of `RuntimeDefault` as the default seccomp profile
for all workloads.
The seccomp profile is specified in the `securityContext` of a Pod and/or a Container.
-- `SELinuxMountReadWriteOncePod`: Allows kubelet to mount volumes for a Pod directly with the
- right SELinux label instead of applying the SELinux label recursively on every file on the
- volume.
- `ServerSideApply`: Enables the [Server Side Apply (SSA)](/docs/reference/using-api/server-side-apply/)
feature on the API Server.
- `ServerSideFieldValidation`: Enables server-side field validation. This means the validation
of resource schema is performed at the API server side rather than the client side
(for example, the `kubectl create` or `kubectl apply` command line).
- `ServiceInternalTrafficPolicy`: Enables the `internalTrafficPolicy` field on Services
-- `ServiceLBNodePortControl`: Enables the `allocateLoadBalancerNodePorts` field on Services.
-- `ServiceLoadBalancerClass`: Enables the `loadBalancerClass` field on Services. See
- [Specifying class of load balancer implementation](/docs/concepts/services-networking/service/#load-balancer-class)
- for more details.
- `ServiceIPStaticSubrange`: Enables a strategy for Services ClusterIP allocations, whereby the
ClusterIP range is subdivided. Dynamically allocated ClusterIP addresses will be allocated preferentially
from the upper range allowing users to assign static ClusterIPs from the lower range with a low
@@ -770,8 +727,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
[storage version API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#storageversion-v1alpha1-internal-apiserver-k8s-io).
- `StorageVersionHash`: Allow API servers to expose the storage version hash in the
discovery.
-- `SuspendJob`: Enable support to suspend and resume Jobs. For more details, see
- [the Jobs docs](/docs/concepts/workloads/controllers/job/).
- `TopologyAwareHints`: Enables topology aware routing based on topology hints
in EndpointSlices. See [Topology Aware
Hints](/docs/concepts/services-networking/topology-aware-hints/) for more
@@ -785,7 +740,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
This feature gate will never graduate to beta or stable.
- `TopologyManagerPolicyBetaOptions`: Allow fine-tuning of topology manager policies,
experimental, Beta-quality options.
- This feature gate guards *a group* of topology manager options whose quality level is alpha.
+ This feature gate guards *a group* of topology manager options whose quality level is beta.
This feature gate will never graduate to stable.
- `TopologyManagerPolicyOptions`: Allow fine-tuning of topology manager policies,
- `UserNamespacesStatelessPodsSupport`: Enable user namespace support for stateless Pods.
@@ -795,6 +750,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `WatchBookmark`: Enable support for watch bookmark events.
- `WinDSR`: Allows kube-proxy to create DSR loadbalancers for Windows.
- `WinOverlay`: Allows kube-proxy to run in overlay mode for Windows.
+- `WindowsHostNetwork`: Enables support for joining Windows containers to a host's network namespace.
- `WindowsHostProcessContainers`: Enables support for Windows HostProcess containers.
diff --git a/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md b/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
index cff6120589904..af3687662e61c 100644
--- a/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
+++ b/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
@@ -53,6 +53,13 @@ kube-apiserver [flags]
The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
Aggregator rejects forwarding redirect responses back to the client.
+
+
--allow-metric-labels stringToString Default: []
@@ -449,7 +456,7 @@ kube-apiserver [flags]
--disable-admission-plugins strings
-
admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
+
admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
@@ -470,7 +477,7 @@ kube-apiserver [flags]
--enable-admission-plugins strings
-
admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
+
admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
@@ -508,6 +515,13 @@ kube-apiserver [flags]
The file containing configuration for encryption providers to be used for storing secrets in etcd
+
+
--encryption-provider-config-automatic-reload
+
+
+
Determines if the file set by --encryption-provider-config should be automatically reloaded if the disk contents change. Setting this to true disables the ability to uniquely identify distinct KMS plugins via the API server healthz endpoints.
The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting.
+
The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. Supported media types: [application/json, application/yaml, application/vnd.kubernetes.protobuf]
The number of horizontal pod autoscaler objects that are allowed to sync concurrently. Larger number = more responsive horizontal pod autoscaler objects processing, but more CPU (and network) load.
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
+
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than the lease duration. This is only applicable if leader election is enabled.
If non-empty, will use this string as identification instead of the actual hostname.
+
+
--iptables-localhost-nodeports Default: true
+
+
+
If false, kube-proxy will disable the legacy behavior of allowing NodePort services to be accessed via localhost. This only applies to iptables mode and IPv4.
+
+
--iptables-masquerade-bit int32 Default: 14
@@ -284,21 +291,21 @@ kube-proxy [flags]
--log_dir string
-
If non-empty, write log files in this directory
+
If non-empty, write log files in this directory (no effect when -logtostderr=true)
--log_file string
-
If non-empty, use this log file
+
If non-empty, use this log file (no effect when -logtostderr=true)
--log_file_max_size uint Default: 1800
-
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
+
Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited.
@@ -347,7 +354,7 @@ kube-proxy [flags]
--one_output
-
If true, only write logs to their native severity level (vs also writing to each lower severity level)
+
If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
@@ -382,7 +389,7 @@ kube-proxy [flags]
--proxy-mode ProxyMode
-
Which proxy mode to use: 'iptables' (Linux-only), 'ipvs' (Linux-only), 'kernelspace' (Windows-only), or 'userspace' (Linux/Windows, deprecated). The default value is 'iptables' on Linux and 'userspace' on Windows(will be 'kernelspace' in a future release).This parameter is ignored if a config file is specified by --config.
+
Which proxy mode to use: on Linux this can be 'iptables' (default) or 'ipvs'. On Windows the only supported value is 'kernelspace'. This parameter is ignored if a config file is specified by --config.
@@ -396,7 +403,7 @@ kube-proxy [flags]
--show-hidden-metrics-for-version string
-
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.This parameter is ignored if a config file is specified by --config.
+
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that. This parameter is ignored if a config file is specified by --config.
@@ -410,21 +417,14 @@ kube-proxy [flags]
--skip_log_headers
-
If true, avoid headers when opening log files
+
If true, avoid headers when opening log files (no effect when -logtostderr=true)
--stderrthreshold int Default: 2
-
logs at or above this threshold go to stderr
-
-
-
-
--udp-timeout duration Default: 250ms
-
-
-
How long an idle UDP connection will be kept open (e.g. '250ms', '2s'). Must be greater than 0. Only applicable for proxy-mode=userspace
+
logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false)
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
+
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than the lease duration. This is only applicable if leader election is enabled.
@@ -278,7 +278,7 @@ kube-scheduler [flags]
--logging-format string Default: "text"
-
Sets the log format. Permitted formats: "text". Non-default formats don't honor these flags: --add-dir-header, --alsologtostderr, --log-backtrace-at, --log-dir, --log-file, --log-file-max-size, --logtostderr, --one-output, --skip-headers, --skip-log-headers, --stderrthreshold, --vmodule. Non-default choices are currently alpha and subject to change without warning.
UID is an identifier for the individual request/response. It allows us to distinguish instances of requests which are
+otherwise identical (parallel requests, requests when earlier requests did not modify etc)
+The UID is meant to track the round trip (request/response) between the KAS and the WebHook, not the user request.
+It is suitable for correlating log entries between the webhook and apiserver, for either auditing or debugging.
RequestKind is the fully-qualified type of the original API request (for example, v1.Pod or autoscaling.v1.Scale).
+If this is specified and differs from the value in "kind", an equivalent match and conversion was performed.
+
For example, if deployments can be modified via apps/v1 and apps/v1beta1, and a webhook registered a rule of
+apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] and matchPolicy: Equivalent,
+an API request to apps/v1beta1 deployments would be converted and sent to the webhook
+with kind: {group:"apps", version:"v1", kind:"Deployment"} (matching the rule the webhook registered for),
+and requestKind: {group:"apps", version:"v1beta1", kind:"Deployment"} (indicating the kind of the original API request).
+
See documentation for the "matchPolicy" field in the webhook configuration type for more details.
RequestResource is the fully-qualified resource of the original API request (for example, v1.pods).
+If this is specified and differs from the value in "resource", an equivalent match and conversion was performed.
+
For example, if deployments can be modified via apps/v1 and apps/v1beta1, and a webhook registered a rule of
+apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] and matchPolicy: Equivalent,
+an API request to apps/v1beta1 deployments would be converted and sent to the webhook
+with resource: {group:"apps", version:"v1", resource:"deployments"} (matching the resource the webhook registered for),
+and requestResource: {group:"apps", version:"v1beta1", resource:"deployments"} (indicating the resource of the original API request).
+
See documentation for the "matchPolicy" field in the webhook configuration type.
+
+
+
requestSubResource
+string
+
+
+
RequestSubResource is the name of the subresource of the original API request, if any (for example, "status" or "scale")
+If this is specified and differs from the value in "subResource", an equivalent match and conversion was performed.
+See documentation for the "matchPolicy" field in the webhook configuration type.
+
+
+
name
+string
+
+
+
Name is the name of the object as presented in the request. On a CREATE operation, the client may omit name and
+rely on the server to generate the name. If that is the case, this field will contain an empty string.
+
+
+
namespace
+string
+
+
+
Namespace is the namespace associated with the request (if any).
Operation is the operation being performed. This may be different than the operation
+requested. e.g. a patch can result in either a CREATE or UPDATE Operation.
Options is the operation option structure of the operation being performed.
+e.g. meta.k8s.io/v1.DeleteOptions or meta.k8s.io/v1.CreateOptions. This may be
+different than the options the caller provided. e.g. for a patch request the performed
+Operation might be a CREATE, in which case the Options will be a
+meta.k8s.io/v1.CreateOptions even though the caller provided meta.k8s.io/v1.PatchOptions.
The type of Patch. Currently we only allow "JSONPatch".
+
+
+
auditAnnotations
+map[string]string
+
+
+
AuditAnnotations is an unstructured key value map set by remote admission controller (e.g. error=image-blacklisted).
+MutatingAdmissionWebhook and ValidatingAdmissionWebhook admission controller will prefix the keys with
+admission webhook name (e.g. imagepolicy.example.com/error=image-blacklisted). AuditAnnotations will be provided by
+the admission webhook to add additional context to the audit log for this request.
+
+
+
warnings
+[]string
+
+
+
warnings is a list of warning messages to return to the requesting API client.
+Warning messages describe a problem the client making the API request should correct or be aware of.
+Limit warnings to 120 characters if possible.
+Warnings over 256 characters and large numbers of warnings may be truncated.
PatchType is the type of patch being used to represent the mutated object
+
+
+
+
\ No newline at end of file
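For illustration, a webhook response exercising the fields described above (uid, patchType, auditAnnotations, warnings) might look like the following sketch. The annotation key, warning text, and patch contents are placeholders, not values taken from this change:

```yaml
apiVersion: admission.k8s.io/v1
kind: AdmissionReview
response:
  # uid must echo the uid of the AdmissionReview request being answered.
  uid: "705ab4f5-6393-11e8-b7cc-42010a800002"
  allowed: true
  # patch is a base64-encoded JSONPatch; JSONPatch is currently the only allowed patchType.
  patchType: JSONPatch
  patch: "<base64-encoded JSONPatch>"
  # auditAnnotations are prefixed with the webhook name before they reach the audit log.
  auditAnnotations:
    decision-reason: "image allowed after policy review"
  # warnings are returned to the requesting API client; keep them short.
  warnings:
    - "metadata.annotations/legacy-key is deprecated; use replacement-key instead"
```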
diff --git a/content/en/docs/reference/config-api/apiserver-audit.v1.md b/content/en/docs/reference/config-api/apiserver-audit.v1.md
index 30cdd12dca95e..ffef0b7f2b01f 100644
--- a/content/en/docs/reference/config-api/apiserver-audit.v1.md
+++ b/content/en/docs/reference/config-api/apiserver-audit.v1.md
@@ -72,14 +72,14 @@ For non-resource requests, this is the lower-cased HTTP method.
diff --git a/content/en/docs/reference/config-api/client-authentication.v1.md b/content/en/docs/reference/config-api/client-authentication.v1.md
index 0c7784a8b3d88..0a3fab1a5c493 100644
--- a/content/en/docs/reference/config-api/client-authentication.v1.md
+++ b/content/en/docs/reference/config-api/client-authentication.v1.md
@@ -108,6 +108,15 @@ If empty, system roots should be used.
cluster.
+
disable-compression
+bool
+
+
+
DisableCompression allows client to opt-out of response compression for all requests to the server. This is useful
+to speed up requests (specifically lists) when client-server network bandwidth is ample, by saving time on
+compression (server-side) and decompression (client-side): https://github.com/kubernetes/kubernetes/issues/112296.
ExpirationTimestamp indicates a time when the provided credentials expire.
diff --git a/content/en/docs/reference/config-api/client-authentication.v1beta1.md b/content/en/docs/reference/config-api/client-authentication.v1beta1.md
index 15029d106efe6..09aa4dcc8753e 100644
--- a/content/en/docs/reference/config-api/client-authentication.v1beta1.md
+++ b/content/en/docs/reference/config-api/client-authentication.v1beta1.md
@@ -108,6 +108,15 @@ If empty, system roots should be used.
cluster.
+
disable-compression
+bool
+
+
+
DisableCompression allows client to opt-out of response compression for all requests to the server. This is useful
+to speed up requests (specifically lists) when client-server network bandwidth is ample, by saving time on
+compression (server-side) and decompression (client-side): https://github.com/kubernetes/kubernetes/issues/112296.
udpIdleTimeout is how long an idle UDP connection will be kept open (e.g. '250ms', '2s').
-Must be greater than 0. Only applicable for proxyMode=userspace.
ProxyMode represents modes used by the Kubernetes proxy server.
-
Currently, three modes of proxy are available in Linux platform: 'userspace' (older, going to be EOL), 'iptables'
-(newer, faster), 'ipvs'(newest, better in performance and scalability).
-
Two modes of proxy are available in Windows platform: 'userspace'(older, stable) and 'kernelspace' (newer, faster).
-
In Linux platform, if proxy mode is blank, use the best-available proxy (currently iptables, but may change in the
-future). If the iptables proxy is selected, regardless of how, but the system's kernel or iptables versions are
-insufficient, this always falls back to the userspace proxy. IPVS mode will be enabled when proxy mode is set to 'ipvs',
-and the fall back path is firstly iptables and then userspace.
-
In Windows platform, if proxy mode is blank, use the best-available proxy (currently userspace, but may change in the
-future). If winkernel proxy is selected, regardless of how, but the Windows kernel can't support this mode of proxy,
-this always falls back to the userspace proxy.
+
Currently, two modes of proxy are available on Linux platforms: 'iptables' and 'ipvs'.
+One mode of proxy is available on Windows platforms: 'kernelspace'.
+
If the proxy mode is unspecified, the best-available proxy mode will be used (currently this
+is iptables on Linux and kernelspace on Windows). If the selected proxy mode cannot be
+used (due to lack of kernel support, missing userspace components, etc) then kube-proxy
+will exit with an error.
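As a minimal sketch of the behaviour described above, the proxy mode can be selected in a KubeProxyConfiguration file; leaving `mode` empty picks the platform default (iptables on Linux, kernelspace on Windows):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Leave empty to use the best-available mode for the platform.
mode: "ipvs"
```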
@@ -535,10 +531,12 @@ this always falls back to the userspace proxy.
- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)
-- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
+- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
+- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
+
- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)
@@ -595,10 +593,12 @@ client.
**Appears in:**
-- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
+- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
+- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
+
- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)
@@ -637,6 +637,8 @@ enableProfiling is true.
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
+- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)
+
- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)
diff --git a/content/en/docs/reference/config-api/kube-scheduler-config.v1.md b/content/en/docs/reference/config-api/kube-scheduler-config.v1.md
index ed03a74a53399..876122ef5410f 100644
--- a/content/en/docs/reference/config-api/kube-scheduler-config.v1.md
+++ b/content/en/docs/reference/config-api/kube-scheduler-config.v1.md
@@ -144,7 +144,7 @@ at least "minFeasibleNodesToFind" feasible nodes no matter what the va
Example: if the cluster size is 500 nodes and the value of this flag is 30,
then scheduler stops finding further feasible nodes once it finds 150 feasible ones.
When the value is 0, default percentage (5%--50% based on the size of the cluster) of the
-nodes will be scored.
+nodes will be scored. It is overridden by the profile-level PercentageOfNodesToScore.
podInitialBackoffSeconds[Required]
@@ -202,7 +202,7 @@ with the extender. These extenders are shared by all scheduler profiles.
AddedAffinity is applied to all Pods additionally to the NodeAffinity
@@ -301,7 +301,7 @@ The default strategy is LeastAllocated with an equal "cpu" and "m
DefaultConstraints defines topology spread constraints to be applied to
@@ -635,6 +635,21 @@ If SchedulerName matches with the pod's "spec.schedulerName", then the
is scheduled with this profile.
+
percentageOfNodesToScore[Required]
+int32
+
+
+
PercentageOfNodesToScore is the percentage of all nodes that once found feasible
+for running a pod, the scheduler stops its search for more feasible nodes in
+the cluster. This helps improve scheduler's performance. Scheduler always tries to find
+at least "minFeasibleNodesToFind" feasible nodes no matter what the value of this flag is.
+Example: if the cluster size is 500 nodes and the value of this flag is 30,
+then scheduler stops finding further feasible nodes once it finds 150 feasible ones.
+When the value is 0, default percentage (5%--50% based on the size of the cluster) of the
+nodes will be scored. It will override global PercentageOfNodesToScore. If it is empty,
+global PercentageOfNodesToScore will be used.
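To illustrate how the profile-level field interacts with the global one, a hedged KubeSchedulerConfiguration sketch (profile names are placeholders) could look like this:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
# Global default, used by profiles that do not set their own value.
percentageOfNodesToScore: 30
profiles:
  - schedulerName: default-scheduler
  - schedulerName: batch-scheduler
    # Overrides the global value for this profile only.
    percentageOfNodesToScore: 10
```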
LeaderElectionConfiguration defines the configuration of leader election
clients for components that can run with leader election enabled.
diff --git a/content/en/docs/reference/config-api/kube-scheduler-config.v1beta2.md b/content/en/docs/reference/config-api/kube-scheduler-config.v1beta2.md
index 8a4c735b32647..edf1071e18a05 100644
--- a/content/en/docs/reference/config-api/kube-scheduler-config.v1beta2.md
+++ b/content/en/docs/reference/config-api/kube-scheduler-config.v1beta2.md
@@ -218,7 +218,7 @@ with the extender. These extenders are shared by all scheduler profiles.
AddedAffinity is applied to all Pods additionally to the NodeAffinity
@@ -317,7 +317,7 @@ The default strategy is LeastAllocated with an equal "cpu" and "m
DefaultConstraints defines topology spread constraints to be applied to
@@ -803,6 +803,13 @@ be invoked before default plugins, default plugins must be disabled and re-enabl
diff --git a/content/en/docs/reference/config-api/kube-scheduler-config.v1beta3.md b/content/en/docs/reference/config-api/kube-scheduler-config.v1beta3.md
index c9c2d9651bef0..1f67ffce6c466 100644
--- a/content/en/docs/reference/config-api/kube-scheduler-config.v1beta3.md
+++ b/content/en/docs/reference/config-api/kube-scheduler-config.v1beta3.md
@@ -202,7 +202,7 @@ with the extender. These extenders are shared by all scheduler profiles.
AddedAffinity is applied to all Pods additionally to the NodeAffinity
@@ -301,7 +301,7 @@ The default strategy is LeastAllocated with an equal "cpu" and "m
DefaultConstraints defines topology spread constraints to be applied to
@@ -787,6 +787,13 @@ be invoked before default plugins, default plugins must be disabled and re-enabl
Package v1beta2 defines the v1beta2 version of the kubeadm configuration file format.
This version improves on the v1beta1 format by fixing some minor issues and adding a few new fields.
A list of changes since v1beta1:
@@ -15,7 +16,7 @@ This version improves on the v1beta1 format by fixing some minor issues and addi
The JSON "omitempty" tag of the "taints" field (inside NodeRegistrationOptions) is removed.
See the Kubernetes 1.15 changelog for further details.
-
Migration from old kubeadm config versions
+
Migration from old kubeadm config versions
Please convert your v1beta1 configuration files to v1beta2 using the "kubeadm config migrate" command of kubeadm v1.15.x
(conversion from older releases of kubeadm config files requires older release of kubeadm as well e.g.
@@ -75,16 +76,16 @@ use it to customize the node name, the CRI socket to use or any other settings t
node only (e.g. the node ip).
-
apiServer, that represents the endpoint of the instance of the API server to be deployed on this node;
+
localAPIEndpoint, that represents the endpoint of the instance of the API server to be deployed on this node;
use it e.g. to customize the API server advertise address.
apiVersion:kubeadm.k8s.io/v1beta2kind:ClusterConfigurationnetworking:
-...
+...etcd:
-...
+...apiServer:extraArgs:...
@@ -109,7 +110,7 @@ components by adding customized setting or overriding kubeadm default settings.<
The KubeProxyConfiguration type should be used to change the configuration passed to kube-proxy instances deployed
in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.
See https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/ or
@@ -117,7 +118,7 @@ https://pkg.go.dev/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration
for kube proxy official documentation.
The KubeletConfiguration type should be used to change the configurations that will be passed to all kubelet instances
deployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.
See https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ or
@@ -228,18 +229,18 @@ configuration types to be used during a kubeadm init run.
When executing kubeadm join with the --config option, the JoinConfiguration type should be provided.
The JoinConfiguration type should be used to configure runtime settings, that in case of kubeadm join
are the discovery method used for accessing the cluster info and all the setting which are specific
to the node where kubeadm is executed, including:
-
NodeRegistration, that holds fields that relate to registering the new node to the cluster;
+
nodeRegistration, that holds fields that relate to registering the new node to the cluster;
use it to customize the node name, the CRI socket to use or any other settings that should apply to this
node only (e.g. the node IP).
-
APIEndpoint, that represents the endpoint of the instance of the API server to be eventually deployed on this node.
+
apiEndpoint, that represents the endpoint of the instance of the API server to be eventually deployed on this node.
@@ -637,7 +638,7 @@ for, so other administrators can know its purpose.
expires specifies the timestamp when this token expires. Defaults to being set
@@ -948,7 +949,7 @@ Kubeadm has no knowledge of where certificate files live and they must be suppli
[]string
-
endpoints of etcd members.
+
endpoints of etcd members. Required for external etcd.
caFile[Required]
@@ -1050,7 +1051,7 @@ from which to load cluster information.
taints specifies the taints the Node API object should be registered with.
diff --git a/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md b/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md
index c631b359fabd3..8abeb61fe3572 100644
--- a/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md
+++ b/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md
@@ -137,23 +137,23 @@ configuration types to be used during a kubeadm init run.
expires specifies the timestamp when this token expires. Defaults to being set
+dynamically at runtime based on the ttl. expires and ttl are mutually exclusive.
+
+
+
usages
+[]string
+
+
+
usages describes the ways in which this token can be used. Can by default be used
+for establishing bidirectional trust, but that can be changed here.
+
+
+
groups
+[]string
+
+
+
groups specifies the extra groups that this token will authenticate as when/if
+used for authentication
BootstrapTokenString is a token of the format abcdef.abcdef0123456789 that is used
+for both validation of the practicality of the API server from a joining node's point
+of view and as an authentication method for the node in the bootstrap phase of
+"kubeadm join". This token is and should be short-lived.
+
+
+
+
Field
Description
+
+
+
+
-[Required]
+string
+
+
+ No description provided.
+
+
-[Required]
+string
+
+
+ No description provided.
+
+
+
+
+
+
## `ClusterConfiguration` {#kubeadm-k8s-io-v1beta3-ClusterConfiguration}
@@ -641,7 +744,7 @@ information will be fetched.
caCertHashes specifies a set of public key pins to verify when token-based discovery
is used. The root CA found during discovery must match one of these values.
Specifying an empty set disables root CA pinning, which can be unsafe.
-Each hash is specified as ":", where the only currently supported type is
+Each hash is specified as <type>:<value>, where the only currently supported type is
"sha256". This is a hex-encoded SHA-256 hash of the Subject Public Key Info (SPKI)
object in DER-encoded ASN.1. These hashes can be calculated using, for example, OpenSSL.
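For example, the `<type>:<value>` format appears in a JoinConfiguration roughly as follows (endpoint, token, and hash are placeholders):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "192.168.0.10:6443"
    token: "abcdef.0123456789abcdef"
    caCertHashes:
      # hex-encoded SHA-256 of the CA certificate's Subject Public Key Info
      - "sha256:<hex-encoded SPKI hash>"
```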
@@ -933,7 +1036,7 @@ file from which to load cluster information.
taints specifies the taints the Node API object should be registered with.
-If this field is unset, i.e. nil, in the kubeadm init process it will be defaulted
-with a control-plane taint for control-plane nodes.
+If this field is unset, i.e. nil, it will be defaulted with a control-plane taint for control-plane nodes.
If you don't want to taint your control-plane node, set this field to an empty list,
i.e. taints: [] in the YAML file. This field is solely used for Node registration.
@@ -1173,7 +1275,7 @@ i.e. taints: [] in the YAML file. This field is solely used for Nod
kubeletExtraArgs passes through extra arguments to the kubelet.
The arguments here are passed to the kubelet command line via the environment file
kubeadm writes at runtime for the kubelet to source.
-This overrides the generic base-level configuration in the 'kubelet-config-1.X' ConfigMap.
+This overrides the generic base-level configuration in the kubelet-config ConfigMap.
Flags have higher priority when parsing. These values are local and specific to the node
kubeadm is executing on. A key in this map is the flag name as it appears on the
command line except without leading dash(es).
@@ -1188,13 +1290,13 @@ the current node is registered.
imagePullPolicy specifies the policy for image pulling during kubeadm "init" and
"join" operations.
The value of this field must be one of "Always", "IfNotPresent" or "Never".
-If this field is unset kubeadm will default it to "IfNotPresent", or pull the required
+If this field is not set, kubeadm will default it to "IfNotPresent", or pull the required
images if not present on the host.
expires specifies the timestamp when this token expires. Defaults to being set
-dynamically at runtime based on the ttl. expires and ttl are mutually exclusive.
-
-
-
usages
-[]string
-
-
-
usages describes the ways in which this token can be used. Can by default be used
-for establishing bidirectional trust, but that can be changed here.
-
-
-
groups
-[]string
-
-
-
groups specifies the extra groups that this token will authenticate as when/if
-used for authentication
BootstrapTokenString is a token of the format abcdef.abcdef0123456789 that is used
-for both validation of the practically of the API server from a joining node's point
-of view and as an authentication method for the node in the bootstrap phase of
-"kubeadm join". This token is and should be short-lived.
-
-
-
-
Field
Description
-
-
-
-
-[Required]
-string
-
-
- No description provided.
-
-
-[Required]
-string
-
-
- No description provided.
-
-
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/content/en/docs/reference/config-api/kubeconfig.v1.md b/content/en/docs/reference/config-api/kubeconfig.v1.md
new file mode 100644
index 0000000000000..42cf3bd7cc9c6
--- /dev/null
+++ b/content/en/docs/reference/config-api/kubeconfig.v1.md
@@ -0,0 +1,602 @@
+---
+title: kubeconfig (v1)
+content_type: tool-reference
+package: v1
+auto_generated: true
+---
+
+## Resource Types
+
+
+- [Config](#Config)
+
+
+
+## `AuthInfo` {#AuthInfo}
+
+
+**Appears in:**
+
+- [NamedAuthInfo](#NamedAuthInfo)
+
+
+
AuthInfo contains information that describes identity information. This is used to tell the kubernetes cluster who you are.
+
+
+
+
Field
Description
+
+
+
+
client-certificate
+string
+
+
+
ClientCertificate is the path to a client cert file for TLS.
+
+
+
client-certificate-data
+[]byte
+
+
+
ClientCertificateData contains PEM-encoded data from a client cert file for TLS. Overrides ClientCertificate
+
+
+
client-key
+string
+
+
+
ClientKey is the path to a client key file for TLS.
+
+
+
client-key-data
+[]byte
+
+
+
ClientKeyData contains PEM-encoded data from a client key file for TLS. Overrides ClientKey
+
+
+
token
+string
+
+
+
Token is the bearer token for authentication to the kubernetes cluster.
+
+
+
tokenFile
+string
+
+
+
TokenFile is a pointer to a file that contains a bearer token (as described above). If both Token and TokenFile are present, Token takes precedence.
+
+
+
as
+string
+
+
+
Impersonate is the username to impersonate. The name matches the flag.
+
+
+
as-uid
+string
+
+
+
ImpersonateUID is the uid to impersonate.
+
+
+
as-groups
+[]string
+
+
+
ImpersonateGroups is the groups to impersonate.
+
+
+
as-user-extra
+map[string][]string
+
+
+
ImpersonateUserExtra contains additional information for impersonated user.
+
+
+
username
+string
+
+
+
Username is the username for basic authentication to the kubernetes cluster.
+
+
+
password
+string
+
+
+
Password is the password for basic authentication to the kubernetes cluster.
ProxyURL is the URL to the proxy to be used for all requests made by this
+client. URLs with "http", "https", and "socks5" schemes are supported. If
+this configuration is not provided or the empty string, the client
+attempts to construct a proxy configuration from http_proxy and
+https_proxy environment variables. If these environment variables are not
+set, the client does not attempt to proxy requests.
+
socks5 proxying does not currently support spdy streaming endpoints (exec,
+attach, port forward).
+
+
+
disable-compression
+bool
+
+
+
DisableCompression allows client to opt-out of response compression for all requests to the server. This is useful
+to speed up requests (specifically lists) when client-server network bandwidth is ample, by saving time on
+compression (server-side) and decompression (client-side): https://github.com/kubernetes/kubernetes/issues/112296.
Context is a tuple of references to a cluster (how do I communicate with a kubernetes cluster), a user (how do I identify myself), and a namespace (what subset of resources do I want to work with)
+
+
+
+
Field
Description
+
+
+
+
cluster[Required]
+string
+
+
+
Cluster is the name of the cluster for this context
+
+
+
user[Required]
+string
+
+
+
AuthInfo is the name of the authInfo for this context
+
+
+
namespace
+string
+
+
+
Namespace is the default namespace to use on unspecified requests
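Putting the AuthInfo, Cluster, and Context pieces together, a minimal kubeconfig sketch (server address, file paths, and names are illustrative) looks like:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: example-cluster
    cluster:
      server: https://203.0.113.10:6443
      certificate-authority: /etc/kubernetes/pki/ca.crt
users:
  - name: example-admin
    user:
      client-certificate: /home/user/.kube/admin.crt
      client-key: /home/user/.kube/admin.key
contexts:
  - name: example-admin@example-cluster
    context:
      cluster: example-cluster
      user: example-admin
      namespace: kube-system   # default namespace for unspecified requests
current-context: example-admin@example-cluster
```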
Env defines additional environment variables to expose to the process. These
+are unioned with the host's environment, as well as variables client-go uses
+to pass argument to the plugin.
+
+
+
apiVersion[Required]
+string
+
+
+
Preferred input version of the ExecInfo. The returned ExecCredentials MUST use
+the same encoding version as the input.
+
+
+
installHint[Required]
+string
+
+
+
This text is shown to the user when the executable doesn't seem to be
+present. For example, brew install foo-cli might be a good InstallHint for
+foo-cli on Mac OS systems.
+
+
+
provideClusterInfo[Required]
+bool
+
+
+
ProvideClusterInfo determines whether or not to provide cluster information,
+which could potentially contain very large CA data, to this exec plugin as a
+part of the KUBERNETES_EXEC_INFO environment variable. By default, it is set
+to false. Package k8s.io/client-go/tools/auth/exec provides helper methods for
+reading this environment variable.
InteractiveMode determines this plugin's relationship with standard input. Valid
+values are "Never" (this exec plugin never uses standard input), "IfAvailable" (this
+exec plugin wants to use standard input if it is available), or "Always" (this exec
+plugin requires standard input to function). See ExecInteractiveMode values for more
+details.
+
If APIVersion is client.authentication.k8s.io/v1alpha1 or
+client.authentication.k8s.io/v1beta1, then this field is optional and defaults
+to "IfAvailable" when unset. Otherwise, this field is required.
Extensions holds additional information. This is useful for extenders so that reads and writes don't clobber unknown fields
+
+
+
+
\ No newline at end of file
diff --git a/content/en/docs/reference/config-api/kubelet-config.v1.md b/content/en/docs/reference/config-api/kubelet-config.v1.md
new file mode 100644
index 0000000000000..abaf48ec4bb3b
--- /dev/null
+++ b/content/en/docs/reference/config-api/kubelet-config.v1.md
@@ -0,0 +1,379 @@
+---
+title: Kubelet Configuration (v1)
+content_type: tool-reference
+package: kubelet.config.k8s.io/v1
+auto_generated: true
+---
+
+
+## Resource Types
+
+
+- [CredentialProviderConfig](#kubelet-config-k8s-io-v1-CredentialProviderConfig)
+
+
+
+## `CredentialProviderConfig` {#kubelet-config-k8s-io-v1-CredentialProviderConfig}
+
+
+
+
CredentialProviderConfig is the configuration containing information about
+each exec credential provider. Kubelet reads this configuration from disk and enables
+each provider as specified by the CredentialProvider type.
providers is a list of credential provider plugins that will be enabled by the kubelet.
+Multiple providers may match against a single image, in which case credentials
+from all providers will be returned to the kubelet. If multiple providers are called
+for a single image, the results are combined. If providers return overlapping
+auth keys, the value from the provider earlier in this list is used.
CredentialProvider represents an exec plugin to be invoked by the kubelet. The plugin is only
+invoked when an image being pulled matches the images handled by the plugin (see matchImages).
+
+
+
+
Field
Description
+
+
+
+
name[Required]
+string
+
+
+
name is the required name of the credential provider. It must match the name of the
+provider executable as seen by the kubelet. The executable must be in the kubelet's
+bin directory (set by the --image-credential-provider-bin-dir flag).
+
+
+
matchImages[Required]
+[]string
+
+
+
matchImages is a required list of strings used to match against images in order to
+determine if this provider should be invoked. If one of the strings matches the
+requested image from the kubelet, the plugin will be invoked and given a chance
+to provide credentials. Images are expected to contain the registry domain
+and URL path.
+
Each entry in matchImages is a pattern which can optionally contain a port and a path.
+Globs can be used in the domain, but not in the port or the path. Globs are supported
+as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'.
+Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match
+a single subdomain segment, so *.io does not match *.k8s.io.
+
A match exists between an image and a matchImage when all of the below are true:
+
+
Both contain the same number of domain parts and each part matches.
+
The URL path of an imageMatch must be a prefix of the target image URL path.
+
If the imageMatch contains a port, then the port must match in the image as well.
defaultCacheDuration is the default duration the plugin will cache credentials in-memory
+if a cache duration is not provided in the plugin response. This field is required.
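A CredentialProviderConfig tying these fields together might look like the following sketch; the provider name and registry domains are placeholders:

```yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  - name: example-registry-provider      # must match the plugin binary name
    matchImages:
      - "registry.example.com"
      - "*.registry.example.com"
    defaultCacheDuration: "10m"
    apiVersion: credentialprovider.kubelet.k8s.io/v1
    args:
      - get-credentials
```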
+
+
+
apiVersion[Required]
+string
+
+
+
Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse
+MUST use the same encoding version as the input. Current supported values are:
+
+
credentialprovider.kubelet.k8s.io/v1
+
+
+
+
args
+[]string
+
+
+
Arguments to pass to the command when executing it.
Env defines additional environment variables to expose to the process. These
+are unioned with the host's environment, as well as variables client-go uses
+to pass argument to the plugin.
JSONOptions contains options for logging format "json".
+
+
+
+
Field
Description
+
+
+
+
splitStream[Required]
+bool
+
+
+
[Alpha] SplitStream redirects error messages to stderr while
+info messages go to stdout, with buffering. The default is to write
+both to stdout, without buffering. Only available when
+the LoggingAlphaOptions feature gate is enabled.
[Alpha] InfoBufferSize sets the size of the info stream when
+using split streams. The default is zero, which disables buffering.
+Only available when the LoggingAlphaOptions feature gate is enabled.
Maximum number of nanoseconds (i.e. 1s = 1000000000) between log
+flushes. Ignored if the selected logging backend writes log
+messages without buffering.
Verbosity is the threshold that determines which log messages are
+logged. Default is zero which logs only the most important
+messages. Higher values enable additional messages. Error messages
+are always logged.
[Alpha] Options holds additional parameters that are specific
+to the different logging formats. Only the options for the selected
+format get used, but all of them get validated.
+Only available when the LoggingAlphaOptions feature gate is enabled.
TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.
+
+
+
+
Field
Description
+
+
+
+
endpoint
+string
+
+
+
Endpoint of the collector this component will report traces to.
+The connection is insecure, and does not currently support TLS.
+The recommended value is unset, in which case the endpoint is the OTLP gRPC default, localhost:4317.
+
+
+
samplingRatePerMillion
+int32
+
+
+
SamplingRatePerMillion is the number of samples to collect per million spans.
+Recommended is unset. If unset, sampler respects its parent span's sampling
+rate, but otherwise never samples.
VerbosityLevel represents a klog or logr verbosity threshold.
+
+
diff --git a/content/en/docs/reference/config-api/kubelet-config.v1beta1.md b/content/en/docs/reference/config-api/kubelet-config.v1beta1.md
index 2d415c617aa9a..a11c179a58aa3 100644
--- a/content/en/docs/reference/config-api/kubelet-config.v1beta1.md
+++ b/content/en/docs/reference/config-api/kubelet-config.v1beta1.md
@@ -547,6 +547,16 @@ that topology manager requests and hint providers generate. Valid values include
Default: "container"
+
topologyManagerPolicyOptions
+map[string]string
+
+
+
TopologyManagerPolicyOptions is a set of key=value which allows to set extra options
+to fine tune the behaviour of the topology manager policies.
+Requires both the "TopologyManager" and "TopologyManagerPolicyOptions" feature gates to be enabled.
+Default: nil
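As a sketch only, such options would be set in the kubelet configuration roughly as below; `prefer-closest-numa-nodes` is one example option key, and its availability depends on the release and the enabled feature gates:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  TopologyManagerPolicyOptions: true
topologyManagerPolicy: best-effort
topologyManagerScope: pod
topologyManagerPolicyOptions:
  prefer-closest-numa-nodes: "true"   # example option; check the release notes for supported keys
```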
+
+
qosReserved map[string]string
@@ -645,7 +655,7 @@ Default: true
cpuCFSQuotaPeriod is the CPU CFS quota period value, cpu.cfs_period_us.
-The value must be between 1 us and 1 second, inclusive.
+The value must be between 1 ms and 1 second, inclusive.
Requires the CustomCPUCFSQuotaPeriod feature gate to be enabled.
Default: "100ms"
@@ -1145,12 +1155,12 @@ Default: false
when setting the cgroupv2 memory.high value to enforce MemoryQoS.
Decreasing this factor will set lower high limit for container cgroups and put heavier reclaim pressure
while increasing will put less reclaim pressure.
-See http://kep.k8s.io/2570 for more details.
+See https://kep.k8s.io/2570 for more details.
Default: 0.8
CredentialProviderRequest includes the image that the kubelet requires authentication for.
+Kubelet will pass this request object to the plugin via stdin. In general, plugins should
+prefer responding with the same apiVersion they were sent.
+
+
+
+
Field
Description
+
+
+
apiVersion string
credentialprovider.kubelet.k8s.io/v1
+
kind string
CredentialProviderRequest
+
+
+
image[Required]
+string
+
+
+
image is the container image that is being pulled as part of the
+credential provider plugin request. Plugins may optionally parse the image
+to extract any information required to fetch credentials.
CredentialProviderResponse holds credentials that the kubelet should use for the specified
+image provided in the original request. Kubelet will read the response from the plugin via stdout.
+This response should be set to the same apiVersion as CredentialProviderRequest.
cacheKeyType indicates the type of caching key to use based on the image provided
+in the request. There are three valid values for the cache key type: Image, Registry, and
+Global. If an invalid value is specified, the response will NOT be used by the kubelet.
cacheDuration indicates the duration the provided credentials should be cached for.
+The kubelet will use this field to set the in-memory cache duration for credentials
+in the AuthConfig. If null, the kubelet will use defaultCacheDuration provided in
+CredentialProviderConfig. If set to 0, the kubelet will not cache the provided AuthConfig.
auth is a map containing authentication information passed into the kubelet.
+Each key is a match image string (more on this below). The corresponding authConfig value
+should be valid for all images that match against this key. A plugin should set
+this field to null if no valid credentials can be returned for the requested image.
+
Each key in the map is a pattern which can optionally contain a port and a path.
+Globs can be used in the domain, but not in the port or the path. Globs are supported
+as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'.
+Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match
+a single subdomain segment, so *.io does not match *.k8s.io.
+
The kubelet will match images against the key when all of the below are true:
+
+
Both contain the same number of domain parts and each part matches.
+
The URL path of an imageMatch must be a prefix of the target image URL path.
+
If the imageMatch contains a port, then the port must match in the image as well.
+
+
When multiple keys are returned, the kubelet will traverse all keys in reverse order so that:
+
+
longer keys come before shorter keys with the same prefix
+
non-wildcard keys come before wildcard keys with the same prefix.
+
+
For any given match, the kubelet will attempt an image pull with the provided credentials,
+stopping after the first successfully authenticated pull.
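A plugin response exercising these fields might look like the following sketch (registry domain and credentials are placeholders):

```yaml
apiVersion: credentialprovider.kubelet.k8s.io/v1
kind: CredentialProviderResponse
cacheKeyType: Registry        # one of Image, Registry, Global
cacheDuration: "5m"           # 0 disables caching; null falls back to defaultCacheDuration
auth:
  "*.registry.example.com":
    username: "ci-pull-bot"   # an empty username is also valid
    password: "<registry token>"
```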
AuthConfig contains authentication information for a container registry.
+Only username/password based authentication is supported today, but more authentication
+mechanisms may be added in the future.
+
+
+
+
Field
Description
+
+
+
+
username[Required]
+string
+
+
+
username is the username used for authenticating to the container registry
+An empty username is valid.
+
+
+
password[Required]
+string
+
+
+
password is the password used for authenticating to the container registry
+An empty password is valid.
+
+
+
+
+
+## `PluginCacheKeyType` {#credentialprovider-kubelet-k8s-io-v1-PluginCacheKeyType}
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1-CredentialProviderResponse)
+
+
+
+
+
\ No newline at end of file
diff --git a/content/en/docs/reference/glossary/feature-gates.md b/content/en/docs/reference/glossary/feature-gates.md
new file mode 100644
index 0000000000000..410581ee0ced4
--- /dev/null
+++ b/content/en/docs/reference/glossary/feature-gates.md
@@ -0,0 +1,23 @@
+---
+title: Feature gate
+id: feature-gate
+date: 2023-01-12
+full_link: /docs/reference/command-line-tools-reference/feature-gates/
+short_description: >
+ A way to control whether or not a particular Kubernetes feature is enabled.
+
+aka:
+tags:
+- fundamental
+- operation
+---
+
+Feature gates are a set of keys (opaque string values) that you can use to control which
+Kubernetes features are enabled in your cluster.
+
+
+
+You can turn these features on or off using the `--feature-gates` command line flag on each Kubernetes component.
+Each Kubernetes component lets you enable or disable a set of feature gates that are relevant to that component.
+The Kubernetes documentation lists all current
+[feature gates](/docs/reference/command-line-tools-reference/feature-gates/) and what they control.
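Some components also accept feature gates through their configuration file rather than the command line; for example, a hedged KubeletConfiguration sketch (which gates exist depends on the Kubernetes release) could read:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  PodAndContainerStatsFromCRI: true
  SeccompDefault: true
```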
diff --git a/content/en/docs/reference/glossary/istio.md b/content/en/docs/reference/glossary/istio.md
index fbf29f421c952..7dfea5de9e1ce 100644
--- a/content/en/docs/reference/glossary/istio.md
+++ b/content/en/docs/reference/glossary/istio.md
@@ -2,7 +2,7 @@
title: Istio
id: istio
date: 2018-04-12
-full_link: https://istio.io/docs/concepts/what-is-istio/
+full_link: https://istio.io/latest/about/service-mesh/#what-is-istio
short_description: >
An open platform (not Kubernetes-specific) that provides a uniform way to integrate microservices, manage traffic flow, enforce policies, and aggregate telemetry data.
@@ -17,4 +17,3 @@ tags:
Adding Istio does not require changing application code. It is a layer of infrastructure between a service and the network, which when combined with service deployments, is commonly referred to as a service mesh. Istio's control plane abstracts away the underlying cluster management platform, which may be Kubernetes, Mesosphere, etc.
-
diff --git a/content/en/docs/reference/glossary/kops.md b/content/en/docs/reference/glossary/kops.md
index 0a3da419694f6..3a1ea5628cbda 100644
--- a/content/en/docs/reference/glossary/kops.md
+++ b/content/en/docs/reference/glossary/kops.md
@@ -1,32 +1,29 @@
---
-title: Kops
+title: kOps (Kubernetes Operations)
id: kops
date: 2018-04-12
-full_link: /docs/getting-started-guides/kops/
+full_link: /docs/setup/production-environment/kops/
short_description: >
- A CLI tool that helps you create, destroy, upgrade and maintain production-grade, highly available, Kubernetes clusters.
+ kOps will not only help you create, destroy, upgrade and maintain production-grade, highly available Kubernetes clusters, but it will also provision the necessary cloud infrastructure.
aka:
tags:
- tool
- operation
---
- A CLI tool that helps you create, destroy, upgrade and maintain production-grade, highly available, Kubernetes clusters.
+
+`kOps` will not only help you create, destroy, upgrade and maintain production-grade, highly available Kubernetes clusters, but it will also provision the necessary cloud infrastructure.
{{< note >}}
-kops has general availability support only for AWS.
-Support for using kops with GCE and VMware vSphere are in alpha.
+AWS (Amazon Web Services) is currently officially supported, with DigitalOcean, GCE and OpenStack in beta support, and Azure in alpha.
{{< /note >}}
-`kops` provisions your cluster with:
-
+`kOps` is an automated provisioning system:
* Fully automated installation
- * DNS-based cluster identification
- * Self-healing: everything runs in Auto-Scaling Groups
- * Limited OS support (Debian preferred, Ubuntu 16.04 supported, early support for CentOS & RHEL)
- * High availability (HA) support
- * The ability to directly provision, or to generate Terraform manifests
-
-You can also build your own cluster using {{< glossary_tooltip term_id="kubeadm" >}} as a building block. `kops` builds on the kubeadm work.
+ * Uses DNS to identify clusters
+ * Self-healing: everything runs in Auto-Scaling Groups
+ * Multiple OS support (Amazon Linux, Debian, Flatcar, RHEL, Rocky and Ubuntu)
+ * High-Availability support
+ * Can directly provision, or generate Terraform manifests
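+
+As a rough sketch of typical usage (the cluster name, state store, and zone below
+are placeholder values, not defaults):
+
+```shell
+# Illustrative only: define a cluster spec, then have kOps provision it.
+kops create cluster --name=cluster.example.com \
+  --state=s3://example-kops-state --zones=us-east-1a
+kops update cluster --name=cluster.example.com \
+  --state=s3://example-kops-state --yes
+```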
diff --git a/content/en/docs/reference/kubernetes-api/workload-resources/job-v1.md b/content/en/docs/reference/kubernetes-api/workload-resources/job-v1.md
index cefcb950865c1..42cec53db7cfc 100644
--- a/content/en/docs/reference/kubernetes-api/workload-resources/job-v1.md
+++ b/content/en/docs/reference/kubernetes-api/workload-resources/job-v1.md
@@ -140,7 +140,8 @@ JobSpec describes how the job execution will look like.
- **podFailurePolicy.rules.action** (string), required
- Specifies the action taken on a pod failure when the requirements are satisfied. Possible values are: - FailJob: indicates that the pod's job is marked as Failed and all
+ Specifies the action taken on a pod failure when the requirements are satisfied. Possible values are:
+ - FailJob: indicates that the pod's job is marked as Failed and all
running pods are terminated.
- Ignore: indicates that the counter towards the .backoffLimit is not
incremented and a replacement pod is created.
@@ -176,7 +177,8 @@ JobSpec describes how the job execution will look like.
- **podFailurePolicy.rules.onExitCodes.operator** (string), required
- Represents the relationship between the container exit code(s) and the specified values. Containers completed with success (exit code 0) are excluded from the requirement check. Possible values are: - In: the requirement is satisfied if at least one container exit code
+ Represents the relationship between the container exit code(s) and the specified values. Containers completed with success (exit code 0) are excluded from the requirement check. Possible values are:
+ - In: the requirement is satisfied if at least one container exit code
(might be multiple if there are multiple containers not restricted
by the 'containerName' field) is in the set of specified values.
- NotIn: the requirement is satisfied if at least one container exit code
diff --git a/content/en/docs/reference/kubernetes-api/workload-resources/pod-v1.md b/content/en/docs/reference/kubernetes-api/workload-resources/pod-v1.md
index 2bf88d6e43327..3b4dbd6ef8486 100644
--- a/content/en/docs/reference/kubernetes-api/workload-resources/pod-v1.md
+++ b/content/en/docs/reference/kubernetes-api/workload-resources/pod-v1.md
@@ -219,9 +219,12 @@ PodSpec is a description of a pod.
- **topologySpreadConstraints.whenUnsatisfiable** (string), required
- WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location,
+ WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint.
+ - DoNotSchedule (default) tells the scheduler not to schedule it.
+ - ScheduleAnyway tells the scheduler to schedule the pod in any location,
but giving higher precedence to topologies that would help reduce the
skew.
+
A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it *more* imbalanced. It's a required field.
diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md
index 75974454588c5..46cf104bf9753 100644
--- a/content/en/docs/reference/labels-annotations-taints/_index.md
+++ b/content/en/docs/reference/labels-annotations-taints/_index.md
@@ -155,6 +155,32 @@ This label has been deprecated. Please use `kubernetes.io/arch` instead.
This label has been deprecated. Please use `kubernetes.io/os` instead.
+### kube-aggregator.kubernetes.io/automanaged {#kube-aggregator-kubernetesio-automanaged}
+
+Example: `kube-aggregator.kubernetes.io/automanaged: "onstart"`
+
+Used on: APIService
+
+The `kube-apiserver` sets this label on any APIService object that the API server has created automatically. The label marks how the control plane should manage that APIService. You should not add, modify, or remove this label by yourself.
+
+{{< note >}}
+Automanaged APIService objects are deleted by kube-apiserver when it has no built-in or custom resource API corresponding to the API group/version of the APIService.
+{{< /note >}}
+
+There are two possible values:
+- `onstart`: The APIService should be reconciled when an API server starts up, but not otherwise.
+- `true`: The API server should reconcile this APIService continuously.
+
+### service.alpha.kubernetes.io/tolerate-unready-endpoints (deprecated)
+
+Used on: StatefulSet
+
+This annotation on a Service denotes whether the Endpoints controller should go ahead and create Endpoints for unready Pods.
+Endpoints of these Services retain their DNS records and continue receiving
+traffic for the Service from the moment the kubelet starts all containers in the pod
+and marks it _Running_, until the kubelet stops all containers and deletes the pod from
+the API server.
+
### kubernetes.io/hostname {#kubernetesiohostname}
Example: `kubernetes.io/hostname: "ip-172-20-114-199.ec2.internal"`
@@ -294,6 +320,50 @@ See [topology.kubernetes.io/zone](#topologykubernetesiozone).
{{< note >}} Starting in v1.17, this label is deprecated in favor of [topology.kubernetes.io/zone](#topologykubernetesiozone). {{< /note >}}
+### pv.kubernetes.io/bind-completed {#pv-kubernetesiobind-completed}
+
+Example: `pv.kubernetes.io/bind-completed: "yes"`
+
+Used on: PersistentVolumeClaim
+
+When this annotation is set on a PersistentVolumeClaim (PVC), it indicates that the lifecycle
+of the PVC has passed through initial binding setup. When present, that information changes
+how the control plane interprets the state of PVC objects.
+The value of this annotation does not matter to Kubernetes.
+
+### pv.kubernetes.io/bound-by-controller {#pv-kubernetesioboundby-controller}
+
+Example: `pv.kubernetes.io/bound-by-controller: "yes"`
+
+Used on: PersistentVolume, PersistentVolumeClaim
+
+If this annotation is set on a PersistentVolume or PersistentVolumeClaim, it indicates that a storage binding
+(PersistentVolume → PersistentVolumeClaim, or PersistentVolumeClaim → PersistentVolume) was installed
+by the {{< glossary_tooltip text="controller" term_id="controller" >}}.
+If the annotation isn't set, and there is a storage binding in place, the absence of that annotation means that
+the binding was done manually. The value of this annotation does not matter.
+
+### pv.kubernetes.io/provisioned-by {#pv-kubernetesiodynamically-provisioned}
+
+Example: `pv.kubernetes.io/provisioned-by: "kubernetes.io/rbd"`
+
+Used on: PersistentVolume
+
+This annotation is added to a PersistentVolume (PV) that has been dynamically provisioned by Kubernetes.
+Its value is the name of the volume plugin that created the volume. It serves both users (to show where a PV
+comes from) and Kubernetes (to recognize dynamically provisioned PVs in its decisions).
+
+### pv.kubernetes.io/migrated-to {#pv-kubernetesio-migratedto}
+
+Example: `pv.kubernetes.io/migrated-to: pd.csi.storage.gke.io`
+
+Used on: PersistentVolume, PersistentVolumeClaim
+
+This annotation is added to a PersistentVolume (PV) or PersistentVolumeClaim (PVC) that is supposed to be
+dynamically provisioned or deleted by its corresponding CSI driver through the `CSIMigration` feature gate.
+When this annotation is set, the Kubernetes components "stand down" and the `external-provisioner`
+will act on the objects instead.
+
### statefulset.kubernetes.io/pod-name {#statefulsetkubernetesiopod-name}
Example:
@@ -377,6 +447,25 @@ Used on: PersistentVolumeClaim
This annotation will be added to dynamic provisioning required PVC.
+### volume.kubernetes.io/selected-node
+
+Used on: PersistentVolumeClaim
+
+This annotation is added to a PVC when the scheduler triggers dynamic provisioning for that claim. Its value is the name of the selected node.
+
+### volumes.kubernetes.io/controller-managed-attach-detach
+
+Used on: Node
+
+If a node has set the annotation `volumes.kubernetes.io/controller-managed-attach-detach`
+on itself, then its storage attach and detach operations are being managed
+by the _volume attach/detach_
+{{< glossary_tooltip text="controller" term_id="controller" >}} running within the
+{{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}.
+
+The value of the annotation isn't important; if this annotation exists on a node,
+then storage attaches and detaches are controller managed.
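+
+For example, you could check for the annotation on a node like this (the node name
+is a placeholder):
+
+```shell
+# Print the node's manifest and look for the attach/detach annotation.
+kubectl get node example-node -o yaml | grep controller-managed-attach-detach
+```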
+
### node.kubernetes.io/windows-build {#nodekubernetesiowindows-build}
Example: `node.kubernetes.io/windows-build: "10.0.17763"`
@@ -769,6 +858,16 @@ created from a VolumeSnapshot.
Refer to [Converting the volume mode of a Snapshot](/docs/concepts/storage/volume-snapshots/#convert-volume-mode)
and the [Kubernetes CSI Developer Documentation](https://kubernetes-csi.github.io/docs/) for more information.
+### scheduler.alpha.kubernetes.io/critical-pod (deprecated)
+
+Example: `scheduler.alpha.kubernetes.io/critical-pod: ""`
+
+Used on: Pod
+
+This annotation lets the Kubernetes control plane know that a pod is a critical pod, so that the descheduler will not remove it.
+
+{{< note >}} Starting in v1.16, this annotation was removed in favor of [Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/). {{< /note >}}
+
## Annotations used for audit
diff --git a/content/en/docs/reference/networking/virtual-ips.md b/content/en/docs/reference/networking/virtual-ips.md
index 4022317a82952..af3899a703d2e 100644
--- a/content/en/docs/reference/networking/virtual-ips.md
+++ b/content/en/docs/reference/networking/virtual-ips.md
@@ -14,7 +14,6 @@ mechanism for {{< glossary_tooltip term_id="service" text="Services">}}
of `type` other than
[`ExternalName`](/docs/concepts/services-networking/service/#externalname).
-
A question that pops up every now and then is why Kubernetes relies on
proxying to forward inbound traffic to backends. What about other
approaches? For example, would it be possible to configure DNS records that
@@ -39,15 +38,13 @@ network proxying service on a computer. Although the `kube-proxy` executable su
`cleanup` function, this function is not an official feature and thus is only available
to use as-is.
-
-Some of the details in this reference refer to an example: the back end Pods for a stateless
+Some of the details in this reference refer to an example: the backend Pods for a stateless
image-processing workload, running with three replicas. Those replicas are
fungible—frontends do not care which backend they use. While the actual Pods that
compose the backend set may change, the frontend clients should not need to be aware of that,
nor should they need to keep track of the set of backends themselves.
-
## Proxy modes
@@ -87,7 +84,7 @@ to verify that backend Pods are working OK, so that kube-proxy in iptables mode
only sees backends that test out as healthy. Doing this means you avoid
having traffic sent via kube-proxy to a Pod that's known to have failed.
-{{< figure src="/images/docs/services-iptables-overview.svg" title="Services overview diagram for iptables proxy" class="diagram-medium" >}}
+{{< figure src="/images/docs/services-iptables-overview.svg" title="Virtual IP mechanism for Services, using iptables mode" class="diagram-medium" >}}
#### Example {#packet-processing-iptables}
@@ -111,6 +108,91 @@ redirected to the backend without rewriting the client IP address.
This same basic flow executes when traffic comes in through a node-port or
through a load-balancer, though in those cases the client IP address does get altered.
+#### Optimizing iptables mode performance
+
+In large clusters (with tens of thousands of Pods and Services), the
+iptables mode of kube-proxy may take a long time to update the rules
+in the kernel when Services (or their EndpointSlices) change. You can adjust the syncing
+behavior of kube-proxy via options in the [`iptables` section](/docs/reference/config-api/kube-proxy-config.v1alpha1/#kubeproxy-config-k8s-io-v1alpha1-KubeProxyIPTablesConfiguration)
+of the
+kube-proxy [configuration file](/docs/reference/config-api/kube-proxy-config.v1alpha1/)
+(which you specify via `kube-proxy --config <path-to-config>`):
+
+```yaml
+...
+iptables:
+ minSyncPeriod: 1s
+ syncPeriod: 30s
+...
+```
+
+##### `minSyncPeriod`
+
+The `minSyncPeriod` parameter sets the minimum duration between
+attempts to resynchronize iptables rules with the kernel. If it is
+`0s`, then kube-proxy will always immediately synchronize the rules
+every time any Service or Endpoint changes. This works fine in very
+small clusters, but it results in a lot of redundant work when lots of
+things change in a small time period. For example, if you have a
+Service backed by a Deployment with 100 pods, and you delete the
+Deployment, then with `minSyncPeriod: 0s`, kube-proxy would end up
+removing the Service's Endpoints from the iptables rules one by one,
+for a total of 100 updates. With a larger `minSyncPeriod`, multiple
+Pod deletion events would get aggregated together, so kube-proxy might
+instead end up making, say, 5 updates, each removing 20 endpoints,
+which will be much more efficient in terms of CPU, and result in the
+full set of changes being synchronized faster.
+
+The larger the value of `minSyncPeriod`, the more work that can be
+aggregated, but the downside is that each individual change may end up
+waiting up to the full `minSyncPeriod` before being processed, meaning
+that the iptables rules spend more time being out-of-sync with the
+current apiserver state.
+
+The default value of `1s` is a good compromise for small and medium
+clusters. In large clusters, it may be necessary to set it to a larger
+value. (In particular, if kube-proxy's
+`sync_proxy_rules_duration_seconds` metric indicates an average
+time much larger than 1 second, then bumping up `minSyncPeriod` may
+make updates more efficient.)
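+
+As a rough check, you can read that metric straight from kube-proxy on a node;
+this assumes the default `metricsBindAddress` of `127.0.0.1:10249`:
+
+```shell
+# Run on a node: inspect how long each iptables sync is taking.
+curl -s http://127.0.0.1:10249/metrics | grep sync_proxy_rules_duration_seconds
+```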
+
+##### `syncPeriod`
+
+The `syncPeriod` parameter controls a handful of synchronization
+operations that are not directly related to changes in individual
+Services and Endpoints. In particular, it controls how quickly
+kube-proxy notices if an external component has interfered with
+kube-proxy's iptables rules. In large clusters, kube-proxy also only
+performs certain cleanup operations once every `syncPeriod` to avoid
+unnecessary work.
+
+For the most part, increasing `syncPeriod` is not expected to have much
+impact on performance, but in the past, it was sometimes useful to set
+it to a very large value (for example, `1h`). This is no longer recommended,
+and is likely to hurt functionality more than it improves performance.
+
+##### Experimental performance improvements {#minimize-iptables-restore}
+
+{{< feature-state for_k8s_version="v1.26" state="alpha" >}}
+
+In Kubernetes 1.26, some new performance improvements were made to the
+iptables proxy mode, but they are not enabled by default (and should
+probably not be enabled in production clusters yet). To try them out,
+enable the `MinimizeIPTablesRestore` [feature
+gate](/docs/reference/command-line-tools-reference/feature-gates/) for
+kube-proxy with `--feature-gates=MinimizeIPTablesRestore=true,…`.
+
+If you enable that feature gate and you were previously overriding
+`minSyncPeriod`, you should try removing that override and letting
+kube-proxy use the default value (`1s`) or at least a smaller value
+than you were using before.
+
+If you notice kube-proxy's
+`sync_proxy_rules_iptables_restore_failures_total` or
+`sync_proxy_rules_iptables_partial_restore_failures_total` metrics
+increasing after enabling this feature, that likely indicates you are
+encountering bugs in the feature and you should file a bug report.
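+
+One way to watch for that, again assuming kube-proxy's default metrics address on
+the node, is:
+
+```shell
+# Counters that keep growing here suggest the partial-restore optimization is failing.
+curl -s http://127.0.0.1:10249/metrics | grep sync_proxy_rules_iptables
+```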
+
### IPVS proxy mode {#proxy-mode-ipvs}
In `ipvs` mode, kube-proxy watches Kubernetes Services and EndpointSlices,
@@ -147,7 +229,7 @@ kernel modules are available. If the IPVS kernel modules are not detected, then
falls back to running in iptables proxy mode.
{{< /note >}}
-{{< figure src="/images/docs/services-ipvs-overview.svg" title="Services overview diagram for IPVS proxy" class="diagram-medium" >}}
+{{< figure src="/images/docs/services-ipvs-overview.svg" title="Virtual IP address mechanism for Services, using IPVS mode" class="diagram-medium" >}}
## Session affinity
@@ -276,9 +358,11 @@ should have seen the node's health check failing and fully removed the node from
## {{% heading "whatsnext" %}}
To learn more about Services,
-read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/).
+read [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/).
You can also:
-* Read about [Services](/docs/concepts/services-networking/service/)
-* Read the [API reference](/docs/reference/kubernetes-api/service-resources/service-v1/) for the Service API
\ No newline at end of file
+* Read about [Services](/docs/concepts/services-networking/service/) as a concept
+* Read about [Ingresses](/docs/concepts/services-networking/ingress/) as a concept
+* Read the [API reference](/docs/reference/kubernetes-api/service-resources/service-v1/) for the Service API
+
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs.md
index 46063a427e75f..db92db3f73189 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs.md
@@ -17,6 +17,10 @@ Commands related to handling kubernetes certificates
Commands related to handling kubernetes certificates
+```
+kubeadm certs [flags]
+```
+
### Options
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md
index 9be11c4331d77..541d9892a1527 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md
@@ -55,7 +55,7 @@ kubeadm config images list [flags]
--feature-gates string
-A set of key=value pairs that describe feature gates for various features. Options are: PublicKeysECDSA=true|false (ALPHA - default=false) RootlessControlPlane=true|false (ALPHA - default=false) UnversionedKubeletConfigMap=true|false (default=true)
+A set of key=value pairs that describe feature gates for various features. Options are: PublicKeysECDSA=true|false (ALPHA - default=false) RootlessControlPlane=true|false (ALPHA - default=false)
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md
index ac2897751e849..ad919d2e16ede 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md
@@ -55,6 +55,7 @@ kubelet-finalize Updates settings relevant to the kubelet after TLS
addon Install required addons for passing conformance tests
/coredns Install the CoreDNS addon to a Kubernetes cluster
/kube-proxy Install the kube-proxy addon to a Kubernetes cluster
+show-join-command Show the join command for control-plane and worker node
```
@@ -138,7 +139,7 @@ kubeadm init [flags]
--feature-gates string
-A set of key=value pairs that describe feature gates for various features. Options are: PublicKeysECDSA=true|false (ALPHA - default=false) RootlessControlPlane=true|false (ALPHA - default=false) UnversionedKubeletConfigMap=true|false (default=true)
+A set of key=value pairs that describe feature gates for various features. Options are: PublicKeysECDSA=true|false (ALPHA - default=false) RootlessControlPlane=true|false (ALPHA - default=false)

Specify a stable IP address or DNS name for the control plane.

+--dry-run
+Don't apply any changes; just output what would be done.

Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.

+--dry-run
+Don't apply any changes; just output what would be done.

[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md
index f7717958b6437..9915f522ab954 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md
@@ -15,7 +15,7 @@ Upload certificates to kubeadm-certs
### Synopsis
-This command is not meant to be run on its own. See list of available subcommands.
+Upload control plane certificates to the kubeadm-certs Secret
```
kubeadm init phase upload-certs [flags]
@@ -44,6 +44,13 @@ kubeadm init phase upload-certs [flags]
Path to a kubeadm configuration file.
+--dry-run
+Don't apply any changes; just output what would be done.

Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.

+--dry-run
+Don't apply any changes; just output what would be done.

-A set of key=value pairs that describe feature gates for various features. Options are: PublicKeysECDSA=true|false (ALPHA - default=false) RootlessControlPlane=true|false (ALPHA - default=false) UnversionedKubeletConfigMap=true|false (default=true)
+A set of key=value pairs that describe feature gates for various features. Options are: PublicKeysECDSA=true|false (ALPHA - default=false) RootlessControlPlane=true|false (ALPHA - default=false)
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md
index 28ab989a84490..d235a0652645f 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md
@@ -55,7 +55,7 @@ kubeadm upgrade plan [version] [flags]
--feature-gates string
-A set of key=value pairs that describe feature gates for various features. Options are: PublicKeysECDSA=true|false (ALPHA - default=false) RootlessControlPlane=true|false (ALPHA - default=false) UnversionedKubeletConfigMap=true|false (default=true)
+A set of key=value pairs that describe feature gates for various features. Options are: PublicKeysECDSA=true|false (ALPHA - default=false) RootlessControlPlane=true|false (ALPHA - default=false)
diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md
index d4f22871ed13d..107473aeeeedc 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md
@@ -244,7 +244,7 @@ it off regardless. Doing so will disable the ability to use the `--discovery-tok
* Fetch the `cluster-info` file from the API Server:
```shell
-kubectl -n kube-public get cm cluster-info -o yaml | grep "kubeconfig:" -A11 | grep "apiVersion" -A10 | sed "s/ //" | tee cluster-info.yaml
+kubectl -n kube-public get cm cluster-info -o jsonpath='{.data.kubeconfig}' | tee cluster-info.yaml
```
The output is similar to this:
diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md
index 084b54d7f4a20..3c7c0fa8ccbb5 100644
--- a/content/en/docs/reference/using-api/api-concepts.md
+++ b/content/en/docs/reference/using-api/api-concepts.md
@@ -83,6 +83,16 @@ namespace (`/apis/GROUP/VERSION/namespaces/NAMESPACE/*`). A namespace-scoped res
type will be deleted when its namespace is deleted and access to that resource type
is controlled by authorization checks on the namespace scope.
+Note: core resources use `/api` instead of `/apis` and omit the GROUP path segment.
+
+Examples:
+* `/api/v1/namespaces`
+* `/api/v1/pods`
+* `/api/v1/namespaces/my-namespace/pods`
+* `/apis/apps/v1/deployments`
+* `/apis/apps/v1/namespaces/my-namespace/deployments`
+* `/apis/apps/v1/namespaces/my-namespace/deployments/my-deployment`
+
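+These paths can be queried directly. For example, a quick sketch using
+`kubectl get --raw` (any namespace that exists in your cluster works here):
+
+```shell
+# Ask the API server for the core v1 Pod collection in the "default" namespace.
+kubectl get --raw /api/v1/namespaces/default/pods
+```
+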
You can also access collections of resources (for example: listing all Nodes).
The following paths are used to retrieve collections and resources:
@@ -737,7 +747,7 @@ by default.
The `kubectl` tool uses the `--validate` flag to set the level of field validation.
Historically `--validate` was used to toggle client-side validation on or off as
a boolean flag. Since Kubernetes 1.25, kubectl uses
-server-side field validation when sending requests to a serer with this feature
+server-side field validation when sending requests to a server with this feature
enabled. Validation will fall back to client-side only when it cannot connect
to an API server with field validation enabled.
It accepts the values `ignore`, `warn`,
diff --git a/content/en/docs/reference/using-api/server-side-apply.md b/content/en/docs/reference/using-api/server-side-apply.md
index c40b168c94f5f..980ad7020fb69 100644
--- a/content/en/docs/reference/using-api/server-side-apply.md
+++ b/content/en/docs/reference/using-api/server-side-apply.md
@@ -366,12 +366,26 @@ There are two solutions:
First, the user defines a new configuration containing only the `replicas` field:
-{{< codenew file="application/ssa/nginx-deployment-replicas-only.yaml" >}}
+```yaml
+# Save this file as 'nginx-deployment-replicas-only.yaml'.
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx-deployment
+spec:
+ replicas: 3
+```
+
+{{< note >}}
+The YAML file for SSA in this case only contains the fields you want to change.
+You are not supposed to provide a fully compliant Deployment manifest if you only
+want to modify the `spec.replicas` field using SSA.
+{{< /note >}}
The user applies that configuration using the field manager name `handover-to-hpa`:
```shell
-kubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment-replicas-only.yaml \
+kubectl apply -f nginx-deployment-replicas-only.yaml \
--server-side --field-manager=handover-to-hpa \
--validate=false
```
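+
+To confirm which manager now owns the `replicas` field, one option (not part of the
+original example) is to inspect the object's managed fields:
+
+```shell
+# Show managedFields and look for the handover-to-hpa field manager entry.
+kubectl get deployment nginx-deployment --show-managed-fields -o yaml | grep -B 2 -A 8 handover-to-hpa
+```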
diff --git a/content/en/docs/setup/best-practices/certificates.md b/content/en/docs/setup/best-practices/certificates.md
index 91687ff48b55d..20229b8c04d28 100644
--- a/content/en/docs/setup/best-practices/certificates.md
+++ b/content/en/docs/setup/best-practices/certificates.md
@@ -9,12 +9,12 @@ weight: 50
Kubernetes requires PKI certificates for authentication over TLS.
-If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/), the certificates that your cluster requires are automatically generated.
-You can also generate your own certificates -- for example, to keep your private keys more secure by not storing them on the API server.
+If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/), the certificates
+that your cluster requires are automatically generated.
+You can also generate your own certificates -- for example, to keep your private keys more secure
+by not storing them on the API server.
This page explains the certificates that your cluster requires.
-
-
## How certificates are used by your cluster
@@ -33,24 +33,30 @@ Kubernetes requires PKI for the following operations:
* Client and server certificates for the [front-proxy](/docs/tasks/extend-kubernetes/configure-aggregation-layer/)
{{< note >}}
-`front-proxy` certificates are required only if you run kube-proxy to support [an extension API server](/docs/tasks/extend-kubernetes/setup-extension-api-server/).
+`front-proxy` certificates are required only if you run kube-proxy to support
+[an extension API server](/docs/tasks/extend-kubernetes/setup-extension-api-server/).
{{< /note >}}
etcd also implements mutual TLS to authenticate clients and peers.
## Where certificates are stored
-If you install Kubernetes with kubeadm, most certificates are stored in `/etc/kubernetes/pki`. All paths in this documentation are relative to that directory, with the exception of user account certificates which kubeadm places in `/etc/kubernetes`.
+If you install Kubernetes with kubeadm, most certificates are stored in `/etc/kubernetes/pki`.
+All paths in this documentation are relative to that directory, with the exception of user account
+certificates which kubeadm places in `/etc/kubernetes`.
## Configure certificates manually
-If you don't want kubeadm to generate the required certificates, you can create them using a single root CA or by providing all certificates. See [Certificates](/docs/tasks/administer-cluster/certificates/) for details on creating your own certificate authority.
-See [Certificate Management with kubeadm](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/) for more on managing certificates.
-
+If you don't want kubeadm to generate the required certificates, you can create them using a
+single root CA or by providing all certificates. See [Certificates](/docs/tasks/administer-cluster/certificates/)
+for details on creating your own certificate authority. See
+[Certificate Management with kubeadm](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/)
+for more on managing certificates.
### Single root CA
-You can create a single root CA, controlled by an administrator. This root CA can then create multiple intermediate CAs, and delegate all further creation to Kubernetes itself.
+You can create a single root CA, controlled by an administrator. This root CA can then create
+multiple intermediate CAs, and delegate all further creation to Kubernetes itself.
Required CAs:
@@ -60,7 +66,8 @@ Required CAs:
| etcd/ca.crt,key | etcd-ca | For all etcd-related functions |
| front-proxy-ca.crt,key | kubernetes-front-proxy-ca | For the [front-end proxy](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) |
-On top of the above CAs, it is also necessary to get a public/private key pair for service account management, `sa.key` and `sa.pub`.
+On top of the above CAs, it is also necessary to get a public/private key pair for service account
+management, `sa.key` and `sa.pub`.
The following example illustrates the CA key and certificate files shown in the previous table:
```
@@ -71,27 +78,30 @@ The following example illustrates the CA key and certificate files shown in the
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
```
+
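+As a rough sketch (this is not the kubeadm workflow), a root CA such as the
+`ca.crt`/`ca.key` pair above could be generated with `openssl`:
+
+```shell
+# Illustrative only: create a self-signed root CA key and certificate.
+openssl genrsa -out /etc/kubernetes/pki/ca.key 2048
+openssl req -x509 -new -nodes -key /etc/kubernetes/pki/ca.key \
+  -subj "/CN=kubernetes" -days 3650 -out /etc/kubernetes/pki/ca.crt
+```
+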
### All certificates
If you don't wish to copy the CA private keys to your cluster, you can generate all certificates yourself.
Required certificates:
-| Default CN | Parent CA | O (in Subject) | kind | hosts (SAN) |
-|-------------------------------|---------------------------|----------------|----------------------------------------|---------------------------------------------|
-| kube-etcd | etcd-ca | | server, client | ``, ``, `localhost`, `127.0.0.1` |
-| kube-etcd-peer | etcd-ca | | server, client | ``, ``, `localhost`, `127.0.0.1` |
-| kube-etcd-healthcheck-client | etcd-ca | | client | |
-| kube-apiserver-etcd-client | etcd-ca | system:masters | client | |
-| kube-apiserver | kubernetes-ca | | server | ``, ``, ``, `[1]` |
-| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | |
-| front-proxy-client | kubernetes-front-proxy-ca | | client | |
+| Default CN | Parent CA | O (in Subject) | kind | hosts (SAN) |
+|-------------------------------|---------------------------|----------------|------------------|-----------------------------------------------------|
+| kube-etcd                     | etcd-ca                   |                | server, client   | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1`  |
+| kube-etcd-peer                | etcd-ca                   |                | server, client   | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1`  |
+| kube-etcd-healthcheck-client  | etcd-ca                   |                | client           |                                                      |
+| kube-apiserver-etcd-client    | etcd-ca                   | system:masters | client           |                                                      |
+| kube-apiserver                | kubernetes-ca             |                | server           | `<hostname>`, `<Host_IP>`, `<advertise-IP>`, `[1]`   |
+| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | |
+| front-proxy-client | kubernetes-front-proxy-ca | | client | |
[1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/)
the load balancer stable IP and/or DNS name, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`,
`kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`)
-where `kind` maps to one or more of the [x509 key usage](https://pkg.go.dev/k8s.io/api/certificates/v1beta1#KeyUsage) types:
+where `kind` maps to one or more of the x509 key usage, which is also documented in the
+`.spec.usages` of a [CertificateSigningRequest](/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1#CertificateSigningRequest)
+type:
| kind | Key usage |
|--------|---------------------------------------------------------------------------------|
@@ -99,15 +109,18 @@ where `kind` maps to one or more of the [x509 key usage](https://pkg.go.dev/k8s.
| client | digital signature, key encipherment, client auth |
{{< note >}}
-Hosts/SAN listed above are the recommended ones for getting a working cluster; if required by a specific setup, it is possible to add additional SANs on all the server certificates.
+Hosts/SAN listed above are the recommended ones for getting a working cluster; if required by a
+specific setup, it is possible to add additional SANs on all the server certificates.
{{< /note >}}
{{< note >}}
For kubeadm users only:
-* The scenario where you are copying to your cluster CA certificates without private keys is referred as external CA in the kubeadm documentation.
-* If you are comparing the above list with a kubeadm generated PKI, please be aware that `kube-etcd`, `kube-etcd-peer` and `kube-etcd-healthcheck-client` certificates
- are not generated in case of external etcd.
+* The scenario where you are copying to your cluster CA certificates without private keys is
+ referred as external CA in the kubeadm documentation.
+* If you are comparing the above list with a kubeadm generated PKI, please be aware that
+ `kube-etcd`, `kube-etcd-peer` and `kube-etcd-healthcheck-client` certificates are not generated
+ in case of external etcd.
{{< /note >}}
@@ -116,31 +129,32 @@ For kubeadm users only:
Certificates should be placed in a recommended path (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/)).
Paths should be specified using the given argument regardless of location.
-| Default CN | recommended key path | recommended cert path | command | key argument | cert argument |
-|------------------------------|------------------------------|-----------------------------|----------------|------------------------------|-------------------------------------------|
-| etcd-ca | etcd/ca.key | etcd/ca.crt | kube-apiserver | | --etcd-cafile |
-| kube-apiserver-etcd-client | apiserver-etcd-client.key | apiserver-etcd-client.crt | kube-apiserver | --etcd-keyfile | --etcd-certfile |
-| kubernetes-ca | ca.key | ca.crt | kube-apiserver | | --client-ca-file |
-| kubernetes-ca | ca.key | ca.crt | kube-controller-manager | --cluster-signing-key-file | --client-ca-file, --root-ca-file, --cluster-signing-cert-file |
-| kube-apiserver | apiserver.key | apiserver.crt | kube-apiserver | --tls-private-key-file | --tls-cert-file |
-| kube-apiserver-kubelet-client| apiserver-kubelet-client.key | apiserver-kubelet-client.crt| kube-apiserver | --kubelet-client-key | --kubelet-client-certificate |
-| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-apiserver | | --requestheader-client-ca-file |
-| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-controller-manager | | --requestheader-client-ca-file |
-| front-proxy-client | front-proxy-client.key | front-proxy-client.crt | kube-apiserver | --proxy-client-key-file | --proxy-client-cert-file |
-| etcd-ca | etcd/ca.key | etcd/ca.crt | etcd | | --trusted-ca-file, --peer-trusted-ca-file |
-| kube-etcd | etcd/server.key | etcd/server.crt | etcd | --key-file | --cert-file |
-| kube-etcd-peer | etcd/peer.key | etcd/peer.crt | etcd | --peer-key-file | --peer-cert-file |
-| etcd-ca | | etcd/ca.crt | etcdctl | | --cacert |
-| kube-etcd-healthcheck-client | etcd/healthcheck-client.key | etcd/healthcheck-client.crt | etcdctl | --key | --cert |
+| Default CN | recommended key path | recommended cert path | command | key argument | cert argument |
+|------------------------------|------------------------------|-----------------------------|-------------------------|------------------------------|-------------------------------------------|
+| etcd-ca | etcd/ca.key | etcd/ca.crt | kube-apiserver | | --etcd-cafile |
+| kube-apiserver-etcd-client | apiserver-etcd-client.key | apiserver-etcd-client.crt | kube-apiserver | --etcd-keyfile | --etcd-certfile |
+| kubernetes-ca | ca.key | ca.crt | kube-apiserver | | --client-ca-file |
+| kubernetes-ca | ca.key | ca.crt | kube-controller-manager | --cluster-signing-key-file | --client-ca-file, --root-ca-file, --cluster-signing-cert-file |
+| kube-apiserver | apiserver.key | apiserver.crt | kube-apiserver | --tls-private-key-file | --tls-cert-file |
+| kube-apiserver-kubelet-client| apiserver-kubelet-client.key | apiserver-kubelet-client.crt| kube-apiserver | --kubelet-client-key | --kubelet-client-certificate |
+| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-apiserver | | --requestheader-client-ca-file |
+| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-controller-manager | | --requestheader-client-ca-file |
+| front-proxy-client | front-proxy-client.key | front-proxy-client.crt | kube-apiserver | --proxy-client-key-file | --proxy-client-cert-file |
+| etcd-ca | etcd/ca.key | etcd/ca.crt | etcd | | --trusted-ca-file, --peer-trusted-ca-file |
+| kube-etcd | etcd/server.key | etcd/server.crt | etcd | --key-file | --cert-file |
+| kube-etcd-peer | etcd/peer.key | etcd/peer.crt | etcd | --peer-key-file | --peer-cert-file |
+| etcd-ca | | etcd/ca.crt | etcdctl | | --cacert |
+| kube-etcd-healthcheck-client | etcd/healthcheck-client.key | etcd/healthcheck-client.crt | etcdctl | --key | --cert |
Same considerations apply for the service account key pair:
-| private key path | public key path | command | argument |
-|------------------------------|-----------------------------|-------------------------|--------------------------------------|
-| sa.key | | kube-controller-manager | --service-account-private-key-file |
-| | sa.pub | kube-apiserver | --service-account-key-file |
+| private key path | public key path | command | argument |
+|-------------------|------------------|-------------------------|--------------------------------------|
+| sa.key | | kube-controller-manager | --service-account-private-key-file |
+| | sa.pub | kube-apiserver | --service-account-key-file |
-The following example illustrates the file paths [from the previous tables](/docs/setup/best-practices/certificates/#certificate-paths) you need to provide if you are generating all of your own keys and certificates:
+The following example illustrates the file paths [from the previous tables](#certificate-paths)
+you need to provide if you are generating all of your own keys and certificates:
```
/etc/kubernetes/pki/etcd/ca.key
@@ -170,15 +184,17 @@ The following example illustrates the file paths [from the previous tables](/doc
You must manually configure these administrator account and service accounts:
-| filename | credential name | Default CN | O (in Subject) |
-|-------------------------|----------------------------|--------------------------------|----------------|
-| admin.conf | default-admin | kubernetes-admin | system:masters |
+| filename | credential name | Default CN | O (in Subject) |
+|-------------------------|----------------------------|-------------------------------------|----------------|
+| admin.conf | default-admin | kubernetes-admin | system:masters |
| kubelet.conf             | default-auth               | system:node:`<nodeName>` (see note) | system:nodes   |
-| controller-manager.conf | default-controller-manager | system:kube-controller-manager | |
-| scheduler.conf | default-scheduler | system:kube-scheduler | |
+| controller-manager.conf | default-controller-manager | system:kube-controller-manager | |
+| scheduler.conf | default-scheduler | system:kube-scheduler | |
{{< note >}}
-The value of `` for `kubelet.conf` **must** match precisely the value of the node name provided by the kubelet as it registers with the apiserver. For further details, read the [Node Authorization](/docs/reference/access-authn-authz/node/).
+The value of `<nodeName>` for `kubelet.conf` **must** match precisely the value of the node name
+provided by the kubelet as it registers with the apiserver. For further details, read the
+[Node Authorization](/docs/reference/access-authn-authz/node/).
{{< /note >}}
1. For each config, generate an x509 cert/key pair with the given CN and O.
@@ -196,7 +212,7 @@ These files are used as follows:
| filename | command | comment |
|-------------------------|-------------------------|-----------------------------------------------------------------------|
-| admin.conf | kubectl | Configures administrator user for the cluster |
+| admin.conf | kubectl | Configures administrator user for the cluster |
| kubelet.conf | kubelet | One required for each node in the cluster. |
| controller-manager.conf | kube-controller-manager | Must be added to manifest in `manifests/kube-controller-manager.yaml` |
| scheduler.conf | kube-scheduler | Must be added to manifest in `manifests/kube-scheduler.yaml` |
diff --git a/content/en/docs/setup/best-practices/cluster-large.md b/content/en/docs/setup/best-practices/cluster-large.md
index 55be39a299305..808a1c47510a3 100644
--- a/content/en/docs/setup/best-practices/cluster-large.md
+++ b/content/en/docs/setup/best-practices/cluster-large.md
@@ -9,13 +9,13 @@ weight: 10
A cluster is a set of {{< glossary_tooltip text="nodes" term_id="node" >}} (physical
or virtual machines) running Kubernetes agents, managed by the
{{< glossary_tooltip text="control plane" term_id="control-plane" >}}.
-Kubernetes {{< param "version" >}} supports clusters with up to 5000 nodes. More specifically,
+Kubernetes {{< param "version" >}} supports clusters with up to 5,000 nodes. More specifically,
Kubernetes is designed to accommodate configurations that meet *all* of the following criteria:
* No more than 110 pods per node
-* No more than 5000 nodes
-* No more than 150000 total pods
-* No more than 300000 total containers
+* No more than 5,000 nodes
+* No more than 150,000 total pods
+* No more than 300,000 total containers
You can scale your cluster by adding or removing nodes. The way you do this depends
on how your cluster is deployed.
@@ -115,15 +115,15 @@ many nodes, consider the following:
## {{% heading "whatsnext" %}}
-`VerticalPodAutoscaler` is a custom resource that you can deploy into your cluster
+* `VerticalPodAutoscaler` is a custom resource that you can deploy into your cluster
to help you manage resource requests and limits for pods.
-Visit [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme)
-to learn more about `VerticalPodAutoscaler` and how you can use it to scale cluster
+Learn more about [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme)
+and how you can use it to scale cluster
components, including cluster-critical addons.
-The [cluster autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#readme)
+* The [cluster autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#readme)
integrates with a number of cloud providers to help you run the right number of
nodes for the level of resource demand in your cluster.
-The [addon resizer](https://github.com/kubernetes/autoscaler/tree/master/addon-resizer#readme)
-helps you in resizing the addons automatically as your cluster's scale changes.
\ No newline at end of file
+* The [addon resizer](https://github.com/kubernetes/autoscaler/tree/master/addon-resizer#readme)
+helps you in resizing the addons automatically as your cluster's scale changes.
diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md
index 7984655797d5c..8b23806f6c817 100644
--- a/content/en/docs/setup/production-environment/container-runtimes.md
+++ b/content/en/docs/setup/production-environment/container-runtimes.md
@@ -56,11 +56,7 @@ For more information, see [Network Plugin Requirements](/docs/concepts/extend-ku
### Forwarding IPv4 and letting iptables see bridged traffic
-Verify that the `br_netfilter` module is loaded by running `lsmod | grep br_netfilter`.
-
-To load it explicitly, run `sudo modprobe br_netfilter`.
-
-In order for a Linux node's iptables to correctly view bridged traffic, verify that `net.bridge.bridge-nf-call-iptables` is set to 1 in your `sysctl` config. For example:
+Run the instructions below:
```bash
cat <}}
@@ -217,6 +225,13 @@ that the CRI integration plugin is disabled by default.
You need CRI support enabled to use containerd with Kubernetes. Make sure that `cri`
is not included in the`disabled_plugins` list within `/etc/containerd/config.toml`;
if you made changes to that file, also restart `containerd`.
+
+If you experience container crash loops after the initial cluster installation or after
+installing a CNI, the containerd configuration provided with the package might contain
+incompatible configuration parameters. Consider resetting the containerd configuration
+with `containerd config default > /etc/containerd/config.toml` as specified in
+[getting-started.md](https://github.com/containerd/containerd/blob/main/docs/getting-started.md#advanced-topics)
+and then set the configuration parameters specified above accordingly.
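+For example, one way to reset the configuration and then restart containerd is:
+
+```shell
+# Regenerate the default containerd configuration, then restart containerd.
+containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
+sudo systemctl restart containerd
+```
+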
{{< /note >}}
If you apply this change, make sure to restart containerd:
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
index b40a783264634..01f0d75f7d8f6 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
@@ -12,13 +12,15 @@ card:
This page shows how to install the `kubeadm` toolbox.
-For information on how to create a cluster with kubeadm once you have performed this installation process, see the [Creating a cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page.
+For information on how to create a cluster with kubeadm once you have performed this installation process,
+see the [Creating a cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page.
## {{% heading "prerequisites" %}}
-* A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions based on Debian and Red Hat, and those distributions without a package manager.
+* A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions
+ based on Debian and Red Hat, and those distributions without a package manager.
* 2 GB or more of RAM per machine (any less will leave little room for your apps).
* 2 CPUs or more.
* Full network connectivity between all machines in the cluster (public or private network is fine).
@@ -26,8 +28,6 @@ For information on how to create a cluster with kubeadm once you have performed
* Certain ports are open on your machines. See [here](#check-required-ports) for more details.
* Swap disabled. You **MUST** disable swap in order for the kubelet to work properly.
-
-
## Verify the MAC address and product_uuid are unique for every node {#verify-mac-address}
@@ -46,9 +46,9 @@ If you have more than one network adapter, and your Kubernetes components are no
route, we recommend you add IP route(s) so Kubernetes cluster addresses go via the appropriate adapter.
## Check required ports
-These
-[required ports](/docs/reference/ports-and-protocols/)
-need to be open in order for Kubernetes components to communicate with each other. You can use tools like netcat to check if a port is open. For example:
+These [required ports](/docs/reference/networking/ports-and-protocols/)
+need to be open in order for Kubernetes components to communicate with each other.
+You can use tools like netcat to check if a port is open. For example:
```shell
nc 127.0.0.1 6443
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md
index 1baa12b3b7a0e..9509989daf62e 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md
@@ -26,15 +26,15 @@ etcd cluster of three members that can be used by kubeadm during cluster creatio
## {{% heading "prerequisites" %}}
-* Three hosts that can talk to each other over TCP ports 2379 and 2380. This
+- Three hosts that can talk to each other over TCP ports 2379 and 2380. This
document assumes these default ports. However, they are configurable through
the kubeadm config file.
-* Each host must have systemd and a bash compatible shell installed.
-* Each host must [have a container runtime, kubelet, and kubeadm installed](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
-* Each host should have access to the Kubernetes container image registry (`registry.k8s.io`) or list/pull the required etcd image using
-`kubeadm config images list/pull`. This guide will set up etcd instances as
-[static pods](/docs/tasks/configure-pod-container/static-pod/) managed by a kubelet.
-* Some infrastructure to copy files between hosts. For example `ssh` and `scp`
+- Each host must have systemd and a bash compatible shell installed.
+- Each host must [have a container runtime, kubelet, and kubeadm installed](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
+- Each host should have access to the Kubernetes container image registry (`registry.k8s.io`) or list/pull the required etcd image using
+ `kubeadm config images list/pull`. This guide will set up etcd instances as
+ [static pods](/docs/tasks/configure-pod-container/static-pod/) managed by a kubelet.
+- Some infrastructure to copy files between hosts. For example `ssh` and `scp`
can satisfy this requirement.
@@ -42,7 +42,7 @@ etcd cluster of three members that can be used by kubeadm during cluster creatio
## Setting up the cluster
The general approach is to generate all certs on one node and only distribute
-the *necessary* files to the other nodes.
+the _necessary_ files to the other nodes.
{{< note >}}
kubeadm contains all the necessary cryptographic machinery to generate
@@ -59,242 +59,239 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
1. Configure the kubelet to be a service manager for etcd.
{{< note >}}You must do this on every host where etcd should be running.{{< /note >}}
- Since etcd was created first, you must override the service priority by creating a new unit file
- that has higher precedence than the kubeadm-provided kubelet unit file.
+ Since etcd was created first, you must override the service priority by creating a new unit file
+ that has higher precedence than the kubeadm-provided kubelet unit file.
- ```sh
- cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
- [Service]
- ExecStart=
- # Replace "systemd" with the cgroup driver of your container runtime. The default value in the kubelet is "cgroupfs".
- # Replace the value of "--container-runtime-endpoint" for a different container runtime if needed.
- ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock
- Restart=always
- EOF
+ ```sh
+ cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
+ [Service]
+ ExecStart=
+ # Replace "systemd" with the cgroup driver of your container runtime. The default value in the kubelet is "cgroupfs".
+ # Replace the value of "--container-runtime-endpoint" for a different container runtime if needed.
+ ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock
+ Restart=always
+ EOF
- systemctl daemon-reload
- systemctl restart kubelet
- ```
+ systemctl daemon-reload
+ systemctl restart kubelet
+ ```
- Check the kubelet status to ensure it is running.
+ Check the kubelet status to ensure it is running.
- ```sh
- systemctl status kubelet
- ```
+ ```sh
+ systemctl status kubelet
+ ```
1. Create configuration files for kubeadm.
- Generate one kubeadm configuration file for each host that will have an etcd
- member running on it using the following script.
-
- ```sh
- # Update HOST0, HOST1 and HOST2 with the IPs of your hosts
- export HOST0=10.0.0.6
- export HOST1=10.0.0.7
- export HOST2=10.0.0.8
-
- # Update NAME0, NAME1 and NAME2 with the hostnames of your hosts
- export NAME0="infra0"
- export NAME1="infra1"
- export NAME2="infra2"
-
- # Create temp directories to store files that will end up on other hosts
- mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/
-
- HOSTS=(${HOST0} ${HOST1} ${HOST2})
- NAMES=(${NAME0} ${NAME1} ${NAME2})
-
- for i in "${!HOSTS[@]}"; do
- HOST=${HOSTS[$i]}
- NAME=${NAMES[$i]}
- cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
- ---
- apiVersion: "kubeadm.k8s.io/v1beta3"
- kind: InitConfiguration
- nodeRegistration:
- name: ${NAME}
- localAPIEndpoint:
- advertiseAddress: ${HOST}
- ---
- apiVersion: "kubeadm.k8s.io/v1beta3"
- kind: ClusterConfiguration
- etcd:
- local:
- serverCertSANs:
- - "${HOST}"
- peerCertSANs:
- - "${HOST}"
- extraArgs:
- initial-cluster: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380
- initial-cluster-state: new
- name: ${NAME}
- listen-peer-urls: https://${HOST}:2380
- listen-client-urls: https://${HOST}:2379
- advertise-client-urls: https://${HOST}:2379
- initial-advertise-peer-urls: https://${HOST}:2380
- EOF
- done
- ```
+ Generate one kubeadm configuration file for each host that will have an etcd
+ member running on it using the following script.
+
+ ```sh
+ # Update HOST0, HOST1 and HOST2 with the IPs of your hosts
+ export HOST0=10.0.0.6
+ export HOST1=10.0.0.7
+ export HOST2=10.0.0.8
+
+ # Update NAME0, NAME1 and NAME2 with the hostnames of your hosts
+ export NAME0="infra0"
+ export NAME1="infra1"
+ export NAME2="infra2"
+
+ # Create temp directories to store files that will end up on other hosts
+ mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/
+
+ HOSTS=(${HOST0} ${HOST1} ${HOST2})
+ NAMES=(${NAME0} ${NAME1} ${NAME2})
+
+ for i in "${!HOSTS[@]}"; do
+ HOST=${HOSTS[$i]}
+ NAME=${NAMES[$i]}
+ cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
+ ---
+ apiVersion: "kubeadm.k8s.io/v1beta3"
+ kind: InitConfiguration
+ nodeRegistration:
+ name: ${NAME}
+ localAPIEndpoint:
+ advertiseAddress: ${HOST}
+ ---
+ apiVersion: "kubeadm.k8s.io/v1beta3"
+ kind: ClusterConfiguration
+ etcd:
+ local:
+ serverCertSANs:
+ - "${HOST}"
+ peerCertSANs:
+ - "${HOST}"
+ extraArgs:
+ initial-cluster: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380
+ initial-cluster-state: new
+ name: ${NAME}
+ listen-peer-urls: https://${HOST}:2380
+ listen-client-urls: https://${HOST}:2379
+ advertise-client-urls: https://${HOST}:2379
+ initial-advertise-peer-urls: https://${HOST}:2380
+ EOF
+ done
+ ```
1. Generate the certificate authority.
- If you already have a CA then the only action that is copying the CA's `crt` and
- `key` file to `/etc/kubernetes/pki/etcd/ca.crt` and
- `/etc/kubernetes/pki/etcd/ca.key`. After those files have been copied,
- proceed to the next step, "Create certificates for each member".
+   If you already have a CA then the only action required is copying the CA's `crt` and
+   `key` files to `/etc/kubernetes/pki/etcd/ca.crt` and
+   `/etc/kubernetes/pki/etcd/ca.key`. After those files have been copied,
+   proceed to the next step, "Create certificates for each member".
- If you do not already have a CA then run this command on `$HOST0` (where you
- generated the configuration files for kubeadm).
+ If you do not already have a CA then run this command on `$HOST0` (where you
+ generated the configuration files for kubeadm).
- ```
- kubeadm init phase certs etcd-ca
- ```
+ ```
+ kubeadm init phase certs etcd-ca
+ ```
- This creates two files:
+ This creates two files:
- - `/etc/kubernetes/pki/etcd/ca.crt`
- - `/etc/kubernetes/pki/etcd/ca.key`
+ - `/etc/kubernetes/pki/etcd/ca.crt`
+ - `/etc/kubernetes/pki/etcd/ca.key`
1. Create certificates for each member.
- ```sh
- kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
- kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
- kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
- kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
- cp -R /etc/kubernetes/pki /tmp/${HOST2}/
- # cleanup non-reusable certificates
- find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
-
- kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
- kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
- kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
- kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
- cp -R /etc/kubernetes/pki /tmp/${HOST1}/
- find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
-
- kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
- kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
- kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
- kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
- # No need to move the certs because they are for HOST0
-
- # clean up certs that should not be copied off this host
- find /tmp/${HOST2} -name ca.key -type f -delete
- find /tmp/${HOST1} -name ca.key -type f -delete
- ```
+ ```sh
+ kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
+ kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
+ kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
+ kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
+ cp -R /etc/kubernetes/pki /tmp/${HOST2}/
+ # cleanup non-reusable certificates
+ find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
+
+ kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
+ kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
+ kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
+ kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
+ cp -R /etc/kubernetes/pki /tmp/${HOST1}/
+ find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
+
+ kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
+ kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
+ kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
+ kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
+ # No need to move the certs because they are for HOST0
+
+ # clean up certs that should not be copied off this host
+ find /tmp/${HOST2} -name ca.key -type f -delete
+ find /tmp/${HOST1} -name ca.key -type f -delete
+ ```
1. Copy certificates and kubeadm configs.
- The certificates have been generated and now they must be moved to their
- respective hosts.
+ The certificates have been generated and now they must be moved to their
+ respective hosts.
- ```sh
- USER=ubuntu
- HOST=${HOST1}
- scp -r /tmp/${HOST}/* ${USER}@${HOST}:
- ssh ${USER}@${HOST}
- USER@HOST $ sudo -Es
- root@HOST $ chown -R root:root pki
- root@HOST $ mv pki /etc/kubernetes/
- ```
+ ```sh
+ USER=ubuntu
+ HOST=${HOST1}
+ scp -r /tmp/${HOST}/* ${USER}@${HOST}:
+ ssh ${USER}@${HOST}
+ USER@HOST $ sudo -Es
+ root@HOST $ chown -R root:root pki
+ root@HOST $ mv pki /etc/kubernetes/
+ ```
1. Ensure all expected files exist.
- The complete list of required files on `$HOST0` is:
-
- ```
- /tmp/${HOST0}
- └── kubeadmcfg.yaml
- ---
- /etc/kubernetes/pki
- ├── apiserver-etcd-client.crt
- ├── apiserver-etcd-client.key
- └── etcd
- ├── ca.crt
- ├── ca.key
- ├── healthcheck-client.crt
- ├── healthcheck-client.key
- ├── peer.crt
- ├── peer.key
- ├── server.crt
- └── server.key
- ```
-
- On `$HOST1`:
-
- ```
- $HOME
- └── kubeadmcfg.yaml
- ---
- /etc/kubernetes/pki
- ├── apiserver-etcd-client.crt
- ├── apiserver-etcd-client.key
- └── etcd
- ├── ca.crt
- ├── healthcheck-client.crt
- ├── healthcheck-client.key
- ├── peer.crt
- ├── peer.key
- ├── server.crt
- └── server.key
- ```
-
- On `$HOST2`:
-
- ```
- $HOME
- └── kubeadmcfg.yaml
- ---
- /etc/kubernetes/pki
- ├── apiserver-etcd-client.crt
- ├── apiserver-etcd-client.key
- └── etcd
- ├── ca.crt
- ├── healthcheck-client.crt
- ├── healthcheck-client.key
- ├── peer.crt
- ├── peer.key
- ├── server.crt
- └── server.key
- ```
+ The complete list of required files on `$HOST0` is:
+
+ ```
+ /tmp/${HOST0}
+ └── kubeadmcfg.yaml
+ ---
+ /etc/kubernetes/pki
+ ├── apiserver-etcd-client.crt
+ ├── apiserver-etcd-client.key
+ └── etcd
+ ├── ca.crt
+ ├── ca.key
+ ├── healthcheck-client.crt
+ ├── healthcheck-client.key
+ ├── peer.crt
+ ├── peer.key
+ ├── server.crt
+ └── server.key
+ ```
+
+ On `$HOST1`:
+
+ ```
+ $HOME
+ └── kubeadmcfg.yaml
+ ---
+ /etc/kubernetes/pki
+ ├── apiserver-etcd-client.crt
+ ├── apiserver-etcd-client.key
+ └── etcd
+ ├── ca.crt
+ ├── healthcheck-client.crt
+ ├── healthcheck-client.key
+ ├── peer.crt
+ ├── peer.key
+ ├── server.crt
+ └── server.key
+ ```
+
+ On `$HOST2`:
+
+ ```
+ $HOME
+ └── kubeadmcfg.yaml
+ ---
+ /etc/kubernetes/pki
+ ├── apiserver-etcd-client.crt
+ ├── apiserver-etcd-client.key
+ └── etcd
+ ├── ca.crt
+ ├── healthcheck-client.crt
+ ├── healthcheck-client.key
+ ├── peer.crt
+ ├── peer.key
+ ├── server.crt
+ └── server.key
+ ```
1. Create the static pod manifests.
- Now that the certificates and configs are in place it's time to create the
- manifests. On each host run the `kubeadm` command to generate a static manifest
- for etcd.
+ Now that the certificates and configs are in place it's time to create the
+ manifests. On each host run the `kubeadm` command to generate a static manifest
+ for etcd.
- ```sh
- root@HOST0 $ kubeadm init phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml
- root@HOST1 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml
- root@HOST2 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml
- ```
+ ```sh
+ root@HOST0 $ kubeadm init phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml
+ root@HOST1 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml
+ root@HOST2 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml
+ ```
1. Optional: Check the cluster health.
- ```sh
- docker run --rm -it \
- --net host \
- -v /etc/kubernetes:/etc/kubernetes registry.k8s.io/etcd:${ETCD_TAG} etcdctl \
- --cert /etc/kubernetes/pki/etcd/peer.crt \
- --key /etc/kubernetes/pki/etcd/peer.key \
- --cacert /etc/kubernetes/pki/etcd/ca.crt \
- --endpoints https://${HOST0}:2379 endpoint health --cluster
- ...
- https://[HOST0 IP]:2379 is healthy: successfully committed proposal: took = 16.283339ms
- https://[HOST1 IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms
- https://[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms
- ```
- - Set `${ETCD_TAG}` to the version tag of your etcd image. For example `3.4.3-0`. To see the etcd image and tag that kubeadm uses execute `kubeadm config images list --kubernetes-version ${K8S_VERSION}`, where `${K8S_VERSION}` is for example `v1.17.0`.
- - Set `${HOST0}`to the IP address of the host you are testing.
-
-
+ ```sh
+ docker run --rm -it \
+ --net host \
+ -v /etc/kubernetes:/etc/kubernetes registry.k8s.io/etcd:${ETCD_TAG} etcdctl \
+ --cert /etc/kubernetes/pki/etcd/peer.crt \
+ --key /etc/kubernetes/pki/etcd/peer.key \
+ --cacert /etc/kubernetes/pki/etcd/ca.crt \
+ --endpoints https://${HOST0}:2379 endpoint health --cluster
+ ...
+ https://[HOST0 IP]:2379 is healthy: successfully committed proposal: took = 16.283339ms
+ https://[HOST1 IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms
+ https://[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms
+ ```
+
+   - Set `${ETCD_TAG}` to the version tag of your etcd image, for example `3.4.3-0`. To see the etcd image and tag that kubeadm uses, execute `kubeadm config images list --kubernetes-version ${K8S_VERSION}`, where `${K8S_VERSION}` is for example `v1.17.0`. A sketch for looking this up is shown below.
+   - Set `${HOST0}` to the IP address of the host you are testing.
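+
+   A minimal sketch for setting these variables (assuming the etcd image reference printed by kubeadm contains a single `:` between name and tag; adjust the values to your cluster):
+
+   ```sh
+   # Example values only; substitute your own Kubernetes version and host IP
+   K8S_VERSION=v1.26.0
+   ETCD_TAG=$(kubeadm config images list --kubernetes-version ${K8S_VERSION} \
+     | grep etcd | cut -d ':' -f 2)
+   HOST0=10.0.0.6
+   ```
+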
## {{% heading "whatsnext" %}}
-
Once you have an etcd cluster with 3 working members, you can continue setting up a
highly available control plane using the
[external etcd method with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/).
-
diff --git a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md
index e1157383f4ec7..16cc1abf021d0 100644
--- a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md
+++ b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md
@@ -37,7 +37,7 @@ Dashboard also provides information on the state of Kubernetes resources in your
The Dashboard UI is not deployed by default. To deploy it, run the following command:
```
-kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
```
## Accessing the Dashboard UI
diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-api.md b/content/en/docs/tasks/administer-cluster/access-cluster-api.md
index 6ca4d396da9d7..ed707fb278ad6 100644
--- a/content/en/docs/tasks/administer-cluster/access-cluster-api.md
+++ b/content/en/docs/tasks/administer-cluster/access-cluster-api.md
@@ -1,6 +1,7 @@
---
title: Access Clusters Using the Kubernetes API
content_type: task
+weight: 60
---
diff --git a/content/en/docs/tasks/administer-cluster/certificates.md b/content/en/docs/tasks/administer-cluster/certificates.md
index dcbd41a7e6c85..3da130ca64a80 100644
--- a/content/en/docs/tasks/administer-cluster/certificates.md
+++ b/content/en/docs/tasks/administer-cluster/certificates.md
@@ -1,7 +1,7 @@
---
title: Generate Certificates Manually
content_type: task
-weight: 20
+weight: 30
---
diff --git a/content/en/docs/tasks/administer-cluster/change-default-storage-class.md b/content/en/docs/tasks/administer-cluster/change-default-storage-class.md
index a365fd4ffccac..c3194b71805b2 100644
--- a/content/en/docs/tasks/administer-cluster/change-default-storage-class.md
+++ b/content/en/docs/tasks/administer-cluster/change-default-storage-class.md
@@ -1,6 +1,7 @@
---
title: Change the default StorageClass
content_type: task
+weight: 90
---
diff --git a/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md b/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md
index 457fbd6332b43..ae6c303757aba 100644
--- a/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md
+++ b/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md
@@ -1,6 +1,7 @@
---
title: Change the Reclaim Policy of a PersistentVolume
content_type: task
+weight: 100
---
diff --git a/content/en/docs/tasks/administer-cluster/cluster-upgrade.md b/content/en/docs/tasks/administer-cluster/cluster-upgrade.md
index 17473ac2895ba..f094d7806c12a 100644
--- a/content/en/docs/tasks/administer-cluster/cluster-upgrade.md
+++ b/content/en/docs/tasks/administer-cluster/cluster-upgrade.md
@@ -1,6 +1,7 @@
---
title: Upgrade A Cluster
content_type: task
+weight: 350
---
@@ -99,4 +100,4 @@ release with a newer device plugin API version, device plugins must be upgraded
both versions before the node is upgraded in order to guarantee that device allocations
continue to complete successfully during the upgrade.
-Refer to [API compatiblity](docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md/#api-compatibility) and [Kubelet Device Manager API Versions](docs/reference/node/device-plugin-api-versions.md) for more details.
+Refer to [API compatibility](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#api-compatibility) and [Kubelet Device Manager API Versions](/docs/reference/node/device-plugin-api-versions/) for more details.
\ No newline at end of file
diff --git a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md
index 5268eef369264..542a3a57c2938 100644
--- a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md
+++ b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md
@@ -5,6 +5,7 @@ reviewers:
- jpbetz
title: Operating etcd clusters for Kubernetes
content_type: task
+weight: 270
---
diff --git a/content/en/docs/tasks/administer-cluster/controller-manager-leader-migration.md b/content/en/docs/tasks/administer-cluster/controller-manager-leader-migration.md
index 7d0890197bcbb..743e23d0bd1ae 100644
--- a/content/en/docs/tasks/administer-cluster/controller-manager-leader-migration.md
+++ b/content/en/docs/tasks/administer-cluster/controller-manager-leader-migration.md
@@ -5,6 +5,7 @@ reviewers:
title: "Migrate Replicated Control Plane To Use Cloud Controller Manager"
linkTitle: "Migrate Replicated Control Plane To Use Cloud Controller Manager"
content_type: task
+weight: 250
---
diff --git a/content/en/docs/tasks/administer-cluster/coredns.md b/content/en/docs/tasks/administer-cluster/coredns.md
index 43a75275b85a6..6da1414d138e1 100644
--- a/content/en/docs/tasks/administer-cluster/coredns.md
+++ b/content/en/docs/tasks/administer-cluster/coredns.md
@@ -4,6 +4,7 @@ reviewers:
title: Using CoreDNS for Service Discovery
min-kubernetes-server-version: v1.9
content_type: task
+weight: 380
---
diff --git a/content/en/docs/tasks/administer-cluster/cpu-management-policies.md b/content/en/docs/tasks/administer-cluster/cpu-management-policies.md
index b077415a05aac..a2e3932b3939d 100644
--- a/content/en/docs/tasks/administer-cluster/cpu-management-policies.md
+++ b/content/en/docs/tasks/administer-cluster/cpu-management-policies.md
@@ -7,6 +7,7 @@ reviewers:
content_type: task
min-kubernetes-server-version: v1.26
+weight: 140
---
diff --git a/content/en/docs/tasks/administer-cluster/declare-network-policy.md b/content/en/docs/tasks/administer-cluster/declare-network-policy.md
index ac9715b9092ca..4f7933624ec87 100644
--- a/content/en/docs/tasks/administer-cluster/declare-network-policy.md
+++ b/content/en/docs/tasks/administer-cluster/declare-network-policy.md
@@ -5,6 +5,7 @@ reviewers:
title: Declare Network Policy
min-kubernetes-server-version: v1.8
content_type: task
+weight: 180
---
This document helps you get started using the Kubernetes [NetworkPolicy API](/docs/concepts/services-networking/network-policies/) to declare network policies that govern how pods communicate with each other.
diff --git a/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md b/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md
index a3732c68de6fe..b1939d96793ce 100644
--- a/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md
+++ b/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md
@@ -5,6 +5,7 @@ reviewers:
- wlan0
title: Developing Cloud Controller Manager
content_type: concept
+weight: 190
---
diff --git a/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md b/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md
index ab737115aeb78..c4f9e1fbb2859 100644
--- a/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md
+++ b/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md
@@ -5,6 +5,7 @@ reviewers:
title: Customizing DNS Service
content_type: task
min-kubernetes-server-version: v1.12
+weight: 160
---
@@ -104,7 +105,7 @@ The Corefile configuration includes the following [plugins](https://coredns.io/p
* [errors](https://coredns.io/plugins/errors/): Errors are logged to stdout.
* [health](https://coredns.io/plugins/health/): Health of CoreDNS is reported to
- `http://localhost:8080/health`. In this extended syntax `lameduck` will make theuprocess
+ `http://localhost:8080/health`. In this extended syntax `lameduck` will make the process
unhealthy then wait for 5 seconds before the process is shut down.
* [ready](https://coredns.io/plugins/ready/): An HTTP endpoint on port 8181 will return 200 OK,
when all plugins that are able to signal readiness have done so.
diff --git a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
index 755a6cc717ce9..2e26088a9486f 100644
--- a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
+++ b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
@@ -5,6 +5,7 @@ reviewers:
title: Debugging DNS Resolution
content_type: task
min-kubernetes-server-version: v1.6
+weight: 170
---
diff --git a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md
index bdcb35ada240d..3b37a8934021e 100644
--- a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md
+++ b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md
@@ -1,6 +1,7 @@
---
title: Autoscale the DNS Service in a Cluster
content_type: task
+weight: 80
---
diff --git a/content/en/docs/tasks/administer-cluster/enable-disable-api.md b/content/en/docs/tasks/administer-cluster/enable-disable-api.md
index a10de2c3b43b9..f8da90bddc446 100644
--- a/content/en/docs/tasks/administer-cluster/enable-disable-api.md
+++ b/content/en/docs/tasks/administer-cluster/enable-disable-api.md
@@ -1,6 +1,7 @@
---
title: Enable Or Disable A Kubernetes API
content_type: task
+weight: 200
---
@@ -20,7 +21,7 @@ The `runtime-config` command line argument also supports 2 special keys:
- `api/legacy`, representing only legacy APIs. Legacy APIs are any APIs that have been
explicitly [deprecated](/docs/reference/using-api/deprecation-policy/).
-For example, to turning off all API versions except v1, pass `--runtime-config=api/all=false,api/v1=true`
+For example, to turn off all API versions except v1, pass `--runtime-config=api/all=false,api/v1=true`
to the `kube-apiserver`.
## {{% heading "whatsnext" %}}
diff --git a/content/en/docs/tasks/administer-cluster/encrypt-data.md b/content/en/docs/tasks/administer-cluster/encrypt-data.md
index a740b890ac515..c683c5aa9b2c3 100644
--- a/content/en/docs/tasks/administer-cluster/encrypt-data.md
+++ b/content/en/docs/tasks/administer-cluster/encrypt-data.md
@@ -5,6 +5,7 @@ reviewers:
- enj
content_type: task
min-kubernetes-server-version: 1.13
+weight: 210
---
@@ -34,7 +35,7 @@ encryption configuration file must be the same! Otherwise, the `kube-apiserver`
decrypt data stored in the etcd.
{{< /caution >}}
-## Understanding the encryption at rest configuration.
+## Understanding the encryption at rest configuration
```yaml
apiVersion: apiserver.config.k8s.io/v1
@@ -92,7 +93,7 @@ the only recourse is to delete that key from the underlying etcd directly. Calls
read that resource will fail until it is deleted or a valid decryption key is provided.
{{< /caution >}}
-### Providers:
+### Providers
{{< table caption="Providers for Kubernetes encryption at rest" >}}
Name | Encryption | Strength | Speed | Key Length | Other Considerations
@@ -101,7 +102,7 @@ Name | Encryption | Strength | Speed | Key Length | Other Considerations
`secretbox` | XSalsa20 and Poly1305 | Strong | Faster | 32-byte | A newer standard and may not be considered acceptable in environments that require high levels of review.
`aesgcm` | AES-GCM with random nonce | Must be rotated every 200k writes | Fastest | 16, 24, or 32-byte | Is not recommended for use except when an automated key rotation scheme is implemented.
`aescbc` | AES-CBC with [PKCS#7](https://datatracker.ietf.org/doc/html/rfc2315) padding | Weak | Fast | 32-byte | Not recommended due to CBC's vulnerability to padding oracle attacks.
-`kms` | Uses envelope encryption scheme: Data is encrypted by data encryption keys (DEKs) using AES-CBC with [PKCS#7](https://datatracker.ietf.org/doc/html/rfc2315) padding (prior to v1.25), using AES-GCM starting from v1.25, DEKs are encrypted by key encryption keys (KEKs) according to configuration in Key Management Service (KMS) | Strongest | Fast | 32-bytes | The recommended choice for using a third party tool for key management. Simplifies key rotation, with a new DEK generated for each encryption, and KEK rotation controlled by the user. [Configure the KMS provider](/docs/tasks/administer-cluster/kms-provider/)
+`kms` | Uses envelope encryption scheme: Data is encrypted by data encryption keys (DEKs) using AES-CBC with [PKCS#7](https://datatracker.ietf.org/doc/html/rfc2315) padding (prior to v1.25), using AES-GCM starting from v1.25, DEKs are encrypted by key encryption keys (KEKs) according to configuration in Key Management Service (KMS) | Strongest | Fast | 32-bytes | The recommended choice for using a third party tool for key management. Simplifies key rotation, with a new DEK generated for each encryption, and KEK rotation controlled by the user. [Configure the KMS provider](/docs/tasks/administer-cluster/kms-provider/).
Each provider supports multiple keys - the keys are tried in order for decryption, and if the provider
is the first provider, the first key is used for encryption.
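+
+A minimal sketch of a provider with two keys (assuming the `aescbc` provider; new writes are encrypted with `key1`, while `key2` remains available to decrypt data written under it earlier):
+
+```yaml
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+  - resources:
+      - secrets
+    providers:
+      - aescbc:
+          keys:
+            - name: key1
+              secret: <BASE 64 ENCODED SECRET>
+            - name: key2
+              secret: <BASE 64 ENCODED SECRET>
+      - identity: {}
+```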
@@ -217,7 +218,9 @@ program to retrieve the contents of your secret data.
1. Using the `etcdctl` command line, read that Secret out of etcd:
- `ETCDCTL_API=3 etcdctl get /registry/secrets/default/secret1 [...] | hexdump -C`
+ ```
+ ETCDCTL_API=3 etcdctl get /registry/secrets/default/secret1 [...] | hexdump -C
+ ```
where `[...]` must be the additional arguments for connecting to the etcd server.
@@ -312,8 +315,7 @@ resources:
secret:
```
-Then run the following command to force decrypt
-all Secrets:
+Then run the following command to force decrypt all Secrets:
```shell
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
diff --git a/content/en/docs/tasks/administer-cluster/extended-resource-node.md b/content/en/docs/tasks/administer-cluster/extended-resource-node.md
index 797993f116f67..3e9aae76d6918 100644
--- a/content/en/docs/tasks/administer-cluster/extended-resource-node.md
+++ b/content/en/docs/tasks/administer-cluster/extended-resource-node.md
@@ -1,26 +1,19 @@
---
title: Advertise Extended Resources for a Node
content_type: task
+weight: 70
---
-
This page shows how to specify extended resources for a Node.
Extended resources allow cluster administrators to advertise node-level
resources that would otherwise be unknown to Kubernetes.
-
-
-
## {{% heading "prerequisites" %}}
-
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
-
-
-
## Get the names of your Nodes
@@ -38,7 +31,7 @@ the Kubernetes API server. For example, suppose one of your Nodes has four dongl
attached. Here's an example of a PATCH request that advertises four dongle resources
for your Node.
-```shell
+```
PATCH /api/v1/nodes//status HTTP/1.1
Accept: application/json
Content-Type: application/json-patch+json
@@ -68,9 +61,9 @@ Replace `` with the name of your Node:
```shell
curl --header "Content-Type: application/json-patch+json" \
---request PATCH \
---data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \
-http://localhost:8001/api/v1/nodes//status
+ --request PATCH \
+ --data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \
+  http://localhost:8001/api/v1/nodes/<your-node-name>/status
```
{{< note >}}
@@ -99,9 +92,9 @@ Once again, the output shows the dongle resource:
```yaml
Capacity:
- cpu: 2
- memory: 2049008Ki
- example.com/dongle: 4
+ cpu: 2
+ memory: 2049008Ki
+ example.com/dongle: 4
```
Now, application developers can create Pods that request a certain
@@ -177,9 +170,9 @@ Replace `` with the name of your Node:
```shell
curl --header "Content-Type: application/json-patch+json" \
---request PATCH \
---data '[{"op": "remove", "path": "/status/capacity/example.com~1dongle"}]' \
-http://localhost:8001/api/v1/nodes//status
+ --request PATCH \
+ --data '[{"op": "remove", "path": "/status/capacity/example.com~1dongle"}]' \
+  http://localhost:8001/api/v1/nodes/<your-node-name>/status
```
Verify that the dongle advertisement has been removed:
@@ -190,20 +183,13 @@ kubectl describe node | grep dongle
(you should not see any output)
-
-
-
## {{% heading "whatsnext" %}}
-
### For application developers
-* [Assign Extended Resources to a Container](/docs/tasks/configure-pod-container/extended-resource/)
+- [Assign Extended Resources to a Container](/docs/tasks/configure-pod-container/extended-resource/)
### For cluster administrators
-* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
-* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
-
-
-
+- [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
+- [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
diff --git a/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md b/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md
index a9aaaacd46adc..6121f87098aff 100644
--- a/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md
+++ b/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md
@@ -5,6 +5,7 @@ reviewers:
- piosz
title: Guaranteed Scheduling For Critical Add-On Pods
content_type: concept
+weight: 220
---
diff --git a/content/en/docs/tasks/administer-cluster/ip-masq-agent.md b/content/en/docs/tasks/administer-cluster/ip-masq-agent.md
index e923345d1ab17..39b8d30d6e6f5 100644
--- a/content/en/docs/tasks/administer-cluster/ip-masq-agent.md
+++ b/content/en/docs/tasks/administer-cluster/ip-masq-agent.md
@@ -1,6 +1,7 @@
---
title: IP Masquerade Agent User Guide
content_type: task
+weight: 230
---
diff --git a/content/en/docs/tasks/administer-cluster/kms-provider.md b/content/en/docs/tasks/administer-cluster/kms-provider.md
index 5900be0c4ff34..21e89321e6c2b 100644
--- a/content/en/docs/tasks/administer-cluster/kms-provider.md
+++ b/content/en/docs/tasks/administer-cluster/kms-provider.md
@@ -4,6 +4,7 @@ reviewers:
- enj
title: Using a KMS provider for data encryption
content_type: task
+weight: 370
---
This page shows how to configure a Key Management Service (KMS) provider and plugin to enable secret data encryption. Currently there are two KMS API versions. KMS v1 will continue to work while v2 develops in maturity. If you are not sure which KMS API version to pick, choose v1.
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver.md b/content/en/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver.md
index 93942c89187e3..7e15e32ca57f2 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver.md
@@ -1,7 +1,7 @@
---
title: Configuring a cgroup driver
content_type: task
-weight: 10
+weight: 20
---
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
index 18032fc4b3989..1ad41353c0ad9 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
@@ -78,9 +78,12 @@ etcd-ca Dec 28, 2029 23:36 UTC 9y no
front-proxy-ca Dec 28, 2029 23:36 UTC 9y no
```
-The command shows expiration/residual time for the client certificates in the `/etc/kubernetes/pki` folder and for the client certificate embedded in the KUBECONFIG files used by kubeadm (`admin.conf`, `controller-manager.conf` and `scheduler.conf`).
+The command shows expiration/residual time for the client certificates in the
+`/etc/kubernetes/pki` folder and for the client certificate embedded in the kubeconfig files used
+by kubeadm (`admin.conf`, `controller-manager.conf` and `scheduler.conf`).
-Additionally, kubeadm informs the user if the certificate is externally managed; in this case, the user should take care of managing certificate renewal manually/using other tools.
+Additionally, kubeadm informs the user if the certificate is externally managed; in this case, the
+user should take care of managing certificate renewal manually/using other tools.
{{< warning >}}
`kubeadm` cannot manage certificates signed by an external CA.
@@ -96,8 +99,10 @@ To repair an expired kubelet client certificate see
{{< warning >}}
On nodes created with `kubeadm init`, prior to kubeadm version 1.17, there is a
-[bug](https://github.com/kubernetes/kubeadm/issues/1753) where you manually have to modify the contents of `kubelet.conf`. After `kubeadm init` finishes, you should update `kubelet.conf` to point to the
-rotated kubelet client certificates, by replacing `client-certificate-data` and `client-key-data` with:
+[bug](https://github.com/kubernetes/kubeadm/issues/1753) where you manually have to modify the
+contents of `kubelet.conf`. After `kubeadm init` finishes, you should update `kubelet.conf` to
+point to the rotated kubelet client certificates, by replacing `client-certificate-data` and
+`client-key-data` with:
```yaml
client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
@@ -107,16 +112,21 @@ client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
## Automatic certificate renewal
-kubeadm renews all the certificates during control plane [upgrade](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/).
+kubeadm renews all the certificates during control plane
+[upgrade](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/).
This feature is designed for addressing the simplest use cases;
-if you don't have specific requirements on certificate renewal and perform Kubernetes version upgrades regularly (less than 1 year in between each upgrade), kubeadm will take care of keeping your cluster up to date and reasonably secure.
+if you don't have specific requirements on certificate renewal and perform Kubernetes version
+upgrades regularly (less than 1 year in between each upgrade), kubeadm will take care of keeping
+your cluster up to date and reasonably secure.
{{< note >}}
It is a best practice to upgrade your cluster frequently in order to stay secure.
{{< /note >}}
-If you have more complex requirements for certificate renewal, you can opt out from the default behavior by passing `--certificate-renewal=false` to `kubeadm upgrade apply` or to `kubeadm upgrade node`.
+If you have more complex requirements for certificate renewal, you can opt out of the default
+behavior by passing `--certificate-renewal=false` to `kubeadm upgrade apply` or to
+`kubeadm upgrade node`.
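+
+For example, a sketch (substitute the target version of your own upgrade):
+
+```shell
+# Upgrade the control plane without renewing the certificates
+kubeadm upgrade apply v1.26.0 --certificate-renewal=false
+```
+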
{{< warning >}}
Prior to kubeadm version 1.17 there is a [bug](https://github.com/kubernetes/kubeadm/issues/1818)
@@ -145,14 +155,18 @@ If you are running an HA cluster, this command needs to be executed on all the c
{{< /warning >}}
{{< note >}}
-`certs renew` uses the existing certificates as the authoritative source for attributes (Common Name, Organization, SAN, etc.) instead of the kubeadm-config ConfigMap. It is strongly recommended to keep them both in sync.
+`certs renew` uses the existing certificates as the authoritative source for attributes (Common
+Name, Organization, SAN, etc.) instead of the `kubeadm-config` ConfigMap. It is strongly recommended
+to keep them both in sync.
{{< /note >}}
`kubeadm certs renew` provides the following options:
-The Kubernetes certificates normally reach their expiration date after one year.
+- The Kubernetes certificates normally reach their expiration date after one year.
-- `--csr-only` can be used to renew certificates with an external CA by generating certificate signing requests (without actually renewing certificates in place); see next paragraph for more information.
+- `--csr-only` can be used to renew certificates with an external CA by generating certificate
+ signing requests (without actually renewing certificates in place); see next paragraph for more
+ information.
- It's also possible to renew a single certificate instead of all.
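+
+For example, a sketch (run `kubeadm certs renew --help` to list the certificate names that can be renewed individually):
+
+```shell
+# Renew every certificate managed by kubeadm
+kubeadm certs renew all
+# Renew only the API server serving certificate
+kubeadm certs renew apiserver
+```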
@@ -161,19 +175,24 @@ The Kubernetes certificates normally reach their expiration date after one year.
This section provides more details about how to execute manual certificate renewal using the Kubernetes certificates API.
{{< caution >}}
-These are advanced topics for users who need to integrate their organization's certificate infrastructure into a kubeadm-built cluster. If the default kubeadm configuration satisfies your needs, you should let kubeadm manage certificates instead.
+These are advanced topics for users who need to integrate their organization's certificate
+infrastructure into a kubeadm-built cluster. If the default kubeadm configuration satisfies your
+needs, you should let kubeadm manage certificates instead.
{{< /caution >}}
### Set up a signer
The Kubernetes Certificate Authority does not work out of the box.
-You can configure an external signer such as [cert-manager](https://cert-manager.io/docs/configuration/ca/), or you can use the built-in signer.
+You can configure an external signer such as [cert-manager](https://cert-manager.io/docs/configuration/ca/),
+or you can use the built-in signer.
The built-in signer is part of [`kube-controller-manager`](/docs/reference/command-line-tools-reference/kube-controller-manager/).
-To activate the built-in signer, you must pass the `--cluster-signing-cert-file` and `--cluster-signing-key-file` flags.
+To activate the built-in signer, you must pass the `--cluster-signing-cert-file` and
+`--cluster-signing-key-file` flags.
-If you're creating a new cluster, you can use a kubeadm [configuration file](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3):
+If you're creating a new cluster, you can use a kubeadm
+[configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/):
```yaml
apiVersion: kubeadm.k8s.io/v1beta3
@@ -186,7 +205,8 @@ controllerManager:
### Create certificate signing requests (CSR)
-See [Create CertificateSigningRequest](/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificatesigningrequest) for creating CSRs with the Kubernetes API.
+See [Create CertificateSigningRequest](/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificatesigningrequest)
+for creating CSRs with the Kubernetes API.
## Renew certificates with external CA
@@ -194,7 +214,8 @@ This section provide more details about how to execute manual certificate renewa
To better integrate with external CAs, kubeadm can also produce certificate signing requests (CSRs).
A CSR represents a request to a CA for a signed certificate for a client.
-In kubeadm terms, any certificate that would normally be signed by an on-disk CA can be produced as a CSR instead. A CA, however, cannot be produced as a CSR.
+In kubeadm terms, any certificate that would normally be signed by an on-disk CA can be produced
+as a CSR instead. A CA, however, cannot be produced as a CSR.
### Create certificate signing requests (CSR)
@@ -216,7 +237,8 @@ when issuing a certificate.
* In `cfssl` you specify
[usages in the config file](https://github.com/cloudflare/cfssl/blob/master/doc/cmd/cfssl.txt#L170).
-After a certificate is signed using your preferred method, the certificate and the private key must be copied to the PKI directory (by default `/etc/kubernetes/pki`).
+After a certificate is signed using your preferred method, the certificate and the private key
+must be copied to the PKI directory (by default `/etc/kubernetes/pki`).
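+
+A minimal sketch of that step (the file names are hypothetical and depend on which certificate was signed):
+
+```shell
+# Copy the externally signed certificate and its private key into place
+cp apiserver.crt apiserver.key /etc/kubernetes/pki/
+chmod 600 /etc/kubernetes/pki/apiserver.key
+```
+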
## Certificate authority (CA) rotation {#certificate-authority-rotation}
@@ -304,8 +326,8 @@ Instead, you can use the [`kubeadm kubeconfig user`](/docs/reference/setup-tools
command to generate kubeconfig files for additional users.
The command accepts a mixture of command line flags and
[kubeadm configuration](/docs/reference/config-api/kubeadm-config.v1beta3/) options.
-The generated kubeconfig will be written to stdout and can be piped to a file
-using `kubeadm kubeconfig user ... > somefile.conf`.
+The generated kubeconfig will be written to stdout and can be piped to a file using
+`kubeadm kubeconfig user ... > somefile.conf`.
Example configuration file that can be used with `--config`:
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md
index 0e5a48b49ec25..ec372fe231b24 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md
@@ -3,7 +3,7 @@ reviewers:
- sig-cluster-lifecycle
title: Reconfiguring a kubeadm cluster
content_type: task
-weight: 10
+weight: 30
---
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
index 9f2c4154b42d3..3df3a729b8727 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
@@ -3,7 +3,7 @@ reviewers:
- sig-cluster-lifecycle
title: Upgrading kubeadm clusters
content_type: task
-weight: 20
+weight: 40
---
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md b/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md
index e40dad68e6377..21c39c84d5f14 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md
@@ -2,17 +2,14 @@
title: Upgrading Windows nodes
min-kubernetes-server-version: 1.17
content_type: task
-weight: 40
+weight: 50
---
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
-This page explains how to upgrade a Windows node [created with kubeadm](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes).
-
-
-
+This page explains how to upgrade a Windows node created with kubeadm.
## {{% heading "prerequisites" %}}
@@ -21,9 +18,6 @@ This page explains how to upgrade a Windows node [created with kubeadm](/docs/ta
cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade). You will want to
upgrade the control plane nodes before upgrading your Windows nodes.
-
-
-
## Upgrading worker nodes
@@ -81,7 +75,8 @@ upgrade the control plane nodes before upgrading your Windows nodes.
```
{{< note >}}
-If you are running kube-proxy in a HostProcess container within a Pod, and not as a Windows Service, you can upgrade kube-proxy by applying a newer version of your kube-proxy manifests.
+If you are running kube-proxy in a HostProcess container within a Pod, and not as a Windows Service,
+you can upgrade kube-proxy by applying a newer version of your kube-proxy manifests.
{{< /note >}}
### Uncordon the node
@@ -94,6 +89,3 @@ bring the node back online by marking it schedulable:
kubectl uncordon
```
-
-
-
diff --git a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md
index 091488e792bcb..b16961d46e173 100644
--- a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md
+++ b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md
@@ -4,6 +4,7 @@ reviewers:
- dawnchen
title: Set Kubelet parameters via a config file
content_type: task
+weight: 330
---
@@ -53,7 +54,7 @@ the threshold values respectively.
## Start a Kubelet process configured via the config file
{{< note >}}
-If you use kubeadm to initialize your cluster, use the kubelet-config while creating your cluster with `kubeadmin init`.
+If you use kubeadm to initialize your cluster, use the kubelet-config while creating your cluster with `kubeadm init`.
See [configuring kubelet using kubeadm](/docs/setup/production-environment/tools/kubeadm/kubelet-integration/) for details.
{{< /note >}}
diff --git a/content/en/docs/tasks/administer-cluster/kubelet-credential-provider.md b/content/en/docs/tasks/administer-cluster/kubelet-credential-provider.md
index 3da341dbccc0a..ae3d381a86e2c 100644
--- a/content/en/docs/tasks/administer-cluster/kubelet-credential-provider.md
+++ b/content/en/docs/tasks/administer-cluster/kubelet-credential-provider.md
@@ -6,6 +6,7 @@ reviewers:
description: Configure the kubelet's image credential provider plugin
content_type: task
min-kubernetes-server-version: v1.26
+weight: 120
---
{{< feature-state for_k8s_version="v1.26" state="stable" >}}
@@ -82,8 +83,8 @@ providers:
#
# A match exists between an image and a matchImage when all of the below are true:
# - Both contain the same number of domain parts and each part matches.
- # - The URL path of an imageMatch must be a prefix of the target image URL path.
- # - If the imageMatch contains a port, then the port must match in the image as well.
+  # - The URL path of a matchImages entry must be a prefix of the target image URL path.
+  # - If a matchImages entry contains a port, then the port must match in the image as well.
#
# Example values of matchImages:
# - 123456789.dkr.ecr.us-east-1.amazonaws.com
@@ -142,7 +143,7 @@ A match exists between an image name and a `matchImage` entry when all of the be
* Both contain the same number of domain parts and each part matches.
* The URL path of match image must be a prefix of the target image URL path.
-* If the imageMatch contains a port, then the port must match in the image as well.
+* If a matchImages entry contains a port, then the port must match in the image as well.
Some example values of `matchImages` patterns are:
diff --git a/content/en/docs/tasks/administer-cluster/kubelet-in-userns.md b/content/en/docs/tasks/administer-cluster/kubelet-in-userns.md
index 90e4a11f63171..3ed95f98c6ca0 100644
--- a/content/en/docs/tasks/administer-cluster/kubelet-in-userns.md
+++ b/content/en/docs/tasks/administer-cluster/kubelet-in-userns.md
@@ -2,6 +2,7 @@
title: Running Kubernetes Node Components as a Non-root User
content_type: task
min-kubernetes-server-version: 1.22
+weight: 300
---
diff --git a/content/en/docs/tasks/administer-cluster/limit-storage-consumption.md b/content/en/docs/tasks/administer-cluster/limit-storage-consumption.md
index c982a9cb7cc40..9bd8e81771a74 100644
--- a/content/en/docs/tasks/administer-cluster/limit-storage-consumption.md
+++ b/content/en/docs/tasks/administer-cluster/limit-storage-consumption.md
@@ -1,6 +1,7 @@
---
title: Limit Storage Consumption
content_type: task
+weight: 240
---
diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/_index.md b/content/en/docs/tasks/administer-cluster/manage-resources/_index.md
index a98b234728126..797b69e0a3e86 100644
--- a/content/en/docs/tasks/administer-cluster/manage-resources/_index.md
+++ b/content/en/docs/tasks/administer-cluster/manage-resources/_index.md
@@ -1,4 +1,4 @@
---
title: Manage Memory, CPU, and API Resources
-weight: 20
+weight: 40
---
diff --git a/content/en/docs/tasks/administer-cluster/memory-manager.md b/content/en/docs/tasks/administer-cluster/memory-manager.md
index 55d61c3313c74..33d7b643fa476 100644
--- a/content/en/docs/tasks/administer-cluster/memory-manager.md
+++ b/content/en/docs/tasks/administer-cluster/memory-manager.md
@@ -7,6 +7,7 @@ reviewers:
content_type: task
min-kubernetes-server-version: v1.21
+weight: 410
---
diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/_index.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/_index.md
index b10f75dd9ce71..8d46e32ff0c6c 100644
--- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/_index.md
+++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/_index.md
@@ -1,6 +1,6 @@
---
title: "Migrating from dockershim"
-weight: 10
+weight: 20
content_type: task
no_list: true
---
@@ -16,12 +16,12 @@ installations. Our [Dockershim Removal FAQ](/blog/2022/02/17/dockershim-faq/) is
to understand the problem better.
Dockershim was removed from Kubernetes with the release of v1.24.
-If you use Docker Engine via dockershim as your container runtime, and wish to upgrade to v1.24,
+If you use Docker Engine via dockershim as your container runtime and wish to upgrade to v1.24,
it is recommended that you either migrate to another runtime or find an alternative means to obtain Docker Engine support.
-Check out [container runtimes](/docs/setup/production-environment/container-runtimes/)
+Check out the [container runtimes](/docs/setup/production-environment/container-runtimes/)
section to know your options. Make sure to
[report issues](https://github.com/kubernetes/kubernetes/issues) you encountered
-with the migration. So the issue can be fixed in a timely manner and your cluster would be
+with the migration so the issues can be fixed in a timely manner and your cluster will be
ready for dockershim removal.
Your cluster might have more than one kind of node, although this is not a common
@@ -37,11 +37,11 @@ These tasks will help you to migrate:
## {{% heading "whatsnext" %}}
* Check out [container runtimes](/docs/setup/production-environment/container-runtimes/)
- to understand your options for a container runtime.
+ to understand your options for an alternative.
* There is a
[GitHub issue](https://github.com/kubernetes/kubernetes/issues/106917)
- to track discussion about the deprecation and removal of dockershim.
-* If you found a defect or other technical concern relating to migrating away from dockershim,
+ to track the discussion about the deprecation and removal of dockershim.
+* If you find a defect or other technical concern relating to migrating away from dockershim,
you can [report an issue](https://github.com/kubernetes/kubernetes/issues/new/choose)
to the Kubernetes project.
diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd.md
index 5b6afe04e5abe..671322d569f9b 100644
--- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd.md
+++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd.md
@@ -1,6 +1,6 @@
---
title: "Changing the Container Runtime on a Node from Docker Engine to containerd"
-weight: 8
+weight: 10
content_type: task
---
diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md
index e66e636eca115..267d614ef9dc0 100644
--- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md
+++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md
@@ -3,7 +3,7 @@ title: Check whether dockershim removal affects you
content_type: task
reviewers:
- SergeyKanzhelev
-weight: 20
+weight: 50
---
diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use.md
index c4247f085a2b8..8e04dd7a6c5d3 100644
--- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use.md
+++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use.md
@@ -3,7 +3,7 @@ title: Find Out What Container Runtime is Used on a Node
content_type: task
reviewers:
- SergeyKanzhelev
-weight: 10
+weight: 30
---
diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd.md
index b9bdcd9a2dbbc..9bbba039e0d9c 100644
--- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd.md
+++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd.md
@@ -1,6 +1,6 @@
---
title: "Migrate Docker Engine nodes from dockershim to cri-dockerd"
-weight: 9
+weight: 20
content_type: task
---
diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md
index 496f25fa0c268..ab6f340beab74 100644
--- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md
+++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md
@@ -3,7 +3,7 @@ title: Migrating telemetry and security agents from dockershim
content_type: task
reviewers:
- SergeyKanzhelev
-weight: 70
+weight: 60
---
diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors.md
index 34e2b112efce2..5dd0453648d7e 100644
--- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors.md
+++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors.md
@@ -4,7 +4,7 @@ content_type: task
reviewers:
- mikebrow
- divya-mohan0209
-weight: 10
+weight: 40
---
@@ -129,7 +129,8 @@ cat << EOF | tee /etc/cni/net.d/10-containerd-net.conflist
},
{
"type": "portmap",
- "capabilities": {"portMappings": true}
+ "capabilities": {"portMappings": true},
+ "externalSetMarkChain": "KUBE-MARK-MASQ"
}
]
}
diff --git a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md
index e22cf651606b5..3fa2f64098cd8 100644
--- a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md
+++ b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md
@@ -4,6 +4,7 @@ reviewers:
- janetkuo
title: Namespaces Walkthrough
content_type: task
+weight: 260
---
diff --git a/content/en/docs/tasks/administer-cluster/namespaces.md b/content/en/docs/tasks/administer-cluster/namespaces.md
index cceaf646bc017..6af713b25c756 100644
--- a/content/en/docs/tasks/administer-cluster/namespaces.md
+++ b/content/en/docs/tasks/administer-cluster/namespaces.md
@@ -4,6 +4,7 @@ reviewers:
- janetkuo
title: Share a Cluster with Namespaces
content_type: task
+weight: 340
---
diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/_index.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/_index.md
index 31d4f7b5aee4c..1a570a2bc2554 100644
--- a/content/en/docs/tasks/administer-cluster/network-policy-provider/_index.md
+++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/_index.md
@@ -1,4 +1,4 @@
---
title: Install a Network Policy Provider
-weight: 30
+weight: 50
---
diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md
index 40733c4c96810..0cf26dcf8caff 100644
--- a/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md
+++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md
@@ -3,7 +3,7 @@ reviewers:
- caseydavenport
title: Use Calico for NetworkPolicy
content_type: task
-weight: 10
+weight: 20
---
diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md
index 9a496d39a644a..ebafa8527ab27 100644
--- a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md
+++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md
@@ -4,7 +4,7 @@ reviewers:
- aanm
title: Use Cilium for NetworkPolicy
content_type: task
-weight: 20
+weight: 30
---
diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy.md
index 673118e312b51..6ae0a5cd6f017 100644
--- a/content/en/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy.md
+++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy.md
@@ -3,7 +3,7 @@ reviewers:
- murali-reddy
title: Use Kube-router for NetworkPolicy
content_type: task
-weight: 30
+weight: 40
---
diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md
index 6a57d8cc0b2cf..999d2135c2b81 100644
--- a/content/en/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md
+++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md
@@ -3,7 +3,7 @@ reviewers:
- chrismarino
title: Romana for NetworkPolicy
content_type: task
-weight: 40
+weight: 50
---
diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md
index fcbc9c40458f6..631d3e6ba5718 100644
--- a/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md
+++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md
@@ -3,7 +3,7 @@ reviewers:
- bboreham
title: Weave Net for NetworkPolicy
content_type: task
-weight: 50
+weight: 60
---
diff --git a/content/en/docs/tasks/administer-cluster/nodelocaldns.md b/content/en/docs/tasks/administer-cluster/nodelocaldns.md
index 11f044e962b5f..2f0a16d8d7b22 100644
--- a/content/en/docs/tasks/administer-cluster/nodelocaldns.md
+++ b/content/en/docs/tasks/administer-cluster/nodelocaldns.md
@@ -5,6 +5,7 @@ reviewers:
- sftim
title: Using NodeLocal DNSCache in Kubernetes Clusters
content_type: task
+weight: 390
---
diff --git a/content/en/docs/tasks/administer-cluster/quota-api-object.md b/content/en/docs/tasks/administer-cluster/quota-api-object.md
index ad38f102d4854..f26ebaf23bde1 100644
--- a/content/en/docs/tasks/administer-cluster/quota-api-object.md
+++ b/content/en/docs/tasks/administer-cluster/quota-api-object.md
@@ -1,6 +1,7 @@
---
title: Configure Quotas for API Objects
content_type: task
+weight: 130
---
diff --git a/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md b/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md
index e1effd8f05275..3f3d9e06ba46e 100644
--- a/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md
+++ b/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md
@@ -5,6 +5,7 @@ reviewers:
title: Reconfigure a Node's Kubelet in a Live Cluster
content_type: task
min-kubernetes-server-version: v1.11
+weight: 280
---
diff --git a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md
index f39122790328f..8a12831e9d0fc 100644
--- a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md
+++ b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md
@@ -6,6 +6,7 @@ reviewers:
title: Reserve Compute Resources for System Daemons
content_type: task
min-kubernetes-server-version: 1.8
+weight: 290
---
@@ -133,6 +134,7 @@ with `.slice` appended.
{{< feature-state for_k8s_version="v1.17" state="stable" >}}
**Kubelet Flag**: `--reserved-cpus=0-3`
+**KubeletConfiguration Setting**: `reservedSystemCpus: 0-3`
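+
+A minimal sketch of the equivalent kubelet configuration file (the field name is
+from `KubeletConfiguration`; the CPU list `0-3` is only an example):
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+reservedSystemCpus: "0-3"
+```
+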
`reserved-cpus` is meant to define an explicit CPU set for OS system daemons and
kubernetes system daemons. `reserved-cpus` is for systems that do not intend to
diff --git a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md
index b1a7e565480c4..13264bf6ef602 100644
--- a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md
+++ b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md
@@ -5,6 +5,7 @@ reviewers:
- wlan0
title: Cloud Controller Manager Administration
content_type: concept
+weight: 110
---
diff --git a/content/en/docs/tasks/administer-cluster/safely-drain-node.md b/content/en/docs/tasks/administer-cluster/safely-drain-node.md
index f2ffbacac4392..cd617945efa29 100644
--- a/content/en/docs/tasks/administer-cluster/safely-drain-node.md
+++ b/content/en/docs/tasks/administer-cluster/safely-drain-node.md
@@ -7,6 +7,7 @@ reviewers:
title: Safely Drain a Node
content_type: task
min-kubernetes-server-version: 1.5
+weight: 310
---
@@ -65,9 +66,17 @@ kubectl get nodes
Next, tell Kubernetes to drain the node:
```shell
-kubectl drain <node name>
+kubectl drain <node name> --ignore-daemonsets
```
+If there are pods managed by a DaemonSet, you will need to specify
+`--ignore-daemonsets` with `kubectl` to successfully drain the node. The `kubectl drain` subcommand on its own does not actually drain
+a node of its DaemonSet pods:
+the DaemonSet controller (part of the control plane) immediately replaces missing Pods with
+new equivalent Pods. The DaemonSet controller also creates Pods that ignore unschedulable
+taints, which allows the new Pods to launch onto a node that you are draining.
+
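+If you want to check which Pods on the node are owned by a DaemonSet before you
+drain it, you could run something like the following (the node name `mynode` is
+a placeholder):
+
+```shell
+kubectl get pods --all-namespaces --field-selector spec.nodeName=mynode \
+  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].kind
+```
+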
Once it returns (without giving an error), you can power down the node
(or equivalently, if on a cloud platform, delete the virtual machine backing the node).
If you leave the node in the cluster during the maintenance operation, you need to run
diff --git a/content/en/docs/tasks/administer-cluster/securing-a-cluster.md b/content/en/docs/tasks/administer-cluster/securing-a-cluster.md
index d864bb1d32e6f..5ef8b086bed5c 100644
--- a/content/en/docs/tasks/administer-cluster/securing-a-cluster.md
+++ b/content/en/docs/tasks/administer-cluster/securing-a-cluster.md
@@ -5,6 +5,7 @@ reviewers:
- enj
title: Securing a Cluster
content_type: task
+weight: 320
---
diff --git a/content/en/docs/tasks/administer-cluster/sysctl-cluster.md b/content/en/docs/tasks/administer-cluster/sysctl-cluster.md
index 367901b390e40..a66ca9319b013 100644
--- a/content/en/docs/tasks/administer-cluster/sysctl-cluster.md
+++ b/content/en/docs/tasks/administer-cluster/sysctl-cluster.md
@@ -3,6 +3,7 @@ title: Using sysctls in a Kubernetes Cluster
reviewers:
- sttts
content_type: task
+weight: 400
---
diff --git a/content/en/docs/tasks/administer-cluster/topology-manager.md b/content/en/docs/tasks/administer-cluster/topology-manager.md
index b02b2531b600f..7dac6b425624a 100644
--- a/content/en/docs/tasks/administer-cluster/topology-manager.md
+++ b/content/en/docs/tasks/administer-cluster/topology-manager.md
@@ -10,6 +10,7 @@ reviewers:
content_type: task
min-kubernetes-server-version: v1.18
+weight: 150
---
diff --git a/content/en/docs/tasks/administer-cluster/use-cascading-deletion.md b/content/en/docs/tasks/administer-cluster/use-cascading-deletion.md
index 15968e0c3ce0f..5a5ad45ebf944 100644
--- a/content/en/docs/tasks/administer-cluster/use-cascading-deletion.md
+++ b/content/en/docs/tasks/administer-cluster/use-cascading-deletion.md
@@ -1,6 +1,7 @@
---
title: Use Cascading Deletion in a Cluster
content_type: task
+weight: 360
---
diff --git a/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md b/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md
index 19f99ab4c8dcf..e672779f75c13 100644
--- a/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md
+++ b/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md
@@ -2,6 +2,7 @@
title: Verify Signed Kubernetes Artifacts
content_type: task
min-kubernetes-server-version: v1.26
+weight: 420
---
diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md
index 37dda8581e057..dfa20164948db 100644
--- a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md
+++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md
@@ -42,11 +42,11 @@ characters.
### Use source files
-1. Store the credentials in files with the values encoded in base64:
+1. Store the credentials in files:
```shell
- echo -n 'admin' | base64 > ./username.txt
- echo -n 'S!B\*d$zDsb=' | base64 > ./password.txt
+ echo -n 'admin' > ./username.txt
+ echo -n 'S!B\*d$zDsb=' > ./password.txt
```
The `-n` flag ensures that the generated files do not have an extra newline
character at the end of the text. This is important because when `kubectl`
@@ -199,4 +199,4 @@ kubectl delete secret db-user-pass
- Read more about the [Secret concept](/docs/concepts/configuration/secret/)
- Learn how to [manage Secrets using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
-- Learn how to [manage Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
\ No newline at end of file
+- Learn how to [manage Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
diff --git a/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md b/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md
index 4576b0f02b8a0..c952ab361cbbc 100644
--- a/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md
+++ b/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md
@@ -78,9 +78,9 @@ unless the Pod's grace period expires. For more details, see
[Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/).
{{< note >}}
-Kubernetes only sends the preStop event when a Pod is *terminated*.
-This means that the preStop hook is not invoked when the Pod is *completed*.
-This limitation is tracked in [issue #55087](https://github.com/kubernetes/kubernetes/issues/55807).
+Kubernetes only sends the preStop event when a Pod or a container in the Pod is *terminated*.
+This means that the preStop hook is not invoked when the Pod is *completed*.
+For more details about this limitation, see [Container hooks](/docs/concepts/containers/container-lifecycle-hooks/#container-hooks).
{{< /note >}}
diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
index 308c078136236..19d0a9cfa33e9 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
@@ -388,9 +388,24 @@ to 1 second. Minimum value is 1.
* `successThreshold`: Minimum consecutive successes for the probe to be
considered successful after having failed. Defaults to 1. Must be 1 for liveness
and startup Probes. Minimum value is 1.
-* `failureThreshold`: When a probe fails, Kubernetes will
-try `failureThreshold` times before giving up. Giving up in case of liveness probe means restarting the container. In case of readiness probe the Pod will be marked Unready.
-Defaults to 3. Minimum value is 1.
+* `failureThreshold`: After a probe fails `failureThreshold` times in a row, Kubernetes
+ considers that the overall check has failed: the container is _not_ ready / healthy /
+ live.
+ For the case of a startup or liveness probe, if at least `failureThreshold` probes have
+ failed, Kubernetes treats the container as unhealthy and triggers a restart for that
+ specific container. The kubelet takes the setting of `terminationGracePeriodSeconds`
+ for that container into account.
+ For a failed readiness probe, the kubelet continues running the container that failed
+ checks, and also continues to run more probes; because the check failed, the kubelet
+ sets the `Ready` [condition](/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions)
+ on the Pod to `false`.
+* `terminationGracePeriodSeconds`: configure a grace period for the kubelet to wait
+ between triggering a shut down of the failed container, and then forcing the
+ container runtime to stop that container.
+ The default is to inherit the Pod-level value for `terminationGracePeriodSeconds`
+ (30 seconds if not specified), and the minimum value is 1.
+ See [probe-level `terminationGracePeriodSeconds`](#probe-level-terminationgraceperiodseconds)
+ for more detail.
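+
+A minimal sketch that combines these fields in a liveness probe (the endpoint,
+port, and values here are illustrative only):
+
+```yaml
+livenessProbe:
+  httpGet:
+    path: /healthz
+    port: 8080
+  periodSeconds: 10
+  failureThreshold: 3
+  terminationGracePeriodSeconds: 60
+```
+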
{{< note >}}
Before Kubernetes 1.20, the field `timeoutSeconds` was not respected for exec probes:
diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md
index 77940781f04a7..5b783d2dcca01 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md
@@ -47,11 +47,12 @@ kubectl get pods/ -o yaml
```
In the output, you see a field `spec.serviceAccountName`.
-Kubernetes [automatically](/docs/user-guide/working-with-resources/#resources-are-automatically-modified)
+Kubernetes [automatically](/docs/concepts/overview/working-with-objects/object-management/)
sets that value if you don't specify it when you create a Pod.
An application running inside a Pod can access the Kubernetes API using
-automatically mounted service account credentials. See [accessing the Cluster](/docs/user-guide/accessing-the-cluster/#accessing-the-api-from-a-pod) to learn more.
+automatically mounted service account credentials.
+See [accessing the Cluster](/docs/tasks/access-application-cluster/access-cluster/) to learn more.
When a Pod authenticates as a ServiceAccount, its level of access depends on the
[authorization plugin and policy](/docs/reference/access-authn-authz/authorization/#authorization-modules)
@@ -62,7 +63,8 @@ in use.
If you don't want the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}
to automatically mount a ServiceAccount's API credentials, you can opt out of
the default behavior.
-You can opt out of automounting API credentials on `/var/run/secrets/kubernetes.io/serviceaccount/token` for a service account by setting `automountServiceAccountToken: false` on the ServiceAccount:
+You can opt out of automounting API credentials on `/var/run/secrets/kubernetes.io/serviceaccount/token`
+for a service account by setting `automountServiceAccountToken: false` on the ServiceAccount:
For example:
diff --git a/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md
index 3b6bec6def564..1ee34aa225721 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md
@@ -45,7 +45,7 @@ restarts. Here is the configuration file for the Pod:
The output looks like this:
- ```shell
+ ```console
NAME READY STATUS RESTARTS AGE
redis 1/1 Running 0 13s
```
@@ -73,7 +73,7 @@ restarts. Here is the configuration file for the Pod:
The output is similar to this:
- ```shell
+ ```console
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
redis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379
root 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash
@@ -91,7 +91,7 @@ restarts. Here is the configuration file for the Pod:
1. In your original terminal, watch for changes to the Redis Pod. Eventually,
you will see something like this:
- ```shell
+ ```console
NAME READY STATUS RESTARTS AGE
redis 1/1 Running 0 13s
redis 0/1 Completed 0 6m
diff --git a/content/en/docs/tasks/configure-pod-container/create-hostprocess-pod.md b/content/en/docs/tasks/configure-pod-container/create-hostprocess-pod.md
index cad26cf29a374..24b8efea5a8cd 100644
--- a/content/en/docs/tasks/configure-pod-container/create-hostprocess-pod.md
+++ b/content/en/docs/tasks/configure-pod-container/create-hostprocess-pod.md
@@ -228,7 +228,7 @@ To run HostProcess containers as a local user; A local usergroup must first be c
and the name of that local usergroup must be specified in the `runAsUserName` field in the deployment.
Prior to initializing the HostProcess container, a new **ephemeral** local user account is created and joined to the specified usergroup, from which the container is run.
This provides a number of benefits including eliminating the need to manage passwords for local user accounts.
-passwords for local user accounts. An initial HostProcess container running as a service account can be used to
+An initial HostProcess container running as a service account can be used to
prepare the user groups for later HostProcess containers.
{{< note >}}
@@ -269,4 +269,4 @@ For more information please check out the [windows-host-process-containers-base-
- HostProcess containers fail to start with `failed to create user process token: failed to logon user: Access is denied.: unknown`
Ensure containerd is running as `LocalSystem` or `LocalService` service accounts. User accounts (even Administrator accounts) do not have permissions to create logon tokens for any of the supported [user accounts](#choosing-a-user-account).
-
\ No newline at end of file
+
diff --git a/content/en/docs/tasks/configure-pod-container/enforce-standards-admission-controller.md b/content/en/docs/tasks/configure-pod-container/enforce-standards-admission-controller.md
index 393d546623857..802f38a651a87 100644
--- a/content/en/docs/tasks/configure-pod-container/enforce-standards-admission-controller.md
+++ b/content/en/docs/tasks/configure-pod-container/enforce-standards-admission-controller.md
@@ -52,6 +52,9 @@ plugins:
# Array of namespaces to exempt.
namespaces: []
```
+{{< note >}}
+The above manifest must be passed to the kube-apiserver via the `--admission-control-config-file` flag.
+{{< /note >}}
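+
+For example, a minimal sketch of the relevant kube-apiserver argument (the file
+path is illustrative; use the location where you store the configuration):
+
+```shell
+# add this flag to your existing kube-apiserver invocation or static Pod manifest
+kube-apiserver --admission-control-config-file=/etc/kubernetes/admission-config.yaml
+```
+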
{{< note >}}
`pod-security.admission.config.k8s.io/v1` configuration requires v1.25+.
diff --git a/content/en/docs/tasks/configure-pod-container/security-context.md b/content/en/docs/tasks/configure-pod-container/security-context.md
index d5399ca9f58d2..7e1a04e9a439e 100644
--- a/content/en/docs/tasks/configure-pod-container/security-context.md
+++ b/content/en/docs/tasks/configure-pod-container/security-context.md
@@ -470,8 +470,7 @@ The more files and directories in the volume, the longer that relabelling takes.
In Kubernetes 1.25, the kubelet loses track of volume labels after restart. In
other words, then kubelet may refuse to start Pods with errors similar to "conflicting
SELinux labels of volume", while there are no conflicting labels in Pods. Make sure
-nodes are
-[fully drained](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/)
+nodes are [fully drained](/docs/tasks/administer-cluster/safely-drain-node/)
before restarting kubelet.
{{< /note >}}
@@ -519,4 +518,5 @@ kubectl delete pod security-context-demo-4
* [AllowPrivilegeEscalation design
document](https://git.k8s.io/design-proposals-archive/auth/no-new-privs.md)
* For more information about security mechanisms in Linux, see
-[Overview of Linux Kernel Security Features](https://www.linux.com/learn/overview-linux-kernel-security-features) (Note: Some information is out of date)
+ [Overview of Linux Kernel Security Features](https://www.linux.com/learn/overview-linux-kernel-security-features)
+ (Note: Some information is out of date)
diff --git a/content/en/docs/tasks/configure-pod-container/static-pod.md b/content/en/docs/tasks/configure-pod-container/static-pod.md
index 23191e1ffe688..99c5b7ee0fc1f 100644
--- a/content/en/docs/tasks/configure-pod-container/static-pod.md
+++ b/content/en/docs/tasks/configure-pod-container/static-pod.md
@@ -117,7 +117,7 @@ Similar to how [filesystem-hosted manifests](#configuration-files) work, the kub
refetches the manifest on a schedule. If there are changes to the list of static
Pods, the kubelet applies them.
-To use this approach:
+To use this approach:
1. Create a YAML file and store it on a web server so that you can pass the URL of that file to the kubelet.
@@ -225,6 +225,18 @@ crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
89db4553e1eeb docker.io/library/nginx@sha256:... 19 seconds ago Running web 1 34533c6729106
```
+Once you identify the right container, you can get the logs for that container with `crictl`:
+
+```shell
+# Run these commands on the node where the container is running
+crictl logs <container_id>
+```
+```console
+10.240.0.48 - - [16/Nov/2022:12:45:49 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.47.0" "-"
+10.240.0.48 - - [16/Nov/2022:12:45:50 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.47.0" "-"
+10.240.0.48 - - [16/Nov/2022:12:45:51 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.47.0" "-"
+```
+To learn more about debugging with `crictl`, see [_Debugging Kubernetes nodes with crictl_](/docs/tasks/debug/debug-cluster/crictl/).
## Dynamic addition and removal of static pods
@@ -232,7 +244,7 @@ The running kubelet periodically scans the configured directory (`/etc/kubernete
```shell
# This assumes you are using filesystem-hosted static Pod configuration
-# Run these commands on the node where the kubelet is running
+# Run these commands on the node where the container is running
#
mv /etc/kubernetes/manifests/static-web.yaml /tmp
sleep 20
@@ -246,3 +258,12 @@ crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
f427638871c35 docker.io/library/nginx@sha256:... 19 seconds ago Running web 1 34533c6729106
```
+## {{% heading "whatsnext" %}}
+
+* [Generate static Pod manifests for control plane components](/docs/reference/setup-tools/kubeadm/implementation-details/#generate-static-pod-manifests-for-control-plane-components)
+* [Generate static Pod manifest for local etcd](/docs/reference/setup-tools/kubeadm/implementation-details/#generate-static-pod-manifest-for-local-etcd)
+* [Debugging Kubernetes nodes with `crictl`](/docs/tasks/debug/debug-cluster/crictl/)
+* [Learn more about `crictl`](https://github.com/kubernetes-sigs/cri-tools).
+* [Map `docker` CLI commands to `crictl`](/docs/reference/tools/map-crictl-dockercli/).
+* [Set up etcd instances as static pods managed by a kubelet](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/)
+
diff --git a/content/en/docs/tasks/debug/_index.md b/content/en/docs/tasks/debug/_index.md
index da024f4af915a..0d990ec949dfd 100644
--- a/content/en/docs/tasks/debug/_index.md
+++ b/content/en/docs/tasks/debug/_index.md
@@ -43,14 +43,32 @@ and command-line interfaces (CLIs), such as [`kubectl`](/docs/reference/kubectl/
## Help! My question isn't covered! I need help now!
-### Stack Overflow
+### Stack Exchange, Stack Overflow, or Server Fault {#stack-exchange}
-Someone else from the community may have already asked a similar question or may
-be able to help with your problem. The Kubernetes team will also monitor
+If you have questions related to *software development* for your containerized app,
+you can ask those on [Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes).
+
+If you have Kubernetes questions related to *cluster management* or *configuration*,
+you can ask those on
+[Server Fault](https://serverfault.com/questions/tagged/kubernetes).
+
+There are also several more specific Stack Exchange network sites which might
+be the right place to ask Kubernetes questions in areas such as
+[DevOps](https://devops.stackexchange.com/questions/tagged/kubernetes),
+[Software Engineering](https://softwareengineering.stackexchange.com/questions/tagged/kubernetes),
+or [InfoSec](https://security.stackexchange.com/questions/tagged/kubernetes).
+
+Someone else from the community may have already asked a similar question or
+may be able to help with your problem.
+
+The Kubernetes team will also monitor
[posts tagged Kubernetes](https://stackoverflow.com/questions/tagged/kubernetes).
-If there aren't any existing questions that help, **please [ensure that your question is on-topic on Stack Overflow](https://stackoverflow.com/help/on-topic)
-and that you read through the guidance on [how to ask a new question](https://stackoverflow.com/help/how-to-ask)**,
-before [asking a new one](https://stackoverflow.com/questions/ask?tags=kubernetes)!
+If there aren't any existing questions that help, **please ensure that your question
+is [on-topic on Stack Overflow](https://stackoverflow.com/help/on-topic),
+[Server Fault](https://serverfault.com/help/on-topic), or the Stack Exchange
+Network site you're asking on**, and read through the guidance on
+[how to ask a new question](https://stackoverflow.com/help/how-to-ask),
+before asking a new one!
### Slack
diff --git a/content/en/docs/tasks/debug/debug-cluster/_index.md b/content/en/docs/tasks/debug/debug-cluster/_index.md
index 29fb9a06ae71e..3278fdfa7d4ce 100644
--- a/content/en/docs/tasks/debug/debug-cluster/_index.md
+++ b/content/en/docs/tasks/debug/debug-cluster/_index.md
@@ -323,6 +323,7 @@ This is an incomplete list of things that could go wrong, and how to adjust your
[monitoring resource usage](/docs/tasks/debug/debug-cluster/resource-usage-monitoring/)
* Use Node Problem Detector to
[monitor node health](/docs/tasks/debug/debug-cluster/monitor-node-health/)
+* Use `kubectl debug node` to [debug Kubernetes nodes](/docs/tasks/debug/debug-cluster/kubectl-node-debug)
* Use `crictl` to [debug Kubernetes nodes](/docs/tasks/debug/debug-cluster/crictl/)
* Get more information about [Kubernetes auditing](/docs/tasks/debug/debug-cluster/audit/)
* Use `telepresence` to [develop and debug services locally](/docs/tasks/debug/debug-cluster/local-debugging/)
diff --git a/content/en/docs/tasks/debug/debug-cluster/kubectl-node-debug.md b/content/en/docs/tasks/debug/debug-cluster/kubectl-node-debug.md
new file mode 100644
index 0000000000000..98d1a7182cd45
--- /dev/null
+++ b/content/en/docs/tasks/debug/debug-cluster/kubectl-node-debug.md
@@ -0,0 +1,109 @@
+---
+title: Debugging Kubernetes Nodes With Kubectl
+content_type: task
+min-kubernetes-server-version: 1.20
+---
+
+
+This page shows how to debug a [node](/docs/concepts/architecture/nodes/)
+running on a Kubernetes cluster using the `kubectl debug` command.
+
+## {{% heading "prerequisites" %}}
+
+
+{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+
+You need to have permission to create Pods and to assign those new Pods to arbitrary nodes.
+You also need to be authorized to create Pods that access filesystems from the host.
+
+
+
+
+## Debugging a Node using `kubectl debug node`
+
+Use the `kubectl debug node` command to deploy a Pod to a Node that you want to troubleshoot.
+This command is helpful in scenarios where you can't access your Node by using an SSH connection.
+When the Pod is created, the Pod opens an interactive shell on the Node.
+To create an interactive shell on a Node named “mynode”, run:
+
+```shell
+kubectl debug node/mynode -it --image=ubuntu
+```
+
+```console
+Creating debugging pod node-debugger-mynode-pdx84 with container debugger on node mynode.
+If you don't see a command prompt, try pressing enter.
+root@mynode:/#
+```
+
+The debug command helps to gather information and troubleshoot issues. Commands
+that you might use include `ip`, `ifconfig`, `nc`, `ping`, and `ps`. You can also
+install other tools, such as `mtr`, `tcpdump`, and `curl`, from the respective package manager.
+
+{{< note >}}
+
+The debug commands may differ based on the image that the debugging pod is using,
+and some of these commands might need to be installed first.
+
+{{< /note >}}
+
+The debugging Pod can access the root filesystem of the Node, mounted at `/host` in the Pod.
+If you run your kubelet in a filesystem namespace,
+the debugging Pod sees the root for that namespace, not for the entire node. For a typical Linux node,
+you can look at the following paths to find relevant logs:
+
+`/host/var/log/kubelet.log`
+: Logs from the `kubelet`, responsible for running containers on the node.
+
+`/host/var/log/kube-proxy.log`
+: Logs from `kube-proxy`, which is responsible for directing traffic to Service endpoints.
+
+`/host/var/log/containerd.log`
+: Logs from the `containerd` process running on the node.
+
+`/host/var/log/syslog`
+: Shows general messages and information regarding the system.
+
+`/host/var/log/kern.log`
+: Shows kernel logs.
+
+When creating a debugging session on a Node, keep in mind that:
+
+* `kubectl debug` automatically generates the name of the new pod, based on
+ the name of the node.
+* The root filesystem of the Node will be mounted at `/host`.
+* Although the container runs in the host IPC, Network, and PID namespaces,
+ the pod isn't privileged. This means that reading some process information might fail
+ because access to that information is restricted to superusers. For example, `chroot /host` will fail.
+  If you need a privileged pod, create it manually (see the sketch below).
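+
+A minimal sketch of such a manually created privileged Pod (the Pod name, image,
+and node name `mynode` are placeholders; adjust them to your environment):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: node-debugger-privileged
+spec:
+  nodeName: mynode            # schedule onto the node you want to debug
+  hostIPC: true
+  hostNetwork: true
+  hostPID: true
+  containers:
+  - name: debugger
+    image: ubuntu
+    command: ["sleep", "infinity"]
+    securityContext:
+      privileged: true
+    volumeMounts:
+    - name: host-root
+      mountPath: /host
+  volumes:
+  - name: host-root
+    hostPath:
+      path: /
+```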
+
+## {{% heading "cleanup" %}}
+
+When you finish using the debugging Pod, delete it:
+
+```shell
+kubectl get pods
+```
+
+```none
+NAME READY STATUS RESTARTS AGE
+node-debugger-mynode-pdx84 0/1 Completed 0 8m1s
+```
+
+```shell
+# Change the pod name accordingly
+kubectl delete pod node-debugger-mynode-pdx84 --now
+```
+
+```none
+pod "node-debugger-mynode-pdx84" deleted
+```
+
+{{< note >}}
+
+The `kubectl debug node` command won't work if the Node is down (disconnected
+from the network, or the kubelet dies and won't restart).
+In that case, see [debugging a down/unreachable node](/docs/tasks/debug/debug-cluster/#example-debugging-a-down-unreachable-node).
+
+{{< /note >}}
\ No newline at end of file
diff --git a/content/en/docs/tasks/debug/debug-cluster/monitor-node-health.md b/content/en/docs/tasks/debug/debug-cluster/monitor-node-health.md
index 34b4e0ed7d5fc..8592ada9c2e20 100644
--- a/content/en/docs/tasks/debug/debug-cluster/monitor-node-health.md
+++ b/content/en/docs/tasks/debug/debug-cluster/monitor-node-health.md
@@ -12,8 +12,8 @@ weight: 20
*Node Problem Detector* is a daemon for monitoring and reporting about a node's health.
You can run Node Problem Detector as a `DaemonSet` or as a standalone daemon.
Node Problem Detector collects information about node problems from various daemons
-and reports these conditions to the API server as [NodeCondition](/docs/concepts/architecture/nodes/#condition)
-and [Event](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#event-v1-core).
+and reports these conditions to the API server as Node [Condition](/docs/concepts/architecture/nodes/#condition)s
+or as [Event](/docs/reference/kubernetes-api/cluster-resources/event-v1)s.
To learn how to install and use Node Problem Detector, see
[Node Problem Detector project documentation](https://github.com/kubernetes/node-problem-detector).
@@ -26,16 +26,13 @@ To learn how to install and use Node Problem Detector, see
## Limitations
-* Node Problem Detector only supports file based kernel log.
- Log tools such as `journald` are not supported.
-
* Node Problem Detector uses the kernel log format for reporting kernel issues.
To learn how to extend the kernel log format, see [Add support for another log format](#support-other-log-format).
## Enabling Node Problem Detector
Some cloud providers enable Node Problem Detector as an {{< glossary_tooltip text="Addon" term_id="addons" >}}.
-You can also enable Node Problem Detector with `kubectl` or by creating an Addon pod.
+You can also enable Node Problem Detector with `kubectl` or by creating an Addon DaemonSet.
### Using kubectl to enable Node Problem Detector {#using-kubectl}
@@ -68,7 +65,7 @@ directory `/etc/kubernetes/addons/node-problem-detector` on a control plane node
## Overwrite the configuration
-The [default configuration](https://github.com/kubernetes/node-problem-detector/tree/v0.1/config)
+The [default configuration](https://github.com/kubernetes/node-problem-detector/tree/v0.8.12/config)
is embedded when building the Docker image of Node Problem Detector.
However, you can use a [`ConfigMap`](/docs/tasks/configure-pod-container/configure-pod-configmap/)
@@ -100,54 +97,59 @@ This approach only applies to a Node Problem Detector started with `kubectl`.
Overwriting a configuration is not supported if a Node Problem Detector runs as a cluster Addon.
The Addon manager does not support `ConfigMap`.
-## Kernel Monitor
+## Problem Daemons
+
+A problem daemon is a sub-daemon of the Node Problem Detector. It monitors specific kinds of node
+problems and reports them to the Node Problem Detector.
+There are several types of supported problem daemons.
-*Kernel Monitor* is a system log monitor daemon supported in the Node Problem Detector.
-Kernel monitor watches the kernel log and detects known kernel issues following predefined rules.
+- A `SystemLogMonitor` type of daemon monitors the system logs and reports problems and metrics
+ according to predefined rules. You can customize the configurations for different log sources
+ such as [filelog](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/kernel-monitor-filelog.json),
+ [kmsg](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/kernel-monitor.json),
+ [kernel](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/kernel-monitor-counter.json),
+ [abrt](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/abrt-adaptor.json),
+ and [systemd](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/systemd-monitor-counter.json).
-The Kernel Monitor matches kernel issues according to a set of predefined rule list in
-[`config/kernel-monitor.json`](https://github.com/kubernetes/node-problem-detector/blob/v0.1/config/kernel-monitor.json). The rule list is extensible. You can expand the rule list by overwriting the
-configuration.
+- A `SystemStatsMonitor` type of daemon collects various health-related system stats as metrics.
+ You can customize its behavior by updating its
+ [configuration file](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/system-stats-monitor.json).
-### Add new NodeConditions
+- A `CustomPluginMonitor` type of daemon invokes and checks various node problems by running
+ user-defined scripts. You can use different custom plugin monitors to monitor different
+ problems and customize the daemon behavior by updating the
+ [configuration file](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/custom-plugin-monitor.json).
-To support a new `NodeCondition`, create a condition definition within the `conditions` field in
-`config/kernel-monitor.json`, for example:
+- A `HealthChecker` type of daemon checks the health of the kubelet and container runtime on a node.
-```json
-{
- "type": "NodeConditionType",
- "reason": "CamelCaseDefaultNodeConditionReason",
- "message": "arbitrary default node condition message"
-}
-```
+### Adding support for other log formats {#support-other-log-format}
-### Detect new problems
+The system log monitor currently supports file-based logs, journald, and kmsg.
+Additional sources can be added by implementing a new
+[log watcher](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/pkg/systemlogmonitor/logwatchers/types/log_watcher.go).
-To detect new problems, you can extend the `rules` field in `config/kernel-monitor.json`
-with a new rule definition:
+### Adding custom plugin monitors
-```json
-{
- "type": "temporary/permanent",
- "condition": "NodeConditionOfPermanentIssue",
- "reason": "CamelCaseShortReason",
- "message": "regexp matching the issue in the kernel log"
-}
-```
+You can extend the Node Problem Detector to execute any monitor scripts written in any language by
+developing a custom plugin. The monitor scripts must conform to the plugin protocol in exit code
+and standard output. For more information, please refer to the
+[plugin interface proposal](https://docs.google.com/document/d/1jK_5YloSYtboj-DtfjmYKxfNnUxCAvohLnsH5aGCAYQ/edit#).
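+
+A minimal sketch of such a monitor script, assuming the exit-code convention of the
+custom plugin protocol (0 = healthy, 1 = problem found, 2 = unknown) with a one-line
+message on standard output (the `ntp` service check is only an example):
+
+```bash
+#!/bin/bash
+# Report whether the ntp service is active on this node.
+if systemctl is-active --quiet ntp; then
+  echo "ntp service is running"
+  exit 0
+else
+  echo "ntp service is not running"
+  exit 1
+fi
+```
+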
-### Configure path for the kernel log device {#kernel-log-device-path}
+## Exporter
-Check your kernel log path location in your operating system (OS) distribution.
-The Linux kernel [log device](https://www.kernel.org/doc/Documentation/ABI/testing/dev-kmsg) is usually presented as `/dev/kmsg`. However, the log path location varies by OS distribution.
-The `log` field in `config/kernel-monitor.json` represents the log path inside the container.
-You can configure the `log` field to match the device path as seen by the Node Problem Detector.
+An exporter reports the node problems and/or metrics to certain backends.
+The following exporters are supported:
-### Add support for another log format {#support-other-log-format}
+- **Kubernetes exporter**: this exporter reports node problems to the Kubernetes API server.
+ Temporary problems are reported as Events and permanent problems are reported as Node Conditions.
-Kernel monitor uses the
-[`Translator`](https://github.com/kubernetes/node-problem-detector/blob/v0.1/pkg/kernelmonitor/translator/translator.go) plugin to translate the internal data structure of the kernel log.
-You can implement a new translator for a new log format.
+- **Prometheus exporter**: this exporter reports node problems and metrics locally as Prometheus
+ (or OpenMetrics) metrics. You can specify the IP address and port for the exporter using command
+  line arguments (see the sketch after this list).
+
+- **Stackdriver exporter**: this exporter reports node problems and metrics to the Stackdriver
+ Monitoring API. The exporting behavior can be customized using a
+ [configuration file](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/exporter/stackdriver-exporter.json).
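+
+A minimal sketch of starting the daemon with its Prometheus exporter listening on
+all interfaces (flag names as used by the upstream node-problem-detector binary;
+the port shown is its usual default):
+
+```shell
+node-problem-detector --prometheus-address=0.0.0.0 --prometheus-port=20257
+```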
@@ -160,4 +162,5 @@ Usually this is fine, because:
* The kernel log grows relatively slowly.
* A resource limit is set for the Node Problem Detector.
* Even under high load, the resource usage is acceptable. For more information, see the Node Problem Detector
-[benchmark result](https://github.com/kubernetes/node-problem-detector/issues/2#issuecomment-220255629).
+ [benchmark result](https://github.com/kubernetes/node-problem-detector/issues/2#issuecomment-220255629).
+
diff --git a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md
index 99fd7f4823df0..f472a94962616 100644
--- a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md
+++ b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md
@@ -78,8 +78,8 @@ Removing an old version:
If this occurs, switch back to using `served:true` on the old version, migrate the
remaining clients to the new version and repeat this step.
1. Ensure the [upgrade of existing objects to the new stored version](#upgrade-existing-objects-to-a-new-stored-version) step has been completed.
- 1. Verify that the `storage` is set to `true` for the new version in the `spec.versions` list in the CustomResourceDefinition.
- 1. Verify that the old version is no longer listed in the CustomResourceDefinition `status.storedVersions`.
+ 1. Verify that the `storage` is set to `true` for the new version in the `spec.versions` list in the CustomResourceDefinition.
+ 1. Verify that the old version is no longer listed in the CustomResourceDefinition `status.storedVersions`.
1. Remove the old version from the CustomResourceDefinition `spec.versions` list.
1. Drop conversion support for the old version in conversion webhooks.
@@ -356,7 +356,7 @@ spec:
### Version removal
-An older API version cannot be dropped from a CustomResourceDefinition manifest until existing persisted data has been migrated to the newer API version for all clusters that served the older version of the custom resource, and the old version is removed from the `status.storedVersions` of the CustomResourceDefinition.
+An older API version cannot be dropped from a CustomResourceDefinition manifest until existing stored data has been migrated to the newer API version for all clusters that served the older version of the custom resource, and the old version is removed from the `status.storedVersions` of the CustomResourceDefinition.
```yaml
apiVersion: apiextensions.k8s.io/v1
@@ -1021,18 +1021,29 @@ Example of a response from a webhook indicating a conversion request failed, wit
## Writing, reading, and updating versioned CustomResourceDefinition objects
-When an object is written, it is persisted at the version designated as the
+When an object is written, it is stored at the version designated as the
storage version at the time of the write. If the storage version changes,
existing objects are never converted automatically. However, newly-created
or updated objects are written at the new storage version. It is possible for an
object to have been written at a version that is no longer served.
-When you read an object, you specify the version as part of the path. If you
-specify a version that is different from the object's persisted version,
-Kubernetes returns the object to you at the version you requested, but the
-persisted object is neither changed on disk, nor converted in any way
-(other than changing the `apiVersion` string) while serving the request.
+When you read an object, you specify the version as part of the path.
You can request an object at any version that is currently served.
+If you specify a version that is different from the object's stored version,
+Kubernetes returns the object to you at the version you requested, but the
+stored object is not changed on disk.
+
+What happens to the object that is being returned while serving the read
+request depends on what is specified in the CRD's `spec.conversion`:
+- if the default `strategy` value `None` is specified (see the sketch after this list), the only modifications
+ to the object are changing the `apiVersion` string and perhaps [pruning
+ unknown fields](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#field-pruning)
+ (depending on the configuration). Note that this is unlikely to lead to good
+ results if the schemas differ between the storage and requested version.
+ In particular, you should not use this strategy if the same data is
+ represented in different fields between versions.
+- if [webhook conversion](#webhook-conversion) is specified, then this
+ mechanism controls the conversion.
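+
+A minimal sketch of the relevant part of a CustomResourceDefinition that uses the
+default, no-conversion strategy (the group and names are hypothetical):
+
+```yaml
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: crontabs.example.com
+spec:
+  group: example.com
+  conversion:
+    strategy: None   # this is the default if spec.conversion is omitted
+  # versions, names, and scope omitted for brevity
+```
+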
If you update an existing object, it is rewritten at the version that is
currently the storage version. This is the only way that objects can change from
@@ -1040,23 +1051,24 @@ one version to another.
To illustrate this, consider the following hypothetical series of events:
-1. The storage version is `v1beta1`. You create an object. It is persisted in
- storage at version `v1beta1`
-2. You add version `v1` to your CustomResourceDefinition and designate it as
- the storage version.
-3. You read your object at version `v1beta1`, then you read the object again at
- version `v1`. Both returned objects are identical except for the apiVersion
- field.
-4. You create a new object. It is persisted in storage at version `v1`. You now
- have two objects, one of which is at `v1beta1`, and the other of which is at
- `v1`.
-5. You update the first object. It is now persisted at version `v1` since that
- is the current storage version.
+1. The storage version is `v1beta1`. You create an object. It is stored at version `v1beta1`
+2. You add version `v1` to your CustomResourceDefinition and designate it as
+ the storage version. Here the schemas for `v1` and `v1beta1` are identical,
+ which is typically the case when promoting an API to stable in the
+ Kubernetes ecosystem.
+3. You read your object at version `v1beta1`, then you read the object again at
+ version `v1`. Both returned objects are identical except for the apiVersion
+ field.
+4. You create a new object. It is stored at version `v1`. You now
+ have two objects, one of which is at `v1beta1`, and the other of which is at
+ `v1`.
+5. You update the first object. It is now stored at version `v1` since that
+ is the current storage version.
### Previous storage versions
The API server records each version which has ever been marked as the storage
-version in the status field `storedVersions`. Objects may have been persisted
+version in the status field `storedVersions`. Objects may have been stored
at any version that has ever been designated as a storage version. No objects
can exist in storage at a version that has never been a storage version.
@@ -1067,19 +1079,19 @@ procedure.
*Option 1:* Use the Storage Version Migrator
-1. Run the [storage Version migrator](https://github.com/kubernetes-sigs/kube-storage-version-migrator)
-2. Remove the old version from the CustomResourceDefinition `status.storedVersions` field.
+1. Run the [Storage Version Migrator](https://github.com/kubernetes-sigs/kube-storage-version-migrator)
+2. Remove the old version from the CustomResourceDefinition `status.storedVersions` field.
*Option 2:* Manually upgrade the existing objects to a new stored version
The following is an example procedure to upgrade from `v1beta1` to `v1`.
-1. Set `v1` as the storage in the CustomResourceDefinition file and apply it
- using kubectl. The `storedVersions` is now `v1beta1, v1`.
-2. Write an upgrade procedure to list all existing objects and write them with
- the same content. This forces the backend to write objects in the current
- storage version, which is `v1`.
-3. Remove `v1beta1` from the CustomResourceDefinition `status.storedVersions` field.
+1. Set `v1` as the storage version in the CustomResourceDefinition file and apply it
+ using kubectl. The `storedVersions` is now `v1beta1, v1`.
+2. Write an upgrade procedure to list all existing objects and write them with
+ the same content. This forces the backend to write objects in the current
+ storage version, which is `v1`.
+3. Remove `v1beta1` from the CustomResourceDefinition `status.storedVersions` field.
{{< note >}}
The flag `--subresource` is used with the kubectl get, patch, edit, and replace commands to
diff --git a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md
index 5d3a861911c3b..729d9d4a4f7e2 100644
--- a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md
+++ b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md
@@ -9,18 +9,7 @@ weight: 10
-You can use a {{< glossary_tooltip text="CronJob" term_id="cronjob" >}} to run {{< glossary_tooltip text="Jobs" term_id="job" >}}
-on a time-based schedule.
-These automated jobs run like [Cron](https://en.wikipedia.org/wiki/Cron) tasks on a Linux or UNIX system.
-
-Cron jobs are useful for creating periodic and recurring tasks, like running backups or sending emails.
-Cron jobs can also schedule individual tasks for a specific time, such as if you want to schedule a job for a low activity period.
-
-Cron jobs have limitations and idiosyncrasies.
-For example, in certain circumstances, a single cron job can create multiple jobs.
-Therefore, jobs should be idempotent.
-
-For more limitations, see [CronJobs](/docs/concepts/workloads/controllers/cron-jobs).
+This page shows how to run automated tasks using the Kubernetes {{< glossary_tooltip text="CronJob" term_id="cronjob" >}} object.
## {{% heading "prerequisites" %}}
@@ -123,97 +112,3 @@ kubectl delete cronjob hello
Deleting the cron job removes all the jobs and pods it created and stops it from creating additional jobs.
You can read more about removing jobs in [garbage collection](/docs/concepts/architecture/garbage-collection/).
-
-## Writing a CronJob Spec {#writing-a-cron-job-spec}
-
-As with all other Kubernetes objects, a CronJob must have `apiVersion`, `kind`, and `metadata` fields.
-For more information about working with Kubernetes objects and their
-{{< glossary_tooltip text="manifests" term_id="manifest" >}}, see the
-[managing resources](/docs/concepts/cluster-administration/manage-deployment/),
-and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents.
-
-Each manifest for a CronJob also needs a [`.spec`](/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status) section.
-
-{{< note >}}
-If you modify a CronJob, the changes you make will apply to new jobs that start to run after your modification
-is complete. Jobs (and their Pods) that have already started continue to run without changes.
-That is, the CronJob does _not_ update existing jobs, even if those remain running.
-{{< /note >}}
-
-### Schedule
-
-The `.spec.schedule` is a required field of the `.spec`.
-It takes a [Cron](https://en.wikipedia.org/wiki/Cron) format string, such as `0 * * * *` or `@hourly`,
-as schedule time of its jobs to be created and executed.
-
-The format also includes extended "Vixie cron" step values. As explained in the
-[FreeBSD manual](https://www.freebsd.org/cgi/man.cgi?crontab%285%29):
-
-> Step values can be used in conjunction with ranges. Following a range
-> with `/` specifies skips of the number's value through the
-> range. For example, `0-23/2` can be used in the hours field to specify
-> command execution every other hour (the alternative in the V7 standard is
-> `0,2,4,6,8,10,12,14,16,18,20,22`). Steps are also permitted after an
-> asterisk, so if you want to say "every two hours", just use `*/2`.
-
-{{< note >}}
-A question mark (`?`) in the schedule has the same meaning as an asterisk `*`, that is,
-it stands for any of available value for a given field.
-{{< /note >}}
-
-### Job Template
-
-The `.spec.jobTemplate` is the template for the job, and it is required.
-It has exactly the same schema as a [Job](/docs/concepts/workloads/controllers/job/), except that
-it is nested and does not have an `apiVersion` or `kind`.
-For information about writing a job `.spec`, see [Writing a Job Spec](/docs/concepts/workloads/controllers/job/#writing-a-job-spec).
-
-### Starting Deadline
-
-The `.spec.startingDeadlineSeconds` field is optional.
-It stands for the deadline in seconds for starting the job if it misses its scheduled time for any reason.
-After the deadline, the cron job does not start the job.
-Jobs that do not meet their deadline in this way count as failed jobs.
-If this field is not specified, the jobs have no deadline.
-
-If the `.spec.startingDeadlineSeconds` field is set (not null), the CronJob
-controller measures the time between when a job is expected to be created and
-now. If the difference is higher than that limit, it will skip this execution.
-
-For example, if it is set to `200`, it allows a job to be created for up to 200
-seconds after the actual schedule.
-
-### Concurrency Policy
-
-The `.spec.concurrencyPolicy` field is also optional.
-It specifies how to treat concurrent executions of a job that is created by this cron job.
-The spec may specify only one of the following concurrency policies:
-
-* `Allow` (default): The cron job allows concurrently running jobs
-* `Forbid`: The cron job does not allow concurrent runs; if it is time for a new job run and the
- previous job run hasn't finished yet, the cron job skips the new job run
-* `Replace`: If it is time for a new job run and the previous job run hasn't finished yet, the
- cron job replaces the currently running job run with a new job run
-
-Note that concurrency policy only applies to the jobs created by the same cron job.
-If there are multiple cron jobs, their respective jobs are always allowed to run concurrently.
-
-### Suspend
-
-The `.spec.suspend` field is also optional.
-If it is set to `true`, all subsequent executions are suspended.
-This setting does not apply to already started executions.
-Defaults to false.
-
-{{< caution >}}
-Executions that are suspended during their scheduled time count as missed jobs.
-When `.spec.suspend` changes from `true` to `false` on an existing cron job without a
-[starting deadline](#starting-deadline), the missed jobs are scheduled immediately.
-{{< /caution >}}
-
-### Jobs History Limits
-
-The `.spec.successfulJobsHistoryLimit` and `.spec.failedJobsHistoryLimit` fields are optional.
-These fields specify how many completed and failed jobs should be kept.
-By default, they are set to 3 and 1 respectively. Setting a limit to `0` corresponds to keeping
-none of the corresponding kind of jobs after they finish.
diff --git a/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md b/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md
index 643b57cc3b2f0..2b54f2f409482 100644
--- a/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md
+++ b/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md
@@ -315,7 +315,7 @@ kind: Deployment
metadata:
annotations:
# ...
- # The annotation contains the updated image to nginx 1.11.9,
+ # The annotation contains the updated image to nginx 1.16.1,
# but does not contain the updated replicas to 2
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1","kind":"Deployment",
@@ -513,7 +513,7 @@ kind: Deployment
metadata:
annotations:
# ...
- # The annotation contains the updated image to nginx 1.11.9,
+ # The annotation contains the updated image to nginx 1.16.1,
# but does not contain the updated replicas to 2
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1","kind":"Deployment",
diff --git a/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md b/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md
index 2c9c94c70740a..4bd40719ac201 100644
--- a/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md
+++ b/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md
@@ -434,7 +434,7 @@ kubectl patch deployment patch-demo --patch '{"spec": {"template": {"spec": {"co
The flag `--subresource=[subresource-name]` is used with kubectl commands like get, patch,
edit and replace to fetch and update `status` and `scale` subresources of the resources
(applicable for kubectl version v1.24 or more). This flag is used with all the API resources
-(built-in and CRs) which has `status` or `scale` subresource. Deployment is one of the
+(built-in and CRs) that have a `status` or `scale` subresource. Deployment is one of the
examples which supports these subresources.
Here's a manifest for a Deployment that has two replicas:
diff --git a/content/en/docs/tasks/run-application/access-api-from-pod.md b/content/en/docs/tasks/run-application/access-api-from-pod.md
index d56f624cd561b..41d6ea478e579 100644
--- a/content/en/docs/tasks/run-application/access-api-from-pod.md
+++ b/content/en/docs/tasks/run-application/access-api-from-pod.md
@@ -42,10 +42,18 @@ securely with the API server.
### Directly accessing the REST API
-While running in a Pod, the Kubernetes apiserver is accessible via a Service named
-`kubernetes` in the `default` namespace. Therefore, Pods can use the
-`kubernetes.default.svc` hostname to query the API server. Official client libraries
-do this automatically.
+While running in a Pod, your container can create an HTTPS URL for the Kubernetes API
+server by fetching the `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT_HTTPS`
+environment variables. The API server's in-cluster address is also published to a
+Service named `kubernetes` in the `default` namespace so that pods may reference
+`kubernetes.default.svc` as a DNS name for the local API server.
+
+{{< note >}}
+Kubernetes does not guarantee that the API server has a valid certificate for
+the hostname `kubernetes.default.svc`;
+however, the control plane **is** expected to present a valid certificate for the
+hostname or IP address that `$KUBERNETES_SERVICE_HOST` represents.
+{{< /note >}}
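+
+A minimal sketch of using these environment variables from inside a Pod (it also
+uses the mounted service account credentials described below; the `/version`
+endpoint is only an example):
+
+```shell
+APISERVER="https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT_HTTPS}"
+TOKEN="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
+CACERT="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
+curl --cacert "${CACERT}" --header "Authorization: Bearer ${TOKEN}" "${APISERVER}/version"
+```
+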
The recommended way to authenticate to the API server is with a
[service account](/docs/tasks/configure-pod-container/configure-service-account/)
diff --git a/content/en/docs/tasks/tools/included/_index.md b/content/en/docs/tasks/tools/included/_index.md
index 2da0437b8235a..3313378500fa4 100644
--- a/content/en/docs/tasks/tools/included/_index.md
+++ b/content/en/docs/tasks/tools/included/_index.md
@@ -3,4 +3,8 @@ title: "Tools Included"
description: "Snippets to be included in the main kubectl-installs-*.md pages."
headless: true
toc_hide: true
+_build:
+ list: never
+ render: never
+ publishResources: false
---
\ No newline at end of file
diff --git a/content/en/docs/tasks/tools/included/kubectl-convert-overview.md b/content/en/docs/tasks/tools/included/kubectl-convert-overview.md
index b1799d52ea212..681741645265a 100644
--- a/content/en/docs/tasks/tools/included/kubectl-convert-overview.md
+++ b/content/en/docs/tasks/tools/included/kubectl-convert-overview.md
@@ -4,6 +4,10 @@ description: >-
A kubectl plugin that allows you to convert manifests from one version
of a Kubernetes API to a different version.
headless: true
+_build:
+ list: never
+ render: never
+ publishResources: false
---
A plugin for Kubernetes command-line tool `kubectl`, which allows you to convert manifests between different API
diff --git a/content/en/docs/tasks/tools/included/kubectl-whats-next.md b/content/en/docs/tasks/tools/included/kubectl-whats-next.md
index 4b0da49bbcd97..ea77a0a607975 100644
--- a/content/en/docs/tasks/tools/included/kubectl-whats-next.md
+++ b/content/en/docs/tasks/tools/included/kubectl-whats-next.md
@@ -2,6 +2,10 @@
title: "What's next?"
description: "What's next after installing kubectl."
headless: true
+_build:
+ list: never
+ render: never
+ publishResources: false
---
* [Install Minikube](https://minikube.sigs.k8s.io/docs/start/)
diff --git a/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md b/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md
index 2f4a759e4e613..3c0a77b70e0ba 100644
--- a/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md
+++ b/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md
@@ -2,6 +2,10 @@
title: "bash auto-completion on Linux"
description: "Some optional configuration for bash auto-completion on Linux."
headless: true
+_build:
+ list: never
+ render: never
+ publishResources: false
---
### Introduction
diff --git a/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md b/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md
index 47243c575ac61..04db11388510b 100644
--- a/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md
+++ b/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md
@@ -2,6 +2,10 @@
title: "bash auto-completion on macOS"
description: "Some optional configuration for bash auto-completion on macOS."
headless: true
+_build:
+ list: never
+ render: never
+ publishResources: false
---
### Introduction
@@ -51,8 +55,7 @@ brew install bash-completion@2
As stated in the output of this command, add the following to your `~/.bash_profile` file:
```bash
-export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d"
-[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"
+brew_etc="$(brew --prefix)/etc" && [[ -r "${brew_etc}/profile.d/bash_completion.sh" ]] && . "${brew_etc}/profile.d/bash_completion.sh"
```
Reload your shell and verify that bash-completion v2 is correctly installed with `type _init_completion`.
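+
+For example, after opening a new terminal window:
+
+```bash
+# prints the function definition when bash-completion v2 is loaded
+type _init_completion
+```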
diff --git a/content/en/docs/tasks/tools/included/optional-kubectl-configs-fish.md b/content/en/docs/tasks/tools/included/optional-kubectl-configs-fish.md
index a64d0e184c223..b98460c554ca3 100644
--- a/content/en/docs/tasks/tools/included/optional-kubectl-configs-fish.md
+++ b/content/en/docs/tasks/tools/included/optional-kubectl-configs-fish.md
@@ -2,8 +2,16 @@
title: "fish auto-completion"
description: "Optional configuration to enable fish shell auto-completion."
headless: true
+_build:
+ list: never
+ render: never
+ publishResources: false
---
+{{< note >}}
+Autocomplete for Fish requires kubectl 1.23 or later.
+{{< /note >}}
+
The kubectl completion script for Fish can be generated with the command `kubectl completion fish`. Sourcing the completion script in your shell enables kubectl autocompletion.
To do so in all your shell sessions, add the following line to your `~/.config/fish/config.fish` file:
diff --git a/content/en/docs/tasks/tools/included/optional-kubectl-configs-pwsh.md b/content/en/docs/tasks/tools/included/optional-kubectl-configs-pwsh.md
index 12e5d60c5d29b..66acd343b0c20 100644
--- a/content/en/docs/tasks/tools/included/optional-kubectl-configs-pwsh.md
+++ b/content/en/docs/tasks/tools/included/optional-kubectl-configs-pwsh.md
@@ -2,6 +2,10 @@
title: "PowerShell auto-completion"
description: "Some optional configuration for powershell auto-completion."
headless: true
+_build:
+ list: never
+ render: never
+ publishResources: false
---
The kubectl completion script for PowerShell can be generated with the command `kubectl completion powershell`.
diff --git a/content/en/docs/tasks/tools/included/optional-kubectl-configs-zsh.md b/content/en/docs/tasks/tools/included/optional-kubectl-configs-zsh.md
index 176bdeeeb12eb..dd6c4fd48ff95 100644
--- a/content/en/docs/tasks/tools/included/optional-kubectl-configs-zsh.md
+++ b/content/en/docs/tasks/tools/included/optional-kubectl-configs-zsh.md
@@ -2,6 +2,10 @@
title: "zsh auto-completion"
description: "Some optional configuration for zsh auto-completion."
headless: true
+_build:
+ list: never
+ render: never
+ publishResources: false
---
The kubectl completion script for Zsh can be generated with the command `kubectl completion zsh`. Sourcing the completion script in your shell enables kubectl autocompletion.
diff --git a/content/en/docs/tasks/tools/included/verify-kubectl.md b/content/en/docs/tasks/tools/included/verify-kubectl.md
index fbd92e4cb6795..78246912657e6 100644
--- a/content/en/docs/tasks/tools/included/verify-kubectl.md
+++ b/content/en/docs/tasks/tools/included/verify-kubectl.md
@@ -2,6 +2,10 @@
title: "verify kubectl install"
description: "How to verify kubectl."
headless: true
+_build:
+ list: never
+ render: never
+ publishResources: false
---
In order for kubectl to find and access a Kubernetes cluster, it needs a
diff --git a/content/en/docs/tasks/tools/install-kubectl-windows.md b/content/en/docs/tasks/tools/install-kubectl-windows.md
index 240e3807a7cb1..0e7bc7c53e070 100644
--- a/content/en/docs/tasks/tools/install-kubectl-windows.md
+++ b/content/en/docs/tasks/tools/install-kubectl-windows.md
@@ -56,7 +56,7 @@ The following methods exist for installing kubectl on Windows:
- Using PowerShell to automate the verification using the `-eq` operator to get a `True` or `False` result:
```powershell
- $($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256)
+ $(Get-FileHash -Algorithm SHA256 .\kubectl.exe).Hash -eq $(Get-Content .\kubectl.exe.sha256)
```
1. Append or prepend the `kubectl` binary folder to your `PATH` environment variable.
diff --git a/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive.html b/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive.html
index fd3db09a42996..153137dc91a04 100644
--- a/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive.html
+++ b/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive.html
@@ -17,9 +17,6 @@
-
- To interact with the Terminal, please use the desktop/tablet version
-
The Control Plane is responsible for managing the cluster. The Control Plane coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.
-
A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes control plane. The node should also have tools for handling container operations, such as containerd or Docker. A Kubernetes cluster that handles production traffic should have a minimum of three nodes because if one node goes down, both an etcd member and a control plane instance are lost, and redundancy is compromised. You can mitigate this risk by adding more control plane nodes.
+
A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes control plane. The node should also have tools for handling container operations, such as containerd or Docker. A Kubernetes cluster that handles production traffic should have a minimum of three nodes because if one node goes down, both an etcd member and a control plane instance are lost, and redundancy is compromised. You can mitigate this risk by adding more control plane nodes.
- To interact with the Terminal, please use the desktop/tablet version
-
@@ -37,4 +34,4 @@
-
+
diff --git a/content/en/docs/tutorials/security/apparmor.md b/content/en/docs/tutorials/security/apparmor.md
index 07b9fae3e8a6f..55d632ddb2665 100644
--- a/content/en/docs/tutorials/security/apparmor.md
+++ b/content/en/docs/tutorials/security/apparmor.md
@@ -3,7 +3,7 @@ reviewers:
- stclair
title: Restrict a Container's Access to Resources with AppArmor
content_type: tutorial
-weight: 10
+weight: 30
---
diff --git a/content/en/docs/tutorials/security/cluster-level-pss.md b/content/en/docs/tutorials/security/cluster-level-pss.md
index 1748ebb19c754..07273c3be8ee9 100644
--- a/content/en/docs/tutorials/security/cluster-level-pss.md
+++ b/content/en/docs/tutorials/security/cluster-level-pss.md
@@ -41,56 +41,55 @@ that are most appropriate for your configuration, do the following:
1. Create a cluster with no Pod Security Standards applied:
- ```shell
- kind create cluster --name psa-wo-cluster-pss --image kindest/node:v1.24.0
- ```
+ ```shell
+ kind create cluster --name psa-wo-cluster-pss --image kindest/node:v1.24.0
+ ```
The output is similar to this:
- ```
- Creating cluster "psa-wo-cluster-pss" ...
- ✓ Ensuring node image (kindest/node:v1.24.0) 🖼
- ✓ Preparing nodes 📦
- ✓ Writing configuration 📜
- ✓ Starting control-plane 🕹️
- ✓ Installing CNI 🔌
- ✓ Installing StorageClass 💾
- Set kubectl context to "kind-psa-wo-cluster-pss"
- You can now use your cluster with:
-
- kubectl cluster-info --context kind-psa-wo-cluster-pss
-
- Thanks for using kind! 😊
-
- ```
+ ```
+ Creating cluster "psa-wo-cluster-pss" ...
+ ✓ Ensuring node image (kindest/node:v1.24.0) 🖼
+ ✓ Preparing nodes 📦
+ ✓ Writing configuration 📜
+ ✓ Starting control-plane 🕹️
+ ✓ Installing CNI 🔌
+ ✓ Installing StorageClass 💾
+ Set kubectl context to "kind-psa-wo-cluster-pss"
+ You can now use your cluster with:
+
+ kubectl cluster-info --context kind-psa-wo-cluster-pss
+
+ Thanks for using kind! 😊
+ ```
1. Set the kubectl context to the new cluster:
- ```shell
- kubectl cluster-info --context kind-psa-wo-cluster-pss
- ```
+ ```shell
+ kubectl cluster-info --context kind-psa-wo-cluster-pss
+ ```
The output is similar to this:
- ```
- Kubernetes control plane is running at https://127.0.0.1:61350
+ ```
+ Kubernetes control plane is running at https://127.0.0.1:61350
- CoreDNS is running at https://127.0.0.1:61350/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
-
- To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
- ```
-
-1. Get a list of namespaces in the cluster:
-
- ```shell
- kubectl get ns
- ```
- The output is similar to this:
- ```
- NAME STATUS AGE
- default Active 9m30s
- kube-node-lease Active 9m32s
- kube-public Active 9m32s
- kube-system Active 9m32s
- local-path-storage Active 9m26s
- ```
+ CoreDNS is running at https://127.0.0.1:61350/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
+
+ To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
+ ```
+
+1. Get a list of namespaces in the cluster:
+
+ ```shell
+ kubectl get ns
+ ```
+ The output is similar to this:
+ ```
+ NAME STATUS AGE
+ default Active 9m30s
+ kube-node-lease Active 9m32s
+ kube-public Active 9m32s
+ kube-system Active 9m32s
+ local-path-storage Active 9m26s
+ ```
1. Use `--dry-run=server` to understand what happens when different Pod Security Standards
are applied:
@@ -100,7 +99,7 @@ that are most appropriate for your configuration, do the following:
kubectl label --dry-run=server --overwrite ns --all \
pod-security.kubernetes.io/enforce=privileged
```
- The output is similar to this:
+ The output is similar to this:
```
namespace/default labeled
namespace/kube-node-lease labeled
@@ -113,7 +112,7 @@ that are most appropriate for your configuration, do the following:
kubectl label --dry-run=server --overwrite ns --all \
pod-security.kubernetes.io/enforce=baseline
```
- The output is similar to this:
+ The output is similar to this:
```
namespace/default labeled
namespace/kube-node-lease labeled
@@ -127,11 +126,11 @@ that are most appropriate for your configuration, do the following:
```
3. Restricted
- ```shell
+ ```shell
kubectl label --dry-run=server --overwrite ns --all \
pod-security.kubernetes.io/enforce=restricted
```
- The output is similar to this:
+ The output is similar to this:
```
namespace/default labeled
namespace/kube-node-lease labeled
@@ -179,72 +178,72 @@ following:
1. Create a configuration file that can be consumed by the Pod Security
Admission Controller to implement these Pod Security Standards:
- ```
- mkdir -p /tmp/pss
-   cat <<EOF > /tmp/pss/cluster-level-pss.yaml
- apiVersion: apiserver.config.k8s.io/v1
- kind: AdmissionConfiguration
- plugins:
- - name: PodSecurity
- configuration:
- apiVersion: pod-security.admission.config.k8s.io/v1
- kind: PodSecurityConfiguration
- defaults:
- enforce: "baseline"
- enforce-version: "latest"
- audit: "restricted"
- audit-version: "latest"
- warn: "restricted"
- warn-version: "latest"
- exemptions:
- usernames: []
- runtimeClasses: []
- namespaces: [kube-system]
- EOF
- ```
-
- {{< note >}}
- `pod-security.admission.config.k8s.io/v1` configuration requires v1.25+.
- For v1.23 and v1.24, use [v1beta1](https://v1-24.docs.kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/).
- For v1.22, use [v1alpha1](https://v1-22.docs.kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/).
- {{< /note >}}
+ ```
+ mkdir -p /tmp/pss
+   cat <<EOF > /tmp/pss/cluster-level-pss.yaml
+ apiVersion: apiserver.config.k8s.io/v1
+ kind: AdmissionConfiguration
+ plugins:
+ - name: PodSecurity
+ configuration:
+ apiVersion: pod-security.admission.config.k8s.io/v1
+ kind: PodSecurityConfiguration
+ defaults:
+ enforce: "baseline"
+ enforce-version: "latest"
+ audit: "restricted"
+ audit-version: "latest"
+ warn: "restricted"
+ warn-version: "latest"
+ exemptions:
+ usernames: []
+ runtimeClasses: []
+ namespaces: [kube-system]
+ EOF
+ ```
+
+ {{< note >}}
+ `pod-security.admission.config.k8s.io/v1` configuration requires v1.25+.
+ For v1.23 and v1.24, use [v1beta1](https://v1-24.docs.kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/).
+ For v1.22, use [v1alpha1](https://v1-22.docs.kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/).
+ {{< /note >}}
1. Configure the API server to consume this file during cluster creation:
- ```
-   cat <<EOF > /tmp/pss/cluster-config.yaml
- kind: Cluster
- apiVersion: kind.x-k8s.io/v1alpha4
- nodes:
- - role: control-plane
- kubeadmConfigPatches:
- - |
- kind: ClusterConfiguration
- apiServer:
- extraArgs:
- admission-control-config-file: /etc/config/cluster-level-pss.yaml
- extraVolumes:
- - name: accf
- hostPath: /etc/config
- mountPath: /etc/config
- readOnly: false
- pathType: "DirectoryOrCreate"
- extraMounts:
- - hostPath: /tmp/pss
- containerPath: /etc/config
- # optional: if set, the mount is read-only.
- # default false
- readOnly: false
- # optional: if set, the mount needs SELinux relabeling.
- # default false
- selinuxRelabel: false
- # optional: set propagation mode (None, HostToContainer or Bidirectional)
- # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
- # default None
- propagation: None
- EOF
- ```
+ ```
+   cat <<EOF > /tmp/pss/cluster-config.yaml
+ kind: Cluster
+ apiVersion: kind.x-k8s.io/v1alpha4
+ nodes:
+ - role: control-plane
+ kubeadmConfigPatches:
+ - |
+ kind: ClusterConfiguration
+ apiServer:
+ extraArgs:
+ admission-control-config-file: /etc/config/cluster-level-pss.yaml
+ extraVolumes:
+ - name: accf
+ hostPath: /etc/config
+ mountPath: /etc/config
+ readOnly: false
+ pathType: "DirectoryOrCreate"
+ extraMounts:
+ - hostPath: /tmp/pss
+ containerPath: /etc/config
+ # optional: if set, the mount is read-only.
+ # default false
+ readOnly: false
+ # optional: if set, the mount needs SELinux relabeling.
+ # default false
+ selinuxRelabel: false
+ # optional: set propagation mode (None, HostToContainer or Bidirectional)
+ # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
+ # default None
+ propagation: None
+ EOF
+ ```
{{< note >}}
If you use Docker Desktop with KinD on macOS, you can
@@ -256,56 +255,57 @@ following:
these Pod Security Standards:
```shell
- kind create cluster --name psa-with-cluster-pss --image kindest/node:v1.24.0 --config /tmp/pss/cluster-config.yaml
+ kind create cluster --name psa-with-cluster-pss --image kindest/node:v1.24.0 --config /tmp/pss/cluster-config.yaml
```
The output is similar to this:
```
- Creating cluster "psa-with-cluster-pss" ...
- ✓ Ensuring node image (kindest/node:v1.24.0) 🖼
- ✓ Preparing nodes 📦
- ✓ Writing configuration 📜
- ✓ Starting control-plane 🕹️
- ✓ Installing CNI 🔌
- ✓ Installing StorageClass 💾
- Set kubectl context to "kind-psa-with-cluster-pss"
- You can now use your cluster with:
+ Creating cluster "psa-with-cluster-pss" ...
+ ✓ Ensuring node image (kindest/node:v1.24.0) 🖼
+ ✓ Preparing nodes 📦
+ ✓ Writing configuration 📜
+ ✓ Starting control-plane 🕹️
+ ✓ Installing CNI 🔌
+ ✓ Installing StorageClass 💾
+ Set kubectl context to "kind-psa-with-cluster-pss"
+ You can now use your cluster with:
- kubectl cluster-info --context kind-psa-with-cluster-pss
+ kubectl cluster-info --context kind-psa-with-cluster-pss
- Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
- ```
+ Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
+ ```
-1. Point kubectl to the cluster
+1. Point kubectl to the cluster:
```shell
- kubectl cluster-info --context kind-psa-with-cluster-pss
- ```
+ kubectl cluster-info --context kind-psa-with-cluster-pss
+ ```
The output is similar to this:
- ```
- Kubernetes control plane is running at https://127.0.0.1:63855
- CoreDNS is running at https://127.0.0.1:63855/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
+ ```
+ Kubernetes control plane is running at https://127.0.0.1:63855
+
+ CoreDNS is running at https://127.0.0.1:63855/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
- To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
- ```
+ To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
+ ```
1. Create the following Pod specification for a minimal configuration in the default namespace:
- ```
-   cat <<EOF > /tmp/pss/nginx-pod.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: nginx
- spec:
- containers:
- - image: nginx
- name: nginx
- ports:
- - containerPort: 80
- EOF
- ```
+ ```
+   cat <<EOF > /tmp/pss/nginx-pod.yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: nginx
+ spec:
+ containers:
+ - image: nginx
+ name: nginx
+ ports:
+ - containerPort: 80
+ EOF
+ ```
1. Create the Pod in the cluster:
```shell
- kubectl apply -f /tmp/pss/nginx-pod.yaml
+ kubectl apply -f /tmp/pss/nginx-pod.yaml
```
The output is similar to this:
```
@@ -315,9 +315,14 @@ following:
## Clean up
-Run `kind delete cluster --name psa-with-cluster-pss` and
-`kind delete cluster --name psa-wo-cluster-pss` to delete the clusters you
-created.
+Now delete the clusters that you created above by running the following commands:
+
+```shell
+kind delete cluster --name psa-with-cluster-pss
+```
+```shell
+kind delete cluster --name psa-wo-cluster-pss
+```
## {{% heading "whatsnext" %}}
diff --git a/content/en/docs/tutorials/security/ns-level-pss.md b/content/en/docs/tutorials/security/ns-level-pss.md
index d35df5904a5a9..64aaf64832a56 100644
--- a/content/en/docs/tutorials/security/ns-level-pss.md
+++ b/content/en/docs/tutorials/security/ns-level-pss.md
@@ -1,7 +1,7 @@
---
title: Apply Pod Security Standards at the Namespace Level
content_type: tutorial
-weight: 10
+weight: 20
---
{{% alert title="Note" %}}
@@ -155,7 +155,11 @@ with no warnings.
## Clean up
-Run `kind delete cluster --name psa-ns-level` to delete the cluster created.
+Now delete the cluster that you created above by running the following command:
+
+```shell
+kind delete cluster --name psa-ns-level
+```
## {{% heading "whatsnext" %}}
diff --git a/content/en/docs/tutorials/security/seccomp.md b/content/en/docs/tutorials/security/seccomp.md
index 6187d198f1971..3a445afacfe41 100644
--- a/content/en/docs/tutorials/security/seccomp.md
+++ b/content/en/docs/tutorials/security/seccomp.md
@@ -5,7 +5,7 @@ reviewers:
- saschagrunert
title: Restrict a Container's Syscalls with seccomp
content_type: tutorial
-weight: 20
+weight: 40
min-kubernetes-server-version: v1.22
---
@@ -265,6 +265,44 @@ docker exec -it kind-worker bash -c \
}
```
+## Create Pod that uses the container runtime default seccomp profile
+
+Most container runtimes provide a sane default set of syscalls that are either
+allowed or blocked. You can adopt these defaults for your workload by setting the seccomp
+type in the security context of a pod or container to `RuntimeDefault`.
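+
+If you are curious where this field sits in the Pod API, you can check with `kubectl explain`:
+
+```shell
+kubectl explain pod.spec.securityContext.seccompProfile
+```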
+
+{{< note >}}
+If you have the `SeccompDefault` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+enabled, then Pods use the `RuntimeDefault` seccomp profile whenever
+no other seccomp profile is specified. Otherwise, the default is `Unconfined`.
+{{< /note >}}
+
+Here's a manifest for a Pod that requests the `RuntimeDefault` seccomp profile
+for all its containers:
+
+{{< codenew file="pods/security/seccomp/ga/default-pod.yaml" >}}
+
+Create that Pod:
+```shell
+kubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/default-pod.yaml
+```
+
+```shell
+kubectl get pod default-pod
+```
+
+The Pod should show as having started successfully:
+```
+NAME READY STATUS RESTARTS AGE
+default-pod 1/1 Running 0 20s
+```
+
+Finally, now that you saw that work OK, clean up:
+
+```shell
+kubectl delete pod default-pod --wait --now
+```
+
## Create a Pod with a seccomp profile for syscall auditing
To start off, apply the `audit.json` profile, which will log all syscalls of the
@@ -493,43 +531,6 @@ kubectl delete service fine-pod --wait
kubectl delete pod fine-pod --wait --now
```
-## Create Pod that uses the container runtime default seccomp profile
-
-Most container runtimes provide a sane set of default syscalls that are allowed
-or not. You can adopt these defaults for your workload by setting the seccomp
-type in the security context of a pod or container to `RuntimeDefault`.
-
-{{< note >}}
-If you have the `SeccompDefault` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) enabled, then Pods use the `RuntimeDefault` seccomp profile whenever
-no other seccomp profile is specified. Otherwise, the default is `Unconfined`.
-{{< /note >}}
-
-Here's a manifest for a Pod that requests the `RuntimeDefault` seccomp profile
-for all its containers:
-
-{{< codenew file="pods/security/seccomp/ga/default-pod.yaml" >}}
-
-Create that Pod:
-```shell
-kubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/default-pod.yaml
-```
-
-```shell
-kubectl get pod default-pod
-```
-
-The Pod should be showing as having started successfully:
-```
-NAME READY STATUS RESTARTS AGE
-default-pod 1/1 Running 0 20s
-```
-
-Finally, now that you saw that work OK, clean up:
-
-```shell
-kubectl delete pod default-pod --wait --now
-```
-
## {{% heading "whatsnext" %}}
You can learn more about Linux seccomp:
diff --git a/content/en/docs/tutorials/services/connect-applications-service.md b/content/en/docs/tutorials/services/connect-applications-service.md
index dfa3023063920..be8202cd97891 100644
--- a/content/en/docs/tutorials/services/connect-applications-service.md
+++ b/content/en/docs/tutorials/services/connect-applications-service.md
@@ -15,7 +15,12 @@ weight: 20
Now that you have a continuously running, replicated application you can expose it on a network.
-Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document elaborates on how you can run reliable services on such a networking model.
+Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on.
+Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly
+create links between pods or map container ports to host ports. This means that containers within
+a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other
+without NAT. The rest of this document elaborates on how you can run reliable services on such a
+networking model.
This tutorial uses a simple nginx web server to demonstrate the concept.
@@ -49,16 +54,32 @@ kubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs
[map[ip:10.244.2.5]]
```
-You should be able to ssh into any node in your cluster and use a tool such as `curl` to make queries against both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same `containerPort`, and access them from any other pod or node in your cluster using the assigned IP address for the Service. If you want to arrange for a specific port on the host Node to be forwarded to backing Pods, you can - but the networking model should mean that you do not need to do so.
+You should be able to ssh into any node in your cluster and use a tool such as `curl`
+to make queries against both IPs. Note that the containers are *not* using port 80 on
+the node, nor are there any special NAT rules to route traffic to the pod. This means
+you can run multiple nginx pods on the same node all using the same `containerPort`,
+and access them from any other pod or node in your cluster using the assigned IP
+address for the Service. If you want to arrange for a specific port on the host
+Node to be forwarded to backing Pods, you can - but the networking model should
+mean that you do not need to do so.
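+
+For example, from a node in the cluster (the IP below is taken from the sample output above;
+port 80 is the nginx `containerPort`):
+
+```shell
+curl http://10.244.2.5:80
+```
+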
-
-You can read more about the [Kubernetes Networking Model](/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model) if you're curious.
+You can read more about the
+[Kubernetes Networking Model](/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model)
+if you're curious.
## Creating a Service
-So we have pods running nginx in a flat, cluster wide, address space. In theory, you could talk to these pods directly, but what happens when a node dies? The pods die with it, and the Deployment will create new ones, with different IPs. This is the problem a Service solves.
+So we have pods running nginx in a flat, cluster wide, address space. In theory,
+you could talk to these pods directly, but what happens when a node dies? The pods
+die with it, and the Deployment will create new ones, with different IPs. This is
+the problem a Service solves.
-A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.
+A Kubernetes Service is an abstraction which defines a logical set of Pods running
+somewhere in your cluster, that all provide the same functionality. When created,
+each Service is assigned a unique IP address (also called clusterIP). This address
+is tied to the lifespan of the Service, and will not change while the Service is alive.
+Pods can be configured to talk to the Service, and know that communication to the
+Service will be automatically load-balanced out to some pod that is a member of the Service.
You can create a Service for your 2 nginx replicas with `kubectl expose`:
@@ -112,8 +133,12 @@ Labels: run=my-nginx
Annotations:
Selector: run=my-nginx
Type: ClusterIP
+IP Family Policy: SingleStack
+IP Families: IPv4
IP: 10.0.162.149
+IPs: 10.0.162.149
Port: 80/TCP
+TargetPort: 80/TCP
Endpoints: 10.244.2.5:80,10.244.3.4:80
Session Affinity: None
Events:
@@ -136,10 +161,12 @@ about the [service proxy](/docs/concepts/services-networking/service/#virtual-ip
Kubernetes supports 2 primary modes of finding a Service - environment variables
and DNS. The former works out of the box while the latter requires the
[CoreDNS cluster addon](https://releases.k8s.io/{{< param "fullversion" >}}/cluster/addons/dns/coredns).
+
{{< note >}}
-If the service environment variables are not desired (because possible clashing with expected program ones,
-too many variables to process, only using DNS, etc) you can disable this mode by setting the `enableServiceLinks`
-flag to `false` on the [pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core).
+If the service environment variables are not desired (for example, because they might clash
+with variables that your programs expect, there are too many variables to process, or you only use DNS),
+you can disable this mode by setting the `enableServiceLinks` flag to `false` on
+the [pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core).
{{< /note >}}
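+
+For example, a minimal sketch of a Pod that opts out of the service environment variables
+(the Pod name here is hypothetical):
+
+```shell
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: no-service-links   # hypothetical name used only for this illustration
+spec:
+  enableServiceLinks: false   # do not inject the per-Service environment variables
+  containers:
+  - name: app
+    image: nginx
+EOF
+```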
@@ -193,7 +220,8 @@ KUBERNETES_SERVICE_PORT_HTTPS=443
### DNS
-Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. You can check if it's running on your cluster:
+Kubernetes offers a DNS cluster addon Service that automatically assigns DNS names
+to other Services. You can check if it's running on your cluster:
```shell
kubectl get services kube-dns --namespace=kube-system
@@ -204,7 +232,13 @@ kube-dns ClusterIP 10.0.0.10 53/UDP,53/TCP 8m
```
The rest of this section will assume you have a Service with a long lived IP
-(my-nginx), and a DNS server that has assigned a name to that IP. Here we use the CoreDNS cluster addon (application name `kube-dns`), so you can talk to the Service from any pod in your cluster using standard methods (e.g. `gethostbyname()`). If CoreDNS isn't running, you can enable it referring to the [CoreDNS README](https://github.com/coredns/deployment/tree/master/kubernetes) or [Installing CoreDNS](/docs/tasks/administer-cluster/coredns/#installing-coredns). Let's run another curl application to test this:
+(my-nginx), and a DNS server that has assigned a name to that IP. Here we use
+the CoreDNS cluster addon (application name `kube-dns`), so you can talk to the
+Service from any pod in your cluster using standard methods (e.g. `gethostbyname()`).
+If CoreDNS isn't running, you can enable it by referring to the
+[CoreDNS README](https://github.com/coredns/deployment/tree/master/kubernetes)
+or [Installing CoreDNS](/docs/tasks/administer-cluster/coredns/#installing-coredns).
+Let's run another curl application to test this:
```shell
kubectl run curl --image=radial/busyboxplus:curl -i --tty
@@ -227,13 +261,18 @@ Address 1: 10.0.162.149
## Securing the Service
-Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need:
+Till now we have only accessed the nginx server from within the cluster. Before
+exposing the Service to the internet, you want to make sure the communication
+channel is secure. For this, you will need:
* Self signed certificates for https (unless you already have an identity certificate)
* An nginx server configured to use the certificates
* A [secret](/docs/concepts/configuration/secret/) that makes the certificates accessible to pods
-You can acquire all these from the [nginx https example](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/). This requires having go and make tools installed. If you don't want to install those, then follow the manual steps later. In short:
+You can acquire all these from the
+[nginx https example](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/).
+This requires having go and make tools installed. If you don't want to install those,
+then follow the manual steps later. In short:
```shell
make keys KEY=/tmp/nginx.key CERT=/tmp/nginx.crt
@@ -272,7 +311,9 @@ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -ou
cat /d/tmp/nginx.crt | base64
cat /d/tmp/nginx.key | base64
```
-Use the output from the previous commands to create a yaml file as follows. The base64 encoded value should all be on a single line.
+
+Use the output from the previous commands to create a yaml file as follows.
+The base64 encoded value should all be on a single line.
```yaml
apiVersion: "v1"
@@ -296,7 +337,8 @@ NAME TYPE DATA AGE
nginxsecret kubernetes.io/tls 2 1m
```
-Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):
+Now modify your nginx replicas to start an https server using the certificate
+in the secret, and the Service, to expose both ports (80 and 443):
{{< codenew file="service/networking/nginx-secure-app.yaml" >}}
@@ -327,9 +369,12 @@ node $ curl -k https://10.244.3.5
Welcome to nginx!
```
-Note how we supplied the `-k` parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time,
-so we have to tell curl to ignore the CName mismatch. By creating a Service we linked the CName used in the certificate with the actual DNS name used by pods during Service lookup.
-Let's test this from a pod (the same secret is being reused for simplicity, the pod only needs nginx.crt to access the Service):
+Note how we supplied the `-k` parameter to curl in the last step; this is because
+we don't know anything about the pods running nginx at certificate generation time,
+so we have to tell curl to ignore the CName mismatch. By creating a Service we
+linked the CName used in the certificate with the actual DNS name used by pods
+during Service lookup. Let's test this from a pod (the same secret is being reused
+for simplicity, the pod only needs nginx.crt to access the Service):
{{< codenew file="service/networking/curlpod.yaml" >}}
@@ -391,7 +436,8 @@ $ curl https://: -k
Welcome to nginx!
```
-Let's now recreate the Service to use a cloud load balancer. Change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`:
+Let's now recreate the Service to use a cloud load balancer.
+Change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`:
```shell
kubectl edit svc my-nginx
@@ -407,8 +453,8 @@ curl https:// -k
Welcome to nginx!
```
-The IP address in the `EXTERNAL-IP` column is the one that is available on the public internet. The `CLUSTER-IP` is only available inside your
-cluster/private cloud network.
+The IP address in the `EXTERNAL-IP` column is the one that is available on the public internet.
+The `CLUSTER-IP` is only available inside your cluster/private cloud network.
Note that on AWS, type `LoadBalancer` creates an ELB, which uses a (long)
hostname, not an IP. It's too long to fit in the standard `kubectl get svc`
diff --git a/content/en/examples/admin/sched/my-scheduler.yaml b/content/en/examples/admin/sched/my-scheduler.yaml
index 5addf9e0e6ad3..fa1c65bf9a462 100644
--- a/content/en/examples/admin/sched/my-scheduler.yaml
+++ b/content/en/examples/admin/sched/my-scheduler.yaml
@@ -30,6 +30,20 @@ roleRef:
name: system:volume-scheduler
apiGroup: rbac.authorization.k8s.io
---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ name: my-scheduler-extension-apiserver-authentication-reader
+ namespace: kube-system
+roleRef:
+ kind: Role
+ name: extension-apiserver-authentication-reader
+ apiGroup: rbac.authorization.k8s.io
+subjects:
+- kind: ServiceAccount
+ name: my-scheduler
+ namespace: kube-system
+---
apiVersion: v1
kind: ConfigMap
metadata:
diff --git a/content/en/examples/application/php-apache.yaml b/content/en/examples/application/php-apache.yaml
index d29d2b91593f3..a194dce6f958a 100644
--- a/content/en/examples/application/php-apache.yaml
+++ b/content/en/examples/application/php-apache.yaml
@@ -6,7 +6,6 @@ spec:
selector:
matchLabels:
run: php-apache
- replicas: 1
template:
metadata:
labels:
diff --git a/content/en/examples/application/ssa/nginx-deployment-replicas-only.yaml b/content/en/examples/application/ssa/nginx-deployment-replicas-only.yaml
deleted file mode 100644
index 0848ba0e218d0..0000000000000
--- a/content/en/examples/application/ssa/nginx-deployment-replicas-only.yaml
+++ /dev/null
@@ -1,6 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: nginx-deployment
-spec:
- replicas: 3
diff --git a/content/en/examples/concepts/policy/limit-range/problematic-limit-range.yaml b/content/en/examples/concepts/policy/limit-range/problematic-limit-range.yaml
index 5fbfe632c0a77..ee89cb79faf4f 100644
--- a/content/en/examples/concepts/policy/limit-range/problematic-limit-range.yaml
+++ b/content/en/examples/concepts/policy/limit-range/problematic-limit-range.yaml
@@ -12,3 +12,4 @@ spec:
cpu: "1"
min:
cpu: 100m
+ type: Container
diff --git a/content/en/examples/examples_test.go b/content/en/examples/examples_test.go
index 495d8435884ad..670131237dad9 100644
--- a/content/en/examples/examples_test.go
+++ b/content/en/examples/examples_test.go
@@ -46,6 +46,9 @@ import (
api "k8s.io/kubernetes/pkg/apis/core"
"k8s.io/kubernetes/pkg/apis/core/validation"
+ // "k8s.io/kubernetes/pkg/apis/flowcontrol"
+ // flowcontrol_validation "k8s.io/kubernetes/pkg/apis/flowcontrol/validation"
+
"k8s.io/kubernetes/pkg/apis/networking"
networking_validation "k8s.io/kubernetes/pkg/apis/networking/validation"
@@ -152,9 +155,17 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
AllowDownwardAPIHugePages: true,
AllowInvalidPodDeletionCost: false,
AllowIndivisibleHugePagesValues: true,
- AllowWindowsHostProcessField: true,
AllowExpandedDNSConfig: true,
}
+ netValidationOptions := networking_validation.NetworkPolicyValidationOptions{
+ AllowInvalidLabelValueInSelector: false,
+ }
+ pdbValidationOptions := policy_validation.PodDisruptionBudgetValidationOptions{
+ AllowInvalidLabelValueInSelector: false,
+ }
+ clusterroleValidationOptions := rbac_validation.ClusterRoleValidationOptions{
+ AllowInvalidLabelValueInSelector: false,
+ }
// Enable CustomPodDNS for testing
// feature.DefaultFeatureGate.Set("CustomPodDNS=true")
@@ -245,11 +256,31 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
t.Namespace = api.NamespaceDefault
}
errors = apps_validation.ValidateStatefulSet(t, podValidationOptions)
+ case *apps.DaemonSet:
+ if t.Namespace == "" {
+ t.Namespace = api.NamespaceDefault
+ }
+ errors = apps_validation.ValidateDaemonSet(t, podValidationOptions)
+ case *apps.Deployment:
+ if t.Namespace == "" {
+ t.Namespace = api.NamespaceDefault
+ }
+ errors = apps_validation.ValidateDeployment(t, podValidationOptions)
+ case *apps.ReplicaSet:
+ if t.Namespace == "" {
+ t.Namespace = api.NamespaceDefault
+ }
+ errors = apps_validation.ValidateReplicaSet(t, podValidationOptions)
case *autoscaling.HorizontalPodAutoscaler:
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
}
errors = autoscaling_validation.ValidateHorizontalPodAutoscaler(t)
+ case *batch.CronJob:
+ if t.Namespace == "" {
+ t.Namespace = api.NamespaceDefault
+ }
+ errors = batch_validation.ValidateCronJobCreate(t, podValidationOptions)
case *batch.Job:
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
@@ -261,58 +292,31 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
t.ObjectMeta.Name = "skip-for-good"
}
errors = job.Strategy.Validate(nil, t)
- case *apps.DaemonSet:
- if t.Namespace == "" {
- t.Namespace = api.NamespaceDefault
- }
- errors = apps_validation.ValidateDaemonSet(t, podValidationOptions)
- case *apps.Deployment:
- if t.Namespace == "" {
- t.Namespace = api.NamespaceDefault
- }
- errors = apps_validation.ValidateDeployment(t, podValidationOptions)
+ // case *flowcontrol.FlowSchema:
+ // TODO: This is still failing
+ // errors = flowcontrol_validation.ValidateFlowSchema(t)
case *networking.Ingress:
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
}
errors = networking_validation.ValidateIngressCreate(t)
case *networking.IngressClass:
- /*
- if t.Namespace == "" {
- t.Namespace = api.NamespaceDefault
- }
- gv := schema.GroupVersion{
- Group: networking.GroupName,
- Version: legacyscheme.Scheme.PrioritizedVersionsForGroup(networking.GroupName)[0].Version,
- }
- */
errors = networking_validation.ValidateIngressClass(t)
-
- case *policy.PodSecurityPolicy:
- errors = policy_validation.ValidatePodSecurityPolicy(t)
- case *apps.ReplicaSet:
- if t.Namespace == "" {
- t.Namespace = api.NamespaceDefault
- }
- errors = apps_validation.ValidateReplicaSet(t, podValidationOptions)
- case *batch.CronJob:
- if t.Namespace == "" {
- t.Namespace = api.NamespaceDefault
- }
- errors = batch_validation.ValidateCronJobCreate(t, podValidationOptions)
case *networking.NetworkPolicy:
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
}
- errors = networking_validation.ValidateNetworkPolicy(t)
+ errors = networking_validation.ValidateNetworkPolicy(t, netValidationOptions)
+ case *policy.PodSecurityPolicy:
+ errors = policy_validation.ValidatePodSecurityPolicy(t)
case *policy.PodDisruptionBudget:
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
}
- errors = policy_validation.ValidatePodDisruptionBudget(t)
+ errors = policy_validation.ValidatePodDisruptionBudget(t, pdbValidationOptions)
case *rbac.ClusterRole:
// clusterole does not accept namespace
- errors = rbac_validation.ValidateClusterRole(t)
+ errors = rbac_validation.ValidateClusterRole(t, clusterroleValidationOptions)
case *rbac.ClusterRoleBinding:
// clusterolebinding does not accept namespace
errors = rbac_validation.ValidateClusterRoleBinding(t)
@@ -383,6 +387,14 @@ func TestExampleObjectSchemas(t *testing.T) {
// Please help maintain the alphabeta order in the map
cases := map[string]map[string][]runtime.Object{
+ "access": {
+ "endpoints-aggregated": {&rbac.ClusterRole{}},
+ },
+ "access/certificate-signing-request": {
+ "clusterrole-approve": {&rbac.ClusterRole{}},
+ "clusterrole-create": {&rbac.ClusterRole{}},
+ "clusterrole-sign": {&rbac.ClusterRole{}},
+ },
"admin": {
"namespace-dev": {&api.Namespace{}},
"namespace-prod": {&api.Namespace{}},
@@ -396,6 +408,7 @@ func TestExampleObjectSchemas(t *testing.T) {
"dns-horizontal-autoscaler": {&api.ServiceAccount{}, &rbac.ClusterRole{}, &rbac.ClusterRoleBinding{}, &apps.Deployment{}},
"dnsutils": {&api.Pod{}},
},
+		// TODO: "admin/konnectivity" is not included yet.
"admin/logging": {
"fluentd-sidecar-config": {&api.ConfigMap{}},
"two-files-counter-pod": {&api.Pod{}},
@@ -474,10 +487,6 @@ func TestExampleObjectSchemas(t *testing.T) {
"application/hpa": {
"php-apache": {&autoscaling.HorizontalPodAutoscaler{}},
},
- "application/nginx": {
- "nginx-deployment": {&apps.Deployment{}},
- "nginx-svc": {&api.Service{}},
- },
"application/job": {
"cronjob": {&batch.CronJob{}},
"job-tmpl": {&batch.Job{}},
@@ -492,6 +501,10 @@ func TestExampleObjectSchemas(t *testing.T) {
"redis-pod": {&api.Pod{}},
"redis-service": {&api.Service{}},
},
+ "application/mongodb": {
+ "mongo-deployment": {&apps.Deployment{}},
+ "mongo-service": {&api.Service{}},
+ },
"application/mysql": {
"mysql-configmap": {&api.ConfigMap{}},
"mysql-deployment": {&api.Service{}, &apps.Deployment{}},
@@ -499,6 +512,14 @@ func TestExampleObjectSchemas(t *testing.T) {
"mysql-services": {&api.Service{}, &api.Service{}},
"mysql-statefulset": {&apps.StatefulSet{}},
},
+ "application/nginx": {
+ "nginx-deployment": {&apps.Deployment{}},
+ "nginx-svc": {&api.Service{}},
+ },
+ "application/ssa": {
+ "nginx-deployment": {&apps.Deployment{}},
+ "nginx-deployment-no-replicas": {&apps.Deployment{}},
+ },
"application/web": {
"web": {&api.Service{}, &apps.StatefulSet{}},
"web-parallel": {&api.Service{}, &apps.StatefulSet{}},
@@ -510,9 +531,15 @@ func TestExampleObjectSchemas(t *testing.T) {
"application/zookeeper": {
"zookeeper": {&api.Service{}, &api.Service{}, &policy.PodDisruptionBudget{}, &apps.StatefulSet{}},
},
+ "concepts/policy/limit-range": {
+ "example-conflict-with-limitrange-cpu": {&api.Pod{}},
+ "problematic-limit-range": {&api.LimitRange{}},
+ "example-no-conflict-with-limitrange-cpu": {&api.Pod{}},
+ },
"configmap": {
"configmaps": {&api.ConfigMap{}, &api.ConfigMap{}},
"configmap-multikeys": {&api.ConfigMap{}},
+ "configure-pod": {&api.Pod{}},
},
"controllers": {
"daemonset": {&apps.DaemonSet{}},
@@ -558,7 +585,9 @@ func TestExampleObjectSchemas(t *testing.T) {
"pod-with-affinity-anti-affinity": {&api.Pod{}},
"pod-with-node-affinity": {&api.Pod{}},
"pod-with-pod-affinity": {&api.Pod{}},
+ "pod-with-scheduling-gates": {&api.Pod{}},
"pod-with-toleration": {&api.Pod{}},
+ "pod-without-scheduling-gates": {&api.Pod{}},
"private-reg-pod": {&api.Pod{}},
"share-process-namespace": {&api.Pod{}},
"simple-pod": {&api.Pod{}},
@@ -624,6 +653,11 @@ func TestExampleObjectSchemas(t *testing.T) {
"pv-volume": {&api.PersistentVolume{}},
"redis": {&api.Pod{}},
},
+ "pods/topology-spread-constraints": {
+ "one-constraint": {&api.Pod{}},
+ "one-constraint-with-nodeaffinity": {&api.Pod{}},
+ "two-constraints": {&api.Pod{}},
+ },
"policy": {
"baseline-psp": {&policy.PodSecurityPolicy{}},
"example-psp": {&policy.PodSecurityPolicy{}},
@@ -633,6 +667,19 @@ func TestExampleObjectSchemas(t *testing.T) {
"zookeeper-pod-disruption-budget-maxunavailable": {&policy.PodDisruptionBudget{}},
"zookeeper-pod-disruption-budget-minavailable": {&policy.PodDisruptionBudget{}},
},
+ /* TODO: This doesn't work yet.
+ "priority-and-fairness": {
+ "health-for-strangers": {&flowcontrol.FlowSchema{}},
+ },
+ */
+ "secret/serviceaccount": {
+ "mysecretname": {&api.Secret{}},
+ },
+ "security": {
+ "podsecurity-baseline": {&api.Namespace{}},
+ "podsecurity-privileged": {&api.Namespace{}},
+ "podsecurity-restricted": {&api.Namespace{}},
+ },
"service": {
"nginx-service": {&api.Service{}},
"load-balancer-example": {&apps.Deployment{}},
@@ -664,6 +711,7 @@ func TestExampleObjectSchemas(t *testing.T) {
"name-virtual-host-ingress-no-third-host": {&networking.Ingress{}},
"namespaced-params": {&networking.IngressClass{}},
"networkpolicy": {&networking.NetworkPolicy{}},
+ "networkpolicy-multiport-egress": {&networking.NetworkPolicy{}},
"network-policy-allow-all-egress": {&networking.NetworkPolicy{}},
"network-policy-allow-all-ingress": {&networking.NetworkPolicy{}},
"network-policy-default-deny-egress": {&networking.NetworkPolicy{}},
diff --git a/content/en/examples/priority-and-fairness/health-for-strangers.yaml b/content/en/examples/priority-and-fairness/health-for-strangers.yaml
index c57e2cae37245..5b44c8c987d48 100644
--- a/content/en/examples/priority-and-fairness/health-for-strangers.yaml
+++ b/content/en/examples/priority-and-fairness/health-for-strangers.yaml
@@ -7,14 +7,14 @@ spec:
priorityLevelConfiguration:
name: exempt
rules:
- - nonResourceRules:
- - nonResourceURLs:
- - "/healthz"
- - "/livez"
- - "/readyz"
- verbs:
- - "*"
- subjects:
- - kind: Group
- group:
- name: system:unauthenticated
+ - nonResourceRules:
+ - nonResourceURLs:
+ - "/healthz"
+ - "/livez"
+ - "/readyz"
+ verbs:
+ - "*"
+ subjects:
+ - kind: Group
+ group:
+ name: "system:unauthenticated"
diff --git a/content/en/examples/service/networking/networkpolicy-multiport-egress.yaml b/content/en/examples/service/networking/networkpolicy-multiport-egress.yaml
new file mode 100644
index 0000000000000..f4c914bbec7d0
--- /dev/null
+++ b/content/en/examples/service/networking/networkpolicy-multiport-egress.yaml
@@ -0,0 +1,20 @@
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: multi-port-egress
+ namespace: default
+spec:
+ podSelector:
+ matchLabels:
+ role: db
+ policyTypes:
+ - Egress
+ egress:
+ - to:
+ - ipBlock:
+ cidr: 10.0.0.0/24
+ ports:
+ - protocol: TCP
+ port: 32000
+ endPort: 32768
+
diff --git a/content/en/releases/patch-releases.md b/content/en/releases/patch-releases.md
index d3eead78be039..dc80e19baf225 100644
--- a/content/en/releases/patch-releases.md
+++ b/content/en/releases/patch-releases.md
@@ -78,9 +78,9 @@ releases may also occur in between these.
| Monthly Patch Release | Cherry Pick Deadline | Target date |
| --------------------- | -------------------- | ----------- |
-| December 2022 | 2022-12-02 | 2022-12-08 |
-| January 2023 | 2023-01-13 | 2023-01-18 |
| February 2023 | 2023-02-10 | 2023-02-15 |
+| March 2023 | 2023-03-10 | 2023-03-15 |
+| April 2023 | 2023-04-07 | 2023-04-12 |
## Detailed Release History for Active Branches
diff --git a/content/es/docs/concepts/configuration/configmap.md b/content/es/docs/concepts/configuration/configmap.md
index ce16f99aca605..d3fb00cf9037c 100644
--- a/content/es/docs/concepts/configuration/configmap.md
+++ b/content/es/docs/concepts/configuration/configmap.md
@@ -204,7 +204,7 @@ Cuando un ConfigMap está siendo utilizado en un {{< glossary_tooltip text="volu
El {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} comprueba si el ConfigMap montado está actualizado cada periodo de sincronización.
Sin embargo, el {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} utiliza su caché local para obtener el valor actual del ConfigMap.
El tipo de caché es configurable usando el campo `ConfigMapAndSecretChangeDetectionStrategy` en el
-[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
+[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
Un ConfigMap puede ser propagado por vista (default), ttl-based, o simplemente redirigiendo
todas las consultas directamente a la API.
Como resultado, el retraso total desde el momento que el ConfigMap es actualizado hasta el momento
diff --git a/content/es/docs/concepts/configuration/secret.md b/content/es/docs/concepts/configuration/secret.md
index 969078a67a4e6..1025ebd78519d 100644
--- a/content/es/docs/concepts/configuration/secret.md
+++ b/content/es/docs/concepts/configuration/secret.md
@@ -520,7 +520,7 @@ Cuando se actualiza un Secret que ya se está consumiendo en un volumen, las cla
Kubelet está verificando si el Secret montado esta actualizado en cada sincronización periódica.
Sin embargo, está usando su caché local para obtener el valor actual del Secret.
El tipo de caché es configurable usando el (campo `ConfigMapAndSecretChangeDetectionStrategy` en
-[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go)).
+[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubelet/config/v1beta1/types.go)).
Puede ser propagado por el reloj (default), ttl-based, o simplemente redirigiendo
todas las solicitudes a kube-apiserver directamente.
Como resultado, el retraso total desde el momento en que se actualiza el Secret hasta el momento en que se proyectan las nuevas claves en el Pod puede ser tan largo como el periodo de sincronización de kubelet + retraso de
diff --git a/content/es/docs/concepts/security/overview.md b/content/es/docs/concepts/security/overview.md
index 9bee65b8c6407..d07fa1e46452b 100644
--- a/content/es/docs/concepts/security/overview.md
+++ b/content/es/docs/concepts/security/overview.md
@@ -52,6 +52,7 @@ Proveedor IaaS | Link |
Alibaba Cloud | https://www.alibabacloud.com/trust-center |
Amazon Web Services | https://aws.amazon.com/security/ |
Google Cloud Platform | https://cloud.google.com/security/ |
+Huawei Cloud | https://www.huaweicloud.com/securecenter/overallsafety.html |
IBM Cloud | https://www.ibm.com/cloud/security |
Microsoft Azure | https://docs.microsoft.com/en-us/azure/security/azure-security |
Oracle Cloud Infrastructure | https://www.oracle.com/security/ |
diff --git a/content/es/docs/concepts/storage/storage-capacity.md b/content/es/docs/concepts/storage/storage-capacity.md
index e7292328446fb..2df4481a5bb55 100644
--- a/content/es/docs/concepts/storage/storage-capacity.md
+++ b/content/es/docs/concepts/storage/storage-capacity.md
@@ -46,7 +46,7 @@ En ese caso, el planificador sólo considera los nodos para el Pod que tienen su
Para los volúmenes con el modo de enlace de volumen `Immediate`, el controlador de almacenamiento decide dónde crear el volumen, independientemente de los pods que usarán el volumen.
Luego, el planificador programa los pods en los nodos donde el volumen está disponible después de que se haya creado.
-Para los [volúmenes efímeros de CSI](/docs/concepts/storage/volumes/#csi),
+Para los [volúmenes efímeros de CSI](/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volumes),
la planificación siempre ocurre sin considerar la capacidad de almacenamiento. Esto se basa en la suposición de que este tipo de volumen solo lo utilizan controladores CSI especiales que son locales a un nodo y no necesitan allí recursos importantes.
## Replanificación
diff --git a/content/fr/docs/concepts/architecture/nodes.md b/content/fr/docs/concepts/architecture/nodes.md
index 8fba2050530a9..e64b7c28ae664 100644
--- a/content/fr/docs/concepts/architecture/nodes.md
+++ b/content/fr/docs/concepts/architecture/nodes.md
@@ -13,7 +13,7 @@ Un nœud est une machine de travail dans Kubernetes, connue auparavant sous le n
Un nœud peut être une machine virtuelle ou une machine physique, selon le cluster.
Chaque nœud contient les services nécessaires à l'exécution de [pods](/docs/concepts/workloads/pods/pod/) et est géré par les composants du master.
Les services sur un nœud incluent le [container runtime](/docs/concepts/overview/components/#node-components), kubelet et kube-proxy.
-Consultez la section [Le Nœud Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) dans le document de conception de l'architecture pour plus de détails.
+Consultez la section [Le Nœud Kubernetes](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node) dans le document de conception de l'architecture pour plus de détails.
diff --git a/content/fr/docs/concepts/configuration/secret.md b/content/fr/docs/concepts/configuration/secret.md
index e381b3d531cc9..bad71e79d7c47 100644
--- a/content/fr/docs/concepts/configuration/secret.md
+++ b/content/fr/docs/concepts/configuration/secret.md
@@ -563,7 +563,7 @@ Le programme dans un conteneur est responsable de la lecture des secrets des fic
Lorsqu'un secret déjà consommé dans un volume est mis à jour, les clés projetées sont finalement mises à jour également.
Kubelet vérifie si le secret monté est récent à chaque synchronisation périodique.
Cependant, il utilise son cache local pour obtenir la valeur actuelle du Secret.
-Le type de cache est configurable à l'aide de le champ `ConfigMapAndSecretChangeDetectionStrategy` dans la structure [KubeletConfiguration](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
+Le type de cache est configurable à l'aide de le champ `ConfigMapAndSecretChangeDetectionStrategy` dans la structure [KubeletConfiguration](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
Il peut être soit propagé via watch (par défaut), basé sur ttl, ou simplement redirigé toutes les requêtes vers directement kube-apiserver.
Par conséquent, le délai total entre le moment où le secret est mis à jour et le moment où de nouvelles clés sont projetées sur le pod peut être aussi long que la période de synchronisation du kubelet + le délai de propagation du cache, où le délai de propagation du cache dépend du type de cache choisi (cela équivaut au delai de propagation du watch, ttl du cache, ou bien zéro).
diff --git a/content/fr/docs/concepts/storage/persistent-volumes.md b/content/fr/docs/concepts/storage/persistent-volumes.md
index f4fc315a20f70..ad88ec14caadb 100644
--- a/content/fr/docs/concepts/storage/persistent-volumes.md
+++ b/content/fr/docs/concepts/storage/persistent-volumes.md
@@ -203,7 +203,7 @@ Cependant, le chemin particulier spécifié dans la partie `volumes` du template
### Redimensionnement des PVC
-{{< feature-state for_k8s_version="v1.11" state="beta" >}}
+{{< feature-state for_k8s_version="v1.24" state="stable" >}}
La prise en charge du redimensionnement des PersistentVolumeClaims (PVCs) est désormais activée par défaut.
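A minimal, hedged example of requesting a larger size on an existing claim (`data-pvc` is an illustrative name; the claim's StorageClass must set `allowVolumeExpansion: true`):

```shell
# Raise the requested storage of an existing PVC; the volume is expanded in place.
kubectl patch pvc data-pvc -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
```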
Vous pouvez redimensionner les types de volumes suivants:
diff --git a/content/fr/docs/concepts/workloads/controllers/replicaset.md b/content/fr/docs/concepts/workloads/controllers/replicaset.md
index 820c8bb42d317..3204d352777e5 100644
--- a/content/fr/docs/concepts/workloads/controllers/replicaset.md
+++ b/content/fr/docs/concepts/workloads/controllers/replicaset.md
@@ -258,7 +258,7 @@ curl -X DELETE 'localhost:8080/apis/extensions/v1beta1/namespaces/default/repli
### Supprimer juste un ReplicaSet
-Vous pouvez supprimer un ReplicaSet sans affecter ses pods à l’aide de [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) avec l'option `--cascade=false`.
+Vous pouvez supprimer un ReplicaSet sans affecter ses pods à l’aide de [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) avec l'option `--cascade=orphan`.
Lorsque vous utilisez l'API REST ou la bibliothèque `client-go`, vous devez définir `propagationPolicy` sur `Orphan`.
Par exemple :
```shell
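# Hedged sketch (the page's own example is elided here): "frontend" is an
# illustrative ReplicaSet name; --cascade=orphan needs a recent kubectl (v1.20+).
kubectl delete rs frontend --cascade=orphan
# REST equivalent: set propagationPolicy to Orphan in the DeleteOptions body.
curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
  -H "Content-Type: application/json"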
diff --git a/content/fr/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/fr/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
index 9983bde03c73f..19149f43a4e5e 100644
--- a/content/fr/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
+++ b/content/fr/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
@@ -1,6 +1,6 @@
---
-title: Création d'un Cluster a master unique avec kubeadm
-description: Création d'un Cluster a master unique avec kubeadm
+title: Création d'un Cluster à master unique avec kubeadm
+description: Création d'un Cluster à master unique avec kubeadm
content_type: task
weight: 30
---
@@ -9,7 +9,7 @@ weight: 30
**kubeadm** vous aide à démarrer un cluster Kubernetes minimum,
viable et conforme aux meilleures pratiques. Avec kubeadm, votre cluster
-doit passer les [tests de Conformance Kubernetes](https://kubernetes.io/blog/2017/10/software-conformance-certification).
+doit passer les [tests de Conformité Kubernetes](https://kubernetes.io/blog/2017/10/software-conformance-certification).
Kubeadm prend également en charge d'autres fonctions du cycle de vie, telles que les mises
à niveau, la rétrogradation et la gestion des
[bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/).
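As a hedged illustration of those lifecycle functions (exact flags vary by kubeadm release):

```shell
kubeadm init                                # bootstrap a minimal control plane
kubeadm token create --print-join-command   # manage bootstrap tokens for joining nodes
kubeadm upgrade plan                        # inspect the upgrades kubeadm can apply
```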
@@ -676,7 +676,7 @@ si le master est irrécupérable, votre cluster peut perdre ses données et peut
partir de zéro. L'ajout du support HA (plusieurs serveurs etcd, plusieurs API servers, etc.)
à kubeadm est encore en cours de développement.
- Contournement: régulièrement [sauvegarder etcd](https://coreos.com/etcd/docs/latest/admin_guide.html).
+ Contournement: régulièrement [sauvegarder etcd](https://etcd.io/docs/v3.5/op-guide/recovery/).
le répertoire des données etcd configuré par kubeadm se trouve dans `/var/lib/etcd` sur le master.
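A small, hedged sketch of that workaround; the certificate paths below are the kubeadm defaults and may differ on your cluster:

```shell
# Snapshot etcd on the control-plane node, then copy the file off the node.
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-$(date +%F).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```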
## Diagnostic {#troubleshooting}
diff --git a/content/fr/docs/tasks/configure-pod-container/assign-cpu-resource.md b/content/fr/docs/tasks/configure-pod-container/assign-cpu-resource.md
index 297cbd700ea21..2d9af18838152 100644
--- a/content/fr/docs/tasks/configure-pod-container/assign-cpu-resource.md
+++ b/content/fr/docs/tasks/configure-pod-container/assign-cpu-resource.md
@@ -97,7 +97,7 @@ resources:
cpu: 500m
```
-Utilisez `kubectl top` pour récupérer les métriques du pod :
+Utilisez `kubectl top` pour récupérer les métriques du Pod :
```shell
kubectl top pod cpu-demo --namespace=cpu-example
diff --git a/content/fr/includes/partner-script.js b/content/fr/includes/partner-script.js
deleted file mode 100644
index 78103493f61fc..0000000000000
--- a/content/fr/includes/partner-script.js
+++ /dev/null
@@ -1,1609 +0,0 @@
-;(function () {
- var partners = [
- {
- type: 0,
- name: 'Sysdig',
- logo: 'sys_dig',
- link: 'https://sysdig.com/blog/monitoring-kubernetes-with-sysdig-cloud/',
- blurb: "Sysdig est la société de renseignements sur les conteneurs. Sysdig a créé la seule plate-forme unifiée pour la surveillance, la sécurité et le dépannage dans une architecture compatible avec les microservices. "
- },
- {
- type: 0,
- name: 'Puppet',
- logo: 'puppet',
- link: 'https://puppet.com/blog/announcing-kream-and-new-kubernetes-helm-and-docker-modules',
- blurb: "Nous avons développé des outils et des produits pour que votre adoption de Kubernetes soit aussi efficace que possible, et qu'elle couvre l'ensemble du cycle de vos flux de travail, du développement à la production. Et maintenant, Puppet Pipelines for Containers est votre tableau de bord complet DevOps pour Kubernetes. "
- },
- {
- type: 0,
- name: 'Citrix',
- logo: 'citrix',
- link: 'https://www.citrix.com/networking/microservices.html',
- blurb: "Netscaler CPX offre aux développeurs d'applications toutes les fonctionnalités dont ils ont besoin pour équilibrer leurs microservices et leurs applications conteneurisées avec Kubernetes."
- },
- {
- type: 0,
- name: 'Cockroach Labs',
- logo: 'cockroach_labs',
- link: 'https://www.cockroachlabs.com/blog/running-cockroachdb-on-kubernetes/',
- blurb: 'CockroachDB est une base de données SQL distribuée dont le modèle de réplication et de capacité de survie intégré se combine à Kubernetes pour simplifier réellement les données.'
- },
- {
- type: 2,
- name: 'Weaveworks',
- logo: 'weave_works',
- link: ' https://weave.works/kubernetes',
- blurb: 'Weaveworks permet aux développeurs et aux équipes de développement / développement de connecter, déployer, sécuriser, gérer et dépanner facilement les microservices dans Kubernetes.'
- },
- {
- type: 0,
- name: 'Intel',
- logo: 'intel',
- link: 'https://tectonic.com/press/intel-coreos-collaborate-on-openstack-with-kubernetes.html',
- blurb: "Activer GIFEE (l'infrastructure de Google pour tous les autres), pour exécuter les déploiements OpenStack sur Kubernetes."
- },
- {
- type: 3,
- name: 'Platform9',
- logo: 'platform9',
- link: 'https://platform9.com/products/kubernetes/',
- blurb: "Platform9 est la société open source en tant que service qui exploite tout le bien de Kubernetes et le fournit sous forme de service géré."
- },
- {
- type: 0,
- name: 'Datadog',
- logo: 'datadog',
- link: 'http://docs.datadoghq.com/integrations/kubernetes/',
- blurb: 'Observabilité totale pour les infrastructures et applications dynamiques. Inclut des alertes de précision, des analyses et des intégrations profondes de Kubernetes. '
- },
- {
- type: 0,
- name: 'AppFormix',
- logo: 'appformix',
- link: 'http://www.appformix.com/solutions/appformix-for-kubernetes/',
- blurb: "AppFormix est un service d'optimisation des performances d'infrastructure cloud aidant les entreprises à rationaliser leurs opérations cloud sur n'importe quel cloud Kubernetes. "
- },
- {
- type: 0,
- name: 'Crunchy',
- logo: 'crunchy',
- link: 'http://info.crunchydata.com/blog/advanced-crunchy-containers-for-postgresql',
- blurb: 'Crunchy PostgreSQL Container Suite est un ensemble de conteneurs permettant de gérer PostgreSQL avec des microservices DBA exploitant Kubernetes et Helm.'
- },
- {
- type: 0,
- name: 'Aqua',
- logo: 'aqua',
- link: 'http://blog.aquasec.com/security-best-practices-for-kubernetes-deployment',
- blurb: "Sécurité complète et automatisée pour vos conteneurs s'exécutant sur Kubernetes."
- },
- {
- type: 0,
- name: 'Distelli',
- logo: 'distelli',
- link: 'https://www.distelli.com/',
- blurb: "Pipeline de vos référentiels sources vers vos clusters Kubernetes sur n'importe quel cloud."
- },
- {
- type: 0,
- name: 'Nuage networks',
- logo: 'nuagenetworks',
- link: 'https://github.com/nuagenetworks/nuage-kubernetes',
- blurb: "La plate-forme Nuage SDN fournit une mise en réseau à base de règles entre les pods Kubernetes et les environnements autres que Kubernetes avec une surveillance de la visibilité et de la sécurité."
- },
- {
- type: 0,
- name: 'Sematext',
- logo: 'sematext',
- link: 'https://sematext.com/kubernetes/',
- blurb: 'Journalisation et surveillance: collecte et traitement automatiques des métriques, des événements et des journaux pour les pods à découverte automatique et les noeuds Kubernetes.'
- },
- {
- type: 0,
- name: 'Diamanti',
- logo: 'diamanti',
- link: 'https://www.diamanti.com/products/',
- blurb: "Diamanti déploie des conteneurs à performances garanties en utilisant Kubernetes dans la première appliance hyperconvergée spécialement conçue pour les applications conteneurisées."
- },
- {
- type: 0,
- name: 'Aporeto',
- logo: 'aporeto',
- link: 'https://aporeto.com/trireme',
- blurb: "Aporeto sécurise par défaut les applications natives en nuage sans affecter la vélocité des développeurs et fonctionne à toute échelle, sur n'importe quel nuage."
- },
- {
- type: 2,
- name: 'Giant Swarm',
- logo: 'giantswarm',
- link: 'https://giantswarm.io',
- blurb: "Giant Swarm vous permet de créer et d'utiliser simplement et rapidement des clusters Kubernetes à la demande, sur site ou dans le cloud. Contactez Garm Swarm pour en savoir plus sur le meilleur moyen d'exécuter des applications natives en nuage où que vous soyez."
- },
- {
- type: 3,
- name: 'Giant Swarm',
- logo: 'giantswarm',
- link: 'https://giantswarm.io/product/',
- blurb: "Giant Swarm vous permet de créer et d'utiliser simplement et rapidement des clusters Kubernetes à la demande, sur site ou dans le cloud. Contactez Garm Swarm pour en savoir plus sur le meilleur moyen d'exécuter des applications natives en nuage où que vous soyez."
- },
- {
- type: 3,
- name: 'Hasura',
- logo: 'hasura',
- link: 'https://hasura.io',
- blurb: "Hasura est un PaaS basé sur Kubernetes et un BaaS basé sur Postgres qui accélère le développement d'applications avec des composants prêts à l'emploi."
- },
- {
- type: 3,
- name: 'Mirantis',
- logo: 'mirantis',
- link: 'https://www.mirantis.com/software/kubernetes/',
- blurb: 'Mirantis - Plateforme Cloud Mirantis'
- },
- {
- type: 2,
- name: 'Mirantis',
- logo: 'mirantis',
- link: 'https://content.mirantis.com/Containerizing-OpenStack-on-Kubernetes-Video-Landing-Page.html',
- blurb: "Mirantis construit et gère des clouds privés avec des logiciels open source tels que OpenStack, déployés sous forme de conteneurs orchestrés par Kubernetes."
- },
- {
- type: 0,
- name: 'Kubernetic',
- logo: 'kubernetic',
- link: 'https://kubernetic.com/',
- blurb: 'Kubernetic est un client Kubernetes Desktop qui simplifie et démocratise la gestion de clusters pour DevOps.'
- },
- {
- type: 1,
- name: 'Reactive Ops',
- logo: 'reactive_ops',
- link: 'https://www.reactiveops.com/the-kubernetes-experts/',
- blurb: "ReactiveOps a écrit l'automatisation des meilleures pratiques pour l'infrastructure sous forme de code sur GCP & AWS utilisant Kubernetes, vous aidant ainsi à construire et à maintenir une infrastructure de classe mondiale pour une fraction du prix d'une embauche interne."
- },
- {
- type: 2,
- name: 'Livewyer',
- logo: 'livewyer',
- link: 'https://livewyer.io/services/kubernetes-experts/',
- blurb: "Les experts de Kubernetes qui implémentent des applications intégrées et permettent aux équipes informatiques de tirer le meilleur parti de la technologie conteneurisée."
- },
- {
- type: 2,
- name: 'Samsung SDS',
- logo: 'samsung_sds',
- link: 'http://www.samsungsdsa.com/cloud-infrastructure_kubernetes',
- blurb: "L'équipe Cloud Native Computing de Samsung SDS propose des conseils d'experts couvrant tous les aspects techniques liés à la création de services destinés à un cluster Kubernetes."
- },
- {
- type: 2,
- name: 'Container Solutions',
- logo: 'container_solutions',
- link: 'http://container-solutions.com/resources/kubernetes/',
- blurb: 'Container Solutions est une société de conseil en logiciels haut de gamme qui se concentre sur les infrastructures programmables. Elle offre notre expertise en développement, stratégie et opérations logicielles pour vous aider à innover à grande vitesse et à grande échelle.'
- },
- {
- type: 4,
- name: 'Container Solutions',
- logo: 'container_solutions',
- link: 'http://container-solutions.com/resources/kubernetes/',
- blurb: 'Container Solutions est une société de conseil en logiciels haut de gamme qui se concentre sur les infrastructures programmables. Elle offre notre expertise en développement, stratégie et opérations logicielles pour vous aider à innover à grande vitesse et à grande échelle.'
- },
- {
- type: 2,
- name: 'Jetstack',
- logo: 'jetstack',
- link: 'https://www.jetstack.io/',
- blurb: "Jetstack est une organisation entièrement centrée sur Kubernetes. Ils vous aideront à tirer le meilleur parti de Kubernetes grâce à des services professionnels spécialisés et à des outils open source. Entrez en contact et accélérez votre projet."
- },
- {
- type: 0,
- name: 'Tigera',
- logo: 'tigera',
- link: 'http://docs.projectcalico.org/latest/getting-started/kubernetes/',
- blurb: "Tigera crée des solutions de réseautage en nuage natif hautes performances et basées sur des règles pour Kubernetes."
- },
- {
- type: 1,
- name: 'Harbur',
- logo: 'harbur',
- link: 'https://harbur.io/',
- blurb: "Basé à Barcelone, Harbur est un cabinet de conseil qui aide les entreprises à déployer des solutions d'auto-guérison basées sur les technologies de conteneur"
- },
- {
- type: 0,
- name: 'Spotinst',
- logo: 'spotinst',
- link: 'http://blog.spotinst.com/2016/08/04/elastigroup-kubernetes-minions-steroids/',
- blurb: "Votre Kubernetes à 80% de moins. Exécutez des charges de travail K8s sur des instances ponctuelles avec une disponibilité totale pour économiser au moins 80% de la mise à l'échelle automatique de vos Kubernetes avec une efficacité maximale dans des environnements hétérogènes."
- },
- {
- type: 2,
- name: 'InwinSTACK',
- logo: 'inwinstack',
- link: 'http://www.inwinstack.com/index.php/en/solutions-en/',
- blurb: "Notre service de conteneur exploite l'infrastructure basée sur OpenStack et son moteur Magnum d'orchestration de conteneur pour gérer les clusters Kubernetes."
- },
- {
- type: 4,
- name: 'InwinSTACK',
- logo: 'inwinstack',
- link: 'http://www.inwinstack.com/index.php/en/solutions-en/',
- blurb: "Notre service de conteneur exploite l'infrastructure basée sur OpenStack et son moteur Magnum d'orchestration de conteneur pour gérer les clusters Kubernetes."
- },
- {
- type: 3,
- name: 'InwinSTACK',
- logo: 'inwinstack',
- link: 'https://github.com/inwinstack/kube-ansible',
- blurb: 'inwinSTACK - kube-ansible'
- },
- {
- type: 1,
- name: 'Semantix',
- logo: 'semantix',
- link: 'http://www.semantix.com.br/',
- blurb: "Semantix est une entreprise qui travaille avec l’analyse de données et les systèmes distribués. Kubernetes est utilisé pour orchestrer des services pour nos clients."
- },
- {
- type: 0,
- name: 'ASM Technologies Limited',
- logo: 'asm',
- link: 'http://www.asmtech.com/',
- blurb: "Notre portefeuille de chaînes logistiques technologiques permet à vos logiciels d'être accessibles, viables et disponibles plus efficacement."
- },
- {
- type: 1,
- name: 'InfraCloud Technologies',
- logo: 'infracloud',
- link: 'http://blog.infracloud.io/state-of-kubernetes/',
- blurb: "InfraCloud Technologies est une société de conseil en logiciels qui fournit des services dans les conteneurs, le cloud et le développement."
- },
- {
- type: 0,
- name: 'SignalFx',
- logo: 'signalfx',
- link: 'https://github.com/signalfx/integrations/tree/master/kubernetes',
- blurb: "Obtenez une visibilité en temps réel sur les métriques et les alertes les plus intelligentes pour les architectures actuelles, y compris une intégration poussée avec Kubernetes"
- },
- {
- type: 0,
- name: 'NATS',
- logo: 'nats',
- link: 'https://github.com/pires/kubernetes-nats-cluster',
- blurb: "NATS est un système de messagerie natif en nuage simple, sécurisé et évolutif."
- },
- {
- type: 2,
- name: 'RX-M',
- logo: 'rxm',
- link: 'http://rx-m.com/training/kubernetes-training/',
- blurb: 'Services de formation et de conseil Kubernetes Dev, DevOps et Production neutres sur le marché.'
- },
- {
- type: 4,
- name: 'RX-M',
- logo: 'rxm',
- link: 'http://rx-m.com/training/kubernetes-training/',
- blurb: 'Services de formation et de conseil Kubernetes Dev, DevOps et Production neutres sur le marché.'
- },
- {
- type: 1,
- name: 'Emerging Technology Advisors',
- logo: 'eta',
- link: 'https://www.emergingtechnologyadvisors.com/services/kubernetes.html',
- blurb: "ETA aide les entreprises à concevoir, mettre en œuvre et gérer des applications évolutives utilisant Kubernetes sur un cloud public ou privé."
- },
- {
- type: 0,
- name: 'CloudPlex.io',
- logo: 'cloudplex',
- link: 'http://www.cloudplex.io',
- blurb: "CloudPlex permet aux équipes d'exploitation de déployer, d'orchestrer, de gérer et de surveiller de manière visuelle l'infrastructure, les applications et les services dans un cloud public ou privé."
- },
- {
- type: 2,
- name: 'Kumina',
- logo: 'kumina',
- link: 'https://www.kumina.nl/managed_kubernetes',
- blurb: "Kumina combine la puissance de Kubernetes à plus de 10 ans d'expérience dans les opérations informatiques. Nous créons, construisons et prenons en charge des solutions Kubernetes entièrement gérées sur votre choix d’infrastructure. Nous fournissons également des services de conseil et de formation."
- },
- {
- type: 0,
- name: 'CA Technologies',
- logo: 'ca',
- link: 'https://docops.ca.com/ca-continuous-delivery-director/integrations/en/plug-ins/kubernetes-plug-in',
- blurb: "Le plug-in Kubernetes de CA Continuous Delivery Director orchestre le déploiement d'applications conteneurisées dans un pipeline de version de bout en bout."
- },
- {
- type: 0,
- name: 'CoScale',
- logo: 'coscale',
- link: 'http://www.coscale.com/blog/how-to-monitor-your-kubernetes-cluster',
- blurb: "Surveillance complète de la pile de conteneurs et de microservices orchestrés par Kubernetes. Propulsé par la détection des anomalies pour trouver les problèmes plus rapidement."
- },
- {
- type: 2,
- name: 'Supergiant.io',
- logo: 'supergiant',
- link: 'https://supergiant.io/blog/supergiant-packing-algorithm-unique-save-money',
- blurb: 'Supergiant autoscales hardware pour Kubernetes. Open-source, il facilite le déploiement, la gestion et la montée en charge des applications haute disponibilité, distribuées et à haute disponibilité. '
- },
- {
- type: 0,
- name: 'Avi Networks',
- logo: 'avinetworks',
- link: 'https://kb.avinetworks.com/avi-vantage-openshift-installation-guide/',
- blurb: "La structure des services applicatifs élastiques d'Avis fournit un réseau L4-7 évolutif, riche en fonctionnalités et intégré pour les environnements K8S."
- },
- {
- type: 1,
- name: 'Codecrux web technologies pvt ltd',
- logo: 'codecrux',
- link: 'http://codecrux.com/kubernetes/',
- blurb: "Chez CodeCrux, nous aidons votre organisation à tirer le meilleur parti de Containers et de Kubernetes, quel que soit le stade où vous vous trouvez"
- },
- {
- type: 0,
- name: 'Greenqloud',
- logo: 'qstack',
- link: 'https://www.qstack.com/application-orchestration/',
- blurb: "Qstack fournit des clusters Kubernetes sur site auto-réparables avec une interface utilisateur intuitive pour la gestion de l'infrastructure et de Kubernetes."
- },
- {
- type: 1,
- name: 'StackOverdrive.io',
- logo: 'stackoverdrive',
- link: 'http://www.stackoverdrive.net/kubernetes-consulting/',
- blurb: "StackOverdrive aide les organisations de toutes tailles à tirer parti de Kubernetes pour l’orchestration et la gestion par conteneur."
- },
- {
- type: 0,
- name: 'StackIQ, Inc.',
- logo: 'stackiq',
- link: 'https://www.stackiq.com/kubernetes/',
- blurb: "Avec Stacki et la palette Stacki pour Kubernetes, vous pouvez passer du métal nu aux conteneurs en un seul passage très rapidement et facilement."
- },
- {
- type: 0,
- name: 'Cobe',
- logo: 'cobe',
- link: 'https://cobe.io/product-page/',
- blurb: 'Gérez les clusters Kubernetes avec un modèle direct et interrogeable qui capture toutes les relations et les données de performance dans un contexte entièrement visualisé.'
- },
- {
- type: 0,
- name: 'Datawire',
- logo: 'datawire',
- link: 'http://www.datawire.io',
- blurb: "Les outils open source de Datawires permettent à vos développeurs de microservices d’être extrêmement productifs sur Kubernetes, tout en laissant les opérateurs dormir la nuit."
- },
- {
- type: 0,
- name: 'Mashape, Inc.',
- logo: 'kong',
- link: 'https://getkong.org/install/kubernetes/',
- blurb: "Kong est une couche d'API open source évolutive qui s'exécute devant toute API RESTful et peut être provisionnée à un cluster Kubernetes."
- },
- {
- type: 0,
- name: 'F5 Networks',
- logo: 'f5networks',
- link: 'http://github.com/f5networks',
- blurb: "Nous avons une intégration de LB dans Kubernetes."
- },
- {
- type: 1,
- name: 'Lovable Tech',
- logo: 'lovable',
- link: 'http://lovable.tech/',
- blurb: "Des ingénieurs, des concepteurs et des consultants stratégiques de classe mondiale vous aident à expédier une technologie Web et mobile attrayante."
- },
- {
- type: 0,
- name: 'StackState',
- logo: 'stackstate',
- link: 'http://stackstate.com/platform/container-monitoring',
- blurb: "Analyse opérationnelle entre les équipes et les outils. Inclut la visualisation de la topologie, l'analyse des causes premières et la détection des anomalies pour Kubernetes."
- },
- {
- type: 1,
- name: 'INEXCCO INC',
- logo: 'inexcco',
- link: 'https://www.inexcco.com/',
- blurb: "Fort talent pour DevOps et Cloud travaillant avec plusieurs clients sur des implémentations de kubernetes et de helm."
- },
- {
- type: 2,
- name: 'Bitnami',
- logo: 'bitnami',
- link: 'http://bitnami.com/kubernetes',
- blurb: "Bitnami propose à Kubernetes un catalogue d'applications et de blocs de construction d'applications fiables, à jour et faciles à utiliser."
- },
- {
- type: 1,
- name: 'Nebulaworks',
- logo: 'nebulaworks',
- link: 'http://www.nebulaworks.com/container-platforms',
- blurb: "Nebulaworks fournit des services destinés à aider l'entreprise à adopter des plates-formes de conteneurs modernes et des processus optimisés pour permettre l'innovation à grande échelle."
- },
- {
- type: 1,
- name: 'EASYNUBE',
- logo: 'easynube',
- link: 'http://easynube.co.uk/devopsnube/',
- blurb: "EasyNube fournit l'architecture, la mise en œuvre et la gestion d'applications évolutives à l'aide de Kubernetes et Openshift."
- },
- {
- type: 1,
- name: 'Opcito Technologies',
- logo: 'opcito',
- link: 'http://www.opcito.com/kubernetes/',
- blurb: "Opcito est une société de conseil en logiciels qui utilise Kubernetes pour aider les organisations à concevoir, concevoir et déployer des applications hautement évolutives."
- },
- {
- type: 0,
- name: 'code by Dell EMC',
- logo: 'codedellemc',
- link: 'https://blog.codedellemc.com',
- blurb: "Respecté en tant que chef de file de la persistance du stockage pour les applications conteneurisées. Contribution importante au K8 et à l'écosystème."
- },
- {
- type: 0,
- name: 'Instana',
- logo: 'instana',
- link: 'https://www.instana.com/supported-technologies/',
- blurb: "Instana surveille les performances des applications, de l'infrastructure, des conteneurs et des services déployés sur un cluster Kubernetes."
- },
- {
- type: 0,
- name: 'Netsil',
- logo: 'netsil',
- link: 'https://netsil.com/kubernetes/',
- blurb: "Générez une carte de topologie d'application découverte automatiquement en temps réel! Surveillez les pods et les espaces de noms Kubernetes sans aucune instrumentation de code."
- },
- {
- type: 2,
- name: 'Treasure Data',
- logo: 'treasuredata',
- link: 'https://fluentd.treasuredata.com/kubernetes-logging/',
- blurb: "Fluentd Enterprise apporte une journalisation intelligente et sécurisée à Kubernetes, ainsi que des intégrations avec des serveurs tels que Splunk, Kafka ou AWS S3."
- },
- {
- type: 2,
- name: 'Kenzan',
- logo: 'Kenzan',
- link: 'http://kenzan.com/?ref=kubernetes',
- blurb: "Nous fournissons des services de conseil personnalisés en nous basant sur Kubernetes. Cela concerne le développement de la plate-forme, les pipelines de distribution et le développement d'applications au sein de Kubernetes."
- },
- {
- type: 2,
- name: 'New Context',
- logo: 'newcontext',
- link: 'https://www.newcontext.com/devsecops-infrastructure-automation-orchestration/',
- blurb: "Nouveau contexte construit et optimise les implémentations et les migrations Kubernetes sécurisées, de la conception initiale à l'automatisation et à la gestion de l'infrastructure."
- },
- {
- type: 2,
- name: 'Banzai',
- logo: 'banzai',
- link: 'https://banzaicloud.com/platform/',
- blurb: "Banzai Cloud apporte le cloud natif à l'entreprise et simplifie la transition vers les microservices sur Kubernetes."
- },
- {
- type: 3,
- name: 'Kublr',
- logo: 'kublr',
- link: 'http://kublr.com',
- blurb: "Kublr - Accélérez et contrôlez le déploiement, la mise à l'échelle, la surveillance et la gestion de vos applications conteneurisées."
- },
- {
- type: 1,
- name: 'ControlPlane',
- logo: 'controlplane',
- link: 'https://control-plane.io',
- blurb: "Nous sommes un cabinet de conseil basé à Londres, spécialisé dans la sécurité et la livraison continue. Nous offrons des services de conseil et de formation."
- },
- {
- type: 3,
- name: 'Nirmata',
- logo: 'nirmata',
- link: 'https://www.nirmata.com/',
- blurb: 'Nirmata - Nirmata Managed Kubernetes'
- },
- {
- type: 2,
- name: 'Nirmata',
- logo: 'nirmata',
- link: 'https://www.nirmata.com/',
- blurb: "Nirmata est une plate-forme logicielle qui aide les équipes de DevOps à fournir des solutions de gestion de conteneurs basées sur Kubernetes, de qualité professionnelle et indépendantes des fournisseurs de cloud."
- },
- {
- type: 3,
- name: 'TenxCloud',
- logo: 'tenxcloud',
- link: 'https://tenxcloud.com',
- blurb: 'TenxCloud - Moteur de conteneur TenxCloud (TCE)'
- },
- {
- type: 2,
- name: 'TenxCloud',
- logo: 'tenxcloud',
- link: 'https://www.tenxcloud.com/',
- blurb: "Fondé en octobre 2014, TenxCloud est l'un des principaux fournisseurs de services d'informatique en nuage de conteneurs en Chine, couvrant notamment la plate-forme cloud PaaS pour conteneurs, la gestion de micro-services, DevOps, les tests de développement, AIOps, etc. Fournir des produits et des solutions PaaS de cloud privé aux clients des secteurs de la finance, de l’énergie, des opérateurs, de la fabrication, de l’éducation et autres."
- },
- {
- type: 0,
- name: 'Twistlock',
- logo: 'twistlock',
- link: 'https://www.twistlock.com/',
- blurb: "La sécurité à l'échelle Kubernetes: Twistlock vous permet de déployer sans crainte, en vous assurant que vos images et vos conteneurs sont exempts de vulnérabilités et protégés au moment de l'exécution."
- },
- {
- type: 0,
- name: 'Endocode AG',
- logo: 'endocode',
- link: 'https://endocode.com/kubernetes/',
- blurb: 'Endocode pratique et enseigne la méthode open source. Noyau à cluster - Dev to Ops. Nous proposons des formations, des services et une assistance Kubernetes. '
- },
- {
- type: 2,
- name: 'Accenture',
- logo: 'accenture',
- link: 'https://www.accenture.com/us-en/service-application-containers',
- blurb: 'Architecture, mise en œuvre et exploitation de solutions Kubernetes de classe mondiale pour les clients cloud.'
- },
- {
- type: 1,
- name: 'Biarca',
- logo: 'biarca',
- link: 'http://biarca.io/',
- blurb: "Biarca est un fournisseur de services cloud et des domaines d’intervention clés. Les domaines d’intervention clés de Biarca incluent les services d’adoption en nuage, les services d’infrastructure, les services DevOps et les services d’application. Biarca s'appuie sur Kubernetes pour fournir des solutions conteneurisées."
- },
- {
- type: 2,
- name: 'Claranet',
- logo: 'claranet',
- link: 'http://www.claranet.co.uk/hosting/google-cloud-platform-consulting-managed-services',
- blurb: "Claranet aide les utilisateurs à migrer vers le cloud et à tirer pleinement parti du nouveau monde qu’il offre. Nous consultons, concevons, construisons et gérons de manière proactive l'infrastructure et les outils d'automatisation appropriés pour permettre aux clients d'atteindre cet objectif."
- },
- {
- type: 1,
- name: 'CloudKite',
- logo: 'cloudkite',
- link: 'https://cloudkite.io/',
- blurb: "CloudKite.io aide les entreprises à créer et à maintenir des logiciels hautement automatisés, résilients et extrêmement performants sur Kubernetes."
- },
- {
- type: 2,
- name: 'CloudOps',
- logo: 'CloudOps',
- link: 'https://www.cloudops.com/services/docker-and-kubernetes-workshops/',
- blurb: "CloudOps vous met au contact de l'écosystème K8s via un atelier / laboratoire. Obtenez des K8 prêts à l'emploi dans les nuages de votre choix avec nos services gérés."
- },
- {
- type: 2,
- name: 'Ghostcloud',
- logo: 'ghostcloud',
- link: 'https://www.ghostcloud.cn/ecos-kubernetes',
- blurb: "EcOS est un PaaS / CaaS de niveau entreprise basé sur Docker et Kubernetes, ce qui facilite la configuration, le déploiement et la gestion des applications conteneurisées."
- },
- {
- type: 3,
- name: 'Ghostcloud',
- logo: 'ghostcloud',
- link: 'https://www.ghostcloud.cn/ecos-kubernetes',
- blurb: "EcOS est un PaaS / CaaS de niveau entreprise basé sur Docker et Kubernetes, ce qui facilite la configuration, le déploiement et la gestion des applications conteneurisées."
- },
- {
- type: 2,
- name: 'Contino',
- logo: 'contino',
- link: 'https://www.contino.io/',
- blurb: "Nous aidons les entreprises à adopter DevOps, les conteneurs et le cloud computing. Contino est un cabinet de conseil mondial qui permet aux organisations réglementées d’accélérer l’innovation en adoptant des approches modernes de la fourniture de logiciels."
- },
- {
- type: 2,
- name: 'Booz Allen Hamilton',
- logo: 'boozallenhamilton',
- link: 'https://www.boozallen.com/',
- blurb: "Booz Allen collabore avec des clients des secteurs public et privé pour résoudre leurs problèmes les plus difficiles en combinant conseil, analyse, opérations de mission, technologie, livraison de systèmes, cybersécurité, ingénierie et expertise en innovation."
- },
- {
- type: 1,
- name: 'BigBinary',
- logo: 'bigbinary',
- link: 'http://blog.bigbinary.com/categories/Kubernetes',
- blurb: "Fournisseur de solutions numériques pour les clients fédéraux et commerciaux, comprenant DevSecOps, des plates-formes cloud, une stratégie de transformation, des solutions cognitives et l'UX."
- },
- {
- type: 0,
- name: 'CloudPerceptions',
- logo: 'cloudperceptions',
- link: 'https://www.meetup.com/Triangle-Kubernetes-Meetup/files/',
- blurb: "Solution de sécurité des conteneurs pour les petites et moyennes entreprises qui envisagent d'exécuter Kubernetes sur une infrastructure partagée."
- },
- {
- type: 2,
- name: 'Creationline, Inc.',
- logo: 'creationline',
- link: 'https://www.creationline.com/ci',
- blurb: 'Solution totale pour la gestion des ressources informatiques par conteneur.'
- },
- {
- type: 0,
- name: 'DataCore Software',
- logo: 'datacore',
- link: 'https://www.datacore.com/solutions/virtualization/containerization',
- blurb: "DataCore fournit à Kubernetes un stockage de blocs universel hautement disponible et hautement performant, ce qui améliore radicalement la vitesse de déploiement."
- },
- {
- type: 0,
- name: 'Elastifile',
- logo: 'elastifile',
- link: 'https://www.elastifile.com/stateful-containers',
- blurb: "La structure de données multi-cloud d’Elastifile offre un stockage persistant défini par logiciel et hautement évolutif, conçu pour le logiciel Kubernetes."
- },
- {
- type: 0,
- name: 'GitLab',
- logo: 'gitlab',
- link: 'https://about.gitlab.com/2016/11/14/idea-to-production/',
- blurb: "Avec GitLab et Kubernetes, vous pouvez déployer un pipeline CI / CD complet avec plusieurs environnements, des déploiements automatiques et une surveillance automatique."
- },
- {
- type: 0,
- name: 'Gravitational, Inc.',
- logo: 'gravitational',
- link: 'https://gravitational.com/telekube/',
- blurb: "Telekube associe Kubernetes à Teleport, notre serveur SSH moderne, afin que les opérateurs puissent gérer à distance une multitude de déploiements d'applications K8."
- },
- {
- type: 0,
- name: 'Hitachi Data Systems',
- logo: 'hitachi',
- link: 'https://www.hds.com/en-us/products-solutions/application-solutions/unified-compute-platform-with-kubernetes-orchestration.html',
- blurb: "Créez les applications dont vous avez besoin pour conduire votre entreprise - DÉVELOPPEZ ET DÉPLOYEZ DES APPLICATIONS PLUS RAPIDEMENT ET PLUS FIABLES."
- },
- {
- type: 1,
- name: 'Infosys Technologies',
- logo: 'infosys',
- link: 'https://www.infosys.com',
- blurb: "Monolithique à microservices sur openshift est une offre que nous développons dans le cadre de la pratique open source."
- },
- {
- type: 0,
- name: 'JFrog',
- logo: 'jfrog',
- link: 'https://www.jfrog.com/use-cases/12584/',
- blurb: "Vous pouvez utiliser Artifactory pour stocker et gérer toutes les images de conteneur de votre application, les déployer sur Kubernetes et configurer un pipeline de construction, de test et de déploiement à l'aide de Jenkins et d'Artifactory. Une fois qu'une image est prête à être déployée, Artifactory peut déclencher un déploiement de mise à jour propagée dans un cluster Kubernetes sans interruption - automatiquement!"
- },
- {
- type: 0,
- name: 'Navops by Univa',
- logo: 'navops',
- link: 'https://www.navops.io',
- blurb: "Navops est une suite de produits qui permet aux entreprises de tirer pleinement parti de Kubernetes et permet de gérer rapidement et efficacement des conteneurs à grande échelle."
- },
- {
- type: 0,
- name: 'NeuVector',
- logo: 'neuvector',
- link: 'http://neuvector.com/solutions-for-kubernetes-security/',
- blurb: "NeuVector fournit une solution de sécurité réseau intelligente pour les conteneurs et les applications, intégrée et optimisée pour Kubernetes."
- },
- {
- type: 1,
- name: 'OpsZero',
- logo: 'opszero',
- link: 'https://www.opszero.com/kubernetes.html',
- blurb: 'opsZero fournit DevOps pour les startups. Nous construisons et entretenons votre infrastructure Kubernetes et Cloud pour accélérer votre cycle de publication. '
- },
- {
- type: 1,
- name: 'Shiwaforce.com Ltd.',
- logo: 'shiwaforce',
- link: 'https://www.shiwaforce.com/en/',
- blurb: "Shiwaforce.com est le partenaire agile de la transformation numérique. Nos solutions suivent les changements de l'entreprise rapidement, facilement et à moindre coût."
- },
- {
- type: 1,
- name: 'SoftServe',
- logo: 'softserve',
- link: 'https://www.softserveinc.com/en-us/blogs/kubernetes-travis-ci/',
- blurb: "SoftServe permet à ses clients d’adopter des modèles de conception d’applications modernes et de bénéficier de grappes Kubernetes entièrement intégrées, hautement disponibles et économiques, à n’importe quelle échelle."
- },
- {
- type: 1,
- name: 'Solinea',
- logo: 'solinea',
- link: 'https://www.solinea.com/cloud-consulting-services/container-microservices-offerings',
- blurb: "Solinea est un cabinet de conseil en transformation numérique qui permet aux entreprises de créer des solutions innovantes en adoptant l'informatique en nuage native."
- },
- {
- type: 1,
- name: 'Sphere Software, LLC',
- logo: 'spheresoftware',
- link: 'https://sphereinc.com/kubernetes/',
- blurb: "L'équipe d'experts de Sphere Software permet aux clients de concevoir et de mettre en œuvre des applications évolutives à l'aide de Kubernetes dans Google Cloud, AWS et Azure."
- },
- {
- type: 1,
- name: 'Altoros',
- logo: 'altoros',
- link: 'https://www.altoros.com/container-orchestration-tools-enablement.html',
- blurb: "Déploiement et configuration de Kubernetes, Optimisation de solutions existantes, formation des développeurs à l'utilisation de Kubernetes, assistance."
- },
- {
- type: 0,
- name: 'Cloudbase Solutions',
- logo: 'cloudbase',
- link: 'https://cloudbase.it/kubernetes',
- blurb: "Cloudbase Solutions assure l'interopérabilité multi-cloud de Kubernetes pour les déploiements Windows et Linux basés sur des technologies open source."
- },
- {
- type: 0,
- name: 'Codefresh',
- logo: 'codefresh',
- link: 'https://codefresh.io/kubernetes-deploy/',
- blurb: 'Codefresh est une plate-forme complète DevOps conçue pour les conteneurs et Kubernetes. Avec les pipelines CI / CD, la gestion des images et des intégrations profondes dans Kubernetes et Helm. '
- },
- {
- type: 0,
- name: 'NetApp',
- logo: 'netapp',
- link: 'http://netapp.io/2016/12/23/introducing-trident-dynamic-persistent-volume-provisioner-kubernetes/',
- blurb: "Provisionnement dynamique et prise en charge du stockage persistant."
- },
- {
- type: 0,
- name: 'OpenEBS',
- logo: 'OpenEBS',
- link: 'https://openebs.io/',
- blurb: "OpenEBS est un stockage conteneurisé de conteneurs étroitement intégré à Kubernetes et basé sur le stockage en bloc distribué et la conteneurisation du contrôle du stockage. OpenEBS dérive de l’intention des K8 et d’autres codes YAML ou JSON, tels que les SLA de qualité de service par conteneur, les stratégies de réplication et de hiérarchisation, etc. OpenEBS est conforme à l'API EBS."
- },
- {
- type: 3,
- name: 'Google Kubernetes Engine',
- logo: 'google',
- link: 'https://cloud.google.com/kubernetes-engine/',
- blurb: "Google - Moteur Google Kubernetes"
- },
- {
- type: 1,
- name: 'Superorbital',
- logo: 'superorbital',
- link: 'https://superorbit.al/workshops/kubernetes/',
- blurb: "Aider les entreprises à naviguer dans les eaux Cloud Native grâce au conseil et à la formation Kubernetes."
- },
- {
- type: 3,
- name: 'Apprenda',
- logo: 'apprenda',
- link: 'https://apprenda.com/kismatic/',
- blurb: 'Apprenda - Kismatic Enterprise Toolkit (KET)'
- },
- {
- type: 3,
- name: 'Red Hat',
- logo: 'redhat',
- link: 'https://www.openshift.com',
- blurb: "Red Hat - OpenShift Online et OpenShift Container Platform"
- },
- {
- type: 3,
- name: 'Rancher',
- logo: 'rancher',
- link: 'http://rancher.com/kubernetes/',
- blurb: 'Rancher Inc. - Rancher Kubernetes'
- },
- {
- type: 3,
- name: 'Canonical',
- logo: 'canonical',
- link: 'https://www.ubuntu.com/kubernetes',
- blurb: "La distribution canonique de Kubernetes vous permet d’exploiter à la demande des grappes Kubernetes sur n’importe quel infrastructure de cloud public ou privée majeure."
- },
- {
- type: 2,
- name: 'Canonical',
- logo: 'canonical',
- link: 'https://www.ubuntu.com/kubernetes',
- blurb: 'Canonical Ltd. - Distribution canonique de Kubernetes'
- },
- {
- type: 3,
- name: 'Cisco',
- logo: 'cisco',
- link: 'https://www.cisco.com',
- blurb: 'Cisco Systems - Plateforme de conteneur Cisco'
- },
- {
- type: 3,
- name: 'Cloud Foundry',
- logo: 'cff',
- link: 'https://www.cloudfoundry.org/container-runtime/',
- blurb: "Cloud Foundry - Durée d'exécution du conteneur Cloud Foundry"
- },
- {
- type: 3,
- name: 'IBM',
- logo: 'ibm',
- link: 'https://www.ibm.com/cloud/container-service',
- blurb: 'IBM - Service IBM Cloud Kubernetes'
- },
- {
- type: 2,
- name: 'IBM',
- logo: 'ibm',
- link: 'https://www.ibm.com/cloud/container-service/',
- blurb: "Le service de conteneur IBM Cloud combine Docker et Kubernetes pour fournir des outils puissants, des expériences utilisateur intuitives, ainsi qu'une sécurité et une isolation intégrées pour permettre la livraison rapide d'applications tout en tirant parti des services de cloud computing, notamment des capacités cognitives de Watson."
- },
- {
- type: 3,
- name: 'Samsung',
- logo: 'samsung_sds',
- link: 'https://github.com/samsung-cnct/kraken',
- blurb: "Samsung SDS - Kraken"
- },
- {
- type: 3,
- name: 'IBM',
- logo: 'ibm',
- link: 'https://www.ibm.com/cloud-computing/products/ibm-cloud-private/',
- blurb: 'IBM - IBM Cloud Private'
- },
- {
- type: 3,
- name: 'Kinvolk',
- logo: 'kinvolk',
- link: 'https://github.com/kinvolk/kube-spawn',
- blurb: "Kinvolk - cube-spawn"
- },
- {
- type: 3,
- name: 'Heptio',
- logo: 'heptio',
- link: 'https://aws.amazon.com/quickstart/architecture/heptio-kubernetes',
- blurb: 'Heptio - AWS-Quickstart'
- },
- {
- type: 2,
- name: 'Heptio',
- logo: 'heptio',
- link: 'http://heptio.com',
- blurb: "Heptio aide les entreprises de toutes tailles à se rapprocher de la communauté dynamique de Kubernetes."
- },
- {
- type: 3,
- name: 'StackPointCloud',
- logo: 'stackpoint',
- link: 'https://stackpoint.io',
- blurb: 'StackPointCloud - StackPointCloud'
- },
- {
- type: 2,
- name: 'StackPointCloud',
- logo: 'stackpoint',
- link: 'https://stackpoint.io',
- blurb: 'StackPointCloud propose une large gamme de plans de support pour les clusters Kubernetes gérés construits via son plan de contrôle universel pour Kubernetes Anywhere.'
- },
- {
- type: 3,
- name: 'Caicloud',
- logo: 'caicloud',
- link: 'https://caicloud.io/products/compass',
- blurb: 'Caicloud - Compass'
- },
- {
- type: 2,
- name: 'Caicloud',
- logo: 'caicloud',
- link: 'https://caicloud.io/',
- blurb: "Fondée par d'anciens membres de Googlers et les premiers contributeurs de Kubernetes, Caicloud s'appuie sur Kubernetes pour fournir des produits de conteneur qui ont servi avec succès les entreprises Fortune 500, et utilise également Kubernetes comme véhicule pour offrir une expérience d'apprentissage en profondeur ultra-rapide."
- },
- {
- type: 3,
- name: 'Alibaba',
- logo: 'alibaba',
- link: 'https://www.aliyun.com/product/containerservice?spm=5176.8142029.388261.219.3836dbccRpJ5e9',
- blurb: 'Alibaba Cloud - Alibaba Cloud Container Service'
- },
- {
- type: 3,
- name: 'Tencent',
- logo: 'tencent',
- link: 'https://cloud.tencent.com/product/ccs?lang=en',
- blurb: 'Tencent Cloud - Tencent Cloud Container Service'
- },
- {
- type: 3,
- name: 'Huawei',
- logo: 'huawei',
- link: 'http://www.huaweicloud.com/product/cce.html',
- blurb: 'Huawei - Huawei Cloud Container Engine'
- },
- {
- type: 2,
- name: 'Huawei',
- logo: 'huawei',
- link: 'http://developer.huawei.com/ict/en/site-paas',
- blurb: "FusionStage est un produit Platform as a Service de niveau entreprise, dont le cœur est basé sur la technologie de conteneur open source traditionnelle, notamment Kubernetes et Docker."
- },
- {
- type: 3,
- name: 'Google',
- logo: 'google',
- link: 'https://github.com/kubernetes/kubernetes/tree/master/cluster',
- blurb: "Google - kube-up.sh sur Google Compute Engine"
- },
- {
- type: 3,
- name: 'Poseidon',
- logo: 'poseidon',
- link: 'https://typhoon.psdn.io/',
- blurb: 'Poseidon - Typhoon'
- },
- {
- type: 3,
- name: 'Netease',
- logo: 'netease',
- link: 'https://www.163yun.com/product/container-service-dedicated',
- blurb: 'Netease - Netease Container Service Dedicated'
- },
- {
- type: 2,
- name: 'Loodse',
- logo: 'loodse',
- link: 'https://loodse.com',
- blurb: "Loodse propose des formations et des conseils sur Kubernetes, et organise régulièrement des événements liés à l’Europe."
- },
- {
- type: 4,
- name: 'Loodse',
- logo: 'loodse',
- link: 'https://loodse.com',
- blurb: "Loodse propose des formations et des conseils sur Kubernetes, et organise régulièrement des événements liés à l’Europe."
- },
- {
- type: 4,
- name: 'LF Training',
- logo: 'lf-training',
- link: 'https://training.linuxfoundation.org/',
- blurb: "Le programme de formation de la Linux Foundation associe les connaissances de base étendues aux possibilités de mise en réseau dont les participants ont besoin pour réussir dans leur carrière."
- },
- {
- type: 3,
- name: 'Loodse',
- logo: 'loodse',
- link: 'https://loodse.com',
- blurb: 'Loodse - Moteur de conteneur Kubermatic'
- },
- {
- type: 1,
- name: 'LTI',
- logo: 'lti',
- link: 'https://www.lntinfotech.com/',
- blurb: "LTI aide les entreprises à concevoir, développer et prendre en charge des applications natives de cloud évolutives utilisant Docker et Kubernetes pour un cloud privé ou public."
- },
- {
- type: 3,
- name: 'Microsoft',
- logo: 'microsoft',
- link: 'https://github.com/Azure/acs-engine',
- blurb: 'Microsoft - Azure acs-engine'
- },
- {
- type: 3,
- name: 'Microsoft',
- logo: 'microsoft',
- link: 'https://docs.microsoft.com/en-us/azure/aks/',
- blurb: 'Microsoft - Azure Container Service AKS'
- },
- {
- type: 3,
- name: 'Oracle',
- logo: 'oracle',
- link: 'http://www.wercker.com/product',
- blurb: 'Oracle - Oracle Container Engine'
- },
- {
- type: 3,
- name: 'Oracle',
- logo: 'oracle',
- link: 'https://github.com/oracle/terraform-kubernetes-installer',
- blurb: "Oracle - Programme d'installation Oracle Terraform Kubernetes"
- },
- {
- type: 3,
- name: 'Mesosphere',
- logo: 'mesosphere',
- link: 'https://mesosphere.com/kubernetes/',
- blurb: 'Mésosphère - Kubernetes sur DC / OS'
- },
- {
- type: 3,
- name: 'Appscode',
- logo: 'appscode',
- link: 'https://appscode.com/products/cloud-deployment/',
- blurb: 'Appscode - Pharmer'
- },
- {
- type: 3,
- name: 'SAP',
- logo: 'sap',
- link: 'https://cloudplatform.sap.com/index.html',
- blurb: 'SAP - Cloud Platform - Gardener (pas encore publié)'
- },
- {
- type: 3,
- name: 'Oracle',
- logo: 'oracle',
- link: 'https://www.oracle.com/linux/index.html',
- blurb: 'Oracle - Oracle Linux Container Services à utiliser avec Kubernetes'
- },
- {
- type: 3,
- name: 'CoreOS',
- logo: 'coreos',
- link: 'https://github.com/kubernetes-incubator/bootkube',
- blurb: 'CoreOS - bootkube'
- },
- {
- type: 2,
- name: 'CoreOS',
- logo: 'coreos',
- link: 'https://coreos.com/',
- blurb: 'Tectonic est le produit Kubernetes destiné aux entreprises, conçu par CoreOS. Il ajoute des fonctionnalités clés pour vous permettre de gérer, mettre à jour et contrôler les clusters en production. '
- },
- {
- type: 3,
- name: 'Weaveworks',
- logo: 'weave_works',
- link: '/docs/setup/independent/create-cluster-kubeadm/',
- blurb: 'Weaveworks - kubeadm'
- },
- {
- type: 3,
- name: 'Joyent',
- logo: 'joyent',
- link: 'https://github.com/joyent/triton-kubernetes',
- blurb: 'Joyent - Triton Kubernetes'
- },
- {
- type: 3,
- name: 'Wise2c',
- logo: 'wise2c',
- link: 'http://www.wise2c.com/solution',
- blurb: "Technologie Wise2C - WiseCloud"
- },
- {
- type: 2,
- name: 'Wise2c',
- logo: 'wise2c',
- link: 'http://www.wise2c.com',
- blurb: "Utilisation de Kubernetes pour fournir au secteur financier une solution de diffusion continue informatique et de gestion de conteneur de niveau entreprise."
- },
- {
- type: 3,
- name: 'Docker',
- logo: 'docker',
- link: 'https://www.docker.com/enterprise-edition',
- blurb: 'Docker - Docker Enterprise Edition'
- },
- {
- type: 3,
- name: 'Daocloud',
- logo: 'daocloud',
- link: 'http://www.daocloud.io/dce',
- blurb: 'DaoCloud - DaoCloud Enterprise'
- },
- {
- type: 2,
- name: 'Daocloud',
- logo: 'daocloud',
- link: 'http://www.daocloud.io/dce',
- blurb: "Nous fournissons une plate-forme d’application native en nuage de niveau entreprise prenant en charge Kubernetes et Docker Swarm."
- },
- {
- type: 4,
- name: 'Daocloud',
- logo: 'daocloud',
- link: 'http://www.daocloud.io/dce',
- blurb: "Nous fournissons une plate-forme d’application native en nuage de niveau entreprise prenant en charge Kubernetes et Docker Swarm."
- },
- {
- type: 3,
- name: 'SUSE',
- logo: 'suse',
- link: 'https://www.suse.com/products/caas-platform/',
- blurb: 'SUSE - Plateforme SUSE CaaS (conteneur en tant que service)'
- },
- {
- type: 3,
- name: 'Pivotal',
- logo: 'pivotal',
- link: 'https://cloud.vmware.com/pivotal-container-service',
- blurb: 'Pivotal / VMware - Service de conteneur Pivotal (PKS)'
- },
- {
- type: 3,
- name: 'VMware',
- logo: 'vmware',
- link: 'https://cloud.vmware.com/pivotal-container-service',
- blurb: 'Pivotal / VMware - Service de conteneur Pivotal (PKS)'
- },
- {
- type: 3,
- name: 'Alauda',
- logo: 'alauda',
- link: 'http://www.alauda.cn/product/detail/id/68.html',
- blurb: 'Alauda - Alauda EE'
- },
- {
- type: 4,
- name: 'Alauda',
- logo: 'alauda',
- link: 'http://www.alauda.cn/product/detail/id/68.html',
- blurb: "Alauda fournit aux offres Kubernetes-Centric Enterprise Platform-as-a-Service un objectif précis: fournir des fonctionnalités Cloud Native et les meilleures pratiques DevOps aux clients professionnels de tous les secteurs en Chine."
- },
- {
- type: 2,
- name: 'Alauda',
- logo: 'alauda',
- link: 'www.alauda.io',
- blurb: "Alauda fournit aux offres Kubernetes-Centric Enterprise Platform-as-a-Service un objectif précis: fournir des fonctionnalités Cloud Native et les meilleures pratiques DevOps aux clients professionnels de tous les secteurs en Chine."
- },
- {
- type: 3,
- name: 'EasyStack',
- logo: 'easystack',
- link: 'https://easystack.cn/eks/',
- blurb: 'EasyStack - Service EasyStack Kubernetes (ECS)'
- },
- {
- type: 3,
- name: 'CoreOS',
- logo: 'coreos',
- link: 'https://coreos.com/tectonic/',
- blurb: 'CoreOS - Tectonic'
- },
- {
- type: 0,
- name: 'GoPaddle',
- logo: 'gopaddle',
- link: 'https://gopaddle.io',
- blurb: "goPaddle est une plate-forme DevOps pour les développeurs Kubernetes. Il simplifie la création et la maintenance du service Kubernetes grâce à la conversion de source en image, à la gestion des versions et des versions, à la gestion d'équipe, aux contrôles d'accès et aux journaux d'audit, à la fourniture en un seul clic de grappes Kubernetes sur plusieurs clouds à partir d'une console unique."
- },
- {
- type: 0,
- name: 'Vexxhost',
- logo: 'vexxhost',
- link: 'https://vexxhost.com/public-cloud/container-services/kubernetes/',
- blurb: "VEXXHOST offre un service de gestion de conteneurs haute performance optimisé par Kubernetes et OpenStack Magnum."
- },
- {
- type: 1,
- name: 'Component Soft',
- logo: 'componentsoft',
- link: 'https://www.componentsoft.eu/?p=3925',
- blurb: "Component Soft propose des formations, des conseils et une assistance autour des technologies de cloud ouvert telles que Kubernetes, Docker, Openstack et Ceph."
- },
- {
- type: 0,
- name: 'Datera',
- logo: 'datera',
- link: 'http://www.datera.io/kubernetes/',
- blurb: "Datera fournit un stockage de blocs élastiques autogéré de haute performance avec un provisionnement en libre-service pour déployer Kubernetes à grande échelle."
- },
- {
- type: 0,
- name: 'Containership',
- logo: 'containership',
- link: 'https://containership.io/',
- blurb: "Containership est une offre kubernetes gérée indépendamment du cloud qui prend en charge le provisionnement automatique de plus de 14 fournisseurs de cloud."
- },
- {
- type: 0,
- name: 'Pure Storage',
- logo: 'pure_storage',
- link: 'https://hub.docker.com/r/purestorage/k8s/',
- blurb: "Notre pilote flexvol et notre provisioning dynamique permettent aux périphériques de stockage FlashArray / Flashblade d'être utilisés en tant que stockage persistant de première classe à partir de Kubernetes."
- },
- {
- type: 0,
- name: 'Elastisys',
- logo: 'elastisys',
- link: 'https://elastisys.com/kubernetes/',
- blurb: "Mise à l'échelle automatique prédictive - détecte les variations de charge de travail récurrentes, les pics de trafic irréguliers, etc. Utilise les K8 dans n’importe quel cloud public ou privé."
- },
- {
- type: 0,
- name: 'Portworx',
- logo: 'portworx',
- link: 'https://portworx.com/use-case/kubernetes-storage/',
- blurb: "Avec Portworx, vous pouvez gérer n'importe quelle base de données ou service avec état sur toute infrastructure utilisant Kubernetes. Vous obtenez une couche de gestion de données unique pour tous vos services avec état, quel que soit leur emplacement."
- },
- {
- type: 1,
- name: 'Object Computing, Inc.',
- logo: 'objectcomputing',
- link: 'https://objectcomputing.com/services/software-engineering/devops/kubernetes-services',
- blurb: "Notre gamme de services de conseil DevOps comprend le support, le développement et la formation de Kubernetes."
- },
- {
- type: 1,
- name: 'Isotoma',
- logo: 'isotoma',
- link: 'https://www.isotoma.com/blog/2017/10/24/containerisation-tips-for-using-kubernetes-with-aws/',
- blurb: "Basés dans le nord de l'Angleterre, les partenaires Amazon qui fournissent des solutions Kubernetes sur AWS pour la réplication et le développement natif."
- },
- {
- type: 1,
- name: 'Servian',
- logo: 'servian',
- link: 'https://www.servian.com/cloud-and-technology/',
- blurb: "Basé en Australie, Servian fournit des services de conseil, de conseil et de gestion pour la prise en charge des cas d'utilisation de kubernètes centrés sur les applications et les données."
- },
- {
- type: 1,
- name: 'Redzara',
- logo: 'redzara',
- link: 'http://redzara.com/cloud-service',
- blurb: "Redzara possède une vaste et approfondie expérience dans l'automatisation du Cloud, franchissant à présent une étape gigantesque en fournissant une offre de services de conteneur et des services à ses clients."
- },
- {
- type: 0,
- name: 'Dataspine',
- logo: 'dataspine',
- link: 'http://dataspine.xyz/',
- blurb: "Dataspine est en train de créer une plate-forme de déploiement sécurisée, élastique et sans serveur pour les charges de travail ML / AI de production au-dessus des k8s."
- },
- {
- type: 1,
- name: 'CloudBourne',
- logo: 'cloudbourne',
- link: 'https://cloudbourne.com/kubernetes-enterprise-hybrid-cloud/',
- blurb: "Vous voulez optimiser l'automatisation de la construction, du déploiement et de la surveillance avec Kubernetes? Nous pouvons aider."
- },
- {
- type: 0,
- name: 'CloudBourne',
- logo: 'cloudbourne',
- link: 'https://cloudbourne.com/',
- blurb: "Notre plate-forme cloud hybride AppZ peut vous aider à atteindre vos objectifs de transformation numérique en utilisant les puissants Kubernetes."
- },
- {
- type: 3,
- name: 'BoCloud',
- logo: 'bocloud',
- link: 'http://www.bocloud.com.cn/en/index.html',
- blurb: 'BoCloud - BeyondcentContainer'
- },
- {
- type: 2,
- name: 'Naitways',
- logo: 'naitways',
- link: 'https://www.naitways.com/',
- blurb: "Naitways est un opérateur (AS57119), un intégrateur et un fournisseur de services cloud (le nôtre!). Nous visons à fournir des services à valeur ajoutée grâce à notre maîtrise de l’ensemble de la chaîne de valeur (infrastructure, réseau, compétences humaines). Le cloud privé et public est disponible via Kubernetes, qu'il soit géré ou non."
- },
- {
- type: 2,
- name: 'Kinvolk',
- logo: 'kinvolk',
- link: 'https://kinvolk.io/kubernetes/',
- blurb: 'Kinvolk offre un support technique et opérationnel à Kubernetes, du cluster au noyau. Les entreprises leaders dans le cloud font confiance à Kinvolk pour son expertise approfondie de Linux. '
- },
- {
- type: 1,
- name: 'Cascadeo Corporation',
- logo: 'cascadeo',
- link: 'http://www.cascadeo.com/',
- blurb: "Cascadeo conçoit, implémente et gère des charges de travail conteneurisées avec Kubernetes, tant pour les applications existantes que pour les projets de développement en amont."
- },
- {
- type: 1,
- name: 'Elastisys AB',
- logo: 'elastisys',
- link: 'https://elastisys.com/services/#kubernetes',
- blurb: "Nous concevons, construisons et exploitons des clusters Kubernetes. Nous sommes des experts des infrastructures Kubernetes hautement disponibles et auto-optimisées."
- },
- {
- type: 1,
- name: 'Greenfield Guild',
- logo: 'greenfield',
- link: 'http://greenfieldguild.com/',
- blurb: "La guilde Greenfield construit des solutions open source de qualité et offre une formation et une assistance pour Kubernetes dans tous les environnements."
- },
- {
- type: 1,
- name: 'PolarSeven',
- logo: 'polarseven',
- link: 'https://polarseven.com/what-we-do/kubernetes/',
- blurb: "Pour démarrer avec Kubernetes (K8), nos consultants PolarSeven peuvent vous aider à créer un environnement dockerized entièrement fonctionnel pour exécuter et déployer vos applications."
- },
- {
- type: 1,
- name: 'Kloia',
- logo: 'kloia',
- link: 'https://kloia.com/kubernetes/',
- blurb: 'Kloia est une société de conseil en développement et en microservices qui aide ses clients à faire migrer leur environnement vers des plates-formes cloud afin de créer des environnements plus évolutifs et sécurisés. Nous utilisons Kubernetes pour fournir à nos clients des solutions complètes tout en restant indépendantes du cloud. '
- },
- {
- type: 0,
- name: 'Bluefyre',
- logo: 'bluefyre',
- link: 'https://www.bluefyre.io',
- blurb: "Bluefyre offre une plate-forme de sécurité d'abord destinée aux développeurs, native de Kubernetes. Bluefyre aide votre équipe de développement à envoyer du code sécurisé sur Kubernetes plus rapidement!"
- },
- {
- type: 0,
- name: 'Harness',
- logo: 'harness',
- link: 'https://harness.io/harness-continuous-delivery/secret-sauce/smart-automation/',
- blurb: "Harness propose une livraison continue, car un service assurera une prise en charge complète des applications conteneurisées et des clusters Kubernetes."
- },
- {
- type: 0,
- name: 'VMware - Wavefront',
- logo: 'wavefront',
- link: 'https://www.wavefront.com/solutions/container-monitoring/',
- blurb: "La plate-forme Wavefront fournit des analyses et une surveillance basées sur des mesures pour Kubernetes et des tableaux de bord de conteneurs pour DevOps et des équipes de développeurs, offrant une visibilité sur les services de haut niveau ainsi que sur des mesures de conteneurs granulaires."
- },
- {
- type: 0,
- name: 'Bloombase, Inc.',
- logo: 'bloombase',
- link: 'https://www.bloombase.com/go/kubernetes',
- blurb: "Bloombase fournit un cryptage de données au repos avec une bande passante élevée et une défense en profondeur pour verrouiller les joyaux de la couronne Kubernetes à grande échelle."
- },
- {
- type: 0,
- name: 'Kasten',
- logo: 'kasten',
- link: 'https://kasten.io/product/',
- blurb: "Kasten fournit des solutions d'entreprise spécialement conçues pour gérer la complexité opérationnelle de la gestion des données dans les environnements en nuage."
- },
- {
- type: 0,
- name: 'Humio',
- logo: 'humio',
- link: 'https://humio.com',
- blurb: "Humio est une base de données d'agrégation de journaux. Nous proposons une intégration Kubernetes qui vous donnera un aperçu de vos journaux à travers des applications et des instances."
- },
- {
- type: 0,
- name: 'Outcold Solutions LLC',
- logo: 'outcold',
- link: 'https://www.outcoldsolutions.com/#monitoring-kubernetes',
- blurb: 'Powerful certified Splunk applications for monitoring OpenShift, Kubernetes and Docker.'
- },
- {
- type: 0,
- name: 'SysEleven GmbH',
- logo: 'syseleven',
- link: 'http://www.syseleven.de/',
- blurb: "Clients d'entreprise ayant besoin d'opérations à toute épreuve (portails d'entreprise et de commerce électronique à haute performance)"
- },
- {
- type: 0,
- name: 'Landoop',
- logo: 'landoop',
- link: 'http://lenses.stream',
- blurb: 'Lenses for Apache Kafka, to deploy, manage and operate data streaming pipelines and topologies at scale with confidence, with native Kubernetes integration.'
- },
- {
- type: 0,
- name: 'Redis Labs',
- logo: 'redis',
- link: 'https://redislabs.com/blog/getting-started-with-kubernetes-and-redis-using-redis-enterprise/',
- blurb: "Redis Enterprise étend Redis open source et fournit une mise à l'échelle linéaire stable et de haute performance requise pour la création de microservices sur la plateforme Kubernetes."
- },
- {
- type: 3,
- name: 'Diamanti',
- logo: 'diamanti',
- link: 'https://diamanti.com/',
- blurb: 'Diamanti - Diamanti-D10'
- },
- {
- type: 3,
- name: 'Eking',
- logo: 'eking',
- link: 'http://www.eking-tech.com/',
- blurb: 'Hainan eKing Technology Co. - eKing Cloud Container Platform'
- },
- {
- type: 3,
- name: 'Harmony Cloud',
- logo: 'harmony',
- link: 'http://harmonycloud.cn/products/rongqiyun/',
- blurb: 'Harmonycloud - Harmonycloud Container Platform'
- },
- {
- type: 3,
- name: 'Woqutech',
- logo: 'woqutech',
- link: 'http://woqutech.com/product_qfusion.html',
- blurb: 'Woqutech - QFusion'
- },
- {
- type: 3,
- name: 'Baidu',
- logo: 'baidu',
- link: 'https://cloud.baidu.com/product/cce.html',
- blurb: 'Baidu Cloud - Baidu Cloud Container Engine'
- },
- {
- type: 3,
- name: 'ZTE',
- logo: 'zte',
- link: 'https://sdnfv.zte.com.cn/en/home',
- blurb: 'ZTE - TECS OpenPalette'
- },
- {
- type: 1,
- name: 'Automatic Server AG',
- logo: 'asag',
- link: 'http://www.automatic-server.com/paas.html',
- blurb: 'We install and operate Kubernetes in large enterprises, create deployment workflows and help with migrations.'
- },
- {
- type: 1,
- name: 'Circulo Siete',
- logo: 'circulo',
- link: 'https://circulosiete.com/consultoria/kubernetes/',
- blurb: 'Our Mexico-based company offers training, consulting and support to migrate your workloads to Kubernetes, Cloud Native Microservices & DevOps.'
- },
- {
- type: 1,
- name: 'DevOpsGuru',
- logo: 'devopsguru',
- link: 'http://devopsguru.ca/workshop',
- blurb: 'DevOpsGuru works with small businesses to move from physical to virtual to containerized.'
- },
- {
- type: 1,
- name: 'EIN Intelligence Co., Ltd',
- logo: 'ein',
- link: 'https://ein.io',
- blurb: 'Startups and agile enterprises in South Korea.'
- },
- {
- type: 0,
- name: 'GuardiCore',
- logo: 'guardicore',
- link: 'https://www.guardicore.com/',
- blurb: 'GuardiCore provides process-level visibility and network policy enforcement for containerized assets on the Kubernetes platform.'
- },
- {
- type: 0,
- name: 'Hedvig',
- logo: 'hedvig',
- link: 'https://www.hedviginc.com/blog/provisioning-hedvig-storage-with-kubernetes',
- blurb: 'Hedvig is software-defined storage that uses NFS or iSCSI for persistent volumes to provision shared storage for pods and containers.'
- },
- {
- type: 0,
- name: 'Hewlett Packard Enterprise',
- logo: 'hpe',
- link: ' https://www.hpe.com/us/en/storage/containers.html',
- blurb: 'Persistent storage that makes data as easy to manage as containers: dynamic provisioning, policy-based performance and protection, quality of service, and more.'
- },
- {
- type: 0,
- name: 'JetBrains',
- logo: 'jetbrains',
- link: 'https://blog.jetbrains.com/teamcity/2017/10/teamcity-kubernetes-support-plugin/',
- blurb: "Exécutez des agents de génération de cloud TeamCity dans un cluster Kubernetes. Fournit un support Helm en tant qu'étape de construction."
- },
- {
- type: 2,
- name: 'Opensense',
- logo: 'opensense',
- link: 'http://www.opensense.fr/en/kubernetes-en/',
- blurb: 'We provide Kubernetes services (integration, operations, training) as well as banking microservices development, building on our extensive experience with container clouds, microservices, data management and the financial sector.'
- },
- {
- type: 2,
- name: 'SAP SE',
- logo: 'sap',
- link: 'https://cloudplatform.sap.com',
- blurb: "SAP Cloud Platform fournit des fonctionnalités en mémoire et des services métier uniques pour la création et l'extension d'applications. Avec Open Source Project Project, SAP utilise la puissance de Kubernetes pour offrir une expérience ouverte, robuste et multi-cloud à ses clients. Vous pouvez utiliser des principes de conception natifs en nuage simples et modernes et exploiter les compétences dont votre organisation dispose déjà pour fournir des applications agiles et transformatives, tout en s'intégrant aux dernières fonctionnalités de SAP Leonardo."
- },
- {
- type: 1,
- name: 'Mobilise Cloud Services Limited',
- logo: 'mobilise',
- link: 'https://www.mobilise.cloud/en/services/serverless-application-delivery/',
- blurb: 'Mobilise helps organisations adopt Kubernetes and integrate it with their CI/CD tooling.'
- },
- {
- type: 3,
- name: 'AWS',
- logo: 'aws',
- link: 'https://aws.amazon.com/eks/',
- blurb: 'Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that makes it easy to run Kubernetes on AWS without having to install and operate your own Kubernetes clusters.'
- },
- {
- type: 3,
- name: 'Kontena',
- logo: 'kontena',
- link: 'https://pharos.sh',
- blurb: 'Kontena Pharos - The simple, solid, certified Kubernetes distribution that just works.'
- },
- {
- type: 2,
- name: 'NTTData',
- logo: 'nttdata',
- link: 'http://de.nttdata.com/altemista-cloud',
- blurb: 'NTT DATA, a member of the NTT Group, brings the power of the largest infrastructure provider in the world to the global K8s community.'
- },
- {
- type: 2,
- name: 'OCTO',
- logo: 'octo',
- link: 'https://www.octo.academy/fr/formation/275-kubernetes-utiliser-architecturer-et-administrer-une-plateforme-de-conteneurs',
- blurb: "La technologie OCTO fournit des services de formation, d'architecture, de conseil technique et de livraison, notamment des conteneurs et des Kubernetes."
- },
- {
- type: 0,
- name: 'Logdna',
- logo: 'logdna',
- link: 'https://logdna.com/kubernetes',
- blurb: 'Instantly identify production issues with LogDNA, the best logging platform you will ever use. Get started with just 2 kubectl commands.'
- }
- ]
-
- var kcspContainer = document.getElementById('kcspContainer')
- var distContainer = document.getElementById('distContainer')
- var ktpContainer = document.getElementById('ktpContainer')
- var isvContainer = document.getElementById('isvContainer')
- var servContainer = document.getElementById('servContainer')
-
- var sorted = partners.sort(function (a, b) {
- if (a.name > b.name) return 1
- if (a.name < b.name) return -1
- return 0
- })
-
- sorted.forEach(function (obj) {
- var box = document.createElement('div')
- box.className = 'partner-box'
-
- var img = document.createElement('img')
- img.src = '/images/square-logos/' + obj.logo + '.png'
-
- var div = document.createElement('div')
-
- var p = document.createElement('p')
- p.textContent = obj.blurb
-
- var link = document.createElement('a')
- link.href = obj.link
- link.target = '_blank'
- link.textContent = 'Learn more'
-
- div.appendChild(p)
- div.appendChild(link)
-
- box.appendChild(img)
- box.appendChild(div)
-
- var container;
- if (obj.type === 0) {
- container = isvContainer;
- } else if (obj.type === 1) {
- container = servContainer;
- } else if (obj.type === 2) {
- container = kcspContainer;
- } else if (obj.type === 3) {
- container = distContainer;
- } else if (obj.type === 4) {
- container = ktpContainer;
- }
-
- container.appendChild(box)
- })
-})();
diff --git a/content/fr/partners/_index.html b/content/fr/partners/_index.html
index 415b46b34f6ad..164c8f6fc4d8a 100644
--- a/content/fr/partners/_index.html
+++ b/content/fr/partners/_index.html
@@ -8,85 +8,48 @@
---
-
-
Kubernetes works with partners to create a strong, vibrant codebase that supports a spectrum of complementary platforms.
-
-
-
-
- Kubernetes Certified Service Providers
-
- Vetted service providers with deep experience helping enterprises successfully adopt Kubernetes.
-
Kubernetes works with partners to create a strong, vibrant codebase that supports a spectrum of complementary platforms.
+
+
+
+
+ Kubernetes Certified Service Providers
+
+ Vetted service providers with deep experience helping enterprises successfully adopt Kubernetes.
+
diff --git a/content/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1.md b/content/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1.md
index 0d2606a4217e7..ce8e62f7fd4d6 100644
--- a/content/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1.md
+++ b/content/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1.md
@@ -194,18 +194,6 @@ in order to proxy service traffic. If unspecified (0-0) then ports will be rando
Sets the port range used to proxy service traffic. If unspecified (i.e. '0-0'), ports will be chosen at random.
@@ -1794,19 +1809,19 @@ Default: false
when setting the cgroupv2 memory.high value to enforce MemoryQoS.
Decreasing this factor will set lower high limit for container cgroups and put heavier reclaim pressure
while increasing will put less reclaim pressure.
-See http://kep.k8s.io/2570 for more details.
+See https://kep.k8s.io/2570 for more details.
Default: 0.8
-->
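
For context on the `memoryThrottlingFactor` field documented in the hunk above, here is a minimal, illustrative kubelet configuration sketch. It is not part of this diff; it assumes the KubeletConfiguration v1beta1 API and the MemoryQoS feature gate, and the exact default value may differ between releases.

```yaml
# Illustrative only: where memoryThrottlingFactor sits in a kubelet config file.
# Requires cgroup v2 on the node and the MemoryQoS feature gate.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  MemoryQoS: true
# Lower values produce a lower cgroup v2 memory.high limit for containers,
# i.e. earlier and heavier memory reclaim pressure; higher values relax it.
memoryThrottlingFactor: 0.8
```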