diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index 13590450bf611..3f82ad5d3d593 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -1,4 +1,4 @@ - + +
[drawio SVG export: "Device Plugin framework overview" diagram showing the kubelet / device plugin interaction. Labels: Kubelet; Device Plugin; Device Plugin gRPC server (GetDevicePluginOptions, ListAndWatch, GetPreferredAllocation, Allocate, PreStartContainer); Kubelet gRPC server (Register); Kubelet gRPC API implementation; Kubelet gRPC client call; Device Plugin gRPC API implementation; Device Plugin gRPC client call.]
\ No newline at end of file diff --git a/content/en/blog/_posts/2022-12-19-devicemanager-ga.md/index.md b/content/en/blog/_posts/2022-12-19-devicemanager-ga.md/index.md new file mode 100644 index 0000000000000..d3b7102efb075 --- /dev/null +++ b/content/en/blog/_posts/2022-12-19-devicemanager-ga.md/index.md @@ -0,0 +1,93 @@ +--- +layout: blog +title: 'Kubernetes 1.26: Device Manager graduates to GA' +date: 2022-12-19 +slug: devicemanager-ga +--- + +**Author:** Swati Sehgal (Red Hat) + +The Device Plugin framework was introduced in the Kubernetes v1.8 release as a vendor +independent framework to enable discovery, advertisement and allocation of external +devices without modifying core Kubernetes. The feature graduated to Beta in v1.10. +With the recent release of Kubernetes v1.26, Device Manager is now generally +available (GA). + +Within the kubelet, the Device Manager facilitates communication with device plugins +using gRPC through Unix sockets. Device Manager and Device plugins both act as gRPC +servers and clients by serving and connecting to the exposed gRPC services respectively. +Device plugins serve a gRPC service that the kubelet connects to for device discovery, +advertisement (as extended resources) and allocation. Device plugins connect to +the `Registration` gRPC service served by the kubelet to register themselves with the kubelet. + +Please refer to the documentation for an [example](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#example-pod) on how a pod can request a device exposed to the cluster by a device plugin. + +Here are some example implementations of device plugins: +- [AMD GPU device plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin) +- [Collection of Intel device plugins for Kubernetes](https://github.com/intel/intel-device-plugins-for-kubernetes) +- [NVIDIA device plugin for Kubernetes](https://github.com/NVIDIA/k8s-device-plugin) +- [SRIOV network device plugin for Kubernetes](https://github.com/k8snetworkplumbingwg/sriov-network-device-plugin) + +## Noteworthy developments since Device Plugin framework introduction + +### Kubelet APIs moved to kubelet staging repo +External-facing `deviceplugin` API packages moved from `k8s.io/kubernetes/pkg/kubelet/apis/` +to `k8s.io/kubelet/pkg/apis/` in v1.17. Refer to [Move external facing kubelet apis to staging](https://github.com/kubernetes/kubernetes/pull/83551) for more details on the rationale behind this change. + +### Device Plugin API updates +Additional gRPC endpoints introduced: + 1. `GetDevicePluginOptions` is used by device plugins to communicate + options to the `DeviceManager` in order to indicate if `PreStartContainer`, + `GetPreferredAllocation` or other future optional calls are supported and + can be called before making devices available to the container. + 1. `GetPreferredAllocation` allows a device plugin to forward allocation + preference to the `DeviceManager` so it can incorporate this information + into its allocation decisions. The `DeviceManager` will call out to a + plugin at pod admission time asking for a preferred device allocation + of a given size from a list of available devices to make a more informed + decision. E.g. Specifying inter-device constraints to indicate preference + on the best-connected set of devices when allocating devices to a container. + 1. `PreStartContainer` is called before each container start if indicated by + device plugins during the registration phase. It allows Device Plugins to run device + specific operations on the Devices requested. E.g.
reconfiguring or + reprogramming FPGAs before the container starts running. + +Pull Requests that introduced these changes are here: +1. [Invoke preStart RPC call before container start, if desired by plugin](https://github.com/kubernetes/kubernetes/pull/58282) +1. [Add GetPreferredAllocation() call to the v1beta1 device plugin API](https://github.com/kubernetes/kubernetes/pull/92665) + +With the introduction of the above endpoints, the interaction between the Device Manager in +the kubelet and the device plugin can be shown as below: + +{{< figure src="deviceplugin-framework-overview.svg" alt="Representation of the Device Plugin framework showing the relationship between the kubelet and a device plugin" class="diagram-large" caption="Device Plugin framework Overview" >}} + +### Change in semantics of device plugin registration process +Device plugin code was refactored into a separate 'plugin' package under the `devicemanager` +package to lay the groundwork for introducing a `v1beta2` device plugin API. This would +allow adding support in `devicemanager` to service multiple device plugin APIs at the +same time. + +With this refactoring work, it is now mandatory for a device plugin to start serving its gRPC +service before registering itself with kubelet. Previously, these two operations were asynchronous +and a device plugin could register itself before starting its gRPC server, which is no longer the +case. For more details, refer to [PR #109016](https://github.com/kubernetes/kubernetes/pull/109016) and [Issue #112395](https://github.com/kubernetes/kubernetes/issues/112395). + +### Dynamic resource allocation +In Kubernetes 1.26, inspired by how [Persistent Volumes](/docs/concepts/storage/persistent-volumes) +are handled in Kubernetes, [Dynamic Resource Allocation](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/) +has been introduced to cater to devices that have more sophisticated resource requirements. It aims to: + +1. Decouple device initialization and allocation from the pod lifecycle. +1. Facilitate dynamic sharing of devices between containers and pods. +1. Support custom resource-specific parameters. +1. Enable resource-specific setup and cleanup actions. +1. Enable support for network-attached resources, not just node-local resources. + +## Is the Device Plugin API stable now? +No, the Device Plugin API is still not stable; the latest Device Plugin API version +available is `v1beta1`. There are plans in the community to introduce a `v1beta2` API +to service multiple plugin APIs at once. A per-API call with request/response types +would allow adding support for newer API versions without explicitly bumping the API. + +In addition to that, there are existing proposals in the community to introduce additional +endpoints, such as [KEP-3162: Add Deallocate and PostStopContainer to Device Manager API](https://github.com/kubernetes/enhancements/issues/3162). diff --git a/content/en/blog/_posts/2022-12-20-validating-admission-policies-alpha/index.md b/content/en/blog/_posts/2022-12-20-validating-admission-policies-alpha/index.md new file mode 100644 index 0000000000000..fbbee4ab6dfb8 --- /dev/null +++ b/content/en/blog/_posts/2022-12-20-validating-admission-policies-alpha/index.md @@ -0,0 +1,160 @@ +--- +layout: blog +title: "Kubernetes 1.26: Introducing Validating Admission Policies" +date: 2022-12-20 +slug: validating-admission-policies-alpha +--- + +**Authors:** Joe Betz (Google), Cici Huang (Google) + +In Kubernetes 1.26, the first alpha release of validating admission policies is +available!
Validating admission policies use the [Common Expression +Language](https://github.com/google/cel-spec) (CEL) to offer a declarative, +in-process alternative to [validating admission +webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks). + +CEL was first introduced to Kubernetes for the [Validation rules for +CustomResourceDefinitions](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules). +This enhancement expands the use of CEL in Kubernetes to support a far wider +range of admission use cases. + +Admission webhooks can be burdensome to develop and operate. Webhook developers +must implement and maintain a webhook binary to handle admission requests. Also, +admission webhooks are complex to operate. Each webhook must be deployed, +monitored, and have a well-defined upgrade and rollback plan. To make matters +worse, if a webhook times out or becomes unavailable, the Kubernetes control +plane can become unavailable. This enhancement avoids much of this complexity of +admission webhooks by embedding CEL expressions into Kubernetes resources +instead of calling out to a remote webhook binary. + +For example, to set a limit on how many replicas a Deployment can have, +start by defining a validation policy: + +```yaml +apiVersion: admissionregistration.k8s.io/v1alpha1 +kind: ValidatingAdmissionPolicy +metadata: + name: "demo-policy.example.com" +spec: + matchConstraints: + resourceRules: + - apiGroups: ["apps"] + apiVersions: ["v1"] + operations: ["CREATE", "UPDATE"] + resources: ["deployments"] + validations: + - expression: "object.spec.replicas <= 5" +``` + +The `expression` field contains the CEL expression that is used to validate +admission requests. `matchConstraints` declares what types of requests this +`ValidatingAdmissionPolicy` may validate. + +Next, bind the policy to the appropriate resources: + +```yaml +apiVersion: admissionregistration.k8s.io/v1alpha1 +kind: ValidatingAdmissionPolicyBinding +metadata: + name: "demo-binding-test.example.com" +spec: + policyName: "demo-policy.example.com" + matchResources: + namespaceSelector: + matchExpressions: + - key: environment + operator: In + values: + - test +``` + +This `ValidatingAdmissionPolicyBinding` resource binds the above policy only to +namespaces where the `environment` label is set to `test`. Once this binding +is created, the kube-apiserver will begin enforcing this admission policy. + +To emphasize how much simpler this approach is than admission webhooks, if this example +were instead implemented with a webhook, an entire binary would need to be +developed and maintained just to perform a `<=` check. In our review of a wide +range of admission webhooks used in production, the vast majority performed +relatively simple checks, all of which can easily be expressed using CEL. + +Validating admission policies are highly configurable, enabling policy authors +to define policies that can be parameterized and scoped to resources as needed +by cluster administrators.
+ +For example, the above admission policy can be modified to make it configurable: + +```yaml +apiVersion: admissionregistration.k8s.io/v1alpha1 +kind: ValidatingAdmissionPolicy +metadata: + name: "demo-policy.example.com" +spec: + paramKind: + apiVersion: rules.example.com/v1 # You also need a CustomResourceDefinition for this API + kind: ReplicaLimit + matchConstraints: + resourceRules: + - apiGroups: ["apps"] + apiVersions: ["v1"] + operations: ["CREATE", "UPDATE"] + resources: ["deployments"] + validations: + - expression: "object.spec.replicas <= params.maxReplicas" +``` + +Here, `paramKind` defines the resources used to configure the policy and the +`expression` uses the `params` variable to access the parameter resource. + +This allows multiple bindings to be defined, each configured differently. For +example: + +```yaml +apiVersion: admissionregistration.k8s.io/v1alpha1 +kind: ValidatingAdmissionPolicyBinding +metadata: + name: "demo-binding-production.example.com" +spec: + policyName: "demo-policy.example.com" + paramRef: + name: "demo-params-production.example.com" + matchResources: + namespaceSelector: + matchExpressions: + - key: environment + operator: In + values: + - production +``` + +```yaml +apiVersion: rules.example.com/v1 # defined via a CustomResourceDefinition +kind: ReplicaLimit +metadata: + name: "demo-params-production.example.com" +maxReplicas: 1000 +``` + +This binding and parameter resource pair limit deployments in namespaces with the +`environment` label set to `production` to a max of 1000 replicas. + +You can then use a separate binding and parameter pair to set a different limit +for namespaces in the `test` environment. + +I hope this has given you a glimpse of what is possible with validating +admission policies! There are many features that we have not yet touched on. + +To learn more, read +[Validating Admission Policy](/docs/reference/access-authn-authz/validating-admission-policy/). + +We are working hard to add more features to admission policies and make the +enhancement easier to use. Try it out, send us your feedback and help us build +a simpler alternative to admission webhooks! + +## How do I get involved? + +If you want to get involved in development of admission policies, discuss enhancement +roadmaps, or report a bug, you can get in touch with developers at +[SIG API Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery). diff --git a/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/index.md b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/index.md new file mode 100644 index 0000000000000..58bb57366b266 --- /dev/null +++ b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/index.md @@ -0,0 +1,129 @@ +--- +layout: blog +title: 'Kubernetes v1.26: GA Support for Kubelet Credential Providers' +date: 2022-12-22 +slug: kubelet-credential-providers +--- + +**Authors:** Andrew Sy Kim (Google), Dixita Narang (Google) + +Kubernetes v1.26 introduced generally available (GA) support for [_kubelet credential +provider plugins_]( /docs/tasks/kubelet-credential-provider/kubelet-credential-provider/), +offering an extensible plugin framework to dynamically fetch credentials +for any container image registry. + +## Background + +Kubernetes supports the ability to dynamically fetch credentials for a container registry service. 
Prior to Kubernetes v1.20, this capability was compiled into the kubelet and only available for +Amazon Elastic Container Registry, Azure Container Registry, and Google Cloud Container Registry. + +{{< figure src="kubelet-credential-providers-in-tree.png" caption="Figure 1: Kubelet built-in credential provider support for Amazon Elastic Container Registry, Azure Container Registry, and Google Cloud Container Registry." >}} + +Kubernetes v1.20 introduced alpha support for kubelet credential provider plugins, +which provides a mechanism for the kubelet to dynamically authenticate and pull images +for arbitrary container registries - whether these are public registries, managed services, +or even a self-hosted registry. +In Kubernetes v1.26, this feature is now GA. + +{{< figure src="kubelet-credential-providers-plugin.png" caption="Figure 2: Kubelet credential provider overview" >}} + +## Why is it important? + +Prior to Kubernetes v1.20, if you wanted to dynamically fetch credentials for image registries +other than ACR (Azure Container Registry), ECR (Elastic Container Registry), or GCR +(Google Container Registry), you needed to modify the kubelet code. +The new plugin mechanism can be used in any cluster, and lets you authenticate to new registries without +any changes to Kubernetes itself. Any cloud provider or vendor can publish a plugin that lets you authenticate with their image registry. + +## How it works + +The kubelet and the exec plugin binary communicate through stdio (stdin, stdout, and stderr) by sending and receiving +JSON-serialized, API-versioned types. If the exec plugin is enabled and the kubelet requires authentication information for an image +that matches against a plugin, the kubelet will execute the plugin binary, passing the `CredentialProviderRequest` API via stdin. Then +the exec plugin communicates with the container registry to dynamically fetch the credentials and returns the credentials in an +encoded response of the `CredentialProviderResponse` API to the kubelet via stdout. + +{{< figure src="kubelet-credential-providers-how-it-works.png" caption="Figure 3: Kubelet credential provider plugin flow" >}} + +When returning credentials to the kubelet, the plugin can also indicate how long the credentials can be cached for, to prevent unnecessary +execution of the plugin by the kubelet for subsequent image pull requests to the same registry. In cases where the cache duration +is not specified by the plugin, a default cache duration can be specified by the kubelet (more details below). + +```json +{ + "apiVersion": "kubelet.k8s.io/v1", + "kind": "CredentialProviderResponse", + "auth": { + "cacheDuration": "6h", + "private-registry.io/my-app": { + "username": "exampleuser", + "password": "token12345" + } + } +} +``` + +In addition, the plugin can specify the scope for which cached credentials are valid. This is specified through the `cacheKeyType` field +in `CredentialProviderResponse`. When the value is `Image`, the kubelet will only use cached credentials for future image pulls that exactly +match the image of the first request. When the value is `Registry`, the kubelet will use cached credentials for any subsequent image pulls +destined for the same registry host but using different paths (for example, `gcr.io/foo/bar` and `gcr.io/bar/foo` refer to different images +from the same registry).
Lastly, when the value is `Global`, the kubelet will use returned credentials for all images that match against +the plugin, including images that can map to different registry hosts (for example, gcr.io vs k8s.gcr.io). The `cacheKeyType` field is required by plugin +implementations. + +```json +{ + "apiVersion": "kubelet.k8s.io/v1", + "kind": "CredentialProviderResponse", + "auth": { + "cacheKeyType": "Registry", + "private-registry.io/my-app": { + "username": "exampleuser", + "password": "token12345" + } + } +} +``` + +## Using kubelet credential providers + +You can configure credential providers by installing the exec plugin(s) into +a local directory accessible by the kubelet on every node. Then you set two command line arguments for the kubelet: +* `--image-credential-provider-config`: the path to the credential provider plugin config file. +* `--image-credential-provider-bin-dir`: the path to the directory where credential provider plugin binaries are located. + +The configuration file passed into `--image-credential-provider-config` is read by the kubelet to determine which exec plugins should be invoked for a container image used by a Pod. +Note that the name of each _provider_ must match the name of the binary located in the local directory specified in `--image-credential-provider-bin-dir`, otherwise the kubelet +cannot locate the path of the plugin to invoke. + +```yaml +kind: CredentialProviderConfig +apiVersion: kubelet.config.k8s.io/v1 +providers: +- name: auth-provider-gcp + apiVersion: credentialprovider.kubelet.k8s.io/v1 + matchImages: + - "container.cloud.google.com" + - "gcr.io" + - "*.gcr.io" + - "*.pkg.dev" + args: + - get-credentials + - --v=3 + defaultCacheDuration: 1m +``` + +Below is an overview of how the Kubernetes project is using kubelet credential providers for end-to-end testing. + +{{< figure src="kubelet-credential-providers-enabling.png" caption="Figure 4: Kubelet credential provider configuration used for Kubernetes e2e testing" >}} + +For more configuration details, see [Kubelet Credential Providers](https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/). + +## Getting Involved + +Come join SIG Node if you want to report bugs or have feature requests for the Kubelet Credential Provider. 
You can reach us through the following ways: +* Slack: [#sig-node](https://kubernetes.slack.com/messages/sig-node) +* [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-node) +* [Open Community Issues/PRs](https://github.com/kubernetes/community/labels/sig%2Fnode) +* [Biweekly meetings](https://github.com/kubernetes/community/tree/master/sig-node#meetings) diff --git a/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-enabling.png b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-enabling.png new file mode 100644 index 0000000000000..5aa0886e90686 Binary files /dev/null and b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-enabling.png differ diff --git a/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-how-it-works.png b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-how-it-works.png new file mode 100644 index 0000000000000..11054229f88ca Binary files /dev/null and b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-how-it-works.png differ diff --git a/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-in-tree.png b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-in-tree.png new file mode 100644 index 0000000000000..f26b42d45e8e7 Binary files /dev/null and b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-in-tree.png differ diff --git a/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-plugin.png b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-plugin.png new file mode 100644 index 0000000000000..2aeedb738f445 Binary files /dev/null and b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-plugin.png differ diff --git a/content/en/blog/_posts/2022-12-23-fsgroup-on-mount.md b/content/en/blog/_posts/2022-12-23-fsgroup-on-mount.md new file mode 100644 index 0000000000000..671334c4891ac --- /dev/null +++ b/content/en/blog/_posts/2022-12-23-fsgroup-on-mount.md @@ -0,0 +1,72 @@ +--- +layout: blog +title: "Kubernetes 1.26: Support for Passing Pod fsGroup to CSI Drivers At Mount Time" +date: 2022-12-23 +slug: kubernetes-12-06-fsgroup-on-mount +--- + +**Authors:** Fabio Bertinatto (Red Hat), Hemant Kumar (Red Hat) + +Delegation of `fsGroup` to CSI drivers was first introduced as alpha in Kubernetes 1.22, +and graduated to beta in Kubernetes 1.25. +For Kubernetes 1.26, we are happy to announce that this feature has graduated to +General Availability (GA). + +In this release, if you specify a `fsGroup` in the +[security context](/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod), +for a (Linux) Pod, all processes in the pod's containers are part of the additional group +that you specified. + +In previous Kubernetes releases, the kubelet would *always* apply the +`fsGroup` ownership and permission changes to files in the volume according to the policy +you specified in the Pod's `.spec.securityContext.fsGroupChangePolicy` field. 
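
As a refresher, here is a minimal Pod manifest (a hypothetical sketch; the names, image, and claim are placeholders) showing where those fields are set:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo                       # hypothetical name
spec:
  securityContext:
    fsGroup: 2000                          # processes in the pod's containers join this supplemental group
    fsGroupChangePolicy: "OnRootMismatch"  # policy governing how ownership/permission changes are applied
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9     # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-claim                # hypothetical, pre-existing PVC
```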
+ +Starting with Kubernetes 1.26, CSI drivers have the option to apply the `fsGroup` settings during +volume mount time, which frees the kubelet from changing the permissions of files and directories +in those volumes. + +## How does it work? + +CSI drivers that support this feature should advertise the +[`VOLUME_MOUNT_GROUP`](https://github.com/container-storage-interface/spec/blob/master/spec.md#nodegetcapabilities) node capability. + +After recognizing this information, the kubelet passes the `fsGroup` information to +the CSI driver during pod startup. This is done through the +[`NodeStageVolumeRequest`](https://github.com/container-storage-interface/spec/blob/v1.7.0/spec.md#nodestagevolume) and +[`NodePublishVolumeRequest`](https://github.com/container-storage-interface/spec/blob/v1.7.0/spec.md#nodepublishvolume) +CSI calls. + +Consequently, the CSI driver is expected to apply the `fsGroup` to the files in the volume using a +_mount option_. As an example, [Azure File CSIDriver](https://github.com/kubernetes-sigs/azurefile-csi-driver) utilizes the `gid` mount option to map +the `fsGroup` information to all the files in the volume. + +It should be noted that in the example above the kubelet refrains from directly +applying the permission changes into the files and directories in that volume files. +Additionally, two policy definitions no longer have an effect: neither +`.spec.fsGroupPolicy` for the CSIDriver object, nor +`.spec.securityContext.fsGroupChangePolicy` for the Pod. + +For more details about the inner workings of this feature, check out the +[enhancement proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/2317-fsgroup-on-mount/) +and the [CSI Driver `fsGroup` Support](https://kubernetes-csi.github.io/docs/support-fsgroup.html) +in the CSI developer documentation. + +## Why is it important? + +Without this feature, applying the fsGroup information to files is not possible in certain storage environments. + +For instance, Azure File does not support a concept of POSIX-style ownership and permissions +of files. The CSI driver is only able to set the file permissions at the volume level. + +## How do I use it? + +This feature should be mostly transparent to users. If you maintain a CSI driver that should +support this feature, read +[CSI Driver `fsGroup` Support](https://kubernetes-csi.github.io/docs/support-fsgroup.html) +for more information on how to support this feature in your CSI driver. + +Existing CSI drivers that do not support this feature will continue to work as usual: +they will not receive any `fsGroup` information from the kubelet. In addition to that, +the kubelet will continue to perform the ownership and permissions changes to files +for those volumes, according to the policies specified in `.spec.fsGroupPolicy` for the +CSIDriver and `.spec.securityContext.fsGroupChangePolicy` for the relevant Pod. diff --git a/content/en/blog/_posts/2022-12-27-cpumanager-goes-GA.md b/content/en/blog/_posts/2022-12-27-cpumanager-goes-GA.md new file mode 100644 index 0000000000000..d1edc4575b019 --- /dev/null +++ b/content/en/blog/_posts/2022-12-27-cpumanager-goes-GA.md @@ -0,0 +1,71 @@ +--- +layout: blog +title: 'Kubernetes v1.26: CPUManager goes GA' +date: 2022-12-27 +slug: cpumanager-ga +--- + +**Author:** +Francesco Romani (Red Hat) + +The CPU Manager is a part of the kubelet, the Kubernetes node agent, which enables the user to allocate exclusive CPUs to containers. 
Since Kubernetes v1.10, where it [graduated to Beta](/blog/2018/07/24/feature-highlight-cpu-manager/), the CPU Manager proved itself reliable and +fulfilled its role of allocating exclusive CPUs to containers, so adoption has steadily grown, making it a staple component of performance-critical +and low-latency setups. Over time, most changes were about bugfixes or internal refactoring, with the following noteworthy user-visible changes: + +- [support explicit reservation of CPUs](https://github.com/kubernetes/kubernetes/pull/83592): it was already possible to request to reserve a given + number of CPUs for system resources, including the kubelet itself, which will not be used for exclusive CPU allocation. Now it is possible to also + explicitly select which CPUs to reserve instead of letting the kubelet pick them up automatically. +- [report the exclusively allocated CPUs](https://github.com/kubernetes/kubernetes/pull/97415) to containers, much like what is already done for devices, + using the kubelet-local [PodResources API](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources). +- [optimize the usage of system resources](https://github.com/kubernetes/kubernetes/pull/101771), eliminating unnecessary sysfs changes. + +The CPU Manager reached the point where it "just works", so in Kubernetes v1.26 it has graduated to generally available (GA). + +## Customization options for CPU Manager {#cpu-managed-customization} + +The CPU Manager supports two operation modes, configured using its _policies_. With the `none` policy, the CPU Manager allocates CPUs to containers +without any specific constraint except the (optional) quota set in the Pod spec. +With the `static` policy, provided that the pod is in the Guaranteed QoS class and every container in that Pod requests an integer amount of vCPU cores, +the CPU Manager allocates CPUs exclusively. Exclusive assignment means that other containers (whether from the same Pod, or from a different Pod) do not +get scheduled onto that CPU. + +This simple operational model served the user base pretty well, but as the CPU Manager matured more and more, users started to look at more elaborate use +cases and how to better support them. + +Rather than add more policies, the community realized that pretty much all the novel use cases are some variation of the behavior enabled by the `static` +CPU Manager policy. Hence, it was decided to add [options to tune the behavior of the static policy](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2625-cpumanager-policies-thread-placement#proposed-change). +The options have a varying degree of maturity, like any other Kubernetes feature. In order to be accepted, each new option must provide a backward +compatible behavior when disabled, and must document how it interacts with the other options, should they interact at all. + +This enabled the Kubernetes project to graduate the CPU Manager core component and core CPU allocation algorithms to GA, +while also enabling a new age of experimentation in this area. +In Kubernetes v1.26, the CPU Manager supports [three different policy options](/docs/tasks/administer-cluster/cpu-management-policies/#static-policy-options): + +`full-pcpus-only` +: restrict the CPU Manager core allocation algorithm to full physical cores only, reducing noisy neighbor issues from hardware technologies that allow sharing cores.
+ +`distribute-cpus-across-numa` +: drive the CPU Manager to evenly distribute CPUs across NUMA nodes, for cases where more than one NUMA node is required to satisfy the allocation. + +`align-by-socket` +: change how the CPU Manager allocates CPUs to a container: consider CPUs to be aligned at the socket boundary, instead of NUMA node boundary. + +## Further development + +After graduating the main CPU Manager feature, each existing policy option will follow their graduation process, independent from CPU Manager and from each other option. +There is room for new options to be added, but there's also a growing demand for even more flexibility than what the CPU Manager, and its policy options, currently grant. + +Conversations are in progress in the community about splitting the CPU Manager and the other resource managers currently part of the kubelet executable +into pluggable, independent kubelet plugins. If you are interested in this effort, please join the conversation on SIG Node communication channels (Slack, mailing list, weekly meeting). + +## Further reading + +Please check out the [Control CPU Management Policies on the Node](/docs/tasks/administer-cluster/cpu-management-policies/) +task page to learn more about the CPU Manager, and how it fits in relation to the other node-level resource managers. + +## Getting involved + +This feature is driven by the [SIG Node](https://github.com/Kubernetes/community/blob/master/sig-node/README.md) community. +Please join us to connect with the community and share your ideas and feedback around the above feature and +beyond. We look forward to hearing from you! diff --git a/content/en/blog/_posts/2022-12-29-scalable-job-tracking-ga/index.md b/content/en/blog/_posts/2022-12-29-scalable-job-tracking-ga/index.md new file mode 100644 index 0000000000000..6d0685e43cae9 --- /dev/null +++ b/content/en/blog/_posts/2022-12-29-scalable-job-tracking-ga/index.md @@ -0,0 +1,155 @@ +--- +layout: blog +title: "Kubernetes 1.26: Job Tracking, to Support Massively Parallel Batch Workloads, Is Generally Available" +date: 2022-12-29 +slug: "scalable-job-tracking-ga" +--- + +**Authors:** Aldo Culquicondor (Google) + +The Kubernetes 1.26 release includes a stable implementation of the [Job](/docs/concepts/workloads/controllers/job/) +controller that can reliably track a large amount of Jobs with high levels of +parallelism. [SIG Apps](https://github.com/kubernetes/community/tree/master/sig-apps) +and [WG Batch](https://github.com/kubernetes/community/tree/master/wg-batch) +have worked on this foundational improvement since Kubernetes 1.22. After +multiple iterations and scale verifications, this is now the default +implementation of the Job controller. + +Paired with the Indexed [completion mode](/docs/concepts/workloads/controllers/job/#completion-mode), +the Job controller can handle massively parallel batch Jobs, supporting up to +100k concurrent Pods. + +The new implementation also made possible the development of [Pod failure policy](/docs/concepts/workloads/controllers/job/#pod-failure-policy), +which is in beta in the 1.26 release. + +## How do I use this feature? + +To use Job tracking with finalizers, upgrade to Kubernetes 1.25 or newer and +create new Jobs. You can also use this feature in v1.23 and v1.24, if you have the +ability to enable the `JobTrackingWithFinalizers` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/). + +If your cluster runs Kubernetes 1.26, Job tracking with finalizers is a stable +feature. 
For v1.25, it's behind that feature gate, and your cluster administrators may have +explicitly disabled it - for example, if you have a policy of not using +beta features. + +Jobs created before the upgrade will still be tracked using the legacy behavior. +This is to avoid retroactively adding finalizers to running Pods, which might +introduce race conditions. + +For maximum performance on large Jobs, the Kubernetes project recommends +using the [Indexed completion mode](/docs/concepts/workloads/controllers/job/#completion-mode). +In this mode, the control plane is able to track Job progress with less API +calls. + +If you are a developer of operator(s) for batch, [HPC](https://en.wikipedia.org/wiki/High-performance_computing), +[AI](https://en.wikipedia.org/wiki/Artificial_intelligence), [ML](https://en.wikipedia.org/wiki/Machine_learning) +or related workloads, we encourage you to use the Job API to delegate accurate +progress tracking to Kubernetes. If there is something missing in the Job API +that forces you to manage plain Pods, the [Working Group Batch](https://github.com/kubernetes/community/tree/master/wg-batch) +welcomes your feedback and contributions. + +### Deprecation notices + +During the development of the feature, the control plane added the annotation +[`batch.kubernetes.io/job-tracking`](/docs/reference/labels-annotations-taints/#batch-kubernetes-io-job-tracking) +to the Jobs that were created when the feature was enabled. +This allowed a safe transition for older Jobs, but it was never meant to stay. + +In the 1.26 release, we deprecated the annotation `batch.kubernetes.io/job-tracking` +and the control plane will stop adding it in Kubernetes 1.27. +Along with that change, we will remove the legacy Job tracking implementation. +As a result, the Job controller will track all Jobs using finalizers and it will +ignore Pods that don't have the aforementioned finalizer. + +Before you upgrade your cluster to 1.27, we recommend that you verify that there +are no running Jobs that don't have the annotation, or you wait for those jobs +to complete. +Otherwise, you might observe the control plane recreating some Pods. +We expect that this shouldn't affect any users, as the feature is enabled by +default since Kubernetes 1.25, giving enough buffer for old jobs to complete. + +## What problem does the new implementation solve? + +Generally, Kubernetes workload controllers, such as ReplicaSet or StatefulSet, +rely on the existence of Pods or other objects in the API to determine the +status of the workload and whether replacements are needed. +For example, if a Pod that belonged to a ReplicaSet terminates or ceases to +exist, the ReplicaSet controller needs to create a replacement Pod to satisfy +the desired number of replicas (`.spec.replicas`). + +Since its inception, the Job controller also relied on the existence of Pods in +the API to track Job status. A Job has [completion](/docs/concepts/workloads/controllers/job/#completion-mode) +and [failure handling](/docs/concepts/workloads/controllers/job/#handling-pod-and-container-failures) +policies, requiring the end state of a finished Pod to determine whether to +create a replacement Pod or mark the Job as completed or failed. As a result, +the Job controller depended on Pods, even terminated ones, to remain in the API +in order to keep track of the status. 
This dependency made the tracking of Job status unreliable, because Pods can be +deleted from the API for a number of reasons, including: +- The garbage collector removing orphan Pods when a Node goes down. +- The garbage collector removing terminated Pods when they reach a threshold. +- The Kubernetes scheduler preempting a Pod to accommodate higher-priority Pods. +- The taint manager evicting a Pod that doesn't tolerate a `NoExecute` taint. +- External controllers, not included as part of Kubernetes, or humans deleting + Pods. + +### The new implementation + +When a controller needs to take an action on objects before they are removed, it +should add a [finalizer](/docs/concepts/overview/working-with-objects/finalizers/) +to the objects that it manages. +A finalizer prevents the objects from being deleted from the API until the +finalizers are removed. Once the controller is done with the cleanup and +accounting for the deleted object, it can remove the finalizer from the object and the +control plane removes the object from the API. + +This is what the new Job controller is doing: adding a finalizer during Pod +creation, and removing the finalizer after the Pod has terminated and has been +accounted for in the Job status. However, it wasn't that simple. + +The main challenge is that there are at least two objects involved: the Pod +and the Job. While the finalizer lives in the Pod object, the accounting lives +in the Job object. There is no mechanism to atomically remove the finalizer in +the Pod and update the counters in the Job status. Additionally, there could be +more than one terminated Pod at a given time. + +To solve this problem, we implemented a three-staged approach, each stage translating +to an API call. +1. For each terminated Pod, add the unique ID (UID) of the Pod into short-lived + lists stored in the `.status` of the owning Job + ([.status.uncountedTerminatedPods](/docs/reference/kubernetes-api/workload-resources/job-v1/#JobStatus)). +2. Remove the finalizer from the Pod(s). +3. Atomically do the following operations: + - remove UIDs from the short-lived lists + - increment the overall `succeeded` and `failed` counters in the `status` of + the Job. + +Additional complications come from the fact that the Job controller might +receive the results of the API changes in steps 1 and 2 out of order. We solved +this by adding an in-memory cache for removed finalizers. + +Still, we faced some issues during the beta stage, leaving some pods stuck +with finalizers in some conditions ([#108645](https://github.com/kubernetes/kubernetes/issues/108645), +[#109485](https://github.com/kubernetes/kubernetes/issues/109485), and +[#111646](https://github.com/kubernetes/kubernetes/pull/111646)). As a result, +we decided to switch that feature gate to be disabled by default for the 1.23 +and 1.24 releases. + +Once resolved, we re-enabled the feature for the 1.25 release. Since then, we +have received reports from our customers running tens of thousands of Pods at a +time in their clusters through the Job API. Seeing this success, we decided to +graduate the feature to stable in 1.26, as part of our long-term commitment to +make the Job API the best way to run large batch Jobs in a Kubernetes cluster. + +To learn more about the feature, you can read the [KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/2307-job-tracking-without-lingering-pods).
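
As a companion to the Indexed completion mode recommendation above, here is a minimal Indexed Job manifest (a sketch; the name, image, and command are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-work            # hypothetical name
spec:
  completionMode: Indexed        # the mode recommended above for large, parallel Jobs
  completions: 10
  parallelism: 10
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox:1.36    # placeholder image
          command: ["sh", "-c", "echo processing index $JOB_COMPLETION_INDEX"]
```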
+ +## Acknowledgments + +As with any Kubernetes feature, multiple people contributed to getting this +done, from testing and filing bugs to reviewing code. + +On behalf of SIG Apps, I would like to especially thank Jordan Liggitt (Google) +for helping me debug and brainstorm solutions for more than one race condition +and Maciej Szulik (Red Hat) for his thorough reviews. diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-overview.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-overview.png new file mode 100644 index 0000000000000..c6cdbef25ff99 Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-overview.png differ diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-with-terminating-pod.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-with-terminating-pod.png new file mode 100644 index 0000000000000..b5a516a01d2d0 Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-with-terminating-pod.png differ diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/index.md b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/index.md new file mode 100644 index 0000000000000..91ecd167ccf6f --- /dev/null +++ b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/index.md @@ -0,0 +1,117 @@ +--- +layout: blog +title: "Kubernetes v1.26: Advancements in Kubernetes Traffic Engineering" +date: 2022-12-30 +slug: advancements-in-kubernetes-traffic-engineering +--- + +**Authors:** Andrew Sy Kim (Google) + +Kubernetes v1.26 includes significant advancements in network traffic engineering with the graduation of +two features (Service internal traffic policy support, and EndpointSlice terminating conditions) to GA, +and a third feature (Proxy terminating endpoints) to beta. The combination of these enhancements aims +to address short-comings in traffic engineering that people face today, and unlock new capabilities for the future. + +## Traffic Loss from Load Balancers During Rolling Updates + +Prior to Kubernetes v1.26, clusters could experience [loss of traffic](https://github.com/kubernetes/kubernetes/issues/85643) +from Service load balancers during rolling updates when setting the `externalTrafficPolicy` field to `Local`. +There are a lot of moving parts at play here so a quick overview of how Kubernetes manages load balancers might help! + +In Kubernetes, you can create a Service with `type: LoadBalancer` to expose an application externally with a load balancer. +The load balancer implementation varies between clusters and platforms, but the Service provides a generic abstraction +representing the load balancer that is consistent across all Kubernetes installations. + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app.kubernetes.io/name: my-app + ports: + - protocol: TCP + port: 80 + targetPort: 9376 + type: LoadBalancer +``` + +Under the hood, Kubernetes allocates a NodePort for the Service, which is then used by kube-proxy to provide a +network data path from the NodePort to the Pod. A controller will then add all available Nodes in the cluster +to the load balancer’s backend pool, using the designated NodePort for the Service as the backend target port. 
+ +{{< figure src="traffic-engineering-service-load-balancer.png" caption="Figure 1: Overview of Service load balancers" >}} + +Oftentimes it is beneficial to set `externalTrafficPolicy: Local` for Services, to avoid extra hops between +Nodes that are not running healthy Pods backing that Service. When using `externalTrafficPolicy: Local`, +an additional NodePort is allocated for health checking purposes, such that Nodes that do not contain healthy +Pods are excluded from the backend pool for a load balancer. + +{{< figure src="traffic-engineering-lb-healthy.png" caption="Figure 2: Load balancer traffic to a healthy Node, when externalTrafficPolicy is Local" >}} + +One such scenario where traffic can be lost is when a Node loses all Pods for a Service, +but the external load balancer has not probed the health check NodePort yet. The likelihood of this situation +is largely dependent on the health checking interval configured on the load balancer. The larger the interval, +the more likely this will happen, since the load balancer will continue to send traffic to a node +even after kube-proxy has removed forwarding rules for that Service. This also occurrs when Pods start terminating +during rolling updates. Since Kubernetes does not consider terminating Pods as “Ready”, traffic can be loss +when there are only terminating Pods on any given Node during a rolling update. + +{{< figure src="traffic-engineering-lb-without-proxy-terminating-endpoints.png" caption="Figure 3: Load balancer traffic to terminating endpoints, when externalTrafficPolicy is Local" >}} + +Starting in Kubernetes v1.26, kube-proxy enables the `ProxyTerminatingEndpoints` feature by default, which +adds automatic failover and routing to terminating endpoints in scenarios where the traffic would otherwise +be dropped. More specifically, when there is a rolling update and a Node only contains terminating Pods, +kube-proxy will route traffic to the terminating Pods based on their readiness. In addition, kube-proxy will +actively fail the health check NodePort if there are only terminating Pods available. By doing so, +kube-proxy alerts the external load balancer that new connections should not be sent to that Node but will +gracefully handle requests for existing connections. + +{{< figure src="traffic-engineering-lb-with-proxy-terminating-endpoints.png" caption="Figure 4: Load Balancer traffic to terminating endpoints with ProxyTerminatingEndpoints enabled, when externalTrafficPolicy is Local" >}} + +### EndpointSlice Conditions + +In order to support this new capability in kube-proxy, the EndpointSlice API introduced new conditions for endpoints: +`serving` and `terminating`. + +{{< figure src="endpointslice-overview.png" caption="Figure 5: Overview of EndpointSlice conditions" >}} + +The `serving` condition is semantically identical to `ready`, except that it can be `true` or `false` +while a Pod is terminating, unlike `ready` which will always be `false` for terminating Pods for compatibility reasons. +The `terminating` condition is true for Pods undergoing termination (non-empty deletionTimestamp), false otherwise. + +The addition of these two conditions enables consumers of this API to understand Pod states that were previously not possible. +For example, we can now track "ready" and "not ready" Pods that are also terminating. 
+ +{{< figure src="endpointslice-with-terminating-pod.png" caption="Figure 6: EndpointSlice conditions with a terminating Pod" >}} + +Consumers of the EndpointSlice API, such as Kube-proxy and Ingress Controllers, can now use these conditions to coordinate connection draining +events, by continuing to forward traffic for existing connections but rerouting new connections to other non-terminating endpoints. + +## Optimizing Internal Node-Local Traffic + +Similar to how Services can set `externalTrafficPolicy: Local` to avoid extra hops for externally sourced traffic, Kubernetes +now supports `internalTrafficPolicy: Local`, to enable the same optimization for traffic originating within the cluster, specifically +for traffic using the Service Cluster IP as the destination address. This feature graduated to Beta in Kubernetes v1.24 and is graduating to GA in v1.26. + +Services default the `internalTrafficPolicy` field to `Cluster`, where traffic is randomly distributed to all endpoints. + +{{< figure src="service-internal-traffic-policy-cluster.png" caption="Figure 7: Service routing when internalTrafficPolicy is Cluster" >}} + +When `internalTrafficPolicy` is set to `Local`, kube-proxy will forward internal traffic for a Service only if there is an available endpoint +that is local to the same Node. + +{{< figure src="service-internal-traffic-policy-local.png" caption="Figure 8: Service routing when internalTrafficPolicy is Local" >}} + +{{< caution >}} +When using `internalTrafficPoliy: Local`, traffic will be dropped by kube-proxy when no local endpoints are available. +{{< /caution >}} + +## Getting Involved + +If you're interested in future discussions on Kubernetes traffic engineering, you can get involved in SIG Network through the following ways: +* Slack: [#sig-network](https://kubernetes.slack.com/messages/sig-network) +* [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-network) +* [Open Community Issues/PRs](https://github.com/kubernetes/community/labels/sig%2Fnetwork) +* [Biweekly meetings](https://github.com/kubernetes/community/tree/master/sig-network#meetings) diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-cluster.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-cluster.png new file mode 100644 index 0000000000000..e0f477aa2e39e Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-cluster.png differ diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-local.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-local.png new file mode 100644 index 0000000000000..407a0db0ed8f8 Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-local.png differ diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-healthy.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-healthy.png new file mode 100644 index 0000000000000..74ac7f4f5c931 Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-healthy.png differ diff --git 
a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-with-proxy-terminating-endpoints.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-with-proxy-terminating-endpoints.png new file mode 100644 index 0000000000000..0faa5d960a526 Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-with-proxy-terminating-endpoints.png differ diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-without-proxy-terminating-endpoints.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-without-proxy-terminating-endpoints.png new file mode 100644 index 0000000000000..43db9c9efb9a6 Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-without-proxy-terminating-endpoints.png differ diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-service-load-balancer.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-service-load-balancer.png new file mode 100644 index 0000000000000..a4e58c6207cb3 Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-service-load-balancer.png differ diff --git a/content/en/blog/_posts/2023-01-02-cross-namespace-data-sources-alpha.md b/content/en/blog/_posts/2023-01-02-cross-namespace-data-sources-alpha.md new file mode 100644 index 0000000000000..2f7cd683e029a --- /dev/null +++ b/content/en/blog/_posts/2023-01-02-cross-namespace-data-sources-alpha.md @@ -0,0 +1,159 @@ +--- +layout: blog +title: "Kubernetes v1.26: Alpha support for cross-namespace storage data sources" +date: 2023-01-02 +slug: cross-namespace-data-sources-alpha +--- + +**Author:** Takafumi Takahashi (Hitachi Vantara) + +Kubernetes v1.26, released last month, introduced an alpha feature that +lets you specify a data source for a PersistentVolumeClaim, even where the source +data belong to a different namespace. +With the new feature enabled, you specify a namespace in the `dataSourceRef` field of +a new PersistentVolumeClaim. Once Kubernetes checks that access is OK, the new +PersistentVolume can populate its data from the storage source specified in that other +namespace. +Before Kubernetes v1.26, provided your cluster had the `AnyVolumeDataSource` feature enabled, +you could already provision new volumes from a data source in the **same** +namespace. +However, that only worked for the data source in the same namespace, +therefore users couldn't provision a PersistentVolume with a claim +in one namespace from a data source in other namespace. +To solve this problem, Kubernetes v1.26 added a new alpha `namespace` field +to `dataSourceRef` field in PersistentVolumeClaim the API. + +## How it works + +Once the csi-provisioner finds that a data source is specified with a `dataSourceRef` that +has a non-empty namespace name, +it checks all reference grants within the namespace that's specified by the`.spec.dataSourceRef.namespace` +field of the PersistentVolumeClaim, in order to see if access to the data source is allowed. +If any ReferenceGrant allows access, the csi-provisioner provisions a volume from the data source. 
## Trying it out + +The following things are required to use cross-namespace volume provisioning: + +* Enable the `AnyVolumeDataSource` and `CrossNamespaceVolumeDataSource` [feature gates](/docs/reference/command-line-tools-reference/feature-gates/) for the kube-apiserver and kube-controller-manager +* Install a CRD for the specific `VolumeSnapshot` controller +* Install the CSI Provisioner controller and enable the `CrossNamespaceVolumeDataSource` feature gate +* Install the CSI driver +* Install a CRD for ReferenceGrants + +## Putting it all together + +To see how this works, you can install the sample and try it out. +This sample creates a PVC in the dev namespace from a VolumeSnapshot in the prod namespace. +That is a simple example. For real-world use, you might want to use a more complex approach. + +### Assumptions for this example {#example-assumptions} + +* Your Kubernetes cluster was deployed with `AnyVolumeDataSource` and `CrossNamespaceVolumeDataSource` feature gates enabled +* There are two namespaces, dev and prod +* A CSI driver is deployed +* There is an existing VolumeSnapshot named `new-snapshot-demo` in the _prod_ namespace +* The ReferenceGrant CRD (from the Gateway API project) is already deployed + +### Grant ReferenceGrants read permission to the CSI Provisioner + +Access to ReferenceGrants is only needed when the CSI driver +has the `CrossNamespaceVolumeDataSource` controller capability. +For this example, the external-provisioner needs **get**, **list**, and **watch** +permissions for `referencegrants` (API group `gateway.networking.k8s.io`). + +```yaml + - apiGroups: ["gateway.networking.k8s.io"] + resources: ["referencegrants"] + verbs: ["get", "list", "watch"] +``` + +### Enable the CrossNamespaceVolumeDataSource feature gate for the CSI Provisioner + +Add `--feature-gates=CrossNamespaceVolumeDataSource=true` to the csi-provisioner command line. +For example, use this manifest snippet to redefine the container: + +```yaml + - args: + - -v=5 + - --csi-address=/csi/csi.sock + - --feature-gates=Topology=true + - --feature-gates=CrossNamespaceVolumeDataSource=true + image: csi-provisioner:latest + imagePullPolicy: IfNotPresent + name: csi-provisioner +``` + +### Create a ReferenceGrant + +Here's a manifest for an example ReferenceGrant. + +```yaml +apiVersion: gateway.networking.k8s.io/v1beta1 +kind: ReferenceGrant +metadata: + name: allow-prod-pvc + namespace: prod +spec: + from: + - group: "" + kind: PersistentVolumeClaim + namespace: dev + to: + - group: snapshot.storage.k8s.io + kind: VolumeSnapshot + name: new-snapshot-demo +``` + +### Create a PersistentVolumeClaim by using a cross-namespace data source + +Kubernetes creates a PersistentVolumeClaim on dev and the CSI driver populates +the PersistentVolume used on dev from snapshots on prod. + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: example-pvc + namespace: dev +spec: + storageClassName: example + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + dataSourceRef: + apiGroup: snapshot.storage.k8s.io + kind: VolumeSnapshot + name: new-snapshot-demo + namespace: prod + volumeMode: Filesystem +``` + +## How can I learn more? + +The enhancement proposal, +[Provision volumes from cross-namespace snapshots](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/3294-provision-volumes-from-cross-namespace-snapshots), includes lots of detail about the history and technical implementation of this feature.
+
+Please get involved by joining the [Kubernetes Storage Special Interest Group (SIG)](https://github.com/kubernetes/community/tree/master/sig-storage)
+to help us enhance this feature.
+There are a lot of good ideas already and we'd be thrilled to have more!
+
+## Acknowledgments
+
+It takes a wonderful group to make wonderful software.
+Special thanks to the following people for their insightful reviews,
+thorough consideration and valuable contributions to the CrossNamespaceVolumeDataSource feature:
+
+* Michelle Au (msau42)
+* Xing Yang (xing-yang)
+* Masaki Kimura (mkimuram)
+* Tim Hockin (thockin)
+* Ben Swartzlander (bswartz)
+* Rob Scott (robscott)
+* John Griffith (j-griffith)
+* Michael Henriksen (mhenriks)
+* Mustafa Elbehery (Elbehery)
+
+It’s been a joy to work with y'all on this.
diff --git a/content/en/blog/_posts/2023-01-05-retroactive-default-storage-class.md b/content/en/blog/_posts/2023-01-05-retroactive-default-storage-class.md new file mode 100644 index 0000000000000..1a6f7374b9a08 --- /dev/null +++ b/content/en/blog/_posts/2023-01-05-retroactive-default-storage-class.md @@ -0,0 +1,170 @@
+---
+layout: blog
+title: "Kubernetes 1.26: Retroactive Default StorageClass"
+date: 2023-01-05
+slug: retroactive-default-storage-class
+---
+
+**Author:** Roman Bednář (Red Hat)
+
+The v1.25 release of Kubernetes introduced an alpha feature to change how a default StorageClass was assigned to a PersistentVolumeClaim (PVC).
+With the feature enabled, you no longer need to create a default StorageClass first and PVC second to assign the class. Additionally, any PVCs without a StorageClass assigned can be updated later.
+This feature graduated to beta in Kubernetes 1.26.
+
+You can read [retroactive default StorageClass assignment](/docs/concepts/storage/persistent-volumes/#retroactive-default-storageclass-assignment) in the Kubernetes documentation for more details about how to use it,
+or you can read on to learn about why the Kubernetes project is making this change.
+
+## Why did StorageClass assignment need improvements
+
+Users might already be familiar with a similar feature that assigns default StorageClasses to **new** PVCs at the time of creation. This is currently handled by the [admission controller](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass).
+
+But what if there wasn't a default StorageClass defined at the time of PVC creation?
+Users would end up with a PVC that would never be assigned a class.
+As a result, no storage would be provisioned, and the PVC would be somewhat "stuck" at this point.
+Generally, two main scenarios could result in "stuck" PVCs and cause problems later down the road.
+Let's take a closer look at each of them.
+
+### Changing default StorageClass
+
+With the alpha feature enabled, admins had two options when they wanted to change the default StorageClass:
+
+1. Creating a new StorageClass as default before removing the old one associated with the PVC.
+This would result in having two defaults for a short period.
+At this point, if a user were to create a PersistentVolumeClaim with storageClassName set to null (implying default StorageClass), the newest default StorageClass would be chosen and assigned to this PVC.
+
+2. Removing the old default first and creating a new default StorageClass.
+This would result in having no default for a short time.
+Subsequently, if a user were to create a PersistentVolumeClaim with storageClassName set to null (implying default StorageClass), the PVC would be in Pending state forever. +The user would have to fix this by deleting the PVC and recreating it once the default StorageClass was available. + + +### Resource ordering during cluster installation + +If a cluster installation tool needed to create resources that required storage, for example, an image registry, it was difficult to get the ordering right. +This is because any Pods that required storage would rely on the presence of a default StorageClass and would fail to be created if it wasn't defined. + +## What changed + +We've changed the PersistentVolume (PV) controller to assign a default StorageClass to any unbound PersistentVolumeClaim that has the storageClassName set to null. +We've also modified the PersistentVolumeClaim admission within the API server to allow the change of values from an unset value to an actual StorageClass name. + +### Null `storageClassName` versus `storageClassName: ""` - does it matter? { #null-vs-empty-string } + +Before this feature was introduced, those values were equal in terms of behavior. Any PersistentVolumeClaim with the storageClassName set to null or "" would bind to an existing PersistentVolume resource with storageClassName also set to null or "". + +With this new feature enabled we wanted to maintain this behavior but also be able to update the StorageClass name. +With these constraints in mind, the feature changes the semantics of null. If a default StorageClass is present, null would translate to "Give me a default" and "" would mean "Give me PersistentVolume that also has "" StorageClass name." In the absence of a StorageClass, the behavior would remain unchanged. + +Summarizing the above, we've changed the semantics of null so that its behavior depends on the presence or absence of a definition of default StorageClass. + +The tables below show all these cases to better describe when PVC binds and when its StorageClass gets updated. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+**PVC binding behavior with Retroactive default StorageClass**
+
+|                           |                               | PVC `storageClassName` = "" | PVC `storageClassName` = null |
+|---------------------------|-------------------------------|-----------------------------|-------------------------------|
+| **Without default class** | PV `storageClassName` = ""    | binds                       | binds                         |
+|                           | PV without `storageClassName` | binds                       | binds                         |
+| **With default class**    | PV `storageClassName` = ""    | binds                       | class updates                 |
+|                           | PV without `storageClassName` | binds                       | class updates                 |
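+
+To make the distinction in the table concrete, here is a minimal sketch (the claim names are
+placeholders) of the two spellings: the first claim opts out of any default class and only ever
+binds to PersistentVolumes whose StorageClass name is also empty, while the second leaves the
+field unset and is therefore eligible for retroactive assignment of whichever StorageClass is
+marked as the default:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: pvc-no-class         # storageClassName: "" — never updated retroactively
+spec:
+  storageClassName: ""
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1Gi
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: pvc-default-class    # storageClassName unset (null) — gets the default, even retroactively
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1Gi
+```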
+
+## How to use it
+
+If you want to test the feature whilst it's alpha, you need to enable the relevant feature gate in the kube-controller-manager and the kube-apiserver. Use the `--feature-gates` command line argument:
+
+```
+--feature-gates="...,RetroactiveDefaultStorageClass=true"
+```
+
+### Test drive
+
+If you would like to see the feature in action and verify it works fine in your cluster, here's what you can try:
+
+1. Define a basic PersistentVolumeClaim:
+
+   ```yaml
+   apiVersion: v1
+   kind: PersistentVolumeClaim
+   metadata:
+     name: pvc-1
+   spec:
+     accessModes:
+     - ReadWriteOnce
+     resources:
+       requests:
+         storage: 1Gi
+   ```
+
+2. Create the PersistentVolumeClaim when there is no default StorageClass. The PVC won't provision or bind (unless there is an existing, suitable PV already present) and will remain in Pending state.
+
+   ```
+   $ kubectl get pvc
+   NAME    STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+   pvc-1   Pending
+   ```
+
+3. Configure one StorageClass as default.
+
+   ```
+   $ kubectl patch sc my-storageclass -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
+   storageclass.storage.k8s.io/my-storageclass patched
+   ```
+
+4. Verify that the PersistentVolumeClaim is now provisioned correctly and was updated retroactively with the new default StorageClass.
+
+   ```
+   $ kubectl get pvc
+   NAME    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
+   pvc-1   Bound    pvc-06a964ca-f997-4780-8627-b5c3bf5a87d8   1Gi        RWO            my-storageclass   87m
+   ```
+
+### New metrics
+
+To help you see that the feature is working as expected, we also introduced a new `retroactive_storageclass_total` metric to show how many times the PV controller attempted to update a PersistentVolumeClaim, and `retroactive_storageclass_errors_total` to show how many of those attempts failed.
+
+## Getting involved
+
+We always welcome new contributors, so if you would like to get involved you can join our [Kubernetes Storage Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG).
+
+If you would like to share feedback, you can do so on our [public Slack channel](https://app.slack.com/client/T09NY5SBT/C09QZFCE5).
+
+Special thanks to all the contributors that provided great reviews, shared valuable insight and helped implement this feature (alphabetical order):
+
+- Deep Debroy ([ddebroy](https://github.com/ddebroy))
+- Divya Mohan ([divya-mohan0209](https://github.com/divya-mohan0209))
+- Jan Šafránek ([jsafrane](https://github.com/jsafrane/))
+- Joe Betz ([jpbetz](https://github.com/jpbetz))
+- Jordan Liggitt ([liggitt](https://github.com/liggitt))
+- Michelle Au ([msau42](https://github.com/msau42))
+- Seokho Son ([seokho-son](https://github.com/seokho-son))
+- Shannon Kularathna ([shannonxtreme](https://github.com/shannonxtreme))
+- Tim Bannister ([sftim](https://github.com/sftim))
+- Tim Hockin ([thockin](https://github.com/thockin))
+- Wojciech Tyczynski ([wojtek-t](https://github.com/wojtek-t))
+- Xing Yang ([xing-yang](https://github.com/xing-yang))
diff --git a/content/en/blog/_posts/2023-01-06-unhealthy-pod-eviction-policy-for-pdb.md b/content/en/blog/_posts/2023-01-06-unhealthy-pod-eviction-policy-for-pdb.md new file mode 100644 index 0000000000000..09c03926b7031 --- /dev/null +++ b/content/en/blog/_posts/2023-01-06-unhealthy-pod-eviction-policy-for-pdb.md @@ -0,0 +1,107 @@
+---
+layout: blog
+title: "Kubernetes 1.26: Eviction policy for unhealthy pods guarded by PodDisruptionBudgets"
+date: 2023-01-06
+slug: "unhealthy-pod-eviction-policy-for-pdbs"
+---
+
+**Authors:** Filip Křepinský (Red Hat), Morten Torkildsen (Google), Ravi Gudimetla (Apple)
+
+
+Ensuring that disruptions to your application do not affect its availability isn't a simple
+task. Last month's release of Kubernetes v1.26 lets you specify an _unhealthy pod eviction policy_
+for [PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets) (PDBs)
+to help you maintain that availability during node management operations.
+In this article, we will dive deeper into what modifications were introduced for PDBs to
+give application owners greater flexibility in managing disruptions.
+
+## What problems does this solve?
+
+API-initiated eviction of pods respects PodDisruptionBudgets (PDBs). This means that a requested [voluntary disruption](https://kubernetes.io/docs/concepts/scheduling-eviction/#pod-disruption)
+via the eviction of a Pod should not disrupt a guarded application, and `.status.currentHealthy` of a PDB should not fall
+below `.status.desiredHealthy`. Running pods that are [Unhealthy](/docs/tasks/run-application/configure-pdb/#healthiness-of-a-pod)
+do not count towards the PDB status, but evicting them is only possible if the application
+is not disrupted. This helps disrupted or not-yet-started applications to become available
+as soon as possible without additional downtime that would be caused by evictions.
+
+Unfortunately, this poses a problem for cluster administrators who would like to drain nodes
+without any manual intervention. Misbehaving applications with pods in `CrashLoopBackOff`
+state (due to a bug or misconfiguration) or pods that are simply failing to become ready
+make this task much harder. When all pods of an application are unhealthy, any eviction
+request will fail because it would violate the PDB, and draining of the node cannot make
+any progress in that case.
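+
+As an illustration (the names and counts here are hypothetical, not taken from a real cluster),
+a PDB guarding an application whose pods are all crash-looping might report a status like the
+following; with `disruptionsAllowed: 0`, every eviction request against those pods is rejected
+and a drain of the node stalls:
+
+```yaml
+apiVersion: policy/v1
+kind: PodDisruptionBudget
+metadata:
+  name: example-pdb        # hypothetical PDB guarding a crash-looping app
+spec:
+  minAvailable: 1
+  selector:
+    matchLabels:
+      app: example
+status:
+  expectedPods: 3          # three pods are selected by the PDB
+  currentHealthy: 0        # none of them are healthy (all crash-looping)
+  desiredHealthy: 1        # minAvailable requires at least one healthy pod
+  disruptionsAllowed: 0    # so no voluntary disruptions (evictions) are allowed
+```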
+
+On the other hand, there are users who depend on the existing behavior, in order to:
+- prevent data loss that would be caused by deleting pods that are guarding an underlying resource or storage
+- achieve the best availability possible for their application
+
+Kubernetes 1.26 introduced a new experimental field to the PodDisruptionBudget API: `.spec.unhealthyPodEvictionPolicy`.
+When the feature is enabled, this field lets you support both of those requirements.
+
+## How does it work?
+
+API-initiated eviction is the process that triggers graceful pod termination.
+The process can be initiated either by calling the API directly,
+by using the `kubectl drain` command, or by other actors in the cluster.
+During this process, every pod removal is checked against the appropriate PDBs,
+to ensure that a sufficient number of pods is always running in the cluster.
+
+The following policies allow PDB authors to have greater control over how the process deals with unhealthy pods.
+
+There are two policies, `IfHealthyBudget` and `AlwaysAllow`, to choose from.
+
+The former, `IfHealthyBudget`, follows the existing behavior to achieve the best availability
+that you get by default. Unhealthy pods can be disrupted only if their application
+has at least the minimum required number (`.status.desiredHealthy`) of healthy pods available.
+
+By setting the `spec.unhealthyPodEvictionPolicy` field of your PDB to `AlwaysAllow`,
+you are choosing best-effort availability for your application.
+With this policy, it is always possible to evict unhealthy pods.
+This will make it easier to maintain and upgrade your clusters.
+
+We think that `AlwaysAllow` will often be a better choice, but for some critical workloads you may
+still prefer to protect even unhealthy Pods from node drains or other forms of API-initiated
+eviction.
+
+## How do I use it?
+
+This is an alpha feature, which means you have to enable the `PDBUnhealthyPodEvictionPolicy`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+by passing the command line argument `--feature-gates=PDBUnhealthyPodEvictionPolicy=true`
+to the kube-apiserver.
+
+Here's an example. Assume that you've enabled the feature gate in your cluster, and that you
+already defined a Deployment that runs a plain webserver. You labelled the Pods for that
+Deployment with `app: nginx`.
+You want to limit avoidable disruption, and you know that best-effort availability is
+sufficient for this app.
+You decide to allow evictions even if those webserver pods are unhealthy.
+You create a PDB to guard this application, with the `AlwaysAllow` policy for evicting
+unhealthy pods:
+
+```yaml
+apiVersion: policy/v1
+kind: PodDisruptionBudget
+metadata:
+  name: nginx-pdb
+spec:
+  selector:
+    matchLabels:
+      app: nginx
+  maxUnavailable: 1
+  unhealthyPodEvictionPolicy: AlwaysAllow
+```
+
+
+## How can I learn more?
+
+
+- Read the KEP: [Unhealthy Pod Eviction Policy for PDBs](https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/3017-pod-healthy-policy-for-pdb)
+- Read the documentation: [Unhealthy Pod Eviction Policy](/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy) for PodDisruptionBudgets
+- Review the Kubernetes documentation for [PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets), [draining of Nodes](/docs/tasks/administer-cluster/safely-drain-node/) and [evictions](/docs/concepts/scheduling-eviction/api-eviction/)
+
+
+## How do I get involved?
+ +If you have any feedback, please reach out to us in the [#sig-apps](https://kubernetes.slack.com/archives/C18NZM5K9) channel on Slack (visit https://slack.k8s.io/ for an invitation if you need one), or on the SIG Apps mailing list: kubernetes-sig-apps@googlegroups.com + diff --git a/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/decision-tree.svg b/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/decision-tree.svg new file mode 100644 index 0000000000000..c9e57f34b6c5f --- /dev/null +++ b/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/decision-tree.svg @@ -0,0 +1,3 @@ + + +PriorityClasses and their values
[decision-tree.svg — diagram text only; the rendered flow chart cannot be reproduced here. Labels: PriorityClasses dev-pc (value: 1000000), preprod-pc (value: 2000000) and prod-pc (value: 4000000); based on the PriorityClass values the scheduler places dev-nginx, preprod-nginx and prod-nginx in the scheduling queue, dev-nginx being the lowest and prod-nginx the highest. If the worker nodes have enough resources, scheduling of pods happens normally; otherwise the preemption logic kicks in and the lowest-priority pod is gracefully terminated to make room for high-priority pods (in this example dev-nginx is evicted and prod-nginx is then successfully scheduled to node01); if resource constraints remain, the process continues.]
\ No newline at end of file diff --git a/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/index.md b/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/index.md new file mode 100644 index 0000000000000..7e1ec725c607f --- /dev/null +++ b/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/index.md @@ -0,0 +1,329 @@ +--- +layout: blog +title: "Protect Your Mission-Critical Pods From Eviction With PriorityClass" +date: 2023-01-12 +slug: protect-mission-critical-pods-priorityclass +description: "Pod priority and preemption help to make sure that mission-critical pods are up in the event of a resource crunch by deciding order of scheduling and eviction." +--- + + +**Author:** Sunny Bhambhani (InfraCloud Technologies) + +Kubernetes has been widely adopted, and many organizations use it as their de-facto orchestration engine for running workloads that need to be created and deleted frequently. + +Therefore, proper scheduling of the pods is key to ensuring that application pods are up and running within the Kubernetes cluster without any issues. This article delves into the use cases around resource management by leveraging the [PriorityClass](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) object to protect mission-critical or high-priority pods from getting evicted and making sure that the application pods are up, running, and serving traffic. + +## Resource management in Kubernetes + +The control plane consists of multiple components, out of which the scheduler (usually the built-in [kube-scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/)) is one of the components which is responsible for assigning a node to a pod. + +Whenever a pod is created, it enters a "pending" state, after which the scheduler determines which node is best suited for the placement of the new pod. + +In the background, the scheduler runs as an infinite loop looking for pods without a `nodeName` set that are [ready for scheduling](/docs/concepts/scheduling-eviction/pod-scheduling-readiness/). For each Pod that needs scheduling, the scheduler tries to decide which node should run that Pod. + +If the scheduler cannot find any node, the pod remains in the pending state, which is not ideal. + +{{< note >}} +To name a few, `nodeSelector` , `taints and tolerations` , `nodeAffinity` , the rank of nodes based on available resources (for example, CPU and memory), and several other criteria are used to determine the pod's placement. +{{< /note >}} + +The below diagram, from point number 1 through 4, explains the request flow: + +{{< figure src=kube-scheduler.svg alt="A diagram showing the scheduling of three Pods that a client has directly created." title="Scheduling in Kubernetes">}} + +## Typical use cases + +Below are some real-life scenarios where control over the scheduling and eviction of pods may be required. + +1. Let's say the pod you plan to deploy is critical, and you have some resource constraints. An example would be the DaemonSet of an infrastructure component like Grafana Loki. The Loki pods must run before other pods can on every node. In such cases, you could ensure resource availability by manually identifying and deleting the pods that are not required or by adding a new node to the cluster. Both these approaches are unsuitable since the former would be tedious to execute, and the latter could involve an expenditure of time and money. + + +2. 
Another use case could be a single cluster that holds the pods for the below environments with associated priorities: + - Production (`prod`): top priority + - Preproduction (`preprod`): intermediate priority + - Development (`dev`): least priority + + In the event of high resource consumption in the cluster, there is competition for CPU and memory resources on the nodes. While cluster-level autoscaling _may_ add more nodes, it takes time. In the interim, if there are no further nodes to scale the cluster, some Pods could remain in a Pending state, or the service could be degraded as they compete for resources. If the kubelet does evict a Pod from the node, that eviction would be random because the kubelet doesn’t have any special information about which Pods to evict and which to keep. + +3. A third example could be a microservice backed by a queuing application or a database running into a resource crunch and the queue or database getting evicted. In such a case, all the other services would be rendered useless until the database can serve traffic again. + +There can also be other scenarios where you want to control the order of scheduling or order of eviction of pods. + +## PriorityClasses in Kubernetes + +PriorityClass is a cluster-wide API object in Kubernetes and part of the `scheduling.k8s.io/v1` API group. It contains a mapping of the PriorityClass name (defined in `.metadata.name`) and an integer value (defined in `.value`). This represents the value that the scheduler uses to determine Pod's relative priority. + +Additionally, when you create a cluster using kubeadm or a managed Kubernetes service (for example, Azure Kubernetes Service), Kubernetes uses PriorityClasses to safeguard the pods that are hosted on the control plane nodes. This ensures that critical cluster components such as CoreDNS and kube-proxy can run even if resources are constrained. + +This availability of pods is achieved through the use of a special PriorityClass that ensures the pods are up and running and that the overall cluster is not affected. + +```console +$ kubectl get priorityclass +NAME VALUE GLOBAL-DEFAULT AGE +system-cluster-critical 2000000000 false 82m +system-node-critical 2000001000 false 82m +``` + +The diagram below shows exactly how it works with the help of an example, which will be detailed in the upcoming section. + +{{< figure src="decision-tree.svg" alt="A flow chart that illustrates how the kube-scheduler prioritizes new Pods and potentially preempts existing Pods" title="Pod scheduling and preemption">}} + +### Pod priority and preemption + +[Pod preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption) is a Kubernetes feature that allows the cluster to preempt pods (removing an existing Pod in favor of a new Pod) on the basis of priority. [Pod priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority) indicates the importance of a pod relative to other pods while scheduling. If there aren't enough resources to run all the current pods, the scheduler tries to evict lower-priority pods over high-priority ones. + +Also, when a healthy cluster experiences a node failure, typically, lower-priority pods get preempted to create room for higher-priority pods on the available node. This happens even if the cluster can bring up a new node automatically since pod creation is usually much faster than bringing up a new node. + +### PriorityClass requirements + +Before you set up PriorityClasses, there are a few things to consider. + +1. 
Decide which PriorityClasses are needed. For instance, based on environment, type of pods, type of applications, etc. +2. The default PriorityClass resource for your cluster. The pods without a `priorityClassName` will be treated as priority 0. +3. Use a consistent naming convention for all PriorityClasses. +4. Make sure that the pods for your workloads are running with the right PriorityClass. + +## PriorityClass hands-on example + +Let’s say there are 3 application pods: one for prod, one for preprod, and one for development. Below are three sample YAML manifest files for each of those. + +```yaml +--- +# development +apiVersion: v1 +kind: Pod +metadata: + name: dev-nginx + labels: + env: dev +spec: + containers: + - name: dev-nginx + image: nginx + resources: + requests: + memory: "256Mi" + cpu: "0.2" + limits: + memory: ".5Gi" + cpu: "0.5" +``` + +```yaml +--- +# preproduction +apiVersion: v1 +kind: Pod +metadata: + name: preprod-nginx + labels: + env: preprod +spec: + containers: + - name: preprod-nginx + image: nginx + resources: + requests: + memory: "1.5Gi" + cpu: "1.5" + limits: + memory: "2Gi" + cpu: "2" +``` + +```yaml +--- +# production +apiVersion: v1 +kind: Pod +metadata: + name: prod-nginx + labels: + env: prod +spec: + containers: + - name: prod-nginx + image: nginx + resources: + requests: + memory: "2Gi" + cpu: "2" + limits: + memory: "2Gi" + cpu: "2" +``` + +You can create these pods with the `kubectl create -f ` command, and then check their status +using the `kubectl get pods` command. You can see if they are up and look ready to serve traffic: + +```console +$ kubectl get pods --show-labels +NAME READY STATUS RESTARTS AGE LABELS +dev-nginx 1/1 Running 0 55s env=dev +preprod-nginx 1/1 Running 0 55s env=preprod +prod-nginx 0/1 Pending 0 55s env=prod +``` + +Bad news. The pod for the Production environment is still Pending and isn't serving any traffic. + +Let's see why this is happening: +```console +$ kubectl get events +... +... +5s Warning FailedScheduling pod/prod-nginx 0/2 nodes are available: 1 Insufficient cpu, 2 Insufficient memory. +``` + +In this example, there is only one worker node, and that node has a resource crunch. + +Now, let's look at how PriorityClass can help in this situation since prod should be given higher priority than the other environments. + +## PriorityClass API + +Before creating PriorityClasses based on these requirements, let's see what a basic manifest for a PriorityClass looks like and outline some prerequisites: + +```yaml +apiVersion: scheduling.k8s.io/v1 +kind: PriorityClass +metadata: + name: PRIORITYCLASS_NAME +value: 0 # any integer value between -1000000000 to 1000000000 +description: >- + (Optional) description goes here! +globalDefault: false # or true. Only one PriorityClass can be the global default. +``` + +Below are some prerequisites for PriorityClasses: + +- The name of a PriorityClass must be a valid DNS subdomain name. +- When you make your own PriorityClass, the name should not start with `system-`, as those names are + reserved by Kubernetes itself (for example, they are used for two built-in PriorityClasses). +- Its absolute value should be between -1000000000 to 1000000000 (1 billion). +- Larger numbers are reserved by PriorityClasses such as `system-cluster-critical` + (this Pod is critically important to the cluster) and `system-node-critical` (the node + critically relies on this Pod). 
+ `system-node-critical` is a higher priority than `system-cluster-critical`, because a + cluster-critical Pod can only work well if the node where it is running has all its node-level + critical requirements met. +- There are two optional fields: + - `globalDefault`: When true, this PriorityClass is used for pods where a `priorityClassName` is not specified. + Only one PriorityClass with `globalDefault` set to true can exist in a cluster. + If there is no PriorityClass defined with globalDefault set to true, all the pods with no priorityClassName defined will be treated with 0 priority (i.e. the least priority). + - `description`: A string with a meaningful value so that people know when to use this PriorityClass. + +{{< note >}} +Adding a PriorityClass with `globalDefault` set to `true` does not mean it will apply the same to the existing pods that are already running. This will be applicable only to the pods that came into existence after the PriorityClass was created. +{{< /note >}} + +### PriorityClass in action + +Here's an example. Next, create some environment-specific PriorityClasses: + +```yaml +apiVersion: scheduling.k8s.io/v1 +kind: PriorityClass +metadata: + name: dev-pc +value: 1000000 +globalDefault: false +description: >- + (Optional) This priority class should only be used for all development pods. +``` + +```yaml +apiVersion: scheduling.k8s.io/v1 +kind: PriorityClass +metadata: + name: preprod-pc +value: 2000000 +globalDefault: false +description: >- + (Optional) This priority class should only be used for all preprod pods. +``` + +```yaml +apiVersion: scheduling.k8s.io/v1 +kind: PriorityClass +metadata: + name: prod-pc +value: 4000000 +globalDefault: false +description: >- + (Optional) This priority class should only be used for all prod pods. +``` + +Use `kubectl create -f ` command to create a pc and `kubectl get pc` to check its status. + +```console +$ kubectl get pc +NAME VALUE GLOBAL-DEFAULT AGE +dev-pc 1000000 false 3m13s +preprod-pc 2000000 false 2m3s +prod-pc 4000000 false 7s +system-cluster-critical 2000000000 false 82m +system-node-critical 2000001000 false 82m +``` + +The new PriorityClasses are in place now. A small change is needed in the pod manifest or pod template (in a ReplicaSet or Deployment). In other words, you need to specify the priority class name at `.spec.priorityClassName` (which is a string value). + +First update the previous production pod manifest file to have a PriorityClass assigned, then delete the Production pod and recreate it. You can't edit the priority class for a Pod that already exists. + +In my cluster, when I tried this, here's what happened. +First, that change seems successful; the status of pods has been updated: + +```console +$ kubectl get pods --show-labels +NAME READY STATUS RESTARTS AGE LABELS +dev-nginx 1/1 Terminating 0 55s env=dev +preprod-nginx 1/1 Running 0 55s env=preprod +prod-nginx 0/1 Pending 0 55s env=prod +``` + +The dev-nginx pod is getting terminated. Once that is successfully terminated and there are enough resources for the prod pod, the control plane can schedule the prod pod: + +```console +Warning FailedScheduling pod/prod-nginx 0/2 nodes are available: 1 Insufficient cpu, 2 Insufficient memory. 
+Normal Preempted pod/dev-nginx by default/prod-nginx on node node01 +Normal Killing pod/dev-nginx Stopping container dev-nginx +Normal Scheduled pod/prod-nginx Successfully assigned default/prod-nginx to node01 +Normal Pulling pod/prod-nginx Pulling image "nginx" +Normal Pulled pod/prod-nginx Successfully pulled image "nginx" +Normal Created pod/prod-nginx Created container prod-nginx +Normal Started pod/prod-nginx Started container prod-nginx +``` + +## Enforcement + +When you set up PriorityClasses, they exist just how you defined them. However, people +(and tools) that make changes to your cluster are free to set any PriorityClass, or to not +set any PriorityClass at all. +However, you can use other Kubernetes features to make sure that the priorities you wanted +are actually applied. + +As an alpha feature, you can define a [ValidatingAdmissionPolicy](/blog/2022/12/20/validating-admission-policies-alpha/) and a ValidatingAdmissionPolicyBinding so that, for example, +Pods that go into the `prod` namespace must use the `prod-pc` PriorityClass. +With another ValidatingAdmissionPolicyBinding you ensure that the `preprod` namespace +uses the `preprod-pc` PriorityClass, and so on. +In *any* cluster, you can enforce similar controls using external projects such as +[Kyverno](https://kyverno.io/) or [Gatekeeper](https://open-policy-agent.github.io/gatekeeper/), +through validating admission webhooks. + +However you do it, Kubernetes gives you options to make sure that the PriorityClasses are +used how you wanted them to be, or perhaps just to +[warn](https://open-policy-agent.github.io/gatekeeper/website/docs/violations/#warn-enforcement-action) +users when they pick an unsuitable option. + +## Summary + +The above example and its events show you what this feature of Kubernetes brings to the table, along with several scenarios where you can use this feature. To reiterate, this helps ensure that mission-critical pods are up and available to serve the traffic and, in the case of a resource crunch, determines cluster behavior. + +It gives you some power to decide the order of scheduling and order of [preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption) for Pods. Therefore, you need to define the PriorityClasses sensibly. +For example, if you have a cluster autoscaler to add nodes on demand, +make sure to run it with the `system-cluster-critical` PriorityClass. You don't want to +get in a situation where the autoscaler has been preempted and there are no new nodes +coming online. + +If you have any queries or feedback, feel free to reach out to me on [LinkedIn](http://www.linkedin.com/in/sunnybhambhani). + + + diff --git a/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/kube-scheduler.svg b/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/kube-scheduler.svg new file mode 100644 index 0000000000000..53f5c1fb7b7a3 --- /dev/null +++ b/content/en/blog/_posts/2023-01-12-protect-mission-critical-pods-priorityclass/kube-scheduler.svg @@ -0,0 +1,4 @@ + + + +









[kube-scheduler.svg — diagram text only; the rendered figure cannot be reproduced here. Labels: a client sends new incoming pods (purple-pod, brown-pod, indigo-pod) to the kube-apiserver; etcd persists the information about the new pods; kube-scheduler watches for pods with no nodeName assigned and, once it finds one, updates the nodeName key and schedules the pod; based on the value of nodeName, the kubelet on that node launches the pod (existing pods shown: blue-pod, red-pod, pink-pod, green-pod); numbered callouts 1–4 mark these steps alongside kube-controller-manager and the kubelets.]
\ No newline at end of file diff --git a/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/Example.png b/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/Example.png new file mode 100644 index 0000000000000..175c21a889626 Binary files /dev/null and b/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/Example.png differ
diff --git a/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/Microservices.png b/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/Microservices.png new file mode 100644 index 0000000000000..da0f60a5054a4 Binary files /dev/null and b/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/Microservices.png differ
diff --git a/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/index.md b/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/index.md new file mode 100644 index 0000000000000..8ecad96975dd3 --- /dev/null +++ b/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/index.md @@ -0,0 +1,84 @@
+---
+layout: blog
+title: Consider All Microservices Vulnerable — And Monitor Their Behavior
+date: 2023-01-20
+slug: security-behavior-analysis
+---
+
+**Author:**
+David Hadas (IBM Research Labs)
+
+_This post warns DevOps practitioners against a false sense of security. Following security best practices when developing and configuring microservices does not result in non-vulnerable microservices. The post shows that although all deployed microservices are vulnerable, there is much that can be done to ensure microservices are not exploited. It explains how analyzing the behavior of clients and services from a security standpoint, named here **"Security-Behavior Analysis"**, can protect the deployed vulnerable microservices. It points to [Guard](http://knative.dev/security-guard), an open source project offering security-behavior monitoring and control of Kubernetes microservices presumed vulnerable._
+
+As cyber attacks continue to intensify in sophistication, organizations deploying cloud services continue to grow their cyber investments aiming to produce safe and non-vulnerable services. However, the year-by-year growth in cyber investments does not result in a parallel reduction in cyber incidents. Instead, the number of cyber incidents continues to grow annually. Evidently, organizations are doomed to fail in this struggle - no matter how much effort is made to detect and remove cyber weaknesses from deployed services, it seems offenders always have the upper hand.
+
+Considering the current spread of offensive tools, sophistication of offensive players, and ever-growing cyber financial gains to offenders, any cyber strategy that relies on constructing a non-vulnerable, weakness-free service in 2023 is clearly too naïve. It seems the only viable strategy is to:
+
+➥ **Admit that your services are vulnerable!**
+
+In other words, consciously accept that you will never create completely invulnerable services. If your opponents find even a single weakness as an entry-point, you lose! Admitting that in spite of your best efforts, all your services are still vulnerable is an important first step. Next, this post discusses what you can do about it...
+
+## How to protect microservices from being exploited
+
+Being vulnerable does not necessarily mean that your service will be exploited. Though your services are vulnerable in some ways unknown to you, offenders still need to identify these vulnerabilities and then exploit them. If offenders fail to exploit your service vulnerabilities, you win!
In other words, having a vulnerability that can’t be exploited, represents a risk that can’t be realized. + +{{< figure src="Example.png" alt="Image of an example of offender gaining foothold in a service" class="diagram-large" caption="Figure 1. An Offender gaining foothold in a vulnerable service" >}} + +The above diagram shows an example in which the offender does not yet have a foothold in the service; that is, it is assumed that your service does not run code controlled by the offender on day 1. In our example the service has vulnerabilities in the API exposed to clients. To gain an initial foothold the offender uses a malicious client to try and exploit one of the service API vulnerabilities. The malicious client sends an exploit that triggers some unplanned behavior of the service. + +More specifically, let’s assume the service is vulnerable to an SQL injection. The developer failed to sanitize the user input properly, thereby allowing clients to send values that would change the intended behavior. In our example, if a client sends a query string with key “username” and value of _“tom or 1=1”_, the client will receive the data of all users. Exploiting this vulnerability requires the client to send an irregular string as the value. Note that benign users will not be sending a string with spaces or with the equal sign character as a username, instead they will normally send legal usernames which for example may be defined as a short sequence of characters a-z. No legal username can trigger service unplanned behavior. + +In this simple example, one can already identify several opportunities to detect and block an attempt to exploit the vulnerability (un)intentionally left behind by the developer, making the vulnerability unexploitable. First, the malicious client behavior differs from the behavior of benign clients, as it sends irregular requests. If such a change in behavior is detected and blocked, the exploit will never reach the service. Second, the service behavior in response to the exploit differs from the service behavior in response to a regular request. Such behavior may include making subsequent irregular calls to other services such as a data store, taking irregular time to respond, and/or responding to the malicious client with an irregular response (for example, containing much more data than normally sent in case of benign clients making regular requests). Service behavioral changes, if detected, will also allow blocking the exploit in different stages of the exploitation attempt. + +More generally: + +- Monitoring the behavior of clients can help detect and block exploits against service API vulnerabilities. In fact, deploying efficient client behavior monitoring makes many vulnerabilities unexploitable and others very hard to achieve. To succeed, the offender needs to create an exploit undetectable from regular requests. + +- Monitoring the behavior of services can help detect services as they are being exploited regardless of the attack vector used. Efficient service behavior monitoring limits what an attacker may be able to achieve as the offender needs to ensure the service behavior is undetectable from regular service behavior. + +Combining both approaches may add a protection layer to the deployed vulnerable services, drastically decreasing the probability for anyone to successfully exploit any of the deployed vulnerable services. Next, let us identify four use cases where you need to use security-behavior monitoring. 
+ +## Use cases + +One can identify the following four different stages in the life of any service from a security standpoint. In each stage, security-behavior monitoring is required to meet different challenges: + +Service State | Use case | What do you need in order to cope with this use case? +------------- | ------------- | ----------------------------------------- +**Normal** | **No known vulnerabilities:** The service owner is normally not aware of any known vulnerabilities in the service image or configuration. Yet, it is reasonable to assume that the service has weaknesses. | **Provide generic protection against any unknown, zero-day, service vulnerabilities** - Detect/block irregular patterns sent as part of incoming client requests that may be used as exploits. +**Vulnerable** | **An applicable CVE is published:** The service owner is required to release a new non-vulnerable revision of the service. Research shows that in practice this process of removing a known vulnerability may take many weeks to accomplish (2 months on average). | **Add protection based on the CVE analysis** - Detect/block incoming requests that include specific patterns that may be used to exploit the discovered vulnerability. Continue to offer services, although the service has a known vulnerability. +**Exploitable** | **A known exploit is published:** The service owner needs a way to filter incoming requests that contain the known exploit. | **Add protection based on a known exploit signature** - Detect/block incoming client requests that carry signatures identifying the exploit. Continue to offer services, although the presence of an exploit. +**Misused** | **An offender misuses pods backing the service:** The offender can follow an attack pattern enabling him/her to misuse pods. The service owner needs to restart any compromised pods while using non compromised pods to continue offering the service. Note that once a pod is restarted, the offender needs to repeat the attack pattern before he/she may again misuse it. | **Identify and restart instances of the component that is being misused** - At any given time, some backing pods may be compromised and misused, while others behave as designed. Detect/remove the misused pods while allowing other pods to continue servicing client requests. + +Fortunately, microservice architecture is well suited to security-behavior monitoring as discussed next. + +## Security-Behavior of microservices versus monoliths {#microservices-vs-monoliths} + +Kubernetes is often used to support workloads designed with microservice architecture. By design, microservices aim to follow the UNIX philosophy of "Do One Thing And Do It Well". Each microservice has a bounded context and a clear interface. In other words, you can expect the microservice clients to send relatively regular requests and the microservice to present a relatively regular behavior as a response to these requests. Consequently, a microservice architecture is an excellent candidate for security-behavior monitoring. + +{{< figure src="Microservices.png" alt="Image showing why microservices are well suited for security-behavior monitoring" class="diagram-large" caption="Figure 2. Microservices are well suited for security-behavior monitoring" >}} + +The diagram above clarifies how dividing a monolithic service to a set of microservices improves our ability to perform security-behavior monitoring and control. 
In a monolithic service approach, different client requests are intertwined, resulting in a diminished ability to identify irregular client behaviors. Without prior knowledge, an observer of the intertwined client requests will find it hard to distinguish between types of requests and their related characteristics. Further, internal client requests are not exposed to the observer. Lastly, the aggregated behavior of the monolithic service is a compound of the many different internal behaviors of its components, making it hard to identify irregular service behavior. + +In a microservice environment, each microservice is expected by design to offer a more well-defined service and serve better defined type of requests. This makes it easier for an observer to identify irregular client behavior and irregular service behavior. Further, a microservice design exposes the internal requests and internal services which offer more security-behavior data to identify irregularities by an observer. Overall, this makes the microservice design pattern better suited for security-behavior monitoring and control. + +## Security-Behavior monitoring on Kubernetes + +Kubernetes deployments seeking to add Security-Behavior may use [Guard](http://knative.dev/security-guard), developed under the CNCF project Knative. Guard is integrated into the full Knative automation suite that runs on top of Kubernetes. Alternatively, **you can deploy Guard as a standalone tool** to protect any HTTP-based workload on Kubernetes. + +See: + +- [Guard](https://github.com/knative-sandbox/security-guard) on Github, for using Guard as a standalone tool. +- The Knative automation suite - Read about Knative, in the blog post [Opinionated Kubernetes](https://davidhadas.wordpress.com/2022/08/29/knative-an-opinionated-kubernetes) which describes how Knative simplifies and unifies the way web services are deployed on Kubernetes. +- You may contact Guard maintainers on the [SIG Security](https://kubernetes.slack.com/archives/C019LFTGNQ3) Slack channel or on the Knative community [security](https://knative.slack.com/archives/CBYV1E0TG) Slack channel. The Knative community channel will move soon to the [CNCF Slack](https://communityinviter.com/apps/cloud-native/cncf) under the name `#knative-security`. + +The goal of this post is to invite the Kubernetes community to action and introduce Security-Behavior monitoring and control to help secure Kubernetes based deployments. Hopefully, the community as a follow up will: + +1. Analyze the cyber challenges presented for different Kubernetes use cases +1. Add appropriate security documentation for users on how to introduce Security-Behavior monitoring and control. +1. Consider how to integrate with tools that can help users monitor and control their vulnerable services. + +## Getting involved + +You are welcome to get involved and join the effort to develop security behavior monitoring +and control for Kubernetes; to share feedback and contribute to code or documentation; +and to make or suggest improvements of any kind. diff --git a/content/en/docs/concepts/architecture/garbage-collection.md b/content/en/docs/concepts/architecture/garbage-collection.md index 70fd8423de086..a6e4290710563 100644 --- a/content/en/docs/concepts/architecture/garbage-collection.md +++ b/content/en/docs/concepts/architecture/garbage-collection.md @@ -144,7 +144,7 @@ which you can define: * `MinAge`: the minimum age at which the kubelet can garbage collect a container. Disable by setting to `0`. 
- * `MaxPerPodContainer`: the maximum number of dead containers each Pod pair + * `MaxPerPodContainer`: the maximum number of dead containers each Pod can have. Disable by setting to less than `0`. * `MaxContainers`: the maximum number of dead containers the cluster can have. Disable by setting to less than `0`. diff --git a/content/en/docs/concepts/architecture/leases.md b/content/en/docs/concepts/architecture/leases.md index f7fbd3906da61..2eb2fdc2cb605 100644 --- a/content/en/docs/concepts/architecture/leases.md +++ b/content/en/docs/concepts/architecture/leases.md @@ -6,7 +6,7 @@ weight: 30 -Distrbuted systems often have a need for "leases", which provides a mechanism to lock shared resources and coordinate activity between nodes. +Distributed systems often have a need for "leases", which provides a mechanism to lock shared resources and coordinate activity between nodes. In Kubernetes, the "lease" concept is represented by `Lease` objects in the `coordination.k8s.io` API group, which are used for system-critical capabilities like node heart beats and component-level leader election. diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md index d36d82174b70d..9cf68b6b84150 100644 --- a/content/en/docs/concepts/architecture/nodes.md +++ b/content/en/docs/concepts/architecture/nodes.md @@ -9,7 +9,7 @@ weight: 10 -Kubernetes runs your workload by placing containers into Pods to run on _Nodes_. +Kubernetes runs your {{< glossary_tooltip text="workload" term_id="workload" >}} by placing containers into Pods to run on _Nodes_. A node may be a virtual or physical machine, depending on the cluster. Each node is managed by the {{< glossary_tooltip text="control plane" term_id="control-plane" >}} @@ -454,50 +454,6 @@ Message: Pod was terminated in response to imminent node shutdown. {{< /note >}} -## Non Graceful node shutdown {#non-graceful-node-shutdown} - -{{< feature-state state="beta" for_k8s_version="v1.26" >}} - -A node shutdown action may not be detected by kubelet's Node Shutdown Manager, -either because the command does not trigger the inhibitor locks mechanism used by -kubelet or because of a user error, i.e., the ShutdownGracePeriod and -ShutdownGracePeriodCriticalPods are not configured properly. Please refer to above -section [Graceful Node Shutdown](#graceful-node-shutdown) for more details. - -When a node is shutdown but not detected by kubelet's Node Shutdown Manager, the pods -that are part of a StatefulSet will be stuck in terminating status on -the shutdown node and cannot move to a new running node. This is because kubelet on -the shutdown node is not available to delete the pods so the StatefulSet cannot -create a new pod with the same name. If there are volumes used by the pods, the -VolumeAttachments will not be deleted from the original shutdown node so the volumes -used by these pods cannot be attached to a new running node. As a result, the -application running on the StatefulSet cannot function properly. If the original -shutdown node comes up, the pods will be deleted by kubelet and new pods will be -created on a different running node. If the original shutdown node does not come up, -these pods will be stuck in terminating status on the shutdown node forever. - -To mitigate the above situation, a user can manually add the taint `node.kubernetes.io/out-of-service` with either `NoExecute` -or `NoSchedule` effect to a Node marking it out-of-service. 
-If the `NodeOutOfServiceVolumeDetach`[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) -is enabled on `kube-controller-manager`, and a Node is marked out-of-service with this taint, the -pods on the node will be forcefully deleted if there are no matching tolerations on it and volume -detach operations for the pods terminating on the node will happen immediately. This allows the -Pods on the out-of-service node to recover quickly on a different node. - -During a non-graceful shutdown, Pods are terminated in the two phases: - -1. Force delete the Pods that do not have matching `out-of-service` tolerations. -2. Immediately perform detach volume operation for such pods. - -{{< note >}} -- Before adding the taint `node.kubernetes.io/out-of-service` , it should be verified - that the node is already in shutdown or power off state (not in the middle of - restarting). -- The user is required to manually remove the out-of-service taint after the pods are - moved to a new node and the user has checked that the shutdown node has been - recovered since the user was the one who originally added the taint. -{{< /note >}} - ### Pod Priority based graceful node shutdown {#pod-priority-graceful-node-shutdown} {{< feature-state state="alpha" for_k8s_version="v1.23" >}} @@ -596,6 +552,50 @@ the feature is Beta and is enabled by default. Metrics `graceful_shutdown_start_time_seconds` and `graceful_shutdown_end_time_seconds` are emitted under the kubelet subsystem to monitor node shutdowns. +## Non Graceful node shutdown {#non-graceful-node-shutdown} + +{{< feature-state state="beta" for_k8s_version="v1.26" >}} + +A node shutdown action may not be detected by kubelet's Node Shutdown Manager, +either because the command does not trigger the inhibitor locks mechanism used by +kubelet or because of a user error, i.e., the ShutdownGracePeriod and +ShutdownGracePeriodCriticalPods are not configured properly. Please refer to above +section [Graceful Node Shutdown](#graceful-node-shutdown) for more details. + +When a node is shutdown but not detected by kubelet's Node Shutdown Manager, the pods +that are part of a {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} will be stuck in terminating status on +the shutdown node and cannot move to a new running node. This is because kubelet on +the shutdown node is not available to delete the pods so the StatefulSet cannot +create a new pod with the same name. If there are volumes used by the pods, the +VolumeAttachments will not be deleted from the original shutdown node so the volumes +used by these pods cannot be attached to a new running node. As a result, the +application running on the StatefulSet cannot function properly. If the original +shutdown node comes up, the pods will be deleted by kubelet and new pods will be +created on a different running node. If the original shutdown node does not come up, +these pods will be stuck in terminating status on the shutdown node forever. + +To mitigate the above situation, a user can manually add the taint `node.kubernetes.io/out-of-service` with either `NoExecute` +or `NoSchedule` effect to a Node marking it out-of-service. 
+If the `NodeOutOfServiceVolumeDetach`[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) +is enabled on {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}, and a Node is marked out-of-service with this taint, the +pods on the node will be forcefully deleted if there are no matching tolerations on it and volume +detach operations for the pods terminating on the node will happen immediately. This allows the +Pods on the out-of-service node to recover quickly on a different node. + +During a non-graceful shutdown, Pods are terminated in the two phases: + +1. Force delete the Pods that do not have matching `out-of-service` tolerations. +2. Immediately perform detach volume operation for such pods. + +{{< note >}} +- Before adding the taint `node.kubernetes.io/out-of-service` , it should be verified + that the node is already in shutdown or power off state (not in the middle of + restarting). +- The user is required to manually remove the out-of-service taint after the pods are + moved to a new node and the user has checked that the shutdown node has been + recovered since the user was the one who originally added the taint. +{{< /note >}} + ## Swap memory management {#swap-memory} {{< feature-state state="alpha" for_k8s_version="v1.22" >}} @@ -646,9 +646,11 @@ see [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) and its ## {{% heading "whatsnext" %}} -* Learn about the [components](/docs/concepts/overview/components/#node-components) that make up a node. -* Read the [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core). -* Read the [Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node) - section of the architecture design document. -* Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/). +Learn more about the following: + * [Components](/docs/concepts/overview/components/#node-components) that make up a node. + * [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core). + * [Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node) section of the architecture design document. + * [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/). + * [Node Resource Managers](/docs/concepts/policy/node-resource-managers/). + * [Resource Management for Windows nodes](/docs/concepts/configuration/windows-resource-management/). diff --git a/content/en/docs/concepts/cluster-administration/manage-deployment.md b/content/en/docs/concepts/cluster-administration/manage-deployment.md index c90715da09956..0dbbe9b6deb67 100644 --- a/content/en/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/en/docs/concepts/cluster-administration/manage-deployment.md @@ -1,20 +1,26 @@ --- -reviewers: -- janetkuo title: Managing Resources content_type: concept +reviewers: +- janetkuo weight: 40 --- -You've deployed your application and exposed it via a service. Now what? Kubernetes provides a number of tools to help you manage your application deployment, including scaling and updating. Among the features that we will discuss in more depth are [configuration files](/docs/concepts/configuration/overview/) and [labels](/docs/concepts/overview/working-with-objects/labels/). +You've deployed your application and exposed it via a service. Now what? 
Kubernetes provides a +number of tools to help you manage your application deployment, including scaling and updating. +Among the features that we will discuss in more depth are +[configuration files](/docs/concepts/configuration/overview/) and +[labels](/docs/concepts/overview/working-with-objects/labels/). ## Organizing resource configurations -Many applications require multiple resources to be created, such as a Deployment and a Service. Management of multiple resources can be simplified by grouping them together in the same file (separated by `---` in YAML). For example: +Many applications require multiple resources to be created, such as a Deployment and a Service. +Management of multiple resources can be simplified by grouping them together in the same file +(separated by `---` in YAML). For example: {{< codenew file="application/nginx-app.yaml" >}} @@ -24,89 +30,99 @@ Multiple resources can be created the same way as a single resource: kubectl apply -f https://k8s.io/examples/application/nginx-app.yaml ``` -```shell +```none service/my-nginx-svc created deployment.apps/my-nginx created ``` -The resources will be created in the order they appear in the file. Therefore, it's best to specify the service first, since that will ensure the scheduler can spread the pods associated with the service as they are created by the controller(s), such as Deployment. +The resources will be created in the order they appear in the file. Therefore, it's best to +specify the service first, since that will ensure the scheduler can spread the pods associated +with the service as they are created by the controller(s), such as Deployment. `kubectl apply` also accepts multiple `-f` arguments: ```shell -kubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml +kubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml \ + -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml ``` -And a directory can be specified rather than or in addition to individual files: -```shell -kubectl apply -f https://k8s.io/examples/application/nginx/ -``` - -`kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`. - -It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, you can deploy all of the components of your stack together. +It is a recommended practice to put resources related to the same microservice or application tier +into the same file, and to group all of the files associated with your application in the same +directory. If the tiers of your application bind to each other using DNS, you can deploy all of +the components of your stack together. 
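+
+For example, assuming a hypothetical local directory `./my-app/` that holds the Deployment and
+Service manifests for one application, the whole directory can be applied in a single command;
+`kubectl` reads any files with `.yaml`, `.yml`, or `.json` suffixes in it:
+
+```shell
+kubectl apply -f ./my-app/
+```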
-A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into GitHub: +A URL can also be specified as a configuration source, which is handy for deploying directly from +configuration files checked into GitHub: ```shell -kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/nginx/nginx-deployment.yaml +kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml ``` -```shell +```none deployment.apps/my-nginx created ``` ## Bulk operations in kubectl -Resource creation isn't the only operation that `kubectl` can perform in bulk. It can also extract resource names from configuration files in order to perform other operations, in particular to delete the same resources you created: +Resource creation isn't the only operation that `kubectl` can perform in bulk. It can also extract +resource names from configuration files in order to perform other operations, in particular to +delete the same resources you created: ```shell kubectl delete -f https://k8s.io/examples/application/nginx-app.yaml ``` -```shell +```none deployment.apps "my-nginx" deleted service "my-nginx-svc" deleted ``` -In the case of two resources, you can specify both resources on the command line using the resource/name syntax: +In the case of two resources, you can specify both resources on the command line using the +resource/name syntax: ```shell kubectl delete deployments/my-nginx services/my-nginx-svc ``` -For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using `-l` or `--selector`, to filter resources by their labels: +For larger numbers of resources, you'll find it easier to specify the selector (label query) +specified using `-l` or `--selector`, to filter resources by their labels: ```shell kubectl delete deployment,services -l app=nginx ``` -```shell +```none deployment.apps "my-nginx" deleted service "my-nginx-svc" deleted ``` -Because `kubectl` outputs resource names in the same syntax it accepts, you can chain operations using `$()` or `xargs`: +Because `kubectl` outputs resource names in the same syntax it accepts, you can chain operations +using `$()` or `xargs`: ```shell kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service) kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service | xargs -i kubectl get {} ``` -```shell +```none NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-nginx-svc LoadBalancer 10.0.0.208 80/TCP 0s ``` -With the above commands, we first create resources under `examples/application/nginx/` and print the resources created with `-o name` output format -(print each resource as resource/name). Then we `grep` only the "service", and then print it with `kubectl get`. +With the above commands, we first create resources under `examples/application/nginx/` and print +the resources created with `-o name` output format (print each resource as resource/name). +Then we `grep` only the "service", and then print it with `kubectl get`. -If you happen to organize your resources across several subdirectories within a particular directory, you can recursively perform the operations on the subdirectories also, by specifying `--recursive` or `-R` alongside the `--filename,-f` flag. 
+If you happen to organize your resources across several subdirectories within a particular +directory, you can recursively perform the operations on the subdirectories also, by specifying +`--recursive` or `-R` alongside the `--filename,-f` flag. -For instance, assume there is a directory `project/k8s/development` that holds all of the {{< glossary_tooltip text="manifests" term_id="manifest" >}} needed for the development environment, organized by resource type: +For instance, assume there is a directory `project/k8s/development` that holds all of the +{{< glossary_tooltip text="manifests" term_id="manifest" >}} needed for the development environment, +organized by resource type: -``` +```none project/k8s/development ├── configmap │   └── my-configmap.yaml @@ -116,13 +132,15 @@ project/k8s/development └── my-pvc.yaml ``` -By default, performing a bulk operation on `project/k8s/development` will stop at the first level of the directory, not processing any subdirectories. If we had tried to create the resources in this directory using the following command, we would have encountered an error: +By default, performing a bulk operation on `project/k8s/development` will stop at the first level +of the directory, not processing any subdirectories. If we had tried to create the resources in +this directory using the following command, we would have encountered an error: ```shell kubectl apply -f project/k8s/development ``` -```shell +```none error: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin) ``` @@ -132,13 +150,14 @@ Instead, specify the `--recursive` or `-R` flag with the `--filename,-f` flag as kubectl apply -f project/k8s/development --recursive ``` -```shell +```none configmap/my-config created deployment.apps/my-deployment created persistentvolumeclaim/my-pvc created ``` -The `--recursive` flag works with any operation that accepts the `--filename,-f` flag such as: `kubectl {create,get,delete,describe,rollout}` etc. +The `--recursive` flag works with any operation that accepts the `--filename,-f` flag such as: +`kubectl {create,get,delete,describe,rollout}` etc. The `--recursive` flag also works when multiple `-f` arguments are provided: @@ -146,7 +165,7 @@ The `--recursive` flag also works when multiple `-f` arguments are provided: kubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive ``` -```shell +```none namespace/development created namespace/staging created configmap/my-config created @@ -154,36 +173,41 @@ deployment.apps/my-deployment created persistentvolumeclaim/my-pvc created ``` -If you're interested in learning more about `kubectl`, go ahead and read [Command line tool (kubectl)](/docs/reference/kubectl/). +If you're interested in learning more about `kubectl`, go ahead and read +[Command line tool (kubectl)](/docs/reference/kubectl/). ## Using labels effectively -The examples we've used so far apply at most a single label to any resource. There are many scenarios where multiple labels should be used to distinguish sets from one another. +The examples we've used so far apply at most a single label to any resource. There are many +scenarios where multiple labels should be used to distinguish sets from one another. -For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](https://github.com/kubernetes/examples/tree/master/guestbook/), would additionally need to distinguish each tier. 
The frontend could carry the following labels: +For instance, different applications would use different values for the `app` label, but a +multi-tier application, such as the [guestbook example](https://github.com/kubernetes/examples/tree/master/guestbook/), +would additionally need to distinguish each tier. The frontend could carry the following labels: ```yaml - labels: - app: guestbook - tier: frontend +labels: + app: guestbook + tier: frontend ``` -while the Redis master and slave would have different `tier` labels, and perhaps even an additional `role` label: +while the Redis master and slave would have different `tier` labels, and perhaps even an +additional `role` label: ```yaml - labels: - app: guestbook - tier: backend - role: master +labels: + app: guestbook + tier: backend + role: master ``` and ```yaml - labels: - app: guestbook - tier: backend - role: slave +labels: + app: guestbook + tier: backend + role: slave ``` The labels allow us to slice and dice our resources along any dimension specified by a label: @@ -193,7 +217,7 @@ kubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml kubectl get pods -Lapp -Ltier -Lrole ``` -```shell +```none NAME READY STATUS RESTARTS AGE APP TIER ROLE guestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend guestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend @@ -208,7 +232,8 @@ my-nginx-o0ef1 1/1 Running 0 29m nginx ```shell kubectl get pods -lapp=guestbook,role=slave ``` -```shell + +```none NAME READY STATUS RESTARTS AGE guestbook-redis-slave-2q2yf 1/1 Running 0 3m guestbook-redis-slave-qgazl 1/1 Running 0 3m @@ -216,62 +241,72 @@ guestbook-redis-slave-qgazl 1/1 Running 0 3m ## Canary deployments -Another scenario where multiple labels are needed is to distinguish deployments of different releases or configurations of the same component. It is common practice to deploy a *canary* of a new application release (specified via image tag in the pod template) side by side with the previous release so that the new release can receive live production traffic before fully rolling it out. +Another scenario where multiple labels are needed is to distinguish deployments of different +releases or configurations of the same component. It is common practice to deploy a *canary* of a +new application release (specified via image tag in the pod template) side by side with the +previous release so that the new release can receive live production traffic before fully rolling +it out. For instance, you can use a `track` label to differentiate different releases. The primary, stable release would have a `track` label with value as `stable`: -```yaml - name: frontend - replicas: 3 - ... - labels: - app: guestbook - tier: frontend - track: stable - ... - image: gb-frontend:v3 +```none +name: frontend +replicas: 3 +... +labels: + app: guestbook + tier: frontend + track: stable +... +image: gb-frontend:v3 ``` -and then you can create a new release of the guestbook frontend that carries the `track` label with different value (i.e. `canary`), so that two sets of pods would not overlap: +and then you can create a new release of the guestbook frontend that carries the `track` label +with different value (i.e. `canary`), so that two sets of pods would not overlap: -```yaml - name: frontend-canary - replicas: 1 - ... - labels: - app: guestbook - tier: frontend - track: canary - ... - image: gb-frontend:v4 +```none +name: frontend-canary +replicas: 1 +... +labels: + app: guestbook + tier: frontend + track: canary +... 
+image: gb-frontend:v4 ``` - -The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. omitting the `track` label), so that the traffic will be redirected to both applications: +The frontend service would span both sets of replicas by selecting the common subset of their +labels (i.e. omitting the `track` label), so that the traffic will be redirected to both +applications: ```yaml - selector: - app: guestbook - tier: frontend +selector: + app: guestbook + tier: frontend ``` -You can tweak the number of replicas of the stable and canary releases to determine the ratio of each release that will receive live production traffic (in this case, 3:1). -Once you're confident, you can update the stable track to the new application release and remove the canary one. +You can tweak the number of replicas of the stable and canary releases to determine the ratio of +each release that will receive live production traffic (in this case, 3:1). +Once you're confident, you can update the stable track to the new application release and remove +the canary one. -For a more concrete example, check the [tutorial of deploying Ghost](https://github.com/kelseyhightower/talks/tree/master/kubecon-eu-2016/demo#deploy-a-canary). +For a more concrete example, check the +[tutorial of deploying Ghost](https://github.com/kelseyhightower/talks/tree/master/kubecon-eu-2016/demo#deploy-a-canary). ## Updating labels -Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`. +Sometimes existing pods and other resources need to be relabeled before creating new resources. +This can be done with `kubectl label`. For example, if you want to label all your nginx pods as frontend tier, run: ```shell kubectl label pods -l app=nginx tier=fe ``` -```shell +```none pod/my-nginx-2035384211-j5fhi labeled pod/my-nginx-2035384211-u2c7e labeled pod/my-nginx-2035384211-u3t6x labeled @@ -283,20 +318,25 @@ To see the pods you labeled, run: ```shell kubectl get pods -l app=nginx -L tier ``` -```shell + +```none NAME READY STATUS RESTARTS AGE TIER my-nginx-2035384211-j5fhi 1/1 Running 0 23m fe my-nginx-2035384211-u2c7e 1/1 Running 0 23m fe my-nginx-2035384211-u3t6x 1/1 Running 0 23m fe ``` -This outputs all "app=nginx" pods, with an additional label column of pods' tier (specified with `-L` or `--label-columns`). +This outputs all "app=nginx" pods, with an additional label column of pods' tier (specified with +`-L` or `--label-columns`). -For more information, please see [labels](/docs/concepts/overview/working-with-objects/labels/) and [kubectl label](/docs/reference/generated/kubectl/kubectl-commands/#label). +For more information, please see [labels](/docs/concepts/overview/working-with-objects/labels/) +and [kubectl label](/docs/reference/generated/kubectl/kubectl-commands/#label). ## Updating annotations -Sometimes you would want to attach annotations to resources. Annotations are arbitrary non-identifying metadata for retrieval by API clients such as tools, libraries, etc. This can be done with `kubectl annotate`. For example: +Sometimes you would want to attach annotations to resources. Annotations are arbitrary +non-identifying metadata for retrieval by API clients such as tools, libraries, etc. +This can be done with `kubectl annotate`. For example: ```shell kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx' @@ -312,17 +352,19 @@ metadata: ... 
``` -For more information, please see [annotations](/docs/concepts/overview/working-with-objects/annotations/) and [kubectl annotate](/docs/reference/generated/kubectl/kubectl-commands/#annotate) document. +For more information, see [annotations](/docs/concepts/overview/working-with-objects/annotations/) +and [kubectl annotate](/docs/reference/generated/kubectl/kubectl-commands/#annotate) document. ## Scaling your application -When load on your application grows or shrinks, use `kubectl` to scale your application. For instance, to decrease the number of nginx replicas from 3 to 1, do: +When load on your application grows or shrinks, use `kubectl` to scale your application. +For instance, to decrease the number of nginx replicas from 3 to 1, do: ```shell kubectl scale deployment/my-nginx --replicas=1 ``` -```shell +```none deployment.apps/my-nginx scaled ``` @@ -332,25 +374,27 @@ Now you only have one pod managed by the deployment. kubectl get pods -l app=nginx ``` -```shell +```none NAME READY STATUS RESTARTS AGE my-nginx-2035384211-j5fhi 1/1 Running 0 30m ``` -To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do: +To have the system automatically choose the number of nginx replicas as needed, +ranging from 1 to 3, do: ```shell kubectl autoscale deployment/my-nginx --min=1 --max=3 ``` -```shell +```none horizontalpodautoscaler.autoscaling/my-nginx autoscaled ``` Now your nginx replicas will be scaled up and down as needed, automatically. -For more information, please see [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands/#scale), [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale) and [horizontal pod autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/) document. - +For more information, please see [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands/#scale), +[kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale) and +[horizontal pod autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/) document. ## In-place updates of resources @@ -361,20 +405,34 @@ Sometimes it's necessary to make narrow, non-disruptive updates to resources you It is suggested to maintain a set of configuration files in source control (see [configuration as code](https://martinfowler.com/bliki/InfrastructureAsCode.html)), so that they can be maintained and versioned along with the code for the resources they configure. -Then, you can use [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) to push your configuration changes to the cluster. +Then, you can use [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) +to push your configuration changes to the cluster. -This command will compare the version of the configuration that you're pushing with the previous version and apply the changes you've made, without overwriting any automated changes to properties you haven't specified. +This command will compare the version of the configuration that you're pushing with the previous +version and apply the changes you've made, without overwriting any automated changes to properties +you haven't specified. ```shell kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml +``` + +```none deployment.apps/my-nginx configured ``` -Note that `kubectl apply` attaches an annotation to the resource in order to determine the changes to the configuration since the previous invocation. 
When it's invoked, `kubectl apply` does a three-way diff between the previous configuration, the provided input and the current configuration of the resource, in order to determine how to modify the resource. +Note that `kubectl apply` attaches an annotation to the resource in order to determine the changes +to the configuration since the previous invocation. When it's invoked, `kubectl apply` does a +three-way diff between the previous configuration, the provided input and the current +configuration of the resource, in order to determine how to modify the resource. -Currently, resources are created without this annotation, so the first invocation of `kubectl apply` will fall back to a two-way diff between the provided input and the current configuration of the resource. During this first invocation, it cannot detect the deletion of properties set when the resource was created. For this reason, it will not remove them. +Currently, resources are created without this annotation, so the first invocation of `kubectl +apply` will fall back to a two-way diff between the provided input and the current configuration +of the resource. During this first invocation, it cannot detect the deletion of properties set +when the resource was created. For this reason, it will not remove them. -All subsequent calls to `kubectl apply`, and other commands that modify the configuration, such as `kubectl replace` and `kubectl edit`, will update the annotation, allowing subsequent calls to `kubectl apply` to detect and perform deletions using a three-way diff. +All subsequent calls to `kubectl apply`, and other commands that modify the configuration, such as +`kubectl replace` and `kubectl edit`, will update the annotation, allowing subsequent calls to +`kubectl apply` to detect and perform deletions using a three-way diff. ### kubectl edit @@ -384,7 +442,8 @@ Alternatively, you may also update resources with `kubectl edit`: kubectl edit deployment/my-nginx ``` -This is equivalent to first `get` the resource, edit it in text editor, and then `apply` the resource with the updated version: +This is equivalent to first `get` the resource, edit it in text editor, and then `apply` the +resource with the updated version: ```shell kubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml @@ -397,7 +456,8 @@ deployment.apps/my-nginx configured rm /tmp/nginx.yaml ``` -This allows you to do more significant changes more easily. Note that you can specify the editor with your `EDITOR` or `KUBE_EDITOR` environment variables. +This allows you to do more significant changes more easily. Note that you can specify the editor +with your `EDITOR` or `KUBE_EDITOR` environment variables. For more information, please see [kubectl edit](/docs/reference/generated/kubectl/kubectl-commands/#edit) document. @@ -411,20 +471,25 @@ and ## Disruptive updates -In some cases, you may need to update resource fields that cannot be updated once initialized, or you may want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can modify your original configuration file: +In some cases, you may need to update resource fields that cannot be updated once initialized, or +you may want to make a recursive change immediately, such as to fix broken pods created by a +Deployment. To change such fields, use `replace --force`, which deletes and re-creates the +resource. 
In this case, you can modify your original configuration file: ```shell kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force ``` -```shell +```none deployment.apps/my-nginx deleted deployment.apps/my-nginx replaced ``` ## Updating your application without a service outage -At some point, you'll eventually need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several update operations, each of which is applicable to different scenarios. +At some point, you'll eventually need to update your deployed application, typically by specifying +a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several +update operations, each of which is applicable to different scenarios. We'll guide you through how to create and update applications with Deployments. @@ -434,7 +499,7 @@ Let's say you were running version 1.14.2 of nginx: kubectl create deployment my-nginx --image=nginx:1.14.2 ``` -```shell +```none deployment.apps/my-nginx created ``` @@ -444,24 +509,24 @@ with 3 replicas (so the old and new revisions can coexist): kubectl scale deployment my-nginx --current-replicas=1 --replicas=3 ``` -``` +```none deployment.apps/my-nginx scaled ``` -To update to version 1.16.1, change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1` using the previous kubectl commands. +To update to version 1.16.1, change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` +to `nginx:1.16.1` using the previous kubectl commands. ```shell kubectl edit deployment/my-nginx ``` -That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods. To learn more details about it, visit [Deployment page](/docs/concepts/workloads/controllers/deployment/). - - +That's it! The Deployment will declaratively update the deployed nginx application progressively +behind the scene. It ensures that only a certain number of old replicas may be down while they are +being updated, and only a certain number of new replicas may be created above the desired number +of pods. To learn more details about it, visit [Deployment page](/docs/concepts/workloads/controllers/deployment/). ## {{% heading "whatsnext" %}} - - Learn about [how to use `kubectl` for application introspection and debugging](/docs/tasks/debug/debug-application/debug-running-pod/). - See [Configuration Best Practices and Tips](/docs/concepts/configuration/overview/). - diff --git a/content/en/docs/concepts/cluster-administration/system-traces.md b/content/en/docs/concepts/cluster-administration/system-traces.md index 664e8951bfa47..04bd58ce38b92 100644 --- a/content/en/docs/concepts/cluster-administration/system-traces.md +++ b/content/en/docs/concepts/cluster-administration/system-traces.md @@ -84,7 +84,7 @@ The kubelet CRI interface and authenticated http servers are instrumented to gen trace spans. As with the apiserver, the endpoint and sampling rate are configurable. Trace context propagation is also configured. A parent span's sampling decision is always respected. A provided tracing configuration sampling rate will apply to spans without a parent. 
-Enabled without a configured endpoint, the default OpenTelemetry Collector reciever address of "localhost:4317" is set. +Enabled without a configured endpoint, the default OpenTelemetry Collector receiver address of "localhost:4317" is set. #### Enabling tracing in the kubelet diff --git a/content/en/docs/concepts/configuration/overview.md b/content/en/docs/concepts/configuration/overview.md index 33e04e1a914a3..7266f2b7ebc2a 100644 --- a/content/en/docs/concepts/configuration/overview.md +++ b/content/en/docs/concepts/configuration/overview.md @@ -102,13 +102,13 @@ to others, please don't hesitate to file an issue or submit a PR. See the [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) app for examples of this approach. -A Service can be made to span multiple Deployments by omitting release-specific labels from its -selector. When you need to update a running service without downtime, use a -[Deployment](/docs/concepts/workloads/controllers/deployment/). + A Service can be made to span multiple Deployments by omitting release-specific labels from its + selector. When you need to update a running service without downtime, use a + [Deployment](/docs/concepts/workloads/controllers/deployment/). -A desired state of an object is described by a Deployment, and if changes to that spec are -_applied_, the deployment controller changes the actual state to the desired state at a controlled -rate. + A desired state of an object is described by a Deployment, and if changes to that spec are + _applied_, the deployment controller changes the actual state to the desired state at a controlled + rate. - Use the [Kubernetes common labels](/docs/concepts/overview/working-with-objects/common-labels/) for common use cases. These standardized labels enrich the metadata in a way that allows tools, diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index 52408e6022bdd..437bbce57a10e 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ b/content/en/docs/concepts/configuration/secret.md @@ -490,13 +490,6 @@ the kubelet on each node to authenticate to that repository. You can configure _image pull secrets_ to make this possible. These secrets are configured at the Pod level. -The `imagePullSecrets` field for a Pod is a list of references to Secrets in the same namespace -as the Pod. -You can use an `imagePullSecrets` to pass image registry access credentials to -the kubelet. The kubelet uses this information to pull a private image on behalf of your Pod. -See `PodSpec` in the [Pod API reference](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec) -for more information about the `imagePullSecrets` field. - #### Using imagePullSecrets The `imagePullSecrets` field is a list of references to secrets in the same namespace. diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md index a135d1d1b6d46..6e00eb46b67a7 100644 --- a/content/en/docs/concepts/containers/images.md +++ b/content/en/docs/concepts/containers/images.md @@ -15,8 +15,7 @@ software dependencies. Container images are executable software bundles that can standalone and that make very well defined assumptions about their runtime environment. You typically create a container image of your application and push it to a registry -before referring to it in a -{{< glossary_tooltip text="Pod" term_id="pod" >}} +before referring to it in a {{< glossary_tooltip text="Pod" term_id="pod" >}}. 
This page provides an outline of the container image concept. @@ -36,8 +35,8 @@ and possibly a port number as well; for example: `fictional.registry.example:104 If you don't specify a registry hostname, Kubernetes assumes that you mean the Docker public registry. -After the image name part you can add a _tag_ (in the same way you would when using with commands like `docker` or `podman`). -Tags let you identify different versions of the same series of images. +After the image name part you can add a _tag_ (in the same way you would when using commands +like `docker` or `podman`). Tags let you identify different versions of the same series of images. Image tags consist of lowercase and uppercase letters, digits, underscores (`_`), periods (`.`), and dashes (`-`). @@ -69,10 +68,10 @@ these values have: `Always` : every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image - [digest](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier). If the kubelet has a - container image with that exact digest cached locally, the kubelet uses its cached - image; otherwise, the kubelet pulls the image with the resolved digest, - and uses that image to launch the container. + [digest](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier). + If the kubelet has a container image with that exact digest cached locally, the kubelet uses its + cached image; otherwise, the kubelet pulls the image with the resolved digest, and uses that image + to launch the container. `Never` : the kubelet does not try fetching the image. If the image is somehow already present @@ -97,7 +96,11 @@ the image's digest; replace `:` with `@` (for example, `image@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`). -When using image tags, if the image registry were to change the code that the tag on that image represents, you might end up with a mix of Pods running the old and new code. An image digest uniquely identifies a specific version of the image, so Kubernetes runs the same code every time it starts a container with that image name and digest specified. Specifying an image by digest fixes the code that you run so that a change at the registry cannot lead to that mix of versions. +When using image tags, if the image registry were to change the code that the tag on that image +represents, you might end up with a mix of Pods running the old and new code. An image digest +uniquely identifies a specific version of the image, so Kubernetes runs the same code every time +it starts a container with that image name and digest specified. Specifying an image by digest +fixes the code that you run so that a change at the registry cannot lead to that mix of versions. There are third-party [admission controllers](/docs/reference/access-authn-authz/admission-controllers/) that mutate Pods (and pod templates) when they are created, so that the @@ -137,8 +140,8 @@ If you would like to always force a pull, you can do one of the following: Kubernetes will set the policy to `Always` when you submit the Pod. - Omit the `imagePullPolicy` and the tag for the image to use; Kubernetes will set the policy to `Always` when you submit the Pod.
- +- Enable the [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) + admission controller. ### ImagePullBackOff @@ -156,35 +159,46 @@ which is 300 seconds (5 minutes). ## Multi-architecture images with image indexes -As well as providing binary images, a container registry can also serve a [container image index](https://github.com/opencontainers/image-spec/blob/master/image-index.md). An image index can point to multiple [image manifests](https://github.com/opencontainers/image-spec/blob/master/manifest.md) for architecture-specific versions of a container. The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to fetch the right binary image for the machine architecture they are using. +As well as providing binary images, a container registry can also serve a +[container image index](https://github.com/opencontainers/image-spec/blob/master/image-index.md). +An image index can point to multiple [image manifests](https://github.com/opencontainers/image-spec/blob/master/manifest.md) +for architecture-specific versions of a container. The idea is that you can have a name for an image +(for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to +fetch the right binary image for the machine architecture they are using. -Kubernetes itself typically names container images with a suffix `-$(ARCH)`. For backward compatibility, please generate the older images with suffixes. The idea is to generate say `pause` image which has the manifest for all the arch(es) and say `pause-amd64` which is backwards compatible for older configurations or YAML files which may have hard coded the images with suffixes. +Kubernetes itself typically names container images with a suffix `-$(ARCH)`. For backward +compatibility, please generate the older images with suffixes. The idea is to generate say `pause` +image which has the manifest for all the arch(es) and say `pause-amd64` which is backwards +compatible for older configurations or YAML files which may have hard coded the images with +suffixes. ## Using a private registry Private registries may require keys to read images from them. Credentials can be provided in several ways: - - Configuring Nodes to Authenticate to a Private Registry - - all pods can read any configured private registries - - requires node configuration by cluster administrator - - Kubelet Credential Provider to dynamically fetch credentials for private registries - - kubelet can be configured to use credential provider exec plugin - for the respective private registry. - - Pre-pulled Images - - all pods can use any images cached on a node - - requires root access to all nodes to set up - - Specifying ImagePullSecrets on a Pod - - only pods which provide own keys can access the private registry - - Vendor-specific or local extensions - - if you're using a custom node configuration, you (or your cloud - provider) can implement your mechanism for authenticating the node - to the container registry. + +- Configuring Nodes to Authenticate to a Private Registry + - all pods can read any configured private registries + - requires node configuration by cluster administrator +- Kubelet Credential Provider to dynamically fetch credentials for private registries + - kubelet can be configured to use credential provider exec plugin + for the respective private registry. 
+- Pre-pulled Images + - all pods can use any images cached on a node + - requires root access to all nodes to set up +- Specifying ImagePullSecrets on a Pod + - only pods which provide their own keys can access the private registry +- Vendor-specific or local extensions + - if you're using a custom node configuration, you (or your cloud + provider) can implement your mechanism for authenticating the node + to the container registry. These options are explained in more detail below. ### Configuring nodes to authenticate to a private registry -Specific instructions for setting credentials depends on the container runtime and registry you chose to use. You should refer to your solution's documentation for the most accurate information. +Specific instructions for setting credentials depend on the container runtime and registry you +choose to use. You should refer to your solution's documentation for the most accurate information. For an example of configuring a private container image registry, see the [Pull an Image from a Private Registry](/docs/tasks/configure-pod-container/pull-image-private-registry) @@ -269,7 +283,6 @@ If now a container specifies an image `my-registry.io/images/subpath/my-image` to be pulled, then the kubelet will try to download them from both authentication sources if one of them fails. - ### Pre-pulled images {{< note >}} @@ -285,7 +298,8 @@ then a local image is used (preferentially or exclusively, respectively). If you want to rely on pre-pulled images as a substitute for registry authentication, you must ensure all nodes in the cluster have the same pre-pulled images. -This can be used to preload certain images for speed or as an alternative to authenticating to a private registry. +This can be used to preload certain images for speed or as an alternative to authenticating to a +private registry. All pods will have read access to any pre-pulled images. @@ -307,13 +321,18 @@ to the registry, as well as its hostname. Run the following command, substituting the appropriate uppercase values: ```shell -kubectl create secret docker-registry --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL +kubectl create secret docker-registry <name> \ + --docker-server=DOCKER_REGISTRY_SERVER \ + --docker-username=DOCKER_USER \ + --docker-password=DOCKER_PASSWORD \ + --docker-email=DOCKER_EMAIL ``` If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes {{< glossary_tooltip text="Secrets" term_id="secret" >}}. -[Create a Secret based on existing Docker credentials](/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials) explains how to set this up. +[Create a Secret based on existing Docker credentials](/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials) +explains how to set this up. This is particularly useful if you are using multiple private container registries, as `kubectl create secret docker-registry` creates a Secret that @@ -358,7 +377,8 @@ This needs to be done for each pod that is using a private registry. However, setting of this field can be automated by setting the imagePullSecrets in a [ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/) resource.
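+
+As a minimal sketch of that automation (the namespace and the Secret name `regcred` below are
+assumptions for illustration), a ServiceAccount can reference the pull Secret so that Pods using
+that ServiceAccount get it added to their `imagePullSecrets` automatically:
+
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: default
+  namespace: my-namespace   # hypothetical namespace
+imagePullSecrets:
+  - name: regcred           # assumes an existing kubernetes.io/dockerconfigjson Secret named "regcred"
+```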
-Check [Add ImagePullSecrets to a Service Account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) for detailed instructions. +Check [Add ImagePullSecrets to a Service Account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) +for detailed instructions. You can use this in conjunction with a per-node `.docker/config.json`. The credentials will be merged. @@ -371,7 +391,8 @@ common use cases and suggested solutions. 1. Cluster running only non-proprietary (e.g. open-source) images. No need to hide images. - Use public images from a public registry - No configuration required. - - Some cloud providers automatically cache or mirror public images, which improves availability and reduces the time to pull images. + - Some cloud providers automatically cache or mirror public images, which improves + availability and reduces the time to pull images. 1. Cluster running some proprietary images which should be hidden to those outside the company, but visible to all cluster users. - Use a hosted private registry @@ -382,15 +403,17 @@ common use cases and suggested solutions. - It will work better with cluster autoscaling than manual node configuration. - Or, on a cluster where changing the node configuration is inconvenient, use `imagePullSecrets`. 1. Cluster with proprietary images, a few of which require stricter access control. - - Ensure [AlwaysPullImages admission controller](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) is active. Otherwise, all Pods potentially have access to all images. + - Ensure [AlwaysPullImages admission controller](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) + is active. Otherwise, all Pods potentially have access to all images. - Move sensitive data into a "Secret" resource, instead of packaging it in an image. 1. A multi-tenant cluster where each tenant needs own private registry. - - Ensure [AlwaysPullImages admission controller](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) is active. Otherwise, all Pods of all tenants potentially have access to all images. + - Ensure [AlwaysPullImages admission controller](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) + is active. Otherwise, all Pods of all tenants potentially have access to all images. - Run a private registry with authorization required. - - Generate registry credential for each tenant, put into secret, and populate secret to each tenant namespace. + - Generate registry credential for each tenant, put into secret, and populate secret to each + tenant namespace. - The tenant adds that secret to imagePullSecrets of each namespace. - If you need access to multiple registries, you can create one secret for each registry. ## {{% heading "whatsnext" %}} @@ -398,3 +421,4 @@ If you need access to multiple registries, you can create one secret for each re * Read the [OCI Image Manifest Specification](https://github.com/opencontainers/image-spec/blob/master/manifest.md). * Learn about [container image garbage collection](/docs/concepts/architecture/garbage-collection/#container-image-garbage-collection). * Learn more about [pulling an Image from a Private Registry](/docs/tasks/configure-pod-container/pull-image-private-registry). 
+ diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md index 6645984559bb1..b26a25af04d9e 100644 --- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md +++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md @@ -87,60 +87,65 @@ spec: The general workflow of a device plugin includes the following steps: -* Initialization. During this phase, the device plugin performs vendor specific +1. Initialization. During this phase, the device plugin performs vendor-specific initialization and setup to make sure the devices are in a ready state. -* The plugin starts a gRPC service, with a Unix socket under host path +1. The plugin starts a gRPC service, with a Unix socket under the host path `/var/lib/kubelet/device-plugins/`, that implements the following interfaces: - ```gRPC - service DevicePlugin { - // GetDevicePluginOptions returns options to be communicated with Device Manager. - rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} - - // ListAndWatch returns a stream of List of Devices - // Whenever a Device state change or a Device disappears, ListAndWatch - // returns the new list - rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} - - // Allocate is called during container creation so that the Device - // Plugin can run device specific operations and instruct Kubelet - // of the steps to make the Device available in the container - rpc Allocate(AllocateRequest) returns (AllocateResponse) {} - - // GetPreferredAllocation returns a preferred set of devices to allocate - // from a list of available ones. The resulting preferred allocation is not - // guaranteed to be the allocation ultimately performed by the - // devicemanager. It is only designed to help the devicemanager make a more - // informed allocation decision when possible. - rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {} - - // PreStartContainer is called, if indicated by Device Plugin during registeration phase, - // before each container start. Device plugin can run device specific operations - // such as resetting the device before making devices available to the container. - rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {} - } - ``` - - {{< note >}} - Plugins are not required to provide useful implementations for - `GetPreferredAllocation()` or `PreStartContainer()`. Flags indicating which - (if any) of these calls are available should be set in the `DevicePluginOptions` - message sent back by a call to `GetDevicePluginOptions()`. The `kubelet` will - always call `GetDevicePluginOptions()` to see which optional functions are - available, before calling any of them directly. - {{< /note >}} - -* The plugin registers itself with the kubelet through the Unix socket at host + ```gRPC + service DevicePlugin { + // GetDevicePluginOptions returns options to be communicated with Device Manager. 
+ rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} + + // ListAndWatch returns a stream of List of Devices + // Whenever a Device state change or a Device disappears, ListAndWatch + // returns the new list + rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} + + // Allocate is called during container creation so that the Device + // Plugin can run device specific operations and instruct Kubelet + // of the steps to make the Device available in the container + rpc Allocate(AllocateRequest) returns (AllocateResponse) {} + + // GetPreferredAllocation returns a preferred set of devices to allocate + // from a list of available ones. The resulting preferred allocation is not + // guaranteed to be the allocation ultimately performed by the + // devicemanager. It is only designed to help the devicemanager make a more + // informed allocation decision when possible. + rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {} + + // PreStartContainer is called, if indicated by Device Plugin during registeration phase, + // before each container start. Device plugin can run device specific operations + // such as resetting the device before making devices available to the container. + rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {} + } + ``` + + {{< note >}} + Plugins are not required to provide useful implementations for + `GetPreferredAllocation()` or `PreStartContainer()`. Flags indicating + the availability of these calls, if any, should be set in the `DevicePluginOptions` + message sent back by a call to `GetDevicePluginOptions()`. The `kubelet` will + always call `GetDevicePluginOptions()` to see which optional functions are + available, before calling any of them directly. + {{< /note >}} + +1. The plugin registers itself with the kubelet through the Unix socket at host path `/var/lib/kubelet/device-plugins/kubelet.sock`. -* After successfully registering itself, the device plugin runs in serving mode, during which it keeps - monitoring device health and reports back to the kubelet upon any device state changes. - It is also responsible for serving `Allocate` gRPC requests. During `Allocate`, the device plugin may - do device-specific preparation; for example, GPU cleanup or QRNG initialization. - If the operations succeed, the device plugin returns an `AllocateResponse` that contains container - runtime configurations for accessing the allocated devices. The kubelet passes this information - to the container runtime. + {{< note >}} + The ordering of the workflow is important. A plugin MUST start serving gRPC + service before registering itself with kubelet for successful registration. + {{< /note >}} + +1. After successfully registering itself, the device plugin runs in serving mode, during which it keeps + monitoring device health and reports back to the kubelet upon any device state changes. + It is also responsible for serving `Allocate` gRPC requests. During `Allocate`, the device plugin may + do device-specific preparation; for example, GPU cleanup or QRNG initialization. + If the operations succeed, the device plugin returns an `AllocateResponse` that contains container + runtime configurations for accessing the allocated devices. The kubelet passes this information + to the container runtime. ### Handling kubelet restarts @@ -172,11 +177,11 @@ Beta graduation of this feature. 
Because of this, kubelet upgrades should be sea but there still may be changes in the API before stabilization making upgrades not guaranteed to be non-breaking. -{{< caution >}} +{{< note >}} Although the Device Manager component of Kubernetes is a generally available feature, the _device plugin API_ is not stable. For information on the device plugin API and version compatibility, read [Device Plugin API versions](/docs/reference/node/device-plugin-api-versions/). -{{< caution >}} +{{< /note >}} As a project, Kubernetes recommends that device plugin developers: diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md index 4fc30f96b1490..5c6cfa7fc5842 100644 --- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md +++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md @@ -109,7 +109,8 @@ If you want to enable `hostPort` support, you must specify `portMappings capabil }, { "type": "portmap", - "capabilities": {"portMappings": true} + "capabilities": {"portMappings": true}, + "externalSetMarkChain": "KUBE-MARK-MASQ" } ] } diff --git a/content/en/docs/concepts/extend-kubernetes/operator.md b/content/en/docs/concepts/extend-kubernetes/operator.md index d740d481dd6a7..69b13915603a3 100644 --- a/content/en/docs/concepts/extend-kubernetes/operator.md +++ b/content/en/docs/concepts/extend-kubernetes/operator.md @@ -119,6 +119,7 @@ operator. * [kubebuilder](https://book.kubebuilder.io/) * [KubeOps](https://buehler.github.io/dotnet-operator-sdk/) (.NET operator SDK) * [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator) +* [Mast](https://docs.ansi.services/mast/user_guide/operator/) * [Metacontroller](https://metacontroller.github.io/metacontroller/intro.html) along with WebHooks that you implement yourself * [Operator Framework](https://operatorframework.io) diff --git a/content/en/docs/concepts/overview/_index.md b/content/en/docs/concepts/overview/_index.md index 72abe7298a0fd..f5952d39b7482 100644 --- a/content/en/docs/concepts/overview/_index.md +++ b/content/en/docs/concepts/overview/_index.md @@ -76,11 +76,11 @@ Containers have become popular because they provide extra benefits, such as: applications from infrastructure. * Observability: not only surfaces OS-level information and metrics, but also application health and other signals. -* Environmental consistency across development, testing, and production: Runs +* Environmental consistency across development, testing, and production: runs the same on a laptop as it does in the cloud. -* Cloud and OS distribution portability: Runs on Ubuntu, RHEL, CoreOS, on-premises, +* Cloud and OS distribution portability: runs on Ubuntu, RHEL, CoreOS, on-premises, on major public clouds, and anywhere else. -* Application-centric management: Raises the level of abstraction from running an +* Application-centric management: raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources. 
* Loosely coupled, distributed, elastic, liberated micro-services: applications are broken into smaller, independent pieces and can be deployed and managed dynamically – diff --git a/content/en/docs/concepts/overview/working-with-objects/common-labels.md b/content/en/docs/concepts/overview/working-with-objects/common-labels.md index b4ccb7a652c6a..c6bda86afefcd 100644 --- a/content/en/docs/concepts/overview/working-with-objects/common-labels.md +++ b/content/en/docs/concepts/overview/working-with-objects/common-labels.md @@ -37,7 +37,7 @@ on every resource object. | ----------------------------------- | --------------------- | -------- | ---- | | `app.kubernetes.io/name` | The name of the application | `mysql` | string | | `app.kubernetes.io/instance` | A unique name identifying the instance of an application | `mysql-abcxzy` | string | -| `app.kubernetes.io/version` | The current version of the application (e.g., a semantic version, revision hash, etc.) | `5.7.21` | string | +| `app.kubernetes.io/version` | The current version of the application (e.g., a [SemVer 1.0](https://semver.org/spec/v1.0.0.html), revision hash, etc.) | `5.7.21` | string | | `app.kubernetes.io/component` | The component within the architecture | `database` | string | | `app.kubernetes.io/part-of` | The name of a higher level application this one is part of | `wordpress` | string | | `app.kubernetes.io/managed-by` | The tool being used to manage the operation of an application | `helm` | string | @@ -171,4 +171,3 @@ metadata: ``` With the MySQL `StatefulSet` and `Service` you'll notice information about both MySQL and WordPress, the broader application, are included. - diff --git a/content/en/docs/concepts/overview/working-with-objects/labels.md b/content/en/docs/concepts/overview/working-with-objects/labels.md index 55b1a5d032ad7..477ce6f2f5ca5 100644 --- a/content/en/docs/concepts/overview/working-with-objects/labels.md +++ b/content/en/docs/concepts/overview/working-with-objects/labels.md @@ -9,9 +9,12 @@ weight: 40 _Labels_ are key/value pairs that are attached to objects, such as pods. -Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. -Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time. -Each object can have a set of key/value labels defined. Each Key must be unique for a given object. +Labels are intended to be used to specify identifying attributes of objects +that are meaningful and relevant to users, but do not directly imply semantics +to the core system. Labels can be used to organize and to select subsets of +objects. Labels can be attached to objects at creation time and subsequently +added and modified at any time. Each object can have a set of key/value labels +defined. Each Key must be unique for a given object. ```json "metadata": { @@ -30,37 +33,56 @@ and CLIs. Non-identifying information should be recorded using ## Motivation -Labels enable users to map their own organizational structures onto system objects in a loosely coupled fashion, without requiring clients to store these mappings. +Labels enable users to map their own organizational structures onto system objects +in a loosely coupled fashion, without requiring clients to store these mappings. 
-Service deployments and batch processing pipelines are often multi-dimensional entities (e.g., multiple partitions or deployments, multiple release tracks, multiple tiers, multiple micro-services per tier). Management often requires cross-cutting operations, which breaks encapsulation of strictly hierarchical representations, especially rigid hierarchies determined by the infrastructure rather than by users. +Service deployments and batch processing pipelines are often multi-dimensional entities +(e.g., multiple partitions or deployments, multiple release tracks, multiple tiers, +multiple micro-services per tier). Management often requires cross-cutting operations, +which breaks encapsulation of strictly hierarchical representations, especially rigid +hierarchies determined by the infrastructure rather than by users. Example labels: - * `"release" : "stable"`, `"release" : "canary"` - * `"environment" : "dev"`, `"environment" : "qa"`, `"environment" : "production"` - * `"tier" : "frontend"`, `"tier" : "backend"`, `"tier" : "cache"` - * `"partition" : "customerA"`, `"partition" : "customerB"` - * `"track" : "daily"`, `"track" : "weekly"` +* `"release" : "stable"`, `"release" : "canary"` +* `"environment" : "dev"`, `"environment" : "qa"`, `"environment" : "production"` +* `"tier" : "frontend"`, `"tier" : "backend"`, `"tier" : "cache"` +* `"partition" : "customerA"`, `"partition" : "customerB"` +* `"track" : "daily"`, `"track" : "weekly"` -These are examples of [commonly used labels](/docs/concepts/overview/working-with-objects/common-labels/); you are free to develop your own conventions. Keep in mind that label Key must be unique for a given object. +These are examples of +[commonly used labels](/docs/concepts/overview/working-with-objects/common-labels/); +you are free to develop your own conventions. +Keep in mind that label Key must be unique for a given object. ## Syntax and character set -_Labels_ are key/value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash (`/`). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (`.`), not longer than 253 characters in total, followed by a slash (`/`). +_Labels_ are key/value pairs. Valid label keys have two segments: an optional +prefix and name, separated by a slash (`/`). The name segment is required and +must be 63 characters or less, beginning and ending with an alphanumeric +character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), +and alphanumerics between. The prefix is optional. If specified, the prefix +must be a DNS subdomain: a series of DNS labels separated by dots (`.`), +not longer than 253 characters in total, followed by a slash (`/`). -If the prefix is omitted, the label Key is presumed to be private to the user. Automated system components (e.g. `kube-scheduler`, `kube-controller-manager`, `kube-apiserver`, `kubectl`, or other third-party automation) which add labels to end-user objects must specify a prefix. +If the prefix is omitted, the label Key is presumed to be private to the user. +Automated system components (e.g. `kube-scheduler`, `kube-controller-manager`, +`kube-apiserver`, `kubectl`, or other third-party automation) which add labels +to end-user objects must specify a prefix. 
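As an illustrative sketch (the Pod name and the `example.com/` prefix are hypothetical, not taken from the page above), a label set that follows these syntax rules could look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: label-syntax-demo          # hypothetical name, for illustration only
  labels:
    environment: production        # unprefixed key: presumed private to the user
    example.com/team: billing      # prefixed key: DNS subdomain prefix, a "/", then the name segment
spec:
  containers:
    - name: demo
      image: nginx:1.14.2
```

Automated components that add labels to an object like this one would be expected to use prefixed keys, as described above, while end users typically use unprefixed keys.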
-The `kubernetes.io/` and `k8s.io/` prefixes are [reserved](/docs/reference/labels-annotations-taints/) for Kubernetes core components. +The `kubernetes.io/` and `k8s.io/` prefixes are +[reserved](/docs/reference/labels-annotations-taints/) for Kubernetes core components. Valid label value: + * must be 63 characters or less (can be empty), * unless empty, must begin and end with an alphanumeric character (`[a-z0-9A-Z]`), * could contain dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between. -For example, here's the configuration file for a Pod that has two labels `environment: production` and `app: nginx` : +For example, here's the configuration file for a Pod that has two labels +`environment: production` and `app: nginx`: ```yaml - apiVersion: v1 kind: Pod metadata: @@ -74,34 +96,43 @@ spec: image: nginx:1.14.2 ports: - containerPort: 80 - ``` ## Label selectors -Unlike [names and UIDs](/docs/concepts/overview/working-with-objects/names/), labels do not provide uniqueness. In general, we expect many objects to carry the same label(s). +Unlike [names and UIDs](/docs/concepts/overview/working-with-objects/names/), labels +do not provide uniqueness. In general, we expect many objects to carry the same label(s). -Via a _label selector_, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes. +Via a _label selector_, the client/user can identify a set of objects. +The label selector is the core grouping primitive in Kubernetes. The API currently supports two types of selectors: _equality-based_ and _set-based_. -A label selector can be made of multiple _requirements_ which are comma-separated. In the case of multiple requirements, all must be satisfied so the comma separator acts as a logical _AND_ (`&&`) operator. +A label selector can be made of multiple _requirements_ which are comma-separated. +In the case of multiple requirements, all must be satisfied so the comma separator +acts as a logical _AND_ (`&&`) operator. The semantics of empty or non-specified selectors are dependent on the context, and API types that use selectors should document the validity and meaning of them. {{< note >}} -For some API types, such as ReplicaSets, the label selectors of two instances must not overlap within a namespace, or the controller can see that as conflicting instructions and fail to determine how many replicas should be present. +For some API types, such as ReplicaSets, the label selectors of two instances must +not overlap within a namespace, or the controller can see that as conflicting +instructions and fail to determine how many replicas should be present. {{< /note >}} {{< caution >}} -For both equality-based and set-based conditions there is no logical _OR_ (`||`) operator. Ensure your filter statements are structured accordingly. +For both equality-based and set-based conditions there is no logical _OR_ (`||`) operator. +Ensure your filter statements are structured accordingly. {{< /caution >}} ### _Equality-based_ requirement -_Equality-_ or _inequality-based_ requirements allow filtering by label keys and values. Matching objects must satisfy all of the specified label constraints, though they may have additional labels as well. -Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _equality_ (and are synonyms), while the latter represents _inequality_. For example: +_Equality-_ or _inequality-based_ requirements allow filtering by label keys and values. 
+Matching objects must satisfy all of the specified label constraints, though they may +have additional labels as well. Three kinds of operators are admitted `=`,`==`,`!=`. +The first two represent _equality_ (and are synonyms), while the latter represents _inequality_. +For example: ``` environment = production @@ -109,8 +140,9 @@ tier != frontend ``` The former selects all resources with key equal to `environment` and value equal to `production`. -The latter selects all resources with key equal to `tier` and value distinct from `frontend`, and all resources with no labels with the `tier` key. -One could filter for resources in `production` excluding `frontend` using the comma operator: `environment=production,tier!=frontend` +The latter selects all resources with key equal to `tier` and value distinct from `frontend`, +and all resources with no labels with the `tier` key. One could filter for resources in `production` +excluding `frontend` using the comma operator: `environment=production,tier!=frontend` One usage scenario for equality-based label requirement is for Pods to specify node selection criteria. For example, the sample Pod below selects nodes with @@ -134,7 +166,9 @@ spec: ### _Set-based_ requirement -_Set-based_ label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: `in`,`notin` and `exists` (only the key identifier). For example: +_Set-based_ label requirements allow filtering keys according to a set of values. +Three kinds of operators are supported: `in`,`notin` and `exists` (only the key identifier). +For example: ``` environment in (production, qa) @@ -143,27 +177,38 @@ partition !partition ``` -* The first example selects all resources with key equal to `environment` and value equal to `production` or `qa`. -* The second example selects all resources with key equal to `tier` and values other than `frontend` and `backend`, and all resources with no labels with the `tier` key. -* The third example selects all resources including a label with key `partition`; no values are checked. -* The fourth example selects all resources without a label with key `partition`; no values are checked. - -Similarly the comma separator acts as an _AND_ operator. So filtering resources with a `partition` key (no matter the value) and with `environment` different than  `qa` can be achieved using `partition,environment notin (qa)`. -The _set-based_ label selector is a general form of equality since `environment=production` is equivalent to `environment in (production)`; similarly for `!=` and `notin`. - -_Set-based_ requirements can be mixed with _equality-based_ requirements. For example: `partition in (customerA, customerB),environment!=qa`. - +- The first example selects all resources with key equal to `environment` and value + equal to `production` or `qa`. +- The second example selects all resources with key equal to `tier` and values other + than `frontend` and `backend`, and all resources with no labels with the `tier` key. +- The third example selects all resources including a label with key `partition`; + no values are checked. +- The fourth example selects all resources without a label with key `partition`; + no values are checked. + +Similarly the comma separator acts as an _AND_ operator. So filtering resources +with a `partition` key (no matter the value) and with `environment` different +than `qa` can be achieved using `partition,environment notin (qa)`. 
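As a sketch of how that same filter (any `partition`, and `environment` not equal to `qa`) reads in manifest form, using the `matchExpressions` syntax accepted by resources that support set-based requirements (described further below):

```yaml
selector:
  matchExpressions:
    # require that a "partition" label exists, whatever its value
    - {key: partition, operator: Exists}
    # and that "environment" is not "qa"
    - {key: environment, operator: NotIn, values: [qa]}
```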
+The _set-based_ label selector is a general form of equality since +`environment=production` is equivalent to `environment in (production)`; +similarly for `!=` and `notin`. + +_Set-based_ requirements can be mixed with _equality-based_ requirements. +For example: `partition in (customerA, customerB),environment!=qa`. ## API ### LIST and WATCH filtering -LIST and WATCH operations may specify label selectors to filter the sets of objects returned using a query parameter. Both requirements are permitted (presented here as they would appear in a URL query string): +LIST and WATCH operations may specify label selectors to filter the sets of objects +returned using a query parameter. Both requirements are permitted +(presented here as they would appear in a URL query string): - * _equality-based_ requirements: `?labelSelector=environment%3Dproduction,tier%3Dfrontend` - * _set-based_ requirements: `?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29` +* _equality-based_ requirements: `?labelSelector=environment%3Dproduction,tier%3Dfrontend` +* _set-based_ requirements: `?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29` -Both label selector styles can be used to list or watch resources via a REST client. For example, targeting `apiserver` with `kubectl` and using _equality-based_ one may write: +Both label selector styles can be used to list or watch resources via a REST client. +For example, targeting `apiserver` with `kubectl` and using _equality-based_ one may write: ```shell kubectl get pods -l environment=production,tier=frontend @@ -175,7 +220,8 @@ or using _set-based_ requirements: kubectl get pods -l 'environment in (production),tier in (frontend)' ``` -As already mentioned _set-based_ requirements are more expressive.  For instance, they can implement the _OR_ operator on values: +As already mentioned _set-based_ requirements are more expressive. +For instance, they can implement the _OR_ operator on values: ```shell kubectl get pods -l 'environment in (production, qa)' @@ -196,15 +242,19 @@ also use label selectors to specify sets of other resources, such as #### Service and ReplicationController -The set of pods that a `service` targets is defined with a label selector. Similarly, the population of pods that a `replicationcontroller` should manage is also defined with a label selector. +The set of pods that a `service` targets is defined with a label selector. +Similarly, the population of pods that a `replicationcontroller` should +manage is also defined with a label selector. -Labels selectors for both objects are defined in `json` or `yaml` files using maps, and only _equality-based_ requirement selectors are supported: +Labels selectors for both objects are defined in `json` or `yaml` files using maps, +and only _equality-based_ requirement selectors are supported: ```json "selector": { "component" : "redis", } ``` + or ```yaml @@ -212,7 +262,8 @@ selector: component: redis ``` -this selector (respectively in `json` or `yaml` format) is equivalent to `component=redis` or `component in (redis)`. +This selector (respectively in `json` or `yaml` format) is equivalent to +`component=redis` or `component in (redis)`. #### Resources that support set-based requirements @@ -231,9 +282,25 @@ selector: - {key: environment, operator: NotIn, values: [dev]} ``` -`matchLabels` is a map of `{key,value}` pairs. 
A single `{key,value}` in the `matchLabels` map is equivalent to an element of `matchExpressions`, whose `key` field is "key", the `operator` is "In", and the `values` array contains only "value". `matchExpressions` is a list of pod selector requirements. Valid operators include In, NotIn, Exists, and DoesNotExist. The values set must be non-empty in the case of In and NotIn. All of the requirements, from both `matchLabels` and `matchExpressions` are ANDed together -- they must all be satisfied in order to match. +`matchLabels` is a map of `{key,value}` pairs. A single `{key,value}` in the +`matchLabels` map is equivalent to an element of `matchExpressions`, whose `key` +field is "key", the `operator` is "In", and the `values` array contains only "value". +`matchExpressions` is a list of pod selector requirements. Valid operators include +In, NotIn, Exists, and DoesNotExist. The values set must be non-empty in the case of +In and NotIn. All of the requirements, from both `matchLabels` and `matchExpressions` +are ANDed together -- they must all be satisfied in order to match. #### Selecting sets of nodes -One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule. -See the documentation on [node selection](/docs/concepts/scheduling-eviction/assign-pod-node/) for more information. +One use case for selecting over labels is to constrain the set of nodes onto which +a pod can schedule. See the documentation on +[node selection](/docs/concepts/scheduling-eviction/assign-pod-node/) for more information. + +## {{% heading "whatsnext" %}} + +- Learn how to [add a label to a node](/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node) +- Find [Well-known labels, Annotations and Taints](/docs/reference/labels-annotations-taints/) +- See [Recommended labels](/docs/concepts/overview/working-with-objects/common-labels/) +- [Enforce Pod Security Standards with Namespace Labels](/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/) +- [Use Labels effectively](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively) to manage deployments. +- Read a blog on [Writing a Controller for Pod Labels](/blog/2021/06/21/writing-a-controller-for-pod-labels/) diff --git a/content/en/docs/concepts/overview/working-with-objects/namespaces.md b/content/en/docs/concepts/overview/working-with-objects/namespaces.md index 1eb96fe4a632f..9ec8edaffda1f 100644 --- a/content/en/docs/concepts/overview/working-with-objects/namespaces.md +++ b/content/en/docs/concepts/overview/working-with-objects/namespaces.md @@ -147,7 +147,7 @@ kubectl api-resources --namespaced=false ## Automatic labelling -{{< feature-state state="beta" for_k8s_version="1.21" >}} +{{< feature-state for_k8s_version="1.22" state="stable" >}} The Kubernetes control plane sets an immutable {{< glossary_tooltip text="label" term_id="label" >}} `kubernetes.io/metadata.name` on all namespaces, provided that the `NamespaceDefaultLabelName` diff --git a/content/en/docs/concepts/policy/limit-range.md b/content/en/docs/concepts/policy/limit-range.md index a1428b0e4b32d..e1e0d0b93a0bf 100644 --- a/content/en/docs/concepts/policy/limit-range.md +++ b/content/en/docs/concepts/policy/limit-range.md @@ -50,7 +50,7 @@ The name of a LimitRange object must be a valid ## LimitRange and admission checks for Pods -A `LimitRange` does **not** check the consistency of the default values it applies. 
This means that a default value for the _limit_ that is set by `LimitRange` may be less than the _request_ value specified for the container in the spec that a client submits to the API server. If that happens, the final Pod will not be scheduleable. +A `LimitRange` does **not** check the consistency of the default values it applies. This means that a default value for the _limit_ that is set by `LimitRange` may be less than the _request_ value specified for the container in the spec that a client submits to the API server. If that happens, the final Pod will not be schedulable. For example, you define a `LimitRange` with this manifest: diff --git a/content/en/docs/concepts/policy/pid-limiting.md b/content/en/docs/concepts/policy/pid-limiting.md index 1e03ccf375cd8..54e1b324f9d9b 100644 --- a/content/en/docs/concepts/policy/pid-limiting.md +++ b/content/en/docs/concepts/policy/pid-limiting.md @@ -73,13 +73,6 @@ The value you specified declares that the specified number of process IDs will be reserved for the system as a whole and for Kubernetes system daemons respectively. -{{< note >}} -Before Kubernetes version 1.20, PID resource limiting with Node-level -reservations required enabling the [feature -gate](/docs/reference/command-line-tools-reference/feature-gates/) -`SupportNodePidsLimit` to work. -{{< /note >}} - ## Pod PID limits Kubernetes allows you to limit the number of processes running in a Pod. You @@ -89,12 +82,6 @@ To configure the limit, you can specify the command line parameter `--pod-max-pi to the kubelet, or set `PodPidsLimit` in the kubelet [configuration file](/docs/tasks/administer-cluster/kubelet-config-file/). -{{< note >}} -Before Kubernetes version 1.20, PID resource limiting for Pods required enabling -the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) -`SupportPodPidsLimit` to work. -{{< /note >}} - ## PID based eviction You can configure kubelet to start terminating a Pod when it is misbehaving and consuming abnormal amount of resources. diff --git a/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md b/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md index 5a5647985d589..be9994631559e 100644 --- a/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md +++ b/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md @@ -26,22 +26,7 @@ criteria that Pod should be satisfied before considered schedulable. This field only when a Pod is created (either by the client, or mutated during admission). After creation, each schedulingGate can be removed in arbitrary order, but addition of a new scheduling gate is disallowed. -{{}} -stateDiagram-v2 - s1: pod created - s2: pod scheduling gated - s3: pod scheduling ready - s4: pod running - if: empty scheduling gates? - [*] --> s1 - s1 --> if - s2 --> if: scheduling gate removed - if --> s2: no - if --> s3: yes - s3 --> s4 - s4 --> [*] -{{< /mermaid >}} - +{{< figure src="/docs/images/podSchedulingGates.svg" alt="pod-scheduling-gates-diagram" caption="Figure. 
Pod SchedulingGates" class="diagram-large" link="https://mermaid.live/edit#pako:eNplkktTwyAUhf8KgzuHWpukaYszutGlK3caFxQuCVMCGSDVTKf_XfKyPlhxz4HDB9wT5lYAptgHFuBRsdKxenFMClMYFIdfUdRYgbiD6ItJTEbR8wpEq5UpUfnDTf-5cbPoJjcbXdcaE61RVJIiqJvQ_Y30D-OCt-t3tFjcR5wZayiVnIGmkv4NiEfX9jijKTmmRH5jf0sRugOP0HyHUc1m6KGMFP27cM28fwSJDluPpNKaXqVJzmFNfHD2APRKSjnNFx9KhIpmzSfhVls3eHdTRrwG8QnxKfEZUUNeYTDBNbiaKRF_5dSfX-BQQQ0FpnEqQLJWhwIX5hyXsjbYl85wTINrgeC2EZd_xFQy7b_VJ6GCdd-itkxALE84dE3fAqXyIUZya6Qqe711OspVCI2ny2Vv35QqVO3-htt66ZWomAvVcZcv8yTfsiSFfJOydZoKvl_ttjLJVlJsblcJw-czwQ0zr9ZeqGDgeR77b2jD8xdtjtDn" >}} ## Usage example To mark a Pod not-ready for scheduling, you can create it with one or more scheduling gates like this: diff --git a/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md index 62098a0928f42..76855f5a5e57c 100644 --- a/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md +++ b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md @@ -97,7 +97,7 @@ your cluster. Those fields are: nodes match the node selector. {{< note >}} - The `minDomains` field is a beta field and enabled by default in 1.25. You can disable it by disabling the + The `minDomains` field is a beta field and disabled by default in 1.25. You can enable it by enabling the `MinDomainsInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/). {{< /note >}} diff --git a/content/en/docs/concepts/security/api-server-bypass-risks.md b/content/en/docs/concepts/security/api-server-bypass-risks.md index 7c906a203172e..90435b1eeeec2 100644 --- a/content/en/docs/concepts/security/api-server-bypass-risks.md +++ b/content/en/docs/concepts/security/api-server-bypass-risks.md @@ -12,7 +12,8 @@ The Kubernetes API server is the main point of entry to a cluster for external p (users and services) interacting with it. As part of this role, the API server has several key built-in security controls, such as -audit logging and {{< glossary_tooltip text="admission controllers" term_id="admission-controller" >}}. However, there are ways to modify the configuration +audit logging and {{< glossary_tooltip text="admission controllers" term_id="admission-controller" >}}. +However, there are ways to modify the configuration or content of the cluster that bypass these controls. This page describes the ways in which the security controls built into the @@ -65,7 +66,8 @@ every container running on the node. When Kubernetes cluster users have RBAC access to `Node` object sub-resources, that access serves as authorization to interact with the kubelet API. The exact access depends on -which sub-resource access has been granted, as detailed in [kubelet authorization](https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authorization). +which sub-resource access has been granted, as detailed in +[kubelet authorization](/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authorization). Direct access to the kubelet API is not subject to admission control and is not logged by Kubernetes audit logging. An attacker with direct access to this API may be able to @@ -80,11 +82,12 @@ The default anonymous access doesn't make this assertion with the control plane. ### Mitigations - Restrict access to sub-resources of the `nodes` API object using mechanisms such as - [RBAC](/docs/reference/access-authn-authz/rbac/). 
Only grant this access when required,
-  such as by monitoring services.
+  [RBAC](/docs/reference/access-authn-authz/rbac/). Only grant this access when required,
+  such as by monitoring services.
 - Restrict access to the kubelet port. Only allow specified and trusted IP address
-  ranges to access the port.
-- [Ensure that kubelet authentication is set to webhook or certificate mode](/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authentication).
+  ranges to access the port.
+- Ensure that [kubelet authentication](/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authentication)
+  is set to webhook or certificate mode.
 - Ensure that the unauthenticated "read-only" Kubelet port is not enabled on the cluster.

 ## The etcd API
diff --git a/content/en/docs/concepts/security/multi-tenancy.md b/content/en/docs/concepts/security/multi-tenancy.md
index 8393b3a0f2d29..49355d08a6ac4 100755
--- a/content/en/docs/concepts/security/multi-tenancy.md
+++ b/content/en/docs/concepts/security/multi-tenancy.md
@@ -44,7 +44,7 @@ share clusters.
 The other major form of multi-tenancy frequently involves a Software-as-a-Service (SaaS) vendor
 running multiple instances of a workload for customers. This business model is so strongly
 associated with this deployment style that many people call it "SaaS tenancy." However, a better
-term might be "multi-customer tenancy,” since SaaS vendors may also use other deployment models,
+term might be "multi-customer tenancy," since SaaS vendors may also use other deployment models,
 and this deployment model can also be used outside of SaaS.

 In this scenario, the customers do not have access to the cluster; Kubernetes is invisible from
diff --git a/content/en/docs/concepts/security/rbac-good-practices.md b/content/en/docs/concepts/security/rbac-good-practices.md
index 8b883bba9a3db..b6abde0d7494e 100644
--- a/content/en/docs/concepts/security/rbac-good-practices.md
+++ b/content/en/docs/concepts/security/rbac-good-practices.md
@@ -121,8 +121,20 @@ considered weak.

 ### Persistent volume creation

-As noted in the [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/#volumes-and-file-systems)
-documentation, access to create PersistentVolumes can allow for escalation of access to the underlying host.
+If someone - or some application - is allowed to create arbitrary PersistentVolumes, that access
+includes the creation of `hostPath` volumes, which then means that a Pod would get access
+to the underlying host filesystem(s) on the associated node. Granting that ability is a security risk.
+
+There are many ways a container with unrestricted access to the host filesystem can escalate privileges, including
+reading data from other containers, and abusing the credentials of system services, such as Kubelet.
+
+You should only allow access to create PersistentVolume objects for:
+
+- users (cluster operators) that need this access for their work, and who you trust,
+- the Kubernetes control plane components which create PersistentVolumes based on PersistentVolumeClaims
+  that are configured for automatic provisioning.
+  This is usually set up by the Kubernetes provider or by the operator when installing a CSI driver.
+
 Where access to persistent storage is required trusted administrators should create
 PersistentVolumes, and constrained users should use PersistentVolumeClaims to access that storage.
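As a minimal sketch of that split (the Role name and namespace here are made up for illustration), you might grant constrained users rights over PersistentVolumeClaims only, keeping PersistentVolume creation with trusted administrators and the automatic provisioner:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pvc-editor        # hypothetical name
  namespace: team-a       # hypothetical namespace
rules:
  # Claims only: PersistentVolumes are cluster-scoped and are intentionally
  # not granted here, so volume provisioning stays with trusted administrators.
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "create", "delete"]
```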
diff --git a/content/en/docs/concepts/services-networking/ingress-controllers.md b/content/en/docs/concepts/services-networking/ingress-controllers.md index 85680c91458c4..d67e1e3e98e9b 100644 --- a/content/en/docs/concepts/services-networking/ingress-controllers.md +++ b/content/en/docs/concepts/services-networking/ingress-controllers.md @@ -60,6 +60,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet * [Tyk Operator](https://github.com/TykTechnologies/tyk-operator) extends Ingress with Custom Resources to bring API Management capabilities to Ingress. Tyk Operator works with the Open Source Tyk Gateway & Tyk Cloud control plane. * [Voyager](https://appscode.com/products/voyager) is an ingress controller for [HAProxy](https://www.haproxy.org/#desc). +* [Wallarm Ingress Controller](https://www.wallarm.com/solutions/waf-for-kubernetes) is an Ingress Controller that provides WAAP (WAF) and API Security capabilities. ## Using multiple Ingress controllers diff --git a/content/en/docs/concepts/services-networking/network-policies.md b/content/en/docs/concepts/services-networking/network-policies.md index 56d3566c2ecbf..a1797f20b877f 100644 --- a/content/en/docs/concepts/services-networking/network-policies.md +++ b/content/en/docs/concepts/services-networking/network-policies.md @@ -11,88 +11,144 @@ description: >- NetworkPolicies allow you to specify rules for traffic flow within your cluster, and also between Pods and the outside world. Your cluster must use a network plugin that supports NetworkPolicy enforcement. + --- -If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster. NetworkPolicies are an application-centric construct which allow you to specify how a {{< glossary_tooltip text="pod" term_id="pod">}} is allowed to communicate with various network "entities" (we use the word "entity" here to avoid overloading the more common terms such as "endpoints" and "services", which have specific Kubernetes connotations) over the network. NetworkPolicies apply to a connection with a pod on one or both ends, and are not relevant to other connections. +If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you +might consider using Kubernetes NetworkPolicies for particular applications in your cluster. +NetworkPolicies are an application-centric construct which allow you to specify how a {{< +glossary_tooltip text="pod" term_id="pod">}} is allowed to communicate with various network +"entities" (we use the word "entity" here to avoid overloading the more common terms such as +"endpoints" and "services", which have specific Kubernetes connotations) over the network. +NetworkPolicies apply to a connection with a pod on one or both ends, and are not relevant to +other connections. -The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers: +The entities that a Pod can communicate with are identified through a combination of the following +3 identifiers: 1. Other pods that are allowed (exception: a pod cannot block access to itself) 2. Namespaces that are allowed -3. IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node) +3. 
IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, + regardless of the IP address of the Pod or the node) -When defining a pod- or namespace- based NetworkPolicy, you use a {{< glossary_tooltip text="selector" term_id="selector">}} to specify what traffic is allowed to and from the Pod(s) that match the selector. +When defining a pod- or namespace- based NetworkPolicy, you use a +{{< glossary_tooltip text="selector" term_id="selector">}} to specify what traffic is allowed to +and from the Pod(s) that match the selector. Meanwhile, when IP based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges). ## Prerequisites -Network policies are implemented by the [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect. +Network policies are implemented by the [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). +To use network policies, you must be using a networking solution which supports NetworkPolicy. +Creating a NetworkPolicy resource without a controller that implements it will have no effect. ## The Two Sorts of Pod Isolation -There are two sorts of isolation for a pod: isolation for egress, and isolation for ingress. They concern what connections may be established. "Isolation" here is not absolute, rather it means "some restrictions apply". The alternative, "non-isolated for $direction", means that no restrictions apply in the stated direction. The two sorts of isolation (or not) are declared independently, and are both relevant for a connection from one pod to another. - -By default, a pod is non-isolated for egress; all outbound connections are allowed. A pod is isolated for egress if there is any NetworkPolicy that both selects the pod and has "Egress" in its `policyTypes`; we say that such a policy applies to the pod for egress. When a pod is isolated for egress, the only allowed connections from the pod are those allowed by the `egress` list of some NetworkPolicy that applies to the pod for egress. The effects of those `egress` lists combine additively. - -By default, a pod is non-isolated for ingress; all inbound connections are allowed. A pod is isolated for ingress if there is any NetworkPolicy that both selects the pod and has "Ingress" in its `policyTypes`; we say that such a policy applies to the pod for ingress. When a pod is isolated for ingress, the only allowed connections into the pod are those from the pod's node and those allowed by the `ingress` list of some NetworkPolicy that applies to the pod for ingress. The effects of those `ingress` lists combine additively. - -Network policies do not conflict; they are additive. If any policy or policies apply to a given pod for a given direction, the connections allowed in that direction from that pod is the union of what the applicable policies allow. Thus, order of evaluation does not affect the policy result. - -For a connection from a source pod to a destination pod to be allowed, both the egress policy on the source pod and the ingress policy on the destination pod need to allow the connection. If either side does not allow the connection, it will not happen. +There are two sorts of isolation for a pod: isolation for egress, and isolation for ingress. +They concern what connections may be established. 
"Isolation" here is not absolute, rather it +means "some restrictions apply". The alternative, "non-isolated for $direction", means that no +restrictions apply in the stated direction. The two sorts of isolation (or not) are declared +independently, and are both relevant for a connection from one pod to another. + +By default, a pod is non-isolated for egress; all outbound connections are allowed. +A pod is isolated for egress if there is any NetworkPolicy that both selects the pod and has +"Egress" in its `policyTypes`; we say that such a policy applies to the pod for egress. +When a pod is isolated for egress, the only allowed connections from the pod are those allowed by +the `egress` list of some NetworkPolicy that applies to the pod for egress. +The effects of those `egress` lists combine additively. + +By default, a pod is non-isolated for ingress; all inbound connections are allowed. +A pod is isolated for ingress if there is any NetworkPolicy that both selects the pod and +has "Ingress" in its `policyTypes`; we say that such a policy applies to the pod for ingress. +When a pod is isolated for ingress, the only allowed connections into the pod are those from +the pod's node and those allowed by the `ingress` list of some NetworkPolicy that applies to +the pod for ingress. The effects of those `ingress` lists combine additively. + +Network policies do not conflict; they are additive. If any policy or policies apply to a given +pod for a given direction, the connections allowed in that direction from that pod is the union of +what the applicable policies allow. Thus, order of evaluation does not affect the policy result. + +For a connection from a source pod to a destination pod to be allowed, both the egress policy on +the source pod and the ingress policy on the destination pod need to allow the connection. If +either side does not allow the connection, it will not happen. ## The NetworkPolicy resource {#networkpolicy-resource} -See the [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#networkpolicy-v1-networking-k8s-io) reference for a full definition of the resource. +See the [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#networkpolicy-v1-networking-k8s-io) +reference for a full definition of the resource. An example NetworkPolicy might look like this: {{< codenew file="service/networking/networkpolicy.yaml" >}} {{< note >}} -POSTing this to the API server for your cluster will have no effect unless your chosen networking solution supports network policy. +POSTing this to the API server for your cluster will have no effect unless your chosen networking +solution supports network policy. {{< /note >}} -__Mandatory Fields__: As with all other Kubernetes config, a NetworkPolicy -needs `apiVersion`, `kind`, and `metadata` fields. For general information -about working with config files, see +__Mandatory Fields__: As with all other Kubernetes config, a NetworkPolicy needs `apiVersion`, +`kind`, and `metadata` fields. For general information about working with config files, see [Configure a Pod to Use a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/), and [Object Management](/docs/concepts/overview/working-with-objects/object-management). -__spec__: NetworkPolicy [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) has all the information needed to define a particular network policy in the given namespace. 
+**spec**: NetworkPolicy [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) +has all the information needed to define a particular network policy in the given namespace. -__podSelector__: Each NetworkPolicy includes a `podSelector` which selects the grouping of pods to which the policy applies. The example policy selects pods with the label "role=db". An empty `podSelector` selects all pods in the namespace. +**podSelector**: Each NetworkPolicy includes a `podSelector` which selects the grouping of pods to +which the policy applies. The example policy selects pods with the label "role=db". An empty +`podSelector` selects all pods in the namespace. -__policyTypes__: Each NetworkPolicy includes a `policyTypes` list which may include either `Ingress`, `Egress`, or both. The `policyTypes` field indicates whether or not the given policy applies to ingress traffic to selected pod, egress traffic from selected pods, or both. If no `policyTypes` are specified on a NetworkPolicy then by default `Ingress` will always be set and `Egress` will be set if the NetworkPolicy has any egress rules. +**policyTypes**: Each NetworkPolicy includes a `policyTypes` list which may include either +`Ingress`, `Egress`, or both. The `policyTypes` field indicates whether or not the given policy +applies to ingress traffic to selected pod, egress traffic from selected pods, or both. If no +`policyTypes` are specified on a NetworkPolicy then by default `Ingress` will always be set and +`Egress` will be set if the NetworkPolicy has any egress rules. -__ingress__: Each NetworkPolicy may include a list of allowed `ingress` rules. Each rule allows traffic which matches both the `from` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port, from one of three sources, the first specified via an `ipBlock`, the second via a `namespaceSelector` and the third via a `podSelector`. +**ingress**: Each NetworkPolicy may include a list of allowed `ingress` rules. Each rule allows +traffic which matches both the `from` and `ports` sections. The example policy contains a single +rule, which matches traffic on a single port, from one of three sources, the first specified via +an `ipBlock`, the second via a `namespaceSelector` and the third via a `podSelector`. -__egress__: Each NetworkPolicy may include a list of allowed `egress` rules. Each rule allows traffic which matches both the `to` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port to any destination in `10.0.0.0/24`. +**egress**: Each NetworkPolicy may include a list of allowed `egress` rules. Each rule allows +traffic which matches both the `to` and `ports` sections. The example policy contains a single +rule, which matches traffic on a single port to any destination in `10.0.0.0/24`. So, the example NetworkPolicy: -1. isolates "role=db" pods in the "default" namespace for both ingress and egress traffic (if they weren't already isolated) -2. (Ingress rules) allows connections to all pods in the "default" namespace with the label "role=db" on TCP port 6379 from: +1. isolates `role=db` pods in the `default` namespace for both ingress and egress traffic + (if they weren't already isolated) +1. 
(Ingress rules) allows connections to all pods in the `default` namespace with the label + `role=db` on TCP port 6379 from: + + * any pod in the `default` namespace with the label `role=frontend` + * any pod in a namespace with the label `project=myproject` + * IP addresses in the ranges `172.17.0.0`–`172.17.0.255` and `172.17.2.0`–`172.17.255.255` + (ie, all of `172.17.0.0/16` except `172.17.1.0/24`) - * any pod in the "default" namespace with the label "role=frontend" - * any pod in a namespace with the label "project=myproject" - * IP addresses in the ranges 172.17.0.0–172.17.0.255 and 172.17.2.0–172.17.255.255 (ie, all of 172.17.0.0/16 except 172.17.1.0/24) -3. (Egress rules) allows connections from any pod in the "default" namespace with the label "role=db" to CIDR 10.0.0.0/24 on TCP port 5978 +1. (Egress rules) allows connections from any pod in the `default` namespace with the label + `role=db` to CIDR `10.0.0.0/24` on TCP port 5978 -See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) walkthrough for further examples. +See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) +walkthrough for further examples. ## Behavior of `to` and `from` selectors -There are four kinds of selectors that can be specified in an `ingress` `from` section or `egress` `to` section: +There are four kinds of selectors that can be specified in an `ingress` `from` section or `egress` +`to` section: -__podSelector__: This selects particular Pods in the same namespace as the NetworkPolicy which should be allowed as ingress sources or egress destinations. +**podSelector**: This selects particular Pods in the same namespace as the NetworkPolicy which +should be allowed as ingress sources or egress destinations. -__namespaceSelector__: This selects particular namespaces for which all Pods should be allowed as ingress sources or egress destinations. +**namespaceSelector**: This selects particular namespaces for which all Pods should be allowed as +ingress sources or egress destinations. -__namespaceSelector__ *and* __podSelector__: A single `to`/`from` entry that specifies both `namespaceSelector` and `podSelector` selects particular Pods within particular namespaces. Be careful to use correct YAML syntax; this policy: +**namespaceSelector** *and* **podSelector**: A single `to`/`from` entry that specifies both +`namespaceSelector` and `podSelector` selects particular Pods within particular namespaces. Be +careful to use correct YAML syntax. For example: ```yaml ... @@ -107,7 +163,8 @@ __namespaceSelector__ *and* __podSelector__: A single `to`/`from` entry that spe ... ``` -contains a single `from` element allowing connections from Pods with the label `role=client` in namespaces with the label `user=alice`. But *this* policy: +This policy contains a single `from` element allowing connections from Pods with the label +`role=client` in namespaces with the label `user=alice`. But the following policy is different: ```yaml ... @@ -122,12 +179,15 @@ contains a single `from` element allowing connections from Pods with the label ` ... ``` -contains two elements in the `from` array, and allows connections from Pods in the local Namespace with the label `role=client`, *or* from any Pod in any namespace with the label `user=alice`. +It contains two elements in the `from` array, and allows connections from Pods in the local +Namespace with the label `role=client`, *or* from any Pod in any namespace with the label +`user=alice`. 
When in doubt, use `kubectl describe` to see how Kubernetes has interpreted the policy. -__ipBlock__: This selects particular IP CIDR ranges to allow as ingress sources or egress destinations. These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable. +**ipBlock**: This selects particular IP CIDR ranges to allow as ingress sources or egress +destinations. These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable. Cluster ingress and egress mechanisms often require rewriting the source or destination IP of packets. In cases where this happens, it is not defined whether this happens before or @@ -143,59 +203,73 @@ cluster-external IPs may or may not be subject to `ipBlock`-based policies. ## Default policies -By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace. The following examples let you change the default behavior +By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to +and from pods in that namespace. The following examples let you change the default behavior in that namespace. ### Default deny all ingress traffic -You can create a "default" ingress isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods. +You can create a "default" ingress isolation policy for a namespace by creating a NetworkPolicy +that selects all pods but does not allow any ingress traffic to those pods. {{< codenew file="service/networking/network-policy-default-deny-ingress.yaml" >}} -This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod. +This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated +for ingress. This policy does not affect isolation for egress from any pod. ### Allow all ingress traffic -If you want to allow all incoming connections to all pods in a namespace, you can create a policy that explicitly allows that. +If you want to allow all incoming connections to all pods in a namespace, you can create a policy +that explicitly allows that. {{< codenew file="service/networking/network-policy-allow-all-ingress.yaml" >}} -With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod. +With this policy in place, no additional policy or policies can cause any incoming connection to +those pods to be denied. This policy has no effect on isolation for egress from any pod. ### Default deny all egress traffic -You can create a "default" egress isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any egress traffic from those pods. +You can create a "default" egress isolation policy for a namespace by creating a NetworkPolicy +that selects all pods but does not allow any egress traffic from those pods. {{< codenew file="service/networking/network-policy-default-deny-egress.yaml" >}} -This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not -change the ingress isolation behavior of any pod. +This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed +egress traffic. 
This policy does not change the ingress isolation behavior of any pod. ### Allow all egress traffic -If you want to allow all connections from all pods in a namespace, you can create a policy that explicitly allows all outgoing connections from pods in that namespace. +If you want to allow all connections from all pods in a namespace, you can create a policy that +explicitly allows all outgoing connections from pods in that namespace. {{< codenew file="service/networking/network-policy-allow-all-egress.yaml" >}} -With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod. +With this policy in place, no additional policy or policies can cause any outgoing connection from +those pods to be denied. This policy has no effect on isolation for ingress to any pod. ### Default deny all ingress and all egress traffic -You can create a "default" policy for a namespace which prevents all ingress AND egress traffic by creating the following NetworkPolicy in that namespace. +You can create a "default" policy for a namespace which prevents all ingress AND egress traffic by +creating the following NetworkPolicy in that namespace. {{< codenew file="service/networking/network-policy-default-deny-all.yaml" >}} -This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic. +This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed +ingress or egress traffic. ## SCTP support {{< feature-state for_k8s_version="v1.20" state="stable" >}} -As a stable feature, this is enabled by default. To disable SCTP at a cluster level, you (or your cluster administrator) will need to disable the `SCTPSupport` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the API server with `--feature-gates=SCTPSupport=false,…`. +As a stable feature, this is enabled by default. To disable SCTP at a cluster level, you (or your +cluster administrator) will need to disable the `SCTPSupport` +[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) +for the API server with `--feature-gates=SCTPSupport=false,…`. When the feature gate is enabled, you can set the `protocol` field of a NetworkPolicy to `SCTP`. {{< note >}} -You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin that supports SCTP protocol NetworkPolicies. +You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin that supports SCTP +protocol NetworkPolicies. {{< /note >}} ## Targeting a range of ports @@ -206,33 +280,14 @@ When writing a NetworkPolicy, you can target a range of ports instead of a singl This is achievable with the usage of the `endPort` field, as the following example: -```yaml -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: multi-port-egress - namespace: default -spec: - podSelector: - matchLabels: - role: db - policyTypes: - - Egress - egress: - - to: - - ipBlock: - cidr: 10.0.0.0/24 - ports: - - protocol: TCP - port: 32000 - endPort: 32768 -``` +{{< codenew file="service/networking/networkpolicy-multiport-egress.yaml" >}} The above rule allows any Pod with label `role=db` on the namespace `default` to communicate with any IP within the range `10.0.0.0/24` over TCP, provided that the target port is between the range 32000 and 32768. 
The following restrictions apply when using this field: + * The `endPort` field must be equal to or greater than the `port` field. * `endPort` can only be defined if `port` is also defined. * Both ports must be numeric. @@ -259,22 +314,34 @@ standardized label to target a specific namespace. ## What you can't do with network policies (at least, not yet) -As of Kubernetes {{< skew currentVersion >}}, the following functionality does not exist in the NetworkPolicy API, but you might be able to implement workarounds using Operating System components (such as SELinux, OpenVSwitch, IPTables, and so on) or Layer 7 technologies (Ingress controllers, Service Mesh implementations) or admission controllers. In case you are new to network security in Kubernetes, its worth noting that the following User Stories cannot (yet) be implemented using the NetworkPolicy API. +As of Kubernetes {{< skew currentVersion >}}, the following functionality does not exist in the +NetworkPolicy API, but you might be able to implement workarounds using Operating System +components (such as SELinux, OpenVSwitch, IPTables, and so on) or Layer 7 technologies (Ingress +controllers, Service Mesh implementations) or admission controllers. In case you are new to +network security in Kubernetes, its worth noting that the following User Stories cannot (yet) be +implemented using the NetworkPolicy API. -- Forcing internal cluster traffic to go through a common gateway (this might be best served with a service mesh or other proxy). +- Forcing internal cluster traffic to go through a common gateway (this might be best served with + a service mesh or other proxy). - Anything TLS related (use a service mesh or ingress controller for this). -- Node specific policies (you can use CIDR notation for these, but you cannot target nodes by their Kubernetes identities specifically). -- Targeting of services by name (you can, however, target pods or namespaces by their {{< glossary_tooltip text="labels" term_id="label" >}}, which is often a viable workaround). +- Node specific policies (you can use CIDR notation for these, but you cannot target nodes by + their Kubernetes identities specifically). +- Targeting of services by name (you can, however, target pods or namespaces by their + {{< glossary_tooltip text="labels" term_id="label" >}}, which is often a viable workaround). - Creation or management of "Policy requests" that are fulfilled by a third party. -- Default policies which are applied to all namespaces or pods (there are some third party Kubernetes distributions and projects which can do this). +- Default policies which are applied to all namespaces or pods (there are some third party + Kubernetes distributions and projects which can do this). - Advanced policy querying and reachability tooling. - The ability to log network security events (for example connections that are blocked or accepted). -- The ability to explicitly deny policies (currently the model for NetworkPolicies are deny by default, with only the ability to add allow rules). -- The ability to prevent loopback or incoming host traffic (Pods cannot currently block localhost access, nor do they have the ability to block access from their resident node). +- The ability to explicitly deny policies (currently the model for NetworkPolicies are deny by + default, with only the ability to add allow rules). 
+- The ability to prevent loopback or incoming host traffic (Pods cannot currently block localhost + access, nor do they have the ability to block access from their resident node). ## {{% heading "whatsnext" %}} - - See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) walkthrough for further examples. -- See more [recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common scenarios enabled by the NetworkPolicy resource. +- See more [recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common + scenarios enabled by the NetworkPolicy resource. + diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index c98d344b8dedd..b761e056018da 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -193,7 +193,7 @@ spec: ``` Because this Service has no selector, the corresponding EndpointSlice (and -legacy Endpoints) objects are not created automatically. You can manually map the Service +legacy Endpoints) objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an EndpointSlice object manually. For example: @@ -255,6 +255,13 @@ Accessing a Service without a selector works the same as if it had a selector. In the [example](#services-without-selectors) for a Service without a selector, traffic is routed to one of the two endpoints defined in the EndpointSlice manifest: a TCP connection to 10.1.2.3 or 10.4.5.6, on port 9376. +{{< note >}} +The Kubernetes API server does not allow proxying to endpoints that are not mapped to +pods. Actions such as `kubectl proxy ` where the service has no +selector will fail due to this constraint. This prevents the Kubernetes API server +from being used as a proxy to endpoints the caller may not be authorized to access. +{{< /note >}} + An ExternalName Service is a special case of Service that does not have selectors and uses DNS names instead. For more information, see the [ExternalName](#externalname) section later in this document. @@ -476,6 +483,8 @@ Kubernetes `ServiceTypes` allow you to specify what kind of Service you want. * `ClusterIP`: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default that is used if you don't explicitly specify a `type` for a Service. + You can expose the service to the public with an [Ingress](docs/reference/kubernetes-api/service-resources/ingress-v1/) or the + [Gateway API](https://gateway-api.sigs.k8s.io/). * [`NodePort`](#type-nodeport): Exposes the Service on each Node's IP at a static port (the `NodePort`). To make the node port available, Kubernetes sets up a cluster IP address, @@ -1071,42 +1080,6 @@ in those modified security groups. Further documentation on annotations for Elastic IPs and other common use-cases may be found in the [AWS Load Balancer Controller documentation](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/). -#### Other CLB annotations on Tencent Kubernetes Engine (TKE) - -There are other annotations for managing Cloud Load Balancers on TKE as shown below. 
- -```yaml - metadata: - name: my-service - annotations: - # Bind Loadbalancers with specified nodes - service.kubernetes.io/qcloud-loadbalancer-backends-label: key in (value1, value2) - - # ID of an existing load balancer - service.kubernetes.io/tke-existed-lbid:lb-6swtxxxx - - # Custom parameters for the load balancer (LB), does not support modification of LB type yet - service.kubernetes.io/service.extensiveParameters: "" - - # Custom parameters for the LB listener - service.kubernetes.io/service.listenerParameters: "" - - # Specifies the type of Load balancer; - # valid values: classic (Classic Cloud Load Balancer) or application (Application Cloud Load Balancer) - service.kubernetes.io/loadbalance-type: xxxxx - - # Specifies the public network bandwidth billing method; - # valid values: TRAFFIC_POSTPAID_BY_HOUR(bill-by-traffic) and BANDWIDTH_POSTPAID_BY_HOUR (bill-by-bandwidth). - service.kubernetes.io/qcloud-loadbalancer-internet-charge-type: xxxxxx - - # Specifies the bandwidth value (value range: [1,2000] Mbps). - service.kubernetes.io/qcloud-loadbalancer-internet-max-bandwidth-out: "10" - - # When this annotation is set,the loadbalancers will only register nodes - # with pod running on it, otherwise all nodes will be registered. - service.kubernetes.io/local-svc-only-bind-node-with-pod: true -``` - ### Type ExternalName {#externalname} Services of type ExternalName map a Service to a DNS name, not to a typical selector such as diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md index 4b265eceb7820..9ae9febe3bb83 100644 --- a/content/en/docs/concepts/storage/volumes.md +++ b/content/en/docs/concepts/storage/volumes.md @@ -388,7 +388,8 @@ You must configure FC SAN Zoning to allocate and mask those LUNs (volumes) to th beforehand so that Kubernetes hosts can access them. {{< /note >}} -See the [fibre channel example](https://github.com/kubernetes/examples/tree/master/staging/volumes/fibre_channel) for more details. +See the [fibre channel example](https://github.com/kubernetes/examples/tree/master/staging/volumes/fibre_channel) +for more details. ### gcePersistentDisk (deprecated) {#gcepersistentdisk} @@ -515,7 +516,9 @@ and the kubelet, set the `InTreePluginGCEUnregister` flag to `true`. ### gitRepo (deprecated) {#gitrepo} {{< warning >}} -The `gitRepo` volume type is deprecated. To provision a container with a git repo, mount an [EmptyDir](#emptydir) into an InitContainer that clones the repo using git, then mount the [EmptyDir](#emptydir) into the Pod's container. +The `gitRepo` volume type is deprecated. To provision a container with a git repo, mount an +[EmptyDir](#emptydir) into an InitContainer that clones the repo using git, then mount the +[EmptyDir](#emptydir) into the Pod's container. {{< /warning >}} A `gitRepo` volume is an example of a volume plugin. This plugin @@ -546,7 +549,7 @@ spec: -- + Kubernetes {{< skew currentVersion >}} does not include a `glusterfs` volume type. The GlusterFS in-tree storage driver was deprecated in the Kubernetes v1.25 release @@ -785,10 +788,13 @@ spec: {{< note >}} You must have your own NFS server running with the share exported before you can use it. -Also note that you can't specify NFS mount options in a Pod spec. You can either set mount options server-side or use [/etc/nfsmount.conf](https://man7.org/linux/man-pages/man5/nfsmount.conf.5.html). You can also mount NFS volumes via PersistentVolumes which do allow you to set mount options. 
+Also note that you can't specify NFS mount options in a Pod spec. You can either set mount options server-side or +use [/etc/nfsmount.conf](https://man7.org/linux/man-pages/man5/nfsmount.conf.5.html). +You can also mount NFS volumes via PersistentVolumes which do allow you to set mount options. {{< /note >}} -See the [NFS example](https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs) for an example of mounting NFS volumes with PersistentVolumes. +See the [NFS example](https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs) +for an example of mounting NFS volumes with PersistentVolumes. ### persistentVolumeClaim {#persistentvolumeclaim} @@ -1163,7 +1169,7 @@ persistent volume: volume expansion, the kubelet passes that data via the `NodeExpandVolume()` call to the CSI driver. In order to use the `nodeExpandSecretRef` field, your cluster should be running Kubernetes version 1.25 or later and you must enable - the [feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) + the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) named `CSINodeExpandSecret` for each kube-apiserver and for the kubelet on every node. You must also be using a CSI driver that supports or requires secret data during node-initiated storage resize operations. diff --git a/content/en/docs/concepts/windows/intro.md b/content/en/docs/concepts/windows/intro.md index 2d0fa8d0a7856..e054688f4d52c 100644 --- a/content/en/docs/concepts/windows/intro.md +++ b/content/en/docs/concepts/windows/intro.md @@ -382,8 +382,6 @@ troubleshooting ideas prior to creating a ticket. The kubeadm tool helps you to deploy a Kubernetes cluster, providing the control plane to manage the cluster it, and nodes to run your workloads. -[Adding Windows nodes](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/) -explains how to deploy Windows nodes to your cluster using kubeadm. The Kubernetes [cluster API](https://cluster-api.sigs.k8s.io/) project also provides means to automate deployment of Windows nodes. diff --git a/content/en/docs/concepts/windows/user-guide.md b/content/en/docs/concepts/windows/user-guide.md index ab648e9b6ff68..df3306f01ab4d 100644 --- a/content/en/docs/concepts/windows/user-guide.md +++ b/content/en/docs/concepts/windows/user-guide.md @@ -22,12 +22,11 @@ This guide walks you through the steps to configure and deploy Windows container ## Before you begin -* Create a Kubernetes cluster that includes a -control plane and a [worker node running Windows Server](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/) +* Create a Kubernetes cluster that includes a control plane and a worker node running Windows Server * It is important to note that creating and deploying services and workloads on Kubernetes -behaves in much the same way for Linux and Windows containers. -[Kubectl commands](/docs/reference/kubectl/) to interface with the cluster are identical. -The example in the section below is provided to jumpstart your experience with Windows containers. + behaves in much the same way for Linux and Windows containers. + [Kubectl commands](/docs/reference/kubectl/) to interface with the cluster are identical. + The example in the section below is provided to jumpstart your experience with Windows containers. 
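Since the workflow is the same as on Linux, the main Windows-specific detail in a workload manifest is making sure Pods are scheduled onto Windows nodes, typically with a node selector. Here is a minimal, hedged sketch of that (the name, image tag, and command are only illustrative; the full worked example follows in the next section):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: windows-demo                  # illustrative name
spec:
  nodeSelector:
    kubernetes.io/os: windows         # schedule only onto Windows nodes
  containers:
  - name: demo
    image: mcr.microsoft.com/windows/servercore:ltsc2022   # illustrative Windows base image
    command: ["powershell", "-Command", "Start-Sleep -Seconds 3600"]
```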
## Getting Started: Deploying a Windows container diff --git a/content/en/docs/concepts/workloads/controllers/cron-jobs.md b/content/en/docs/concepts/workloads/controllers/cron-jobs.md index d2795e0efbb32..dd327758e332b 100644 --- a/content/en/docs/concepts/workloads/controllers/cron-jobs.md +++ b/content/en/docs/concepts/workloads/controllers/cron-jobs.md @@ -14,44 +14,27 @@ weight: 80 A _CronJob_ creates {{< glossary_tooltip term_id="job" text="Jobs" >}} on a repeating schedule. -One CronJob object is like one line of a _crontab_ (cron table) file. It runs a job periodically -on a given schedule, written in [Cron](https://en.wikipedia.org/wiki/Cron) format. - -{{< caution >}} -All **CronJob** `schedule:` times are based on the timezone of the -{{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}. - -If your control plane runs the kube-controller-manager in Pods or bare -containers, the timezone set for the kube-controller-manager container determines the timezone -that the cron job controller uses. -{{< /caution >}} - -{{< caution >}} -The [v1 CronJob API](/docs/reference/kubernetes-api/workload-resources/cron-job-v1/) -does not officially support setting timezone as explained above. - -Setting variables such as `CRON_TZ` or `TZ` is not officially supported by the Kubernetes project. -`CRON_TZ` or `TZ` is an implementation detail of the internal library being used -for parsing and calculating the next Job creation time. Any usage of it is not -recommended in a production cluster. -{{< /caution >}} - -When creating the manifest for a CronJob resource, make sure the name you provide -is a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). -The name must be no longer than 52 characters. This is because the CronJob controller will automatically -append 11 characters to the job name provided and there is a constraint that the -maximum length of a Job name is no more than 63 characters. +CronJob is meant for performing regular scheduled actions such as backups, report generation, +and so on. One CronJob object is like one line of a _crontab_ (cron table) file on a +Unix system. It runs a job periodically on a given schedule, written in +[Cron](https://en.wikipedia.org/wiki/Cron) format. + +CronJobs have limitations and idiosyncrasies. +For example, in certain circumstances, a single CronJob can create multiple concurrent Jobs. See the [limitations](#cron-job-limitations) below. + +When the control plane creates new Jobs and (indirectly) Pods for a CronJob, the `.metadata.name` +of the CronJob is part of the basis for naming those Pods. The name of a CronJob must be a valid +[DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) +value, but this can produce unexpected results for the Pod hostnames. For best compatibility, +the name should follow the more restrictive rules for a +[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names). +Even when the name is a DNS subdomain, the name must be no longer than 52 +characters. This is because the CronJob controller will automatically append +11 characters to the name you provide and there is a constraint that the +length of a Job name is no more than 63 characters. - -## CronJob - -CronJobs are meant for performing regular scheduled actions such as backups, -report generation, and so on. 
Each of those tasks should be configured to recur -indefinitely (for example: once a day / week / month); you can define the point -in time within that interval when the job should start. - -### Example +## Example This example CronJob manifest prints the current time and a hello message every minute: @@ -60,7 +43,9 @@ This example CronJob manifest prints the current time and a hello message every ([Running Automated Tasks with a CronJob](/docs/tasks/job/automated-tasks-with-cron-jobs/) takes you through this example in more detail). -### Cron schedule syntax +## Writing a CronJob spec +### Schedule syntax +The `.spec.schedule` field is required. The value of that field follows the [Cron](https://en.wikipedia.org/wiki/Cron) syntax: ``` # ┌───────────── minute (0 - 59) @@ -74,6 +59,24 @@ takes you through this example in more detail). # * * * * * ``` +For example, `0 0 13 * 5` states that the task must be started every Friday at midnight, as well as on the 13th of each month at midnight. + +The format also includes extended "Vixie cron" step values. As explained in the +[FreeBSD manual](https://www.freebsd.org/cgi/man.cgi?crontab%285%29): + +> Step values can be used in conjunction with ranges. Following a range +> with `/` specifies skips of the number's value through the +> range. For example, `0-23/2` can be used in the hours field to specify +> command execution every other hour (the alternative in the V7 standard is +> `0,2,4,6,8,10,12,14,16,18,20,22`). Steps are also permitted after an +> asterisk, so if you want to say "every two hours", just use `*/2`. + +{{< note >}} +A question mark (`?`) in the schedule has the same meaning as an asterisk `*`, that is, +it stands for any of available value for a given field. +{{< /note >}} + +Other than the standard syntax, some macros like `@monthly` can also be used: | Entry | Description | Equivalent to | | ------------- | ------------- |------------- | @@ -83,17 +86,83 @@ takes you through this example in more detail). | @daily (or @midnight) | Run once a day at midnight | 0 0 * * * | | @hourly | Run once an hour at the beginning of the hour | 0 * * * * | +To generate CronJob schedule expressions, you can also use web tools like [crontab.guru](https://crontab.guru/). +### Job template -For example, the line below states that the task must be started every Friday at midnight, as well as on the 13th of each month at midnight: +The `.spec.jobTemplate` defines a template for the Jobs that the CronJob creates, and it is required. +It has exactly the same schema as a [Job](/docs/concepts/workloads/controllers/job/), except that +it is nested and does not have an `apiVersion` or `kind`. +You can specify common metadata for the templated Jobs, such as +{{< glossary_tooltip text="labels" term_id="label" >}} or +{{< glossary_tooltip text="annotations" term_id="annotation" >}}. +For information about writing a Job `.spec`, see [Writing a Job Spec](/docs/concepts/workloads/controllers/job/#writing-a-job-spec). -`0 0 13 * 5` +### Deadline for delayed job start {#starting-deadline} -To generate CronJob schedule expressions, you can also use web tools like [crontab.guru](https://crontab.guru/). +The `.spec.startingDeadlineSeconds` field is optional. +This field defines a deadline (in whole seconds) for starting the Job, if that Job misses its scheduled time +for any reason. + +After missing the deadline, the CronJob skips that instance of the Job (future occurrences are still scheduled). 
+For example, if you have a backup job that runs twice a day, you might allow it to start up to 8 hours late, +but no later, because a backup taken any later wouldn't be useful: you would instead prefer to wait for +the next scheduled run. + +For Jobs that miss their configured deadline, Kubernetes treats them as failed Jobs. +If you don't specify `startingDeadlineSeconds` for a CronJob, the Job occurrences have no deadline. + +If the `.spec.startingDeadlineSeconds` field is set (not null), the CronJob +controller measures the time between when a job is expected to be created and +now. If the difference is higher than that limit, it will skip this execution. + +For example, if it is set to `200`, it allows a job to be created for up to 200 +seconds after the actual schedule. + +### Concurrency policy + +The `.spec.concurrencyPolicy` field is also optional. +It specifies how to treat concurrent executions of a job that is created by this CronJob. +The spec may specify only one of the following concurrency policies: + +* `Allow` (default): The CronJob allows concurrently running jobs +* `Forbid`: The CronJob does not allow concurrent runs; if it is time for a new job run and the + previous job run hasn't finished yet, the CronJob skips the new job run +* `Replace`: If it is time for a new job run and the previous job run hasn't finished yet, the + CronJob replaces the currently running job run with a new job run -## Time zones +Note that concurrency policy only applies to the jobs created by the same cron job. +If there are multiple CronJobs, their respective jobs are always allowed to run concurrently. -For CronJobs with no time zone specified, the kube-controller-manager interprets schedules relative to its local time zone. +### Schedule suspension + +You can suspend execution of Jobs for a CronJob, by setting the optional `.spec.suspend` field +to true. The field defaults to false. + +This setting does _not_ affect Jobs that the CronJob has already started. + +If you do set that field to true, all subsequent executions are suspended (they remain +scheduled, but the CronJob controller does not start the Jobs to run the tasks) until +you unsuspend the CronJob. + +{{< caution >}} +Executions that are suspended during their scheduled time count as missed jobs. +When `.spec.suspend` changes from `true` to `false` on an existing CronJob without a +[starting deadline](#starting-deadline), the missed jobs are scheduled immediately. +{{< /caution >}} + +### Jobs history limits + +The `.spec.successfulJobsHistoryLimit` and `.spec.failedJobsHistoryLimit` fields are optional. +These fields specify how many completed and failed jobs should be kept. +By default, they are set to 3 and 1 respectively. Setting a limit to `0` corresponds to keeping +none of the corresponding kind of jobs after they finish. + +For another way to clean up jobs automatically, see [Clean up finished jobs automatically](/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically). + +### Time zones + +For CronJobs with no time zone specified, the {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}} interprets schedules relative to its local time zone. {{< feature-state for_k8s_version="v1.25" state="beta" >}} @@ -102,16 +171,39 @@ you can specify a time zone for a CronJob (if you don't enable that feature gate Kubernetes that does not have experimental time zone support, all CronJobs in your cluster have an unspecified timezone). 
-When you have the feature enabled, you can set `spec.timeZone` to the name of a valid [time zone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). For example, setting -`spec.timeZone: "Etc/UTC"` instructs Kubernetes to interpret the schedule relative to Coordinated Universal Time. +When you have the feature enabled, you can set `.spec.timeZone` to the name of a valid [time zone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). For example, setting +`.spec.timeZone: "Etc/UTC"` instructs Kubernetes to interpret the schedule relative to Coordinated Universal Time. + +{{< caution >}} +The implementation of the CronJob API in Kubernetes {{< skew currentVersion >}} lets you set +the `.spec.schedule` field to include a timezone; for example: `CRON_TZ=UTC * * * * *` +or `TZ=UTC * * * * *`. + +Specifying a timezone that way is **not officially supported** (and never has been). + +If you try to set a schedule that includes `TZ` or `CRON_TZ` timezone specification, +Kubernetes reports a [warning](/blog/2020/09/03/warnings/) to the client. +Future versions of Kubernetes might not implement that unofficial timezone mechanism at all. +{{< /caution >}} A time zone database from the Go standard library is included in the binaries and used as a fallback in case an external database is not available on the system. ## CronJob limitations {#cron-job-limitations} -A cron job creates a job object _about_ once per execution time of its schedule. We say "about" because there -are certain circumstances where two jobs might be created, or no job might be created. We attempt to make these rare, -but do not completely prevent them. Therefore, jobs should be _idempotent_. +### Modifying a CronJob +By design, a CronJob contains a template for _new_ Jobs. +If you modify an existing CronJob, the changes you make will apply to new Jobs that +start to run after your modification is complete. Jobs (and their Pods) that have already +started continue to run without changes. +That is, the CronJob does _not_ update existing Jobs, even if those remain running. + +### Job creation + +A CronJob creates a Job object approximately once per execution time of its schedule. +The scheduling is approximate because there +are certain circumstances where two Jobs might be created, or no Job might be created. +Kubernetes tries to avoid those situations, but do not completely prevent them. Therefore, +the Jobs that you define should be _idempotent_. If `startingDeadlineSeconds` is set to a large value or left unset (the default) and if `concurrencyPolicy` is set to `Allow`, the jobs will always run @@ -143,32 +235,16 @@ be down for the same period as the previous example (`08:29:00` to `10:21:00`,) The CronJob is only responsible for creating Jobs that match its schedule, and the Job in turn is responsible for the management of the Pods it represents. -## Controller version {#new-controller} - -Starting with Kubernetes v1.21 the second version of the CronJob controller -is the default implementation. To disable the default CronJob controller -and use the original CronJob controller instead, pass the `CronJobControllerV2` -[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) -flag to the {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}, -and set this flag to `false`. 
For example: - -``` ---feature-gates="CronJobControllerV2=false" -``` - - ## {{% heading "whatsnext" %}} * Learn about [Pods](/docs/concepts/workloads/pods/) and [Jobs](/docs/concepts/workloads/controllers/job/), two concepts that CronJobs rely upon. -* Read about the [format](https://pkg.go.dev/github.com/robfig/cron/v3#hdr-CRON_Expression_Format) +* Read about the detailed [format](https://pkg.go.dev/github.com/robfig/cron/v3#hdr-CRON_Expression_Format) of CronJob `.spec.schedule` fields. * For instructions on creating and working with CronJobs, and for an example of a CronJob manifest, see [Running automated tasks with CronJobs](/docs/tasks/job/automated-tasks-with-cron-jobs/). -* For instructions to clean up failed or completed jobs automatically, - see [Clean up Jobs automatically](/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically) * `CronJob` is part of the Kubernetes REST API. Read the {{< api-reference page="workload-resources/cron-job-v1" >}} - object definition to understand the API for Kubernetes cron jobs. + API reference for more details. diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md index 9a69ed9b8aeee..e5fc14f64d732 100644 --- a/content/en/docs/concepts/workloads/controllers/deployment.md +++ b/content/en/docs/concepts/workloads/controllers/deployment.md @@ -44,7 +44,10 @@ The following is an example of a Deployment. It creates a ReplicaSet to bring up In this example: -* A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field. +* A Deployment named `nginx-deployment` is created, indicated by the + `.metadata.name` field. This name will become the basis for the ReplicaSets + and Pods which are created later. See [Writing a Deployment Spec](#writing-a-deployment-spec) + for more details. * The Deployment creates a ReplicaSet that creates three replicated Pods, indicated by the `.spec.replicas` field. * The `.spec.selector` field defines how the created ReplicaSet finds which Pods to manage. In this case, you select a label that is defined in the Pod template (`app: nginx`). @@ -120,8 +123,11 @@ Follow the steps given below to create the above Deployment: * `CURRENT` displays how many replicas are currently running. * `READY` displays how many replicas of the application are available to your users. * `AGE` displays the amount of time that the application has been running. - - Notice that the name of the ReplicaSet is always formatted as `[DEPLOYMENT-NAME]-[HASH]`. + + Notice that the name of the ReplicaSet is always formatted as + `[DEPLOYMENT-NAME]-[HASH]`. This name will become the basis for the Pods + which are created. + The `HASH` string is the same as the `pod-template-hash` label on the ReplicaSet. 6. To see the labels automatically generated for each Pod, run `kubectl get pods --show-labels`. @@ -1076,8 +1082,13 @@ As with all other Kubernetes configs, a Deployment needs `.apiVersion`, `.kind`, For general information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), configuring containers, and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents. -The name of a Deployment object must be a valid -[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). 
+ +When the control plane creates new Pods for a Deployment, the `.metadata.name` of the +Deployment is part of the basis for naming those Pods. The name of a Deployment must be a valid +[DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) +value, but this can produce unexpected results for the Pod hostnames. For best compatibility, +the name should follow the more restrictive rules for a +[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names). A Deployment also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). diff --git a/content/en/docs/concepts/workloads/controllers/job.md b/content/en/docs/concepts/workloads/controllers/job.md index 08855b8b08eed..a05ad752c3d8d 100644 --- a/content/en/docs/concepts/workloads/controllers/job.md +++ b/content/en/docs/concepts/workloads/controllers/job.md @@ -54,21 +54,21 @@ Check on the status of the Job with `kubectl`: {{< tabs name="Check status of Job" >}} {{< tab name="kubectl describe job pi" codelang="bash" >}} -Name: pi -Namespace: default -Selector: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c -Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c - job-name=pi -Annotations: kubectl.kubernetes.io/last-applied-configuration: - {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":... -Parallelism: 1 -Completions: 1 -Start Time: Mon, 02 Dec 2019 15:20:11 +0200 -Completed At: Mon, 02 Dec 2019 15:21:16 +0200 -Duration: 65s -Pods Statuses: 0 Running / 1 Succeeded / 0 Failed +Name: pi +Namespace: default +Selector: controller-uid=0cd26dd5-88a2-4a5f-a203-ea19a1d5d578 +Labels: controller-uid=0cd26dd5-88a2-4a5f-a203-ea19a1d5d578 + job-name=pi +Annotations: batch.kubernetes.io/job-tracking: +Parallelism: 1 +Completions: 1 +Completion Mode: NonIndexed +Start Time: Fri, 28 Oct 2022 13:05:18 +0530 +Completed At: Fri, 28 Oct 2022 13:05:21 +0530 +Duration: 3s +Pods Statuses: 0 Active / 1 Succeeded / 0 Failed Pod Template: - Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c + Labels: controller-uid=0cd26dd5-88a2-4a5f-a203-ea19a1d5d578 job-name=pi Containers: pi: @@ -86,24 +86,26 @@ Pod Template: Events: Type Reason Age From Message ---- ------ ---- ---- ------- - Normal SuccessfulCreate 14m job-controller Created pod: pi-5rwd7 + Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4 + Normal Completed 18s job-controller Job completed {{< /tab >}} {{< tab name="kubectl get job pi -o yaml" codelang="bash" >}} apiVersion: batch/v1 kind: Job metadata: annotations: + batch.kubernetes.io/job-tracking: "" kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":{"spec":{"containers":[{"command":["perl","-Mbignum=bpi","-wle","print bpi(2000)"],"image":"perl","name":"pi"}],"restartPolicy":"Never"}}}} - creationTimestamp: "2022-06-15T08:40:15Z" + {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":{"spec":{"containers":[{"command":["perl","-Mbignum=bpi","-wle","print bpi(2000)"],"image":"perl:5.34.0","name":"pi"}],"restartPolicy":"Never"}}}} + creationTimestamp: "2022-11-10T17:53:53Z" generation: 1 labels: - controller-uid: 863452e6-270d-420e-9b94-53a54146c223 + controller-uid: 
204fb678-040b-497f-9266-35ffa8716d14 job-name: pi name: pi namespace: default - resourceVersion: "987" - uid: 863452e6-270d-420e-9b94-53a54146c223 + resourceVersion: "4751" + uid: 204fb678-040b-497f-9266-35ffa8716d14 spec: backoffLimit: 4 completionMode: NonIndexed @@ -111,13 +113,13 @@ spec: parallelism: 1 selector: matchLabels: - controller-uid: 863452e6-270d-420e-9b94-53a54146c223 + controller-uid: 204fb678-040b-497f-9266-35ffa8716d14 suspend: false template: metadata: creationTimestamp: null labels: - controller-uid: 863452e6-270d-420e-9b94-53a54146c223 + controller-uid: 204fb678-040b-497f-9266-35ffa8716d14 job-name: pi spec: containers: @@ -127,7 +129,7 @@ spec: - -wle - print bpi(2000) image: perl:5.34.0 - imagePullPolicy: Always + imagePullPolicy: IfNotPresent name: pi resources: {} terminationMessagePath: /dev/termination-log @@ -139,8 +141,9 @@ spec: terminationGracePeriodSeconds: 30 status: active: 1 - ready: 1 - startTime: "2022-06-15T08:40:15Z" + ready: 0 + startTime: "2022-11-10T17:53:57Z" + uncountedTerminatedPods: {} {{< /tab >}} {{< /tabs >}} @@ -177,7 +180,15 @@ The output is similar to this: ## Writing a Job spec As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. -Its name must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). + +When the control plane creates new Pods for a Job, the `.metadata.name` of the +Job is part of the basis for naming those Pods. The name of a Job must be a valid +[DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) +value, but this can produce unexpected results for the Pod hostnames. For best compatibility, +the name should follow the more restrictive rules for a +[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names). +Even when the name is a DNS subdomain, the name must be no longer than 63 +characters. A Job also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index da0aa76ddc420..35162c8dbc1c0 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -228,8 +228,12 @@ In this manner, a ReplicaSet can own a non-homogenous set of Pods As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields. For ReplicaSets, the `kind` is always a ReplicaSet. -The name of a ReplicaSet object must be a valid -[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). +When the control plane creates new Pods for a ReplicaSet, the `.metadata.name` of the +ReplicaSet is part of the basis for naming those Pods. The name of a ReplicaSet must be a valid +[DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) +value, but this can produce unexpected results for the Pod hostnames. For best compatibility, +the name should follow the more restrictive rules for a +[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names). A ReplicaSet also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). 
diff --git a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md index 1360bd69f0a3c..2e658c7cba484 100644 --- a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md @@ -112,11 +112,17 @@ Here, the selector is the same as the selector for the ReplicationController (se `kubectl describe` output), and in a different form in `replication.yaml`. The `--output=jsonpath` option specifies an expression with the name from each pod in the returned list. -## Writing a ReplicationController Spec +## Writing a ReplicationController Manifest As with all other Kubernetes config, a ReplicationController needs `apiVersion`, `kind`, and `metadata` fields. -The name of a ReplicationController object must be a valid -[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). + +When the control plane creates new Pods for a ReplicationController, the `.metadata.name` of the +ReplicationController is part of the basis for naming those Pods. The name of a ReplicationController must be a valid +[DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) +value, but this can produce unexpected results for the Pod hostnames. For best compatibility, +the name should follow the more restrictive rules for a +[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names). + For general information about working with configuration files, see [object management](/docs/concepts/overview/working-with-objects/object-management/). A ReplicationController also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). diff --git a/content/en/docs/concepts/workloads/controllers/statefulset.md b/content/en/docs/concepts/workloads/controllers/statefulset.md index bfe29e81a84ff..cfa65e285ad99 100644 --- a/content/en/docs/concepts/workloads/controllers/statefulset.md +++ b/content/en/docs/concepts/workloads/controllers/statefulset.md @@ -121,7 +121,7 @@ In the above example: PersistentVolume Provisioner. The name of a StatefulSet object must be a valid -[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). +[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names). ### Pod Selector diff --git a/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md b/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md index a51c88602fcf6..aca3c090ebcd0 100644 --- a/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md +++ b/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md @@ -1,75 +1,87 @@ --- reviewers: - janetkuo -title: Automatic Clean-up for Finished Jobs +title: Automatic Cleanup for Finished Jobs content_type: concept weight: 70 +description: >- + A time-to-live mechanism to clean up old Jobs that have finished execution. --- {{< feature-state for_k8s_version="v1.23" state="stable" >}} -TTL-after-finished {{}} provides a -TTL (time to live) mechanism to limit the lifetime of resource objects that -have finished execution. TTL controller only handles -{{< glossary_tooltip text="Jobs" term_id="job" >}}. +When your Job has finished, it's useful to keep that Job in the API (and not immediately delete the Job) +so that you can tell whether the Job succeeded or failed. 
+ +Kubernetes' TTL-after-finished {{< glossary_tooltip text="controller" term_id="controller" >}} provides a +TTL (time to live) mechanism to limit the lifetime of Job objects that +have finished execution. -## TTL-after-finished Controller +## Cleanup for finished Jobs -The TTL-after-finished controller is only supported for Jobs. A cluster operator can use this feature to clean +The TTL-after-finished controller is only supported for Jobs. You can use this mechanism to clean up finished Jobs (either `Complete` or `Failed`) automatically by specifying the `.spec.ttlSecondsAfterFinished` field of a Job, as in this [example](/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically). -The TTL-after-finished controller will assume that a job is eligible to be cleaned up -TTL seconds after the job has finished, in other words, when the TTL has expired. When the + +The TTL-after-finished controller assumes that a Job is eligible to be cleaned up +TTL seconds after the Job has finished. The timer starts once the +status condition of the Job changes to show that the Job is either `Complete` or `Failed`; once the TTL has +expired, that Job becomes eligible for +[cascading](/docs/concepts/architecture/garbage-collection/#cascading-deletion) removal. When the TTL-after-finished controller cleans up a job, it will delete it cascadingly, that is to say it will delete -its dependent objects together with it. Note that when the job is deleted, -its lifecycle guarantees, such as finalizers, will be honored. +its dependent objects together with it. + +Kubernetes honors object lifecycle guarantees on the Job, such as waiting for +[finalizers](/docs/concepts/overview/working-with-objects/finalizers/). -The TTL seconds can be set at any time. Here are some examples for setting the +You can set the TTL seconds at any time. Here are some examples for setting the `.spec.ttlSecondsAfterFinished` field of a Job: -* Specify this field in the job manifest, so that a Job can be cleaned up +* Specify this field in the Job manifest, so that a Job can be cleaned up automatically some time after it finishes. -* Set this field of existing, already finished jobs, to adopt this new - feature. +* Manually set this field of existing, already finished Jobs, so that they become eligible + for cleanup. * Use a - [mutating admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) - to set this field dynamically at job creation time. Cluster administrators can + [mutating admission webhook](/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook) + to set this field dynamically at Job creation time. Cluster administrators can use this to enforce a TTL policy for finished jobs. * Use a - [mutating admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) - to set this field dynamically after the job has finished, and choose - different TTL values based on job status, labels, etc. + [mutating admission webhook](/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook) + to set this field dynamically after the Job has finished, and choose + different TTL values based on job status, labels, and so on. For this case, the webhook needs + to detect changes to the `.status` of the Job and only set a TTL when the Job + is being marked as completed. +* Write your own controller to manage the cleanup TTL for Jobs that match a particular + {{< glossary_tooltip term_id="selector" text="selector" >}}.
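As an illustration of the first option above, here is a minimal sketch of a Job manifest that sets `.spec.ttlSecondsAfterFinished` (the name, image, and command are only illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ttl-demo                 # illustrative name
spec:
  ttlSecondsAfterFinished: 100   # clean up this Job about 100 seconds after it finishes
  template:
    spec:
      containers:
      - name: ttl-demo
        image: busybox:1.35      # illustrative image
        command: ["sh", "-c", "echo finished"]
      restartPolicy: Never
```

Once this Job reaches a `Complete` or `Failed` condition, the TTL-after-finished controller deletes it (and, by cascading deletion, its Pods) roughly 100 seconds later.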
-## Caveat +## Caveats -### Updating TTL Seconds +### Updating TTL for finished Jobs -Note that the TTL period, e.g. `.spec.ttlSecondsAfterFinished` field of Jobs, -can be modified after the job is created or has finished. However, once the -Job becomes eligible to be deleted (when the TTL has expired), the system won't -guarantee that the Jobs will be kept, even if an update to extend the TTL -returns a successful API response. +You can modify the TTL period, e.g. `.spec.ttlSecondsAfterFinished` field of Jobs, +after the job is created or has finished. If you extend the TTL period after the +existing `ttlSecondsAfterFinished` period has expired, Kubernetes doesn't guarantee +to retain that Job, even if an update to extend the TTL returns a successful API +response. -### Time Skew +### Time skew -Because TTL-after-finished controller uses timestamps stored in the Kubernetes jobs to +Because the TTL-after-finished controller uses timestamps stored in the Kubernetes jobs to determine whether the TTL has expired or not, this feature is sensitive to time -skew in the cluster, which may cause TTL-after-finish controller to clean up job objects +skew in your cluster, which may cause the control plane to clean up Job objects at the wrong time. Clocks aren't always correct, but the difference should be very small. Please be aware of this risk when setting a non-zero TTL. - - ## {{% heading "whatsnext" %}} -* [Clean up Jobs automatically](/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically) - -* [Design doc](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/592-ttl-after-finish/README.md) +* Read [Clean up Jobs automatically](/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically) +* Refer to the [Kubernetes Enhancement Proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/592-ttl-after-finish/README.md) + (KEP) for adding this mechanism. diff --git a/content/en/docs/concepts/workloads/pods/_index.md b/content/en/docs/concepts/workloads/pods/_index.md index e49bbefc0725e..76c966757a38f 100644 --- a/content/en/docs/concepts/workloads/pods/_index.md +++ b/content/en/docs/concepts/workloads/pods/_index.md @@ -133,8 +133,11 @@ is not a process, but an environment for running container(s). A Pod persists un it is deleted. {{< /note >}} -When you create the manifest for a Pod object, make sure the name specified is a valid -[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). +The name of a Pod must be a valid +[DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) +value, but this can produce unexpected results for the Pod hostname. For best compatibility, +the name should follow the more restrictive rules for a +[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names). ### Pod OS diff --git a/content/en/docs/contribute/advanced.md b/content/en/docs/contribute/advanced.md index c48712729cfbe..6bb0daa39c413 100644 --- a/content/en/docs/contribute/advanced.md +++ b/content/en/docs/contribute/advanced.md @@ -190,3 +190,7 @@ When you're ready to start the recording, click Record to Cloud. When you're ready to stop recording, click Stop. The video uploads automatically to YouTube. 
+ +### Offboarding a SIG Co-chair (Emeritus) + +See: [k/community/sig-docs/offboarding.md](https://github.com/kubernetes/community/blob/master/sig-docs/offboarding.md) \ No newline at end of file diff --git a/content/en/docs/contribute/new-content/open-a-pr.md b/content/en/docs/contribute/new-content/open-a-pr.md index 50b5a00608e23..589644223a99d 100644 --- a/content/en/docs/contribute/new-content/open-a-pr.md +++ b/content/en/docs/contribute/new-content/open-a-pr.md @@ -65,8 +65,7 @@ class id1 k8s Figure 1. Steps for opening a PR using GitHub. -1. On the page where you see the issue, select the pencil icon at the top right. - You can also scroll to the bottom of the page and select **Edit this page**. +1. On the page where you see the issue, select the **Edit this page** option in the right-hand side navigation panel. 1. Make your changes in the GitHub markdown editor. diff --git a/content/en/docs/home/_index.md b/content/en/docs/home/_index.md index 7297da2806811..a580c9aeadb09 100644 --- a/content/en/docs/home/_index.md +++ b/content/en/docs/home/_index.md @@ -56,9 +56,9 @@ cards: description: Anyone can contribute, whether you're new to the project or you've been around a long time. button: Contribute to the docs button_path: /docs/contribute -- name: release-notes - title: K8s Release Notes - description: If you are installing Kubernetes or upgrading to the newest version, refer to the current release notes. +- name: Download + title: Download Kubernetes + description: Install Kubernetes or upgrade to the newest version. button: "Download Kubernetes" button_path: "/releases/download" - name: about diff --git a/content/en/docs/images/podSchedulingGates.svg b/content/en/docs/images/podSchedulingGates.svg new file mode 100644 index 0000000000000..c87b23a7c6ef4 --- /dev/null +++ b/content/en/docs/images/podSchedulingGates.svg @@ -0,0 +1 @@ +
[Figure podSchedulingGates.svg: flowchart of Pod scheduling gates. A Pod is created and enters "pod scheduling gated"; while the answer to "empty scheduling gates?" is no, it stays gated until a scheduling gate is removed; once yes, the Pod becomes "pod scheduling ready" and then "pod running".]
\ No newline at end of file diff --git a/content/en/docs/reference/_index.md b/content/en/docs/reference/_index.md index 9e9fa6bcd9fff..05db47b7b46e3 100644 --- a/content/en/docs/reference/_index.md +++ b/content/en/docs/reference/_index.md @@ -9,13 +9,10 @@ content_type: concept no_list: true --- - This section of the Kubernetes documentation contains references. - - ## API Reference @@ -44,7 +41,7 @@ client libraries: ## CLI * [kubectl](/docs/reference/kubectl/) - Main CLI tool for running commands and managing Kubernetes clusters. - * [JSONPath](/docs/reference/kubectl/jsonpath/) - Syntax guide for using [JSONPath expressions](https://goessner.net/articles/JsonPath/) with kubectl. + * [JSONPath](/docs/reference/kubectl/jsonpath/) - Syntax guide for using [JSONPath expressions](https://goessner.net/articles/JsonPath/) with kubectl. * [kubeadm](/docs/reference/setup-tools/kubeadm/) - CLI tool to easily provision a secure Kubernetes cluster. ## Components @@ -55,16 +52,18 @@ client libraries: * [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) - REST API that validates and configures data for API objects such as pods, services, replication controllers. -* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - Daemon that embeds the core control loops shipped with Kubernetes. +* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - + Daemon that embeds the core control loops shipped with Kubernetes. * [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - Can do simple TCP/UDP stream forwarding or round-robin TCP/UDP forwarding across a set of back-ends. -* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - Scheduler that manages availability, performance, and capacity. +* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - + Scheduler that manages availability, performance, and capacity. * [Scheduler Policies](/docs/reference/scheduling/policies) * [Scheduler Profiles](/docs/reference/scheduling/config#profiles) -* List of [ports and protocols](/docs/reference/ports-and-protocols/) that +* List of [ports and protocols](/docs/reference/networking/ports-and-protocols/) that should be open on control plane and worker nodes ## Config APIs @@ -74,14 +73,19 @@ configure kubernetes components or tools. Most of these APIs are not exposed by the API server in a RESTful way though they are essential for a user or an operator to use or manage a cluster. 
-* [kube-apiserver configuration (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/) -* [kube-apiserver configuration (v1)](/docs/reference/config-api/apiserver-config.v1/) + +* [kubeconfig (v1)](/docs/reference/config-api/kubeconfig.v1/) +* [kube-apiserver admission (v1)](/docs/reference/config-api/apiserver-admission.v1/) +* [kube-apiserver configuration (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/) and + [kube-apiserver configuration (v1)](/docs/reference/config-api/apiserver-config.v1/) * [kube-apiserver encryption (v1)](/docs/reference/config-api/apiserver-encryption.v1/) * [kube-apiserver event rate limit (v1alpha1)](/docs/reference/config-api/apiserver-eventratelimit.v1alpha1/) * [kubelet configuration (v1alpha1)](/docs/reference/config-api/kubelet-config.v1alpha1/) and [kubelet configuration (v1beta1)](/docs/reference/config-api/kubelet-config.v1beta1/) -* [kubelet credential providers (v1alpha1)](/docs/reference/config-api/kubelet-credentialprovider.v1alpha1/) -* [kubelet credential providers (v1beta1)](/docs/reference/config-api/kubelet-credentialprovider.v1beta1/) + [kubelet configuration (v1)](/docs/reference/config-api/kubelet-config.v1/) +* [kubelet credential providers (v1alpha1)](/docs/reference/config-api/kubelet-credentialprovider.v1alpha1/), + [kubelet credential providers (v1beta1)](/docs/reference/config-api/kubelet-credentialprovider.v1beta1/) and + [kubelet credential providers (v1)](/docs/reference/config-api/kubelet-credentialprovider.v1/) * [kube-scheduler configuration (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/), [kube-scheduler configuration (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) and [kube-scheduler configuration (v1)](/docs/reference/config-api/kube-scheduler-config.v1/) diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md index f819c237b1684..734a0a333b4d9 100644 --- a/content/en/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md @@ -110,7 +110,7 @@ The [`ValidatingAdmissionPolicy`](#validatingadmissionpolicy) admission plugin i by default, but is only active if you enable the the `ValidatingAdmissionPolicy` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) **and** the `admissionregistration.k8s.io/v1alpha1` API. -{{< note >}} +{{< /note >}} ## What does each admission controller do? 
@@ -373,21 +373,21 @@ An example request body: ```json { - "apiVersion":"imagepolicy.k8s.io/v1alpha1", - "kind":"ImageReview", - "spec":{ - "containers":[ + "apiVersion": "imagepolicy.k8s.io/v1alpha1", + "kind": "ImageReview", + "spec": { + "containers": [ { - "image":"myrepo/myimage:v1" + "image": "myrepo/myimage:v1" }, { - "image":"myrepo/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a9014972bec5e4d9171d90ed" + "image": "myrepo/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a9014972bec5e4d9171d90ed" } ], - "annotations":{ + "annotations": { "mycluster.image-policy.k8s.io/ticket-1234": "break-glass" }, - "namespace":"mynamespace" + "namespace": "mynamespace" } } ``` @@ -610,9 +610,9 @@ This file may be json or yaml and has the following format: ```yaml podNodeSelectorPluginConfig: - clusterDefaultNodeSelector: name-of-node-selector - namespace1: name-of-node-selector - namespace2: name-of-node-selector + clusterDefaultNodeSelector: name-of-node-selector + namespace1: name-of-node-selector + namespace2: name-of-node-selector ``` Reference the `PodNodeSelector` configuration file from the file provided to the API server's @@ -744,17 +744,37 @@ for more information. ### SecurityContextDeny {#securitycontextdeny} -This admission controller will deny any Pod that attempts to set certain escalating -[SecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#securitycontext-v1-core) -fields, as shown in the -[Configure a Security Context for a Pod or Container](/docs/tasks/configure-pod-container/security-context/) -task. -If you don't use [Pod Security admission](/docs/concepts/security/pod-security-admission/), -[PodSecurityPolicies](/docs/concepts/security/pod-security-policy/), nor any external enforcement mechanism, -then you could use this admission controller to restrict the set of values a security context can take. - -See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for more context on restricting -pod privileges. +{{< feature-state for_k8s_version="v1.0" state="alpha" >}} + +{{< caution >}} +This admission controller plugin is **outdated** and **incomplete**, it may be +unusable or not do what you would expect. It was originally designed to prevent +the use of some, but not all, security-sensitive fields. Indeed, fields like +`privileged`, were not filtered at creation and the plugin was not updated with +the most recent fields, and new APIs like the `ephemeralContainers` field for a +Pod. + +The [Pod Security Admission](/docs/concepts/security/pod-security-admission/) +plugin enforcing the [Pod Security Standards](/docs/concepts/security/pod-security-standards/) +`Restricted` profile captures what this plugin was trying to achieve in a better +and up-to-date way. +{{< /caution >}} + +This admission controller will deny any Pod that attempts to set the following +[SecurityContext](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) +fields: +- `.spec.securityContext.supplementalGroups` +- `.spec.securityContext.seLinuxOptions` +- `.spec.securityContext.runAsUser` +- `.spec.securityContext.fsGroup` +- `.spec.(init)Containers[*].securityContext.seLinuxOptions` +- `.spec.(init)Containers[*].securityContext.runAsUser` + +For more historical context on this plugin, see +[The birth of PodSecurityPolicy](/blog/2022/08/23/podsecuritypolicy-the-historical-context/#the-birth-of-podsecuritypolicy) +from the Kubernetes blog article about PodSecurityPolicy and its removal. 
The +article details the PodSecurityPolicy historical context and the birth of the +`securityContext` field for Pods. ### ServiceAccount {#serviceaccount} diff --git a/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md b/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md index 71beccb53f5b0..31ab932589918 100644 --- a/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md +++ b/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md @@ -404,23 +404,25 @@ However, you _can_ enable its server certificate, at least partially, via certif ### Certificate Rotation -Kubernetes v1.8 and higher kubelet implements __beta__ features for enabling -rotation of its client and/or serving certificates. These can be enabled through -the respective `RotateKubeletClientCertificate` and -`RotateKubeletServerCertificate` feature flags on the kubelet and are enabled by -default. +Kubernetes v1.8 and higher kubelet implements features for enabling +rotation of its client and/or serving certificates. Note, rotation of serving +certificate is a __beta__ feature and requires the `RotateKubeletServerCertificate` +feature flag on the kubelet (enabled by default). -`RotateKubeletClientCertificate` causes the kubelet to rotate its client -certificates by creating new CSRs as its existing credentials expire. To enable -this feature pass the following flag to the kubelet: +You can configure the kubelet to rotate its client certificates by creating new CSRs +as its existing credentials expire. To enable this feature, use the `rotateCertificates` +field of [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/) +or pass the following command line argument to the kubelet (deprecated): ``` --rotate-certificates ``` -`RotateKubeletServerCertificate` causes the kubelet **both** to request a serving +Enabling `RotateKubeletServerCertificate` causes the kubelet **both** to request a serving certificate after bootstrapping its client credentials **and** to rotate that -certificate. To enable this feature pass the following flag to the kubelet: +certificate. To enable this behavior, use the field `serverTLSBootstrap` of +the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/) +or pass the following command line argument to the kubelet (deprecated): ``` --rotate-server-certificates diff --git a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md index 332e757313e8a..f78f0f81fb714 100644 --- a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md +++ b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md @@ -184,7 +184,7 @@ it does the following when a Pod is created: `/var/run/secrets/kubernetes.io/serviceaccount`. For Linux containers, that volume is mounted at `/var/run/secrets/kubernetes.io/serviceaccount`; on Windows nodes, the mount is at the equivalent path. -1. If the spec of the incoming Pod does already contain any `imagePullSecrets`, then the +1. If the spec of the incoming Pod doesn't already contain any `imagePullSecrets`, then the admission controller adds `imagePullSecrets`, copying them from the `ServiceAccount`. 
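For illustration, here is a hedged sketch of a ServiceAccount that carries an `imagePullSecrets` entry (both names are hypothetical); Pods that use this ServiceAccount and do not set their own `imagePullSecrets` have this entry copied in by the admission controller:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot              # hypothetical ServiceAccount
  namespace: default
imagePullSecrets:
- name: private-registry-cred    # hypothetical Secret of type kubernetes.io/dockerconfigjson
```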
### TokenRequest API diff --git a/content/en/docs/reference/access-authn-authz/validating-admission-policy.md b/content/en/docs/reference/access-authn-authz/validating-admission-policy.md index 1cb2e0a2f579d..2bf6610eebd27 100644 --- a/content/en/docs/reference/access-authn-authz/validating-admission-policy.md +++ b/content/en/docs/reference/access-authn-authz/validating-admission-policy.md @@ -20,23 +20,32 @@ This page provides an overview of Validating Admission Policy. Validating admission policies offer a declarative, in-process alternative to validating admission webhooks. -Validating admission policies use the Common Expression Language (CEL) to declare the validation rules of a policy. -Validation admission policies are highly configurable, enabling policy authors to define policies that can be parameterized and scoped to resources as needed by cluster administrators. +Validating admission policies use the Common Expression Language (CEL) to declare the validation +rules of a policy. +Validation admission policies are highly configurable, enabling policy authors to define policies +that can be parameterized and scoped to resources as needed by cluster administrators. ## What Resources Make a Policy A policy is generally made up of three resources: -- The `ValidatingAdmissionPolicy` describes the abstract logic of a policy (think: "this policy makes sure a particular label is set to a particular value"). +- The `ValidatingAdmissionPolicy` describes the abstract logic of a policy + (think: "this policy makes sure a particular label is set to a particular value"). -- A `ValidatingAdmissionPolicyBinding` links the above resources together and provides scoping. If you only want to require an `owner` label to be set for `Pods`, the binding is where you would specify this restriction. +- A `ValidatingAdmissionPolicyBinding` links the above resources together and provides scoping. + If you only want to require an `owner` label to be set for `Pods`, the binding is where you would + specify this restriction. -- A parameter resource provides information to a ValidatingAdmissionPolicy to make it a concrete statement (think "the `owner` label must be set to something that ends in `.company.com`"). A native type such as ConfigMap or a CRD defines the schema of a parameter resource. `ValidatingAdmissionPolicy` objects specify what Kind they are expecting for their parameter resource. +- A parameter resource provides information to a ValidatingAdmissionPolicy to make it a concrete + statement (think "the `owner` label must be set to something that ends in `.company.com`"). + A native type such as ConfigMap or a CRD defines the schema of a parameter resource. + `ValidatingAdmissionPolicy` objects specify what Kind they are expecting for their parameter resource. +At least a `ValidatingAdmissionPolicy` and a corresponding `ValidatingAdmissionPolicyBinding` +must be defined for a policy to have an effect. -At least a `ValidatingAdmissionPolicy` and a corresponding `ValidatingAdmissionPolicyBinding` must be defined for a policy to have an effect. - -If a `ValidatingAdmissionPolicy` does not need to be configured via parameters, simply leave `spec.paramKind` in `ValidatingAdmissionPolicy` unset. +If a `ValidatingAdmissionPolicy` does not need to be configured via parameters, simply leave +`spec.paramKind` in `ValidatingAdmissionPolicy` unset. 
## {{% heading "prerequisites" %}} @@ -45,11 +54,13 @@ If a `ValidatingAdmissionPolicy` does not need to be configured via parameters, ## Getting Started with Validating Admission Policy -Validating Admission Policy is part of the cluster control-plane. You should write and deploy them with great caution. The following describes how to quickly experiment with Validating Admission Policy. +Validating Admission Policy is part of the cluster control-plane. You should write and deploy them +with great caution. The following describes how to quickly experiment with Validating Admission Policy. ### Creating a ValidatingAdmissionPolicy The following is an example of a ValidatingAdmissionPolicy. + ```yaml apiVersion: admissionregistration.k8s.io/v1alpha1 kind: ValidatingAdmissionPolicy @@ -66,26 +77,31 @@ spec: validations: - expression: "object.spec.replicas <= 5" ``` + `spec.validations` contains CEL expressions which use the [Common Expression Language (CEL)](https://github.com/google/cel-spec) -to validate the request. If an expression evaluates to false, the validation check is enforced according to the `spec.failurePolicy` field. +to validate the request. If an expression evaluates to false, the validation check is enforced +according to the `spec.failurePolicy` field. + +To configure a validating admission policy for use in a cluster, a binding is required. +The following is an example of a ValidatingAdmissionPolicyBinding.: -To configure a validating admission policy for use in a cluster, a binding is required. The following is an example of a ValidatingAdmissionPolicyBinding.: ```yaml apiVersion: admissionregistration.k8s.io/v1alpha1 kind: ValidatingAdmissionPolicyBinding metadata: name: "demo-binding-test.example.com" spec: - policy: "replicalimit-policy.example.com" + policyName: "demo-policy.example.com" matchResources: - namespaceSelectors: - - key: environment, - operator: In, - values: ["test"] + namespaceSelector: + matchLabels: + environment: test ``` -When trying to create a deployment with replicas set not satisfying the validation expression, an error will return containing message: -``` +When trying to create a deployment with replicas set not satisfying the validation expression, an +error will return containing message: + +```none ValidatingAdmissionPolicy 'demo-policy.example.com' with binding 'demo-binding-test.example.com' denied request: failed expression: object.spec.replicas <= 5 ``` @@ -97,13 +113,15 @@ Parameter resources allow a policy configuration to be separate from its definit A policy can define paramKind, which outlines GVK of the parameter resource, and then a policy binding ties a policy by name (via policyName) to a particular parameter resource via paramRef. -If parameter configuration is needed, the following is an example of a ValidatingAdmissionPolicy with parameter configuration. +If parameter configuration is needed, the following is an example of a ValidatingAdmissionPolicy +with parameter configuration. + ```yaml apiVersion: admissionregistration.k8s.io/v1alpha1 kind: ValidatingAdmissionPolicy metadata: name: "replicalimit-policy.example.com" -Spec: +spec: failurePolicy: Fail paramKind: apiVersion: rules.example.com/v1 @@ -118,32 +136,39 @@ Spec: - expression: "object.spec.replicas <= params.maxReplicas" reason: Invalid ``` -The `spec.paramKind` field of the ValidatingAdmissionPolicy specifies the kind of resources used to parameterize this policy. For this example, it is configured by ReplicaLimit custom resources. 
-Note in this example how the CEL expression references the parameters via the CEL params variable, e.g. `params.maxReplicas`. -spec.matchConstraints specifies what resources this policy is designed to validate. -Note that the native types such like `ConfigMap` could also be used as parameter reference. -The `spec.validations` fields contain CEL expressions. If an expression evaluates to false, the validation check is enforced according to the `spec.failurePolicy` field. +The `spec.paramKind` field of the ValidatingAdmissionPolicy specifies the kind of resources used +to parameterize this policy. For this example, it is configured by ReplicaLimit custom resources. +Note in this example how the CEL expression references the parameters via the CEL params variable, +e.g. `params.maxReplicas`. `spec.matchConstraints` specifies what resources this policy is +designed to validate. Note that the native types such like `ConfigMap` could also be used as +parameter reference. + +The `spec.validations` fields contain CEL expressions. If an expression evaluates to false, the +validation check is enforced according to the `spec.failurePolicy` field. The validating admission policy author is responsible for providing the ReplicaLimit parameter CRD. -To configure an validating admission policy for use in a cluster, a binding and parameter resource are created. The following is an example of a ValidatingAdmissionPolicyBinding. +To configure an validating admission policy for use in a cluster, a binding and parameter resource +are created. The following is an example of a ValidatingAdmissionPolicyBinding. + ```yaml apiVersion: admissionregistration.k8s.io/v1alpha1 kind: ValidatingAdmissionPolicyBinding metadata: name: "replicalimit-binding-test.example.com" spec: - policy: "replicalimit-policy.example.com" - paramsRef: + policyName: "replicalimit-policy.example.com" + paramRef: name: "replica-limit-test.example.com" matchResources: - namespaceSelectors: - - key: environment, - operator: In, - values: ["test"] + namespaceSelector: + matchLabels: + environment: test ``` + The parameter resource could be as following: + ```yaml apiVersion: rules.example.com/v1 kind: ReplicaLimit @@ -151,24 +176,31 @@ metadata: name: "replica-limit-test.example.com" maxReplicas: 3 ``` -This policy parameter resource limits deployments to a max of 3 replicas in all namespaces in the test environment. -An admission policy may have multiple bindings. To bind all other environments environment to have a maxReplicas limit of 100, create another ValidatingAdmissionPolicyBinding: + +This policy parameter resource limits deployments to a max of 3 replicas in all namespaces in the +test environment. An admission policy may have multiple bindings. 
To bind all other environments +environment to have a maxReplicas limit of 100, create another ValidatingAdmissionPolicyBinding: + ```yaml apiVersion: admissionregistration.k8s.io/v1alpha1 kind: ValidatingAdmissionPolicyBinding metadata: name: "replicalimit-binding-nontest" spec: - policy: "replicalimit-policy.example.com" - paramsRef: + policyName: "replicalimit-policy.example.com" + paramRef: name: "replica-limit-clusterwide.example.com" matchResources: - namespaceSelectors: - - key: environment, - operator: NotIn, - values: ["test"] + namespaceSelector: + matchExpressions: + - key: environment + operator: NotIn + values: + - test ``` + And have a parameter resource like: + ```yaml apiVersion: rules.example.com/v1 kind: ReplicaLimit @@ -176,57 +208,75 @@ metadata: name: "replica-limit-clusterwide.example.com" maxReplicas: 100 ``` -Bindings can have overlapping match criteria. The policy is evaluated for each matching binding. In the above example, the "nontest" policy binding could instead have been defined as a global policy: + +Bindings can have overlapping match criteria. The policy is evaluated for each matching binding. +In the above example, the "nontest" policy binding could instead have been defined as a global policy: + ```yaml apiVersion: admissionregistration.k8s.io/v1alpha1 kind: ValidatingAdmissionPolicyBinding metadata: name: "replicalimit-binding-global" spec: - policy: "replicalimit-policy.example.com" + policyName: "replicalimit-policy.example.com" params: "replica-limit-clusterwide.example.com" matchResources: - namespaceSelectors: - - key: environment, - operator: Exists + namespaceSelector: + matchExpressions: + - key: environment + operator: Exists ``` -The params object representing a parameter resource will not be set if a parameter resource has not been bound, -so for policies requiring a parameter resource, -it can be useful to add a check to ensure one has been bound. +The params object representing a parameter resource will not be set if a parameter resource has +not been bound, so for policies requiring a parameter resource, it can be useful to add a check to +ensure one has been bound. + +For the use cases require parameter configuration, we recommend to add a param check in +`spec.validations[0].expression`: -For the use cases require parameter configuration, -we recommend to add a param check in `spec.validations[0].expression`: ``` - expression: "params != null" message: "params missing but required to bind to this policy" ``` -It can be convenient to be able to have optional parameters as part of a parameter resource, and only validate them if present. -CEL provides has(), which checks if the key passed to it exists. CEL also implements Boolean short-circuiting: -If the first half of a logical OR evaluates to true, it won’t evaluate the other half (since the result of the entire OR will be true regardless). +It can be convenient to be able to have optional parameters as part of a parameter resource, and +only validate them if present. CEL provides `has()`, which checks if the key passed to it exists. +CEL also implements Boolean short-circuiting. If the first half of a logical OR evaluates to true, +it won’t evaluate the other half (since the result of the entire OR will be true regardless). + Combining the two, we can provide a way to validate optional parameters: + `!has(params.optionalNumber) || (params.optionalNumber >= 5 && params.optionalNumber <= 10)` + Here, we first check that the optional parameter is present with `!has(params.optionalNumber)`. 
-If `optionalNumber` hasn’t been defined, then the expression short-circuits since `!has(params.optionalNumber)` will evaluate to true. -If `optionalNumber` has been defined, then the latter half of the CEL expression will be evaluated, and optionalNumber will be checked to ensure that it contains a value between 5 and 10 inclusive. + +- If `optionalNumber` hasn’t been defined, then the expression short-circuits since + `!has(params.optionalNumber)` will evaluate to true. +- If `optionalNumber` has been defined, then the latter half of the CEL expression will be + evaluated, and optionalNumber will be checked to ensure that it contains a value between 5 and + 10 inclusive. #### Authorization Check We introduced the authorization check for parameter resources. -User is expected to have `read` access to the resources referenced by `paramKind` in `ValidatingAdmissionPolicy` and `paramRef` in `ValidatingAdmissionPolicyBinding`. +User is expected to have `read` access to the resources referenced by `paramKind` in +`ValidatingAdmissionPolicy` and `paramRef` in `ValidatingAdmissionPolicyBinding`. -Note that if a resource in `paramKind` fails resolving via the restmapper, `read` access to all resources of groups is required. +Note that if a resource in `paramKind` fails resolving via the restmapper, `read` access to all +resources of groups is required. ### Failure Policy -`failurePolicy` defines how mis-configurations and CEL expressions evaluating to error from the admission policy are handled. -Allowed values are `Ignore` or `Fail`. +`failurePolicy` defines how mis-configurations and CEL expressions evaluating to error from the +admission policy are handled. Allowed values are `Ignore` or `Fail`. -- `Ignore` means that an error calling the ValidatingAdmissionPolicy is ignored and the API request is allowed to continue. -- `Fail` means that an error calling the ValidatingAdmissionPolicy causes the admission to fail and the API request to be rejected. +- `Ignore` means that an error calling the ValidatingAdmissionPolicy is ignored and the API + request is allowed to continue. +- `Fail` means that an error calling the ValidatingAdmissionPolicy causes the admission to fail + and the API request to be rejected. Note that the `failurePolicy` is defined inside `ValidatingAdmissionPolicy`: + ```yaml apiVersion: admissionregistration.k8s.io/v1alpha1 kind: ValidatingAdmissionPolicy @@ -241,18 +291,21 @@ validations: `spec.validations[i].expression` represents the expression which will be evaluated by CEL. To learn more, see the [CEL language specification](https://github.com/google/cel-spec) -CEL expressions have access to the contents of the Admission request/response, organized into CEL variables as well as some other useful variables: +CEL expressions have access to the contents of the Admission request/response, organized into CEL +variables as well as some other useful variables: + - 'object' - The object from the incoming request. The value is null for DELETE requests. - 'oldObject' - The existing object. The value is null for CREATE requests. -- 'request' - Attributes of the [admission request](/pkg/apis/admission/types.go#AdmissionRequest). -- 'params' - Parameter resource referred to by the policy binding being evaluated. The value is null if `ParamKind` is unset. +- 'request' - Attributes of the [admission request](/docs/reference/config-api/apiserver-admission.v1/#admission-k8s-io-v1-AdmissionRequest). +- 'params' - Parameter resource referred to by the policy binding being evaluated. 
The value is + null if `ParamKind` is unset. -The `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from the root of the -object. No other metadata properties are accessible. +The `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from +the root of the object. No other metadata properties are accessible. Only property names of the form `[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` are accessible. -Only property names of the form `[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` are accessible. -Accessible property names are escaped according to the following rules when accessed in the expression: +Accessible property names are escaped according to the following rules when accessed in the +expression: | escape sequence | property name equivalent | | ----------------------- | -----------------------| @@ -303,10 +356,12 @@ Concatenation on arrays with x-kubernetes-list-type use the semantics of the lis | `size(object.names) == size(object.details) && object.names.all(n, n in object.details)` | Validate the 'details' map is keyed by the items in the 'names' listSet | | `size(object.clusters.filter(c, c.name == object.primary)) == 1` | Validate that the 'primary' property has one and only one occurrence in the 'clusters' listMap | -Read [Supported evaluation on CEL](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#evaluation) for more information about CEL rules. +Read [Supported evaluation on CEL](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#evaluation) +for more information about CEL rules. `spec.validation[i].reason` represents a machine-readable description of why this validation failed. -If this is the first validation in the list to fail, this reason, as well as the corresponding HTTP response code, are used in the -HTTP response to the client. +If this is the first validation in the list to fail, this reason, as well as the corresponding +HTTP response code, are used in the HTTP response to the client. The currently supported reasons are: `Unauthorized`, `Forbidden`, `Invalid`, `RequestEntityTooLarge`. If not set, `StatusReasonInvalid` is used in the response to the client. + diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates-removed.md b/content/en/docs/reference/command-line-tools-reference/feature-gates-removed.md index 26f6663e90291..a3b704d891e5b 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates-removed.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates-removed.md @@ -13,8 +13,8 @@ However, a GA'ed or a deprecated feature gate is still recognized by the corresp components although they are unable to cause any behavior differences in a cluster. 
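For orientation, feature gates are switched on or off per component, for example through the `featureGates` map in the kubelet configuration file or the `--feature-gates` command-line flag. A minimal sketch follows; the gate name is a placeholder, and gates listed as removed are no longer recognized by the components:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  SomeBetaFeature: true   # placeholder; use a gate name the component still recognizes
```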
For feature gates that are still recognized by the Kubernetes components, please refer to -the [Alpha/Beta feature gate table](/docs/reference/command-line-tools/reference/feature-gates/#feature-gates-for-alpha-or-beta-features) -or the [Graduated/Deprecated feature gate table](/docs/reference/command-line-tools/reference/feature-gates/#feature-gates-for-graduated-or-deprecated-features) +the [Alpha/Beta feature gate table](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features) +or the [Graduated/Deprecated feature gate table](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-graduated-or-deprecated-features) ### Feature gates that are removed @@ -36,6 +36,8 @@ In the following table: | `AffinityInAnnotations` | - | Deprecated | 1.8 | 1.8 | | `AllowExtTrafficLocalEndpoints` | `false` | Beta | 1.4 | 1.6 | | `AllowExtTrafficLocalEndpoints` | `true` | GA | 1.7 | 1.9 | +| `AllowInsecureBackendProxy` | `true` | Beta | 1.17 | 1.20 | +| `AllowInsecureBackendProxy` | `true` | GA | 1.21 | 1.25 | | `AttachVolumeLimit` | `false` | Alpha | 1.11 | 1.11 | | `AttachVolumeLimit` | `true` | Beta | 1.12 | 1.16 | | `AttachVolumeLimit` | `true` | GA | 1.17 | 1.21 | @@ -64,6 +66,9 @@ In the following table: | `CSIMigrationAzureFileComplete` | - | Deprecated | 1.21 | 1.21 | | `CSIMigrationGCEComplete` | `false` | Alpha | 1.17 | 1.20 | | `CSIMigrationGCEComplete` | - | Deprecated | 1.21 | 1.21 | +| `CSIMigrationOpenStack` | `false` | Alpha | 1.14 | 1.17 | +| `CSIMigrationOpenStack` | `true` | Beta | 1.18 | 1.23 | +| `CSIMigrationOpenStack` | `true` | GA | 1.24 | 1.25 | | `CSIMigrationOpenStackComplete` | `false` | Alpha | 1.17 | 1.20 | | `CSIMigrationOpenStackComplete` | - | Deprecated | 1.21 | 1.21 | | `CSIMigrationvSphereComplete` | `false` | Beta | 1.19 | 1.21 | @@ -106,8 +111,14 @@ In the following table: | `CustomResourceWebhookConversion` | `false` | Alpha | 1.13 | 1.14 | | `CustomResourceWebhookConversion` | `true` | Beta | 1.15 | 1.15 | | `CustomResourceWebhookConversion` | `true` | GA | 1.16 | 1.18 | +| `DefaultPodTopologySpread` | `false` | Alpha | 1.19 | 1.19 | +| `DefaultPodTopologySpread` | `true` | Beta | 1.20 | 1.23 | +| `DefaultPodTopologySpread` | `true` | GA | 1.24 | 1.25 | | `DynamicAuditing` | `false` | Alpha | 1.13 | 1.18 | | `DynamicAuditing` | - | Deprecated | 1.19 | 1.19 | +| `DynamicKubeletConfig` | `false` | Alpha | 1.4 | 1.10 | +| `DynamicKubeletConfig` | `true` | Beta | 1.11 | 1.21 | +| `DynamicKubeletConfig` | `false` | Deprecated | 1.22 | 1.25 | | `DynamicProvisioningScheduling` | `false` | Alpha | 1.11 | 1.11 | | `DynamicProvisioningScheduling` | - | Deprecated| 1.12 | - | | `DynamicVolumeProvisioning` | `true` | Alpha | 1.3 | 1.7 | @@ -149,6 +160,9 @@ In the following table: | `ImmutableEphemeralVolumes` | `false` | Alpha | 1.18 | 1.18 | | `ImmutableEphemeralVolumes` | `true` | Beta | 1.19 | 1.20 | | `ImmutableEphemeralVolumes` | `true` | GA | 1.21 | 1.24 | +| `IndexedJob` | `false` | Alpha | 1.21 | 1.21 | +| `IndexedJob` | `true` | Beta | 1.22 | 1.23 | +| `IndexedJob` | `true` | GA | 1.24 | 1.25 | | `IngressClassNamespacedParams` | `false` | Alpha | 1.21 | 1.21 | | `IngressClassNamespacedParams` | `true` | Beta | 1.22 | 1.22 | | `IngressClassNamespacedParams` | `true` | GA | 1.23 | 1.24 | @@ -175,11 +189,17 @@ In the following table: | `NodeLease` | `false` | Alpha | 1.12 | 1.13 | | `NodeLease` | `true` | Beta | 1.14 | 1.16 | | `NodeLease` | `true` | GA | 1.17 | 1.23 | +| `NonPreemptingPriority` | `false` 
| Alpha | 1.15 | 1.18 | +| `NonPreemptingPriority` | `true` | Beta | 1.19 | 1.23 | +| `NonPreemptingPriority` | `true` | GA | 1.24 | 1.25 | | `PVCProtection` | `false` | Alpha | 1.9 | 1.9 | | `PVCProtection` | - | Deprecated | 1.10 | 1.10 | | `PersistentLocalVolumes` | `false` | Alpha | 1.7 | 1.9 | | `PersistentLocalVolumes` | `true` | Beta | 1.10 | 1.13 | | `PersistentLocalVolumes` | `true` | GA | 1.14 | 1.16 | +| `PodAffinityNamespaceSelector` | `false` | Alpha | 1.21 | 1.21 | +| `PodAffinityNamespaceSelector` | `true` | Beta | 1.22 | 1.23 | +| `PodAffinityNamespaceSelector` | `true` | GA | 1.24 | 1.25 | | `PodDisruptionBudget` | `false` | Alpha | 1.3 | 1.4 | | `PodDisruptionBudget` | `true` | Beta | 1.5 | 1.20 | | `PodDisruptionBudget` | `true` | GA | 1.21 | 1.25 | @@ -195,6 +215,9 @@ In the following table: | `PodShareProcessNamespace` | `false` | Alpha | 1.10 | 1.11 | | `PodShareProcessNamespace` | `true` | Beta | 1.12 | 1.16 | | `PodShareProcessNamespace` | `true` | GA | 1.17 | 1.19 | +| `PreferNominatedNode` | `false` | Alpha | 1.21 | 1.21 | +| `PreferNominatedNode` | `true` | Beta | 1.22 | 1.23 | +| `PreferNominatedNode` | `true` | GA | 1.24 | 1.25 | | `RequestManagement` | `false` | Alpha | 1.15 | 1.16 | | `RequestManagement` | - | Deprecated | 1.17 | 1.17 | | `ResourceLimitsPriorityFunction` | `false` | Alpha | 1.9 | 1.18 | @@ -227,6 +250,12 @@ In the following table: | `ServiceAppProtocol` | `false` | Alpha | 1.18 | 1.18 | | `ServiceAppProtocol` | `true` | Beta | 1.19 | 1.19 | | `ServiceAppProtocol` | `true` | GA | 1.20 | 1.22 | +| `ServiceLBNodePortControl` | `false` | Alpha | 1.20 | 1.21 | +| `ServiceLBNodePortControl` | `true` | Beta | 1.22 | 1.23 | +| `ServiceLBNodePortControl` | `true` | GA | 1.24 | 1.25 | +| `ServiceLoadBalancerClass` | `false` | Alpha | 1.21 | 1.21 | +| `ServiceLoadBalancerClass` | `true` | Beta | 1.22 | 1.23 | +| `ServiceLoadBalancerClass` | `true` | GA | 1.24 | 1.25 | | `ServiceLoadBalancerFinalizer` | `false` | Alpha | 1.15 | 1.15 | | `ServiceLoadBalancerFinalizer` | `true` | Beta | 1.16 | 1.16 | | `ServiceLoadBalancerFinalizer` | `true` | GA | 1.17 | 1.20 | @@ -257,6 +286,9 @@ In the following table: | `SupportPodPidsLimit` | `false` | Alpha | 1.10 | 1.13 | | `SupportPodPidsLimit` | `true` | Beta | 1.14 | 1.19 | | `SupportPodPidsLimit` | `true` | GA | 1.20 | 1.23 | +| `SuspendJob` | `false` | Alpha | 1.21 | 1.21 | +| `SuspendJob` | `true` | Beta | 1.22 | 1.23 | +| `SuspendJob` | `true` | GA | 1.24 | 1.25 | | `Sysctls` | `true` | Beta | 1.11 | 1.20 | | `Sysctls` | `true` | GA | 1.21 | 1.22 | | `TTLAfterFinished` | `false` | Alpha | 1.12 | 1.20 | @@ -314,6 +346,9 @@ In the following table: - `AllowExtTrafficLocalEndpoints`: Enable a service to route external requests to node local endpoints. +- `AllowInsecureBackendProxy`: Enable the users to skip TLS verification of + kubelets on Pod log requests. + - `AttachVolumeLimit`: Enable volume plugins to report limits on number of volumes that can be attached to a node. See [dynamic volume limits](/docs/concepts/storage/storage-limits/#dynamic-volume-limits) @@ -383,6 +418,14 @@ In the following table: been deprecated in favor of the `InTreePluginGCEUnregister` feature flag which prevents the registration of in-tree GCE PD plugin. +- `CSIMigrationOpenStack`: Enables shims and translation logic to route volume + operations from the Cinder in-tree plugin to Cinder CSI plugin. 
Supports + falling back to in-tree Cinder plugin for mount operations to nodes that have + the feature disabled or that do not have Cinder CSI plugin installed and + configured. Does not support falling back for provision operations, for those + the CSI plugin must be installed and configured. Requires CSIMigration + feature flag enabled. + - `CSIMigrationOpenStackComplete`: Stops registering the Cinder in-tree plugin in kubelet and volume controllers and enables shims and translation logic to route volume operations from the Cinder in-tree plugin to Cinder CSI plugin. @@ -442,8 +485,15 @@ In the following table: - `CustomResourceWebhookConversion`: Enable webhook-based conversion on resources created from [CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/). +- `DefaultPodTopologySpread`: Enables the use of `PodTopologySpread` scheduling plugin to do + [default spreading](/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints). + - `DynamicAuditing`: Used to enable dynamic auditing before v1.19. +- `DynamicKubeletConfig`: Enable the dynamic configuration of kubelet. The + feature is no longer supported outside of supported skew policy. The feature + gate was removed from kubelet in 1.24. See [Reconfigure kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/). + - `DynamicProvisioningScheduling`: Extend the default scheduler to be aware of volume topology and handle PV provisioning. This feature was superseded by the `VolumeScheduling` feature in v1.12. @@ -500,6 +550,9 @@ In the following table: - `ImmutableEphemeralVolumes`: Allows for marking individual Secrets and ConfigMaps as immutable for better safety and performance. +- `IndexedJob`: Allows the [Job](/docs/concepts/workloads/controllers/job/) + controller to manage Pod completions per completion index. + - `IngressClassNamespacedParams`: Allow namespace-scoped parameters reference in `IngressClass` resource. This feature adds two fields - `Scope` and `Namespace` to `IngressClass.spec.parameters`. @@ -533,12 +586,19 @@ In the following table: - `NodeLease`: Enable the new Lease API to report node heartbeats, which could be used as a node health signal. +- `NonPreemptingPriority`: Enable `preemptionPolicy` field for PriorityClass and Pod. + - `PVCProtection`: Enable the prevention of a PersistentVolumeClaim (PVC) from being deleted when it is still used by any Pod. - `PersistentLocalVolumes`: Enable the usage of `local` volume type in Pods. Pod affinity has to be specified if requesting a `local` volume. +- `PodAffinityNamespaceSelector`: Enable the + [Pod Affinity Namespace Selector](/docs/concepts/scheduling-eviction/assign-pod-node/#namespace-selector) + and [CrossNamespacePodAffinity](/docs/concepts/policy/resource-quotas/#cross-namespace-pod-affinity-quota) + quota scope features. + - `PodDisruptionBudget`: Enable the [PodDisruptionBudget](/docs/tasks/run-application/configure-pdb/) feature. - `PodOverhead`: Enable the [PodOverhead](/docs/concepts/scheduling-eviction/pod-overhead/) @@ -555,6 +615,10 @@ In the following table: a single process namespace between containers running in a pod. More details can be found in [Share Process Namespace between Containers in a Pod](/docs/tasks/configure-pod-container/share-process-namespace/). +- `PreferNominatedNode`: This flag tells the scheduler whether the nominated + nodes will be checked first before looping through all the other nodes in + the cluster. 
+ - `RequestManagement`: Enables managing request concurrency with prioritization and fairness at each API server. Deprecated by `APIPriorityAndFairness` since 1.17. @@ -597,8 +661,14 @@ In the following table: - `ServiceAppProtocol`: Enables the `appProtocol` field on Services and Endpoints. +- `ServiceLoadBalancerClass`: Enables the `loadBalancerClass` field on Services. See + [Specifying class of load balancer implementation](/docs/concepts/services-networking/service/#load-balancer-class) + for more details. + - `ServiceLoadBalancerFinalizer`: Enable finalizer protection for Service load balancers. +- `ServiceLBNodePortControl`: Enables the `allocateLoadBalancerNodePorts` field on Services. + - `ServiceNodeExclusion`: Enable the exclusion of nodes from load balancers created by a cloud provider. A node is eligible for exclusion if labelled with "`node.kubernetes.io/exclude-from-external-load-balancers`". @@ -629,6 +699,9 @@ In the following table: - `SupportPodPidsLimit`: Enable the support to limiting PIDs in Pods. +- `SuspendJob`: Enable support to suspend and resume Jobs. For more details, see + [the Jobs docs](/docs/concepts/workloads/controllers/job/). + - `Sysctls`: Enable support for namespaced kernel parameters (sysctls) that can be set for each pod. See [sysctls](/docs/tasks/administer-cluster/sysctl-cluster/) for more details. diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 48de9a13dd29c..a7b1058b10b6d 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -62,11 +62,11 @@ For a reference to old feature gates that are removed, please refer to | `APIPriorityAndFairness` | `true` | Beta | 1.20 | | | `APIResponseCompression` | `false` | Alpha | 1.7 | 1.15 | | `APIResponseCompression` | `true` | Beta | 1.16 | | -| `APISelfSubjectAttributesReview` | `false` | Alpha | 1.26 | | +| `APISelfSubjectReview` | `false` | Alpha | 1.26 | | | `APIServerIdentity` | `false` | Alpha | 1.20 | 1.25 | | `APIServerIdentity` | `true` | Beta | 1.26 | | | `APIServerTracing` | `false` | Alpha | 1.22 | | -| `AllowInsecureBackendProxy` | `true` | Beta | 1.17 | | +| `AggregatedDiscoveryEndpoint` | `false` | Alpha | 1.26 | | | `AnyVolumeDataSource` | `false` | Alpha | 1.18 | 1.23 | | `AnyVolumeDataSource` | `true` | Beta | 1.24 | | | `AppArmor` | `true` | Beta | 1.4 | | @@ -79,9 +79,12 @@ For a reference to old feature gates that are removed, please refer to | `CSIMigrationRBD` | `false` | Alpha | 1.23 | | | `CSINodeExpandSecret` | `false` | Alpha | 1.25 | | | `CSIVolumeHealth` | `false` | Alpha | 1.21 | | -| `CrossNamespaceVolumeDataSource` | `false` | Alpha| 1.26 | | +| `ComponentSLIs` | `false` | Alpha | 1.26 | | | `ContainerCheckpoint` | `false` | Alpha | 1.25 | | | `ContextualLogging` | `false` | Alpha | 1.24 | | +| `CronJobTimeZone` | `false` | Alpha | 1.24 | 1.24 | +| `CronJobTimeZone` | `true` | Beta | 1.25 | | +| `CrossNamespaceVolumeDataSource` | `false` | Alpha| 1.26 | | | `CustomCPUCFSQuotaPeriod` | `false` | Alpha | 1.12 | | | `CustomResourceValidationExpressions` | `false` | Alpha | 1.23 | 1.24 | | `CustomResourceValidationExpressions` | `true` | Beta | 1.25 | | @@ -91,9 +94,9 @@ For a reference to old feature gates that are removed, please refer to | `DownwardAPIHugePages` | `false` | Beta | 1.21 | 1.21 | | `DownwardAPIHugePages` | `true` | Beta | 1.22 | | | 
`DynamicResourceAllocation` | `false` | Alpha | 1.26 | | -| `EndpointSliceTerminatingCondition` | `false` | Alpha | 1.20 | 1.21 | -| `EndpointSliceTerminatingCondition` | `true` | Beta | 1.22 | | -| `ExpandedDNSConfig` | `false` | Alpha | 1.22 | | +| `EventedPLEG` | `false` | Alpha | 1.26 | - | +| `ExpandedDNSConfig` | `false` | Alpha | 1.22 | 1.25 | +| `ExpandedDNSConfig` | `true` | Beta | 1.26 | | | `ExperimentalHostUserNamespaceDefaulting` | `false` | Beta | 1.5 | | | `GRPCContainerProbe` | `false` | Alpha | 1.23 | 1.23 | | `GRPCContainerProbe` | `true` | Beta | 1.24 | | @@ -104,6 +107,7 @@ For a reference to old feature gates that are removed, please refer to | `HPAContainerMetrics` | `false` | Alpha | 1.20 | | | `HPAScaleToZero` | `false` | Alpha | 1.16 | | | `HonorPVReclaimPolicy` | `false` | Alpha | 1.23 | | +| `IPTablesOwnershipCleanup` | `false` | Alpha | 1.25 | | | `InTreePluginAWSUnregister` | `false` | Alpha | 1.21 | | | `InTreePluginAzureDiskUnregister` | `false` | Alpha | 1.21 | | | `InTreePluginAzureFileUnregister` | `false` | Alpha | 1.21 | | @@ -112,15 +116,11 @@ For a reference to old feature gates that are removed, please refer to | `InTreePluginPortworxUnregister` | `false` | Alpha | 1.23 | | | `InTreePluginRBDUnregister` | `false` | Alpha | 1.23 | | | `InTreePluginvSphereUnregister` | `false` | Alpha | 1.21 | | -| `IPTablesOwnershipCleanup` | `false` | Alpha | 1.25 | | | `JobMutableNodeSchedulingDirectives` | `true` | Beta | 1.23 | | | `JobPodFailurePolicy` | `false` | Alpha | 1.25 | 1.25 | | `JobPodFailurePolicy` | `true` | Beta | 1.26 | | | `JobReadyPods` | `false` | Alpha | 1.23 | 1.23 | | `JobReadyPods` | `true` | Beta | 1.24 | | -| `JobTrackingWithFinalizers` | `false` | Alpha | 1.22 | 1.22 | -| `JobTrackingWithFinalizers` | `false` | Beta | 1.23 | 1.24 | -| `JobTrackingWithFinalizers` | `true` | Beta | 1.25 | | | `KMSv2` | `false` | Alpha | 1.25 | | | `KubeletInUserNamespace` | `false` | Alpha | 1.22 | | | `KubeletPodResources` | `false` | Alpha | 1.13 | 1.14 | @@ -128,11 +128,12 @@ For a reference to old feature gates that are removed, please refer to | `KubeletPodResourcesGetAllocatable` | `false` | Alpha | 1.21 | 1.22 | | `KubeletPodResourcesGetAllocatable` | `true` | Beta | 1.23 | | | `KubeletTracing` | `false` | Alpha | 1.25 | | -| `LegacyServiceAccountTokenTracking` | `false` | Alpha | 1.26 | | -| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha | 1.15 | 1.24 | -| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `true` | Beta | 1.25 | | +| `LegacyServiceAccountTokenTracking` | `false` | Alpha | 1.25 | | +| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha | 1.15 | - | | `LogarithmicScaleDown` | `false` | Alpha | 1.21 | 1.21 | | `LogarithmicScaleDown` | `true` | Beta | 1.22 | | +| `LoggingAlphaOptions` | `false` | Alpha | 1.24 | - | +| `LoggingBetaOptions` | `true` | Beta | 1.24 | - | | `MatchLabelKeysInPodTopologySpread` | `false` | Alpha | 1.25 | | | `MaxUnavailableStatefulSet` | `false` | Alpha | 1.24 | | | `MemoryManager` | `false` | Alpha | 1.21 | 1.21 | @@ -140,11 +141,11 @@ For a reference to old feature gates that are removed, please refer to | `MemoryQoS` | `false` | Alpha | 1.22 | | | `MinDomainsInPodTopologySpread` | `false` | Alpha | 1.24 | 1.24 | | `MinDomainsInPodTopologySpread` | `false` | Beta | 1.25 | | -| `MixedProtocolLBService` | `false` | Alpha | 1.20 | 1.23 | -| `MixedProtocolLBService` | `true` | Beta | 1.24 | | +| `MinimizeIPTablesRestore` | `false` | Alpha | 1.26 | - | | `MultiCIDRRangeAllocator` 
| `false` | Alpha | 1.25 | | | `NetworkPolicyStatus` | `false` | Alpha | 1.24 | | -| `NodeInclusionPolicyInPodTopologySpread` | `false` | Alpha | 1.25 | | +| `NodeInclusionPolicyInPodTopologySpread` | `false` | Alpha | 1.25 | 1.25 | +| `NodeInclusionPolicyInPodTopologySpread` | `true` | Beta | 1.26 | | | `NodeOutOfServiceVolumeDetach` | `false` | Alpha | 1.24 | 1.25 | | `NodeOutOfServiceVolumeDetach` | `true` | Beta | 1.26 | | | `NodeSwap` | `false` | Alpha | 1.22 | | @@ -196,7 +197,7 @@ For a reference to old feature gates that are removed, please refer to | `TopologyManagerPolicyBetaOptions` | `false` | Beta | 1.26 | | | `TopologyManagerPolicyOptions` | `false` | Alpha | 1.26 | | | `UserNamespacesStatelessPodsSupport` | `false` | Alpha | 1.25 | | -| `ValidatingAdmissionPolicy` | `false` | Alpha | 1.26 | | +| `ValidatingAdmissionPolicy` | `false` | Alpha | 1.26 | | | `VolumeCapacityPriority` | `false` | Alpha | 1.21 | - | | `WinDSR` | `false` | Alpha | 1.14 | | | `WinOverlay` | `false` | Alpha | 1.14 | 1.19 | @@ -242,45 +243,37 @@ For a reference to old feature gates that are removed, please refer to | `CSIMigrationvSphere` | `false` | Beta | 1.19 | 1.24 | | `CSIMigrationvSphere` | `true` | Beta | 1.25 | 1.25 | | `CSIMigrationvSphere` | `true` | GA | 1.26 | - | -| `CSIMigrationOpenStack` | `false` | Alpha | 1.14 | 1.17 | -| `CSIMigrationOpenStack` | `true` | Beta | 1.18 | 1.23 | -| `CSIMigrationOpenStack` | `true` | GA | 1.24 | | | `CSIStorageCapacity` | `false` | Alpha | 1.19 | 1.20 | | `CSIStorageCapacity` | `true` | Beta | 1.21 | 1.23 | | `CSIStorageCapacity` | `true` | GA | 1.24 | - | +| `ConsistentHTTPGetHandlers` | `true` | GA | 1.25 | - | | `ControllerManagerLeaderMigration` | `false` | Alpha | 1.21 | 1.21 | | `ControllerManagerLeaderMigration` | `true` | Beta | 1.22 | 1.23 | | `ControllerManagerLeaderMigration` | `true` | GA | 1.24 | - | -| `CronJobTimeZone` | `false` | Alpha | 1.24 | 1.24 | -| `CronJobTimeZone` | `true` | Beta | 1.25 | | | `DaemonSetUpdateSurge` | `false` | Alpha | 1.21 | 1.21 | | `DaemonSetUpdateSurge` | `true` | Beta | 1.22 | 1.24 | | `DaemonSetUpdateSurge` | `true` | GA | 1.25 | - | -| `DefaultPodTopologySpread` | `false` | Alpha | 1.19 | 1.19 | -| `DefaultPodTopologySpread` | `true` | Beta | 1.20 | 1.23 | -| `DefaultPodTopologySpread` | `true` | GA | 1.24 | - | | `DelegateFSGroupToCSIDriver` | `false` | Alpha | 1.22 | 1.22 | | `DelegateFSGroupToCSIDriver` | `true` | Beta | 1.23 | 1.25 | | `DelegateFSGroupToCSIDriver` | `true` | GA | 1.26 |-| -| `DisableAcceleratorUsageMetrics` | `false` | Alpha | 1.19 | 1.19 | -| `DisableAcceleratorUsageMetrics` | `true` | Beta | 1.20 | 1.24 | -| `DisableAcceleratorUsageMetrics` | `true` | GA | 1.25 |- | | `DevicePlugins` | `false` | Alpha | 1.8 | 1.9 | | `DevicePlugins` | `true` | Beta | 1.10 | 1.25 | | `DevicePlugins` | `true` | GA | 1.26 | - | +| `DisableAcceleratorUsageMetrics` | `false` | Alpha | 1.19 | 1.19 | +| `DisableAcceleratorUsageMetrics` | `true` | Beta | 1.20 | 1.24 | +| `DisableAcceleratorUsageMetrics` | `true` | GA | 1.25 |- | | `DryRun` | `false` | Alpha | 1.12 | 1.12 | | `DryRun` | `true` | Beta | 1.13 | 1.18 | | `DryRun` | `true` | GA | 1.19 | - | -| `DynamicKubeletConfig` | `false` | Alpha | 1.4 | 1.10 | -| `DynamicKubeletConfig` | `true` | Beta | 1.11 | 1.21 | -| `DynamicKubeletConfig` | `false` | Deprecated | 1.22 | - | | `EfficientWatchResumption` | `false` | Alpha | 1.20 | 1.20 | | `EfficientWatchResumption` | `true` | Beta | 1.21 | 1.23 | | `EfficientWatchResumption` | `true` | GA | 1.24 | - | +| 
`EndpointSliceTerminatingCondition` | `false` | Alpha | 1.20 | 1.21 | +| `EndpointSliceTerminatingCondition` | `true` | Beta | 1.22 | 1.25 | +| `EndpointSliceTerminatingCondition` | `true` | GA | 1.26 | | | `EphemeralContainers` | `false` | Alpha | 1.16 | 1.22 | | `EphemeralContainers` | `true` | Beta | 1.23 | 1.24 | | `EphemeralContainers` | `true` | GA | 1.25 | - | -| `EventedPLEG` | `false` | Alpha | 1.26 | - | | `ExecProbeTimeout` | `true` | GA | 1.20 | - | | `ExpandCSIVolumes` | `false` | Alpha | 1.14 | 1.15 | | `ExpandCSIVolumes` | `true` | Beta | 1.16 | 1.23 | @@ -294,9 +287,6 @@ For a reference to old feature gates that are removed, please refer to | `IdentifyPodOS` | `false` | Alpha | 1.23 | 1.23 | | `IdentifyPodOS` | `true` | Beta | 1.24 | 1.24 | | `IdentifyPodOS` | `true` | GA | 1.25 | - | -| `IndexedJob` | `false` | Alpha | 1.21 | 1.21 | -| `IndexedJob` | `true` | Beta | 1.22 | 1.23 | -| `IndexedJob` | `true` | GA | 1.24 | - | | `JobTrackingWithFinalizers` | `false` | Alpha | 1.22 | 1.22 | | `JobTrackingWithFinalizers` | `false` | Beta | 1.23 | 1.24 | | `JobTrackingWithFinalizers` | `true` | Beta | 1.25 | 1.25 | @@ -309,45 +299,30 @@ For a reference to old feature gates that are removed, please refer to | `LocalStorageCapacityIsolation` | `false` | Alpha | 1.7 | 1.9 | | `LocalStorageCapacityIsolation` | `true` | Beta | 1.10 | 1.24 | | `LocalStorageCapacityIsolation` | `true` | GA | 1.25 | - | +| `MixedProtocolLBService` | `false` | Alpha | 1.20 | 1.23 | +| `MixedProtocolLBService` | `true` | Beta | 1.24 | 1.25 | +| `MixedProtocolLBService` | `true` | GA | 1.26 | - | | `NetworkPolicyEndPort` | `false` | Alpha | 1.21 | 1.21 | | `NetworkPolicyEndPort` | `true` | Beta | 1.22 | 1.24 | | `NetworkPolicyEndPort` | `true` | GA | 1.25 | - | -| `NonPreemptingPriority` | `false` | Alpha | 1.15 | 1.18 | -| `NonPreemptingPriority` | `true` | Beta | 1.19 | 1.23 | -| `NonPreemptingPriority` | `true` | GA | 1.24 | - | -| `PodAffinityNamespaceSelector` | `false` | Alpha | 1.21 | 1.21 | -| `PodAffinityNamespaceSelector` | `true` | Beta | 1.22 | 1.23 | -| `PodAffinityNamespaceSelector` | `true` | GA | 1.24 | - | | `PodSecurity` | `false` | Alpha | 1.22 | 1.22 | | `PodSecurity` | `true` | Beta | 1.23 | 1.24 | | `PodSecurity` | `true` | GA | 1.25 | | -| `PreferNominatedNode` | `false` | Alpha | 1.21 | 1.21 | -| `PreferNominatedNode` | `true` | Beta | 1.22 | 1.23 | -| `PreferNominatedNode` | `true` | GA | 1.24 | - | | `RemoveSelfLink` | `false` | Alpha | 1.16 | 1.19 | | `RemoveSelfLink` | `true` | Beta | 1.20 | 1.23 | | `RemoveSelfLink` | `true` | GA | 1.24 | - | | `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 | | `ServerSideApply` | `true` | Beta | 1.16 | 1.21 | | `ServerSideApply` | `true` | GA | 1.22 | - | -| `ServiceInternalTrafficPolicy` | `false` | Alpha | 1.21 | 1.21 | -| `ServiceInternalTrafficPolicy` | `true` | Beta | 1.22 | 1.25 | -| `ServiceInternalTrafficPolicy` | `true` | GA | 1.26 | - | | `ServiceIPStaticSubrange` | `false` | Alpha | 1.24 | 1.24 | | `ServiceIPStaticSubrange` | `true` | Beta | 1.25 | 1.25 | | `ServiceIPStaticSubrange` | `true` | GA | 1.26 | - | -| `ServiceLBNodePortControl` | `false` | Alpha | 1.20 | 1.21 | -| `ServiceLBNodePortControl` | `true` | Beta | 1.22 | 1.23 | -| `ServiceLBNodePortControl` | `true` | GA | 1.24 | - | -| `ServiceLoadBalancerClass` | `false` | Alpha | 1.21 | 1.21 | -| `ServiceLoadBalancerClass` | `true` | Beta | 1.22 | 1.23 | -| `ServiceLoadBalancerClass` | `true` | GA | 1.24 | - | +| `ServiceInternalTrafficPolicy` | `false` | Alpha | 1.21 | 
1.21 | +| `ServiceInternalTrafficPolicy` | `true` | Beta | 1.22 | 1.25 | +| `ServiceInternalTrafficPolicy` | `true` | GA | 1.26 | - | | `StatefulSetMinReadySeconds` | `false` | Alpha | 1.22 | 1.22 | | `StatefulSetMinReadySeconds` | `true` | Beta | 1.23 | 1.24 | | `StatefulSetMinReadySeconds` | `true` | GA | 1.25 | - | -| `SuspendJob` | `false` | Alpha | 1.21 | 1.21 | -| `SuspendJob` | `true` | Beta | 1.22 | 1.23 | -| `SuspendJob` | `true` | GA | 1.24 | - | | `WatchBookmark` | `false` | Alpha | 1.15 | 1.15 | | `WatchBookmark` | `true` | Beta | 1.16 | 1.16 | | `WatchBookmark` | `true` | GA | 1.17 | - | @@ -404,16 +379,16 @@ Each feature gate is designed for enabling/disabling a specific feature: - `APIPriorityAndFairness`: Enable managing request concurrency with prioritization and fairness at each server. (Renamed from `RequestManagement`) - `APIResponseCompression`: Compress the API responses for `LIST` or `GET` requests. -- `APIServerIdentity`: Assign each API server an ID in a cluster. -- `APIServerTracing`: Add support for distributed tracing in the API server. - See [Traces for Kubernetes System Components](/docs/concepts/cluster-administration/system-traces) for more details. -- `APISelfSubjectAttributesReview`: Activate the `SelfSubjectReview` API which allows users +- `APISelfSubjectReview`: Activate the `SelfSubjectReview` API which allows users to see the requesting subject's authentication information. See [API access to authentication information for a client](/docs/reference/access-authn-authz/authentication/#self-subject-review) for more details. +- `APIServerIdentity`: Assign each API server an ID in a cluster. +- `APIServerTracing`: Add support for distributed tracing in the API server. + See [Traces for Kubernetes System Components](/docs/concepts/cluster-administration/system-traces) for more details. - `AdvancedAuditing`: Enable [advanced auditing](/docs/tasks/debug/debug-cluster/audit/#advanced-audit) -- `AllowInsecureBackendProxy`: Enable the users to skip TLS verification of - kubelets on Pod log requests. +- `AggregatedDiscoveryEndpoint`: Enable a single HTTP endpoint `/discovery/` which + supports native HTTP caching with ETags containing all APIResources known to the API server. - `AnyVolumeDataSource`: Enable use of any custom resource as the `DataSource` of a {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}. - `AppArmor`: Enable use of AppArmor mandatory access control for Pods running on Linux nodes. @@ -437,9 +412,6 @@ Each feature gate is designed for enabling/disabling a specific feature: This feature gate guards *a group* of CPUManager options whose quality level is beta. This feature gate will never graduate to stable. - `CPUManagerPolicyOptions`: Allow fine-tuning of CPUManager policies. -- `CrossNamespaceVolumeDataSource`: Enable the usage of cross namespace volume data source - to allow you to specify a source namespace in the `dataSourceRef` field of a - PersistentVolumeClaim. - `CSIInlineVolume`: Enable CSI Inline volumes support for pods. - `CSIMigration`: Enables shims and translation logic to route volume operations from in-tree plugins to corresponding pre-installed CSI plugins @@ -470,14 +442,7 @@ Each feature gate is designed for enabling/disabling a specific feature: Does not support falling back for provision operations, for those the CSI plugin must be installed and configured. Requires CSIMigration feature flag enabled. 
-- `CSIMigrationOpenStack`: Enables shims and translation logic to route volume - operations from the Cinder in-tree plugin to Cinder CSI plugin. Supports - falling back to in-tree Cinder plugin for mount operations to nodes that have - the feature disabled or that do not have Cinder CSI plugin installed and - configured. Does not support falling back for provision operations, for those - the CSI plugin must be installed and configured. Requires CSIMigration - feature flag enabled. -- `csiMigrationRBD`: Enables shims and translation logic to route volume +- `CSIMigrationRBD`: Enables shims and translation logic to route volume operations from the RBD in-tree plugin to Ceph RBD CSI plugin. Requires CSIMigration and csiMigrationRBD feature flags enabled and Ceph CSI plugin installed and configured in the cluster. This flag has been deprecated in @@ -500,11 +465,19 @@ Each feature gate is designed for enabling/disabling a specific feature: [Storage Capacity](/docs/concepts/storage/storage-capacity/). Check the [`csi` volume type](/docs/concepts/storage/volumes/#csi) documentation for more details. - `CSIVolumeHealth`: Enable support for CSI volume health monitoring on node. +- `ComponentSLIs`: Enable the `/metrics/slis` endpoint on Kubernetes components like + kubelet, kube-scheduler, kube-proxy, kube-controller-manager, cloud-controller-manager + allowing you to scrape health check metrics. +- `ConsistentHTTPGetHandlers`: Normalize HTTP get URL and Header passing for lifecycle + handlers with probers. - `ContextualLogging`: When you enable this feature gate, Kubernetes components that support contextual logging add extra detail to log output. - `ControllerManagerLeaderMigration`: Enables leader migration for `kube-controller-manager` and `cloud-controller-manager`. - `CronJobTimeZone`: Allow the use of the `timeZone` optional field in [CronJobs](/docs/concepts/workloads/controllers/cron-jobs/) +- `CrossNamespaceVolumeDataSource`: Enable the usage of cross namespace volume data source + to allow you to specify a source namespace in the `dataSourceRef` field of a + PersistentVolumeClaim. - `CustomCPUCFSQuotaPeriod`: Enable nodes to change `cpuCFSQuotaPeriod` in [kubelet config](/docs/tasks/administer-cluster/kubelet-config-file/). - `CustomResourceValidationExpressions`: Enable expression language validation in CRD @@ -513,8 +486,6 @@ Each feature gate is designed for enabling/disabling a specific feature: - `DaemonSetUpdateSurge`: Enables the DaemonSet workloads to maintain availability during update per node. See [Perform a Rolling Update on a DaemonSet](/docs/tasks/manage-daemon/update-daemon-set/). -- `DefaultPodTopologySpread`: Enables the use of `PodTopologySpread` scheduling plugin to do - [default spreading](/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints). - `DelegateFSGroupToCSIDriver`: If supported by the CSI driver, delegates the role of applying `fsGroup` from a Pod's `securityContext` to the driver by passing `fsGroup` through the NodeStageVolume and NodePublishVolume CSI calls. @@ -531,9 +502,8 @@ Each feature gate is designed for enabling/disabling a specific feature: [downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information). - `DryRun`: Enable server-side [dry run](/docs/reference/using-api/api-concepts/#dry-run) requests so that validation, merging, and mutation can be tested without committing. -- `DynamicKubeletConfig`: Enable the dynamic configuration of kubelet. 
The - feature is no longer supported outside of supported skew policy. The feature - gate was removed from kubelet in 1.24. See [Reconfigure kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/). +- `DynamicResourceAllocation": Enables support for resources with custom parameters and a lifecycle + that is independent of a Pod. - `EndpointSliceTerminatingCondition`: Enables EndpointSlice `terminating` and `serving` condition fields. - `EfficientWatchResumption`: Allows for storage-originated bookmark (progress @@ -584,13 +554,11 @@ Each feature gate is designed for enabling/disabling a specific feature: metrics from individual containers in target pods. - `HPAScaleToZero`: Enables setting `minReplicas` to 0 for `HorizontalPodAutoscaler` resources when using custom or external metrics. -- `IPTablesOwnershipCleanup`: This causes kubelet to no longer create legacy IPTables rules. +- `IPTablesOwnershipCleanup`: This causes kubelet to no longer create legacy iptables rules. - `IdentifyPodOS`: Allows the Pod OS field to be specified. This helps in identifying the OS of the pod authoritatively during the API server admission time. In Kubernetes {{< skew currentVersion >}}, the allowed values for the `pod.spec.os.name` are `windows` and `linux`. -- `IndexedJob`: Allows the [Job](/docs/concepts/workloads/controllers/job/) - controller to manage Pod completions per completion index. - `InTreePluginAWSUnregister`: Stops registering the aws-ebs in-tree plugin in kubelet and volume controllers. - `InTreePluginAzureDiskUnregister`: Stops registering the azuredisk in-tree plugin in kubelet @@ -655,6 +623,8 @@ Each feature gate is designed for enabling/disabling a specific feature: filesystem walk for better performance and accuracy. - `LogarithmicScaleDown`: Enable semi-random selection of pods to evict on controller scaledown based on logarithmic bucketing of pod timestamps. +- `LoggingAlphaOptions`: Allow fine-tuing of experimental, alpha-quality logging options. +- `LoggingBetaOptions`: Allow fine-tuing of experimental, beta-quality logging options. - `MatchLabelKeysInPodTopologySpread`: Enable the `matchLabelKeys` field for [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/). - `MaxUnavailableStatefulSet`: Enables setting the `maxUnavailable` field for the @@ -667,6 +637,8 @@ Each feature gate is designed for enabling/disabling a specific feature: cgroup v2 memory controller. - `MinDomainsInPodTopologySpread`: Enable `minDomains` in [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/). +- `MinimizeIPTablesRestore`: Enables new performance improvement logics + in the kube-proxy iptables mode. - `MixedProtocolLBService`: Enable using different protocols in the same `LoadBalancer` type Service instance. - `MultiCIDRRangeAllocator`: Enables the MultiCIDR range allocator. @@ -683,7 +655,6 @@ Each feature gate is designed for enabling/disabling a specific feature: - `NodeSwap`: Enable the kubelet to allocate swap memory for Kubernetes workloads on a node. Must be used with `KubeletConfiguration.failSwapOn` set to false. For more details, please see [swap memory](/docs/concepts/architecture/nodes/#swap-memory) -- `NonPreemptingPriority`: Enable `preemptionPolicy` field for PriorityClass and Pod. - `OpenAPIEnums`: Enables populating "enum" fields of OpenAPI schemas in the spec returned from the API server. - `OpenAPIV3`: Enables the API server to publish OpenAPI v3. 
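As an illustration of the `MixedProtocolLBService` behaviour described in this list, here is a minimal sketch of a `LoadBalancer` Service exposing the same port over two protocols; the name, selector, and port are illustrative, and actual support depends on the cloud provider:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mixed-protocol-lb   # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: dns-example        # illustrative selector
  ports:
    - name: dns-tcp
      protocol: TCP
      port: 53
    - name: dns-udp
      protocol: UDP
      port: 53
```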
@@ -692,19 +663,12 @@ Each feature gate is designed for enabling/disabling a specific feature: for more details. - `PodDeletionCost`: Enable the [Pod Deletion Cost](/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost) feature which allows users to influence ReplicaSet downscaling order. -- `PodAffinityNamespaceSelector`: Enable the - [Pod Affinity Namespace Selector](/docs/concepts/scheduling-eviction/assign-pod-node/#namespace-selector) - and [CrossNamespacePodAffinity](/docs/concepts/policy/resource-quotas/#cross-namespace-pod-affinity-quota) - quota scope features. - `PodAndContainerStatsFromCRI`: Configure the kubelet to gather container and pod stats from the CRI container runtime rather than gathering them from cAdvisor. As of 1.26, this also includes gathering metrics from CRI and emitting them over `/metrics/cadvisor` (rather than having cAdvisor emit them directly). - `PodDisruptionConditions`: Enables support for appending a dedicated pod condition indicating that the pod is being deleted due to a disruption. - `PodHasNetworkCondition`: Enable the kubelet to mark the [PodHasNetwork](/docs/concepts/workloads/pods/pod-lifecycle/#pod-has-network) condition on pods. - `PodSchedulingReadiness`: Enable setting `schedulingGates` field to control a Pod's [scheduling readiness](/docs/concepts/scheduling-eviction/pod-scheduling-readiness). - `PodSecurity`: Enables the `PodSecurity` admission plugin. -- `PreferNominatedNode`: This flag tells the scheduler whether the nominated - nodes will be checked first before looping through all the other nodes in - the cluster. - `ProbeTerminationGracePeriod`: Enable [setting probe-level `terminationGracePeriodSeconds`](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#probe-level-terminationgraceperiodseconds) on pods. See the [enhancement proposal](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2238-liveness-probe-grace-period) @@ -733,25 +697,18 @@ Each feature gate is designed for enabling/disabling a specific feature: - `RotateKubeletServerCertificate`: Enable the rotation of the server TLS certificate on the kubelet. See [kubelet configuration](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#kubelet-configuration) for more details. -- `SELinuxMountReadWriteOncePod`: Speed up container startup by mounting volumes with the correct - SELinux label instead of changing each file on the volumes recursively. The initial implementation - focused on ReadWriteOncePod volumes. +- `SELinuxMountReadWriteOncePod`: Speeds up container startup by allowing kubelet to mount volumes + for a Pod directly with the correct SELinux label instead of changing each file on the volumes + recursively. The initial implementation focused on ReadWriteOncePod volumes. - `SeccompDefault`: Enables the use of `RuntimeDefault` as the default seccomp profile for all workloads. The seccomp profile is specified in the `securityContext` of a Pod and/or a Container. -- `SELinuxMountReadWriteOncePod`: Allows kubelet to mount volumes for a Pod directly with the - right SELinux label instead of applying the SELinux label recursively on every file on the - volume. - `ServerSideApply`: Enables the [Sever Side Apply (SSA)](/docs/reference/using-api/server-side-apply/) feature on the API Server. - `ServerSideFieldValidation`: Enables server-side field validation. 
This means the validation of resource schema is performed at the API server side rather than the client side (for example, the `kubectl create` or `kubectl apply` command line). - `ServiceInternalTrafficPolicy`: Enables the `internalTrafficPolicy` field on Services -- `ServiceLBNodePortControl`: Enables the `allocateLoadBalancerNodePorts` field on Services. -- `ServiceLoadBalancerClass`: Enables the `loadBalancerClass` field on Services. See - [Specifying class of load balancer implementation](/docs/concepts/services-networking/service/#load-balancer-class) - for more details. - `ServiceIPStaticSubrange`: Enables a strategy for Services ClusterIP allocations, whereby the ClusterIP range is subdivided. Dynamic allocated ClusterIP addresses will be allocated preferently from the upper range allowing users to assign static ClusterIPs from the lower range with a low @@ -770,8 +727,6 @@ Each feature gate is designed for enabling/disabling a specific feature: [storage version API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#storageversion-v1alpha1-internal-apiserver-k8s-io). - `StorageVersionHash`: Allow API servers to expose the storage version hash in the discovery. -- `SuspendJob`: Enable support to suspend and resume Jobs. For more details, see - [the Jobs docs](/docs/concepts/workloads/controllers/job/). - `TopologyAwareHints`: Enables topology aware routing based on topology hints in EndpointSlices. See [Topology Aware Hints](/docs/concepts/services-networking/topology-aware-hints/) for more @@ -785,7 +740,7 @@ Each feature gate is designed for enabling/disabling a specific feature: This feature gate will never graduate to beta or stable. - `TopologyManagerPolicyBetaOptions`: Allow fine-tuning of topology manager policies, experimental, Beta-quality options. - This feature gate guards *a group* of topology manager options whose quality level is alpha. + This feature gate guards *a group* of topology manager options whose quality level is beta. This feature gate will never graduate to stable. - `TopologyManagerPolicyOptions`: Allow fine-tuning of topology manager policies, - `UserNamespacesStatelessPodsSupport`: Enable user namespace support for stateless Pods. @@ -795,6 +750,7 @@ Each feature gate is designed for enabling/disabling a specific feature: - `WatchBookmark`: Enable support for watch bookmark events. - `WinDSR`: Allows kube-proxy to create DSR loadbalancers for Windows. - `WinOverlay`: Allows kube-proxy to run in overlay mode for Windows. +- `WindowsHostNetwork`: Enables support for joining Windows containers to a hosts' network namespace. - `WindowsHostProcessContainers`: Enables support for Windows HostProcess containers. diff --git a/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md b/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md index cff6120589904..af3687662e61c 100644 --- a/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md +++ b/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md @@ -53,6 +53,13 @@ kube-apiserver [flags]

The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.

+ +--aggregator-reject-forwarding-redirect     Default: true + + +

When true, the aggregation layer rejects redirect responses from aggregated API servers instead of forwarding them back to the client.
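For illustration only, a minimal sketch of this flag on a kube-apiserver command line; every other flag a real API server needs (etcd, certificates, and so on) is omitted:

```shell
# Minimal sketch: keep the default of rejecting redirect responses from
# aggregated API servers rather than forwarding them to clients.
# All other required kube-apiserver flags are omitted for brevity.
kube-apiserver \
  --aggregator-reject-forwarding-redirect=true
```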

+ + --allow-metric-labels stringToString     Default: [] @@ -449,7 +456,7 @@ kube-apiserver [flags] --disable-admission-plugins strings -

admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.

+

admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.

@@ -470,7 +477,7 @@ kube-apiserver [flags] --enable-admission-plugins strings -

admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.

+

admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
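As an illustrative sketch (the plugin names are taken from the lists above; the rest of the command line is omitted), the two admission-plugin flags can be combined like this:

```shell
# Sketch: enable NodeRestriction on top of the default plugin set and
# switch off one default-enabled plugin. Other required flags omitted.
kube-apiserver \
  --enable-admission-plugins=NodeRestriction \
  --disable-admission-plugins=DefaultTolerationSeconds
```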

@@ -508,6 +515,13 @@ kube-apiserver [flags]

The file containing configuration for encryption providers to be used for storing secrets in etcd

+ +--encryption-provider-config-automatic-reload + + +

Determines if the file set by --encryption-provider-config should be automatically reloaded if the disk contents change. Setting this to true disables the ability to uniquely identify distinct KMS plugins via the API server healthz endpoints.
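A minimal sketch of how the two encryption flags fit together, assuming a placeholder file path and key (the 32-byte AES key must be supplied base64-encoded; all other required kube-apiserver flags are omitted):

```shell
# Sketch: write a minimal EncryptionConfiguration and point the API
# server at it, reloading the file automatically when it changes on disk.
cat <<'EOF' > /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64-ENCODED-32-BYTE-KEY>   # placeholder
      - identity: {}
EOF

kube-apiserver \
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \
  --encryption-provider-config-automatic-reload=true
```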

+ + --endpoint-reconciler-type string     Default: "lease" @@ -610,7 +624,7 @@ kube-apiserver [flags] --feature-gates <comma-separated 'key=True|False' pairs> -

A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
APIServerIdentity=true|false (ALPHA - default=false)
APIServerTracing=true|false (ALPHA - default=false)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AnyVolumeDataSource=true|false (BETA - default=true)
AppArmor=true|false (BETA - default=true)
CPUManager=true|false (BETA - default=true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
CPUManagerPolicyOptions=true|false (BETA - default=true)
CSIMigrationAzureFile=true|false (BETA - default=true)
CSIMigrationPortworx=true|false (BETA - default=false)
CSIMigrationRBD=true|false (ALPHA - default=false)
CSIMigrationvSphere=true|false (BETA - default=true)
CSINodeExpandSecret=true|false (ALPHA - default=false)
CSIVolumeHealth=true|false (ALPHA - default=false)
ContainerCheckpoint=true|false (ALPHA - default=false)
ContextualLogging=true|false (ALPHA - default=false)
CronJobTimeZone=true|false (BETA - default=true)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
CustomResourceValidationExpressions=true|false (BETA - default=true)
DelegateFSGroupToCSIDriver=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DisableCloudProviders=true|false (ALPHA - default=false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
DownwardAPIHugePages=true|false (BETA - default=true)
EndpointSliceTerminatingCondition=true|false (BETA - default=true)
ExpandedDNSConfig=true|false (ALPHA - default=false)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GRPCContainerProbe=true|false (BETA - default=true)
GracefulNodeShutdown=true|false (BETA - default=true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
HPAContainerMetrics=true|false (ALPHA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HonorPVReclaimPolicy=true|false (ALPHA - default=false)
IPTablesOwnershipCleanup=true|false (ALPHA - default=false)
InTreePluginAWSUnregister=true|false (ALPHA - default=false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
InTreePluginGCEUnregister=true|false (ALPHA - default=false)
InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
InTreePluginRBDUnregister=true|false (ALPHA - default=false)
InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
JobMutableNodeSchedulingDirectives=true|false (BETA - default=true)
JobPodFailurePolicy=true|false (ALPHA - default=false)
JobReadyPods=true|false (BETA - default=true)
JobTrackingWithFinalizers=true|false (BETA - default=true)
KMSv2=true|false (ALPHA - default=false)
KubeletCredentialProviders=true|false (BETA - default=true)
KubeletInUserNamespace=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
KubeletTracing=true|false (ALPHA - default=false)
LegacyServiceAccountTokenNoAutoGeneration=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (BETA - default=true)
LogarithmicScaleDown=true|false (BETA - default=true)
LoggingAlphaOptions=true|false (ALPHA - default=false)
LoggingBetaOptions=true|false (BETA - default=true)
MatchLabelKeysInPodTopologySpread=true|false (ALPHA - default=false)
MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
MemoryManager=true|false (BETA - default=true)
MemoryQoS=true|false (ALPHA - default=false)
MinDomainsInPodTopologySpread=true|false (BETA - default=false)
MixedProtocolLBService=true|false (BETA - default=true)
MultiCIDRRangeAllocator=true|false (ALPHA - default=false)
NetworkPolicyStatus=true|false (ALPHA - default=false)
NodeInclusionPolicyInPodTopologySpread=true|false (ALPHA - default=false)
NodeOutOfServiceVolumeDetach=true|false (ALPHA - default=false)
NodeSwap=true|false (ALPHA - default=false)
OpenAPIEnums=true|false (BETA - default=true)
OpenAPIV3=true|false (BETA - default=true)
PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
PodDeletionCost=true|false (BETA - default=true)
PodDisruptionConditions=true|false (ALPHA - default=false)
PodHasNetworkCondition=true|false (ALPHA - default=false)
ProbeTerminationGracePeriod=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
ProxyTerminatingEndpoints=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
ReadWriteOncePod=true|false (ALPHA - default=false)
RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RetroactiveDefaultStorageClass=true|false (ALPHA - default=false)
RotateKubeletServerCertificate=true|false (BETA - default=true)
SELinuxMountReadWriteOncePod=true|false (ALPHA - default=false)
SeccompDefault=true|false (BETA - default=true)
ServerSideFieldValidation=true|false (BETA - default=true)
ServiceIPStaticSubrange=true|false (BETA - default=true)
ServiceInternalTrafficPolicy=true|false (BETA - default=true)
SizeMemoryBackedVolumes=true|false (BETA - default=true)
StatefulSetAutoDeletePVC=true|false (ALPHA - default=false)
StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
TopologyAwareHints=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
UserNamespacesStatelessPodsSupport=true|false (ALPHA - default=false)
VolumeCapacityPriority=true|false (ALPHA - default=false)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (BETA - default=true)
WindowsHostProcessContainers=true|false (BETA - default=true)

+

A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
APISelfSubjectReview=true|false (ALPHA - default=false)
APIServerIdentity=true|false (BETA - default=true)
APIServerTracing=true|false (ALPHA - default=false)
AggregatedDiscoveryEndpoint=true|false (ALPHA - default=false)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AnyVolumeDataSource=true|false (BETA - default=true)
AppArmor=true|false (BETA - default=true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
CPUManagerPolicyOptions=true|false (BETA - default=true)
CSIMigrationPortworx=true|false (BETA - default=false)
CSIMigrationRBD=true|false (ALPHA - default=false)
CSINodeExpandSecret=true|false (ALPHA - default=false)
CSIVolumeHealth=true|false (ALPHA - default=false)
ComponentSLIs=true|false (ALPHA - default=false)
ContainerCheckpoint=true|false (ALPHA - default=false)
ContextualLogging=true|false (ALPHA - default=false)
CronJobTimeZone=true|false (BETA - default=true)
CrossNamespaceVolumeDataSource=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
CustomResourceValidationExpressions=true|false (BETA - default=true)
DisableCloudProviders=true|false (ALPHA - default=false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
DownwardAPIHugePages=true|false (BETA - default=true)
DynamicResourceAllocation=true|false (ALPHA - default=false)
EventedPLEG=true|false (ALPHA - default=false)
ExpandedDNSConfig=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GRPCContainerProbe=true|false (BETA - default=true)
GracefulNodeShutdown=true|false (BETA - default=true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
HPAContainerMetrics=true|false (ALPHA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HonorPVReclaimPolicy=true|false (ALPHA - default=false)
IPTablesOwnershipCleanup=true|false (ALPHA - default=false)
InTreePluginAWSUnregister=true|false (ALPHA - default=false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
InTreePluginGCEUnregister=true|false (ALPHA - default=false)
InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
InTreePluginRBDUnregister=true|false (ALPHA - default=false)
InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
JobMutableNodeSchedulingDirectives=true|false (BETA - default=true)
JobPodFailurePolicy=true|false (BETA - default=true)
JobReadyPods=true|false (BETA - default=true)
KMSv2=true|false (ALPHA - default=false)
KubeletInUserNamespace=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
KubeletTracing=true|false (ALPHA - default=false)
LegacyServiceAccountTokenTracking=true|false (ALPHA - default=false)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
LogarithmicScaleDown=true|false (BETA - default=true)
LoggingAlphaOptions=true|false (ALPHA - default=false)
LoggingBetaOptions=true|false (BETA - default=true)
MatchLabelKeysInPodTopologySpread=true|false (ALPHA - default=false)
MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
MemoryManager=true|false (BETA - default=true)
MemoryQoS=true|false (ALPHA - default=false)
MinDomainsInPodTopologySpread=true|false (BETA - default=false)
MinimizeIPTablesRestore=true|false (ALPHA - default=false)
MultiCIDRRangeAllocator=true|false (ALPHA - default=false)
NetworkPolicyStatus=true|false (ALPHA - default=false)
NodeInclusionPolicyInPodTopologySpread=true|false (BETA - default=true)
NodeOutOfServiceVolumeDetach=true|false (BETA - default=true)
NodeSwap=true|false (ALPHA - default=false)
OpenAPIEnums=true|false (BETA - default=true)
OpenAPIV3=true|false (BETA - default=true)
PDBUnhealthyPodEvictionPolicy=true|false (ALPHA - default=false)
PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
PodDeletionCost=true|false (BETA - default=true)
PodDisruptionConditions=true|false (BETA - default=true)
PodHasNetworkCondition=true|false (ALPHA - default=false)
PodSchedulingReadiness=true|false (ALPHA - default=false)
ProbeTerminationGracePeriod=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
ProxyTerminatingEndpoints=true|false (BETA - default=true)
QOSReserved=true|false (ALPHA - default=false)
ReadWriteOncePod=true|false (ALPHA - default=false)
RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RetroactiveDefaultStorageClass=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
SELinuxMountReadWriteOncePod=true|false (ALPHA - default=false)
SeccompDefault=true|false (BETA - default=true)
ServerSideFieldValidation=true|false (BETA - default=true)
SizeMemoryBackedVolumes=true|false (BETA - default=true)
StatefulSetAutoDeletePVC=true|false (ALPHA - default=false)
StatefulSetStartOrdinal=true|false (ALPHA - default=false)
StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
TopologyAwareHints=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
TopologyManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
TopologyManagerPolicyBetaOptions=true|false (BETA - default=false)
TopologyManagerPolicyOptions=true|false (ALPHA - default=false)
UserNamespacesStatelessPodsSupport=true|false (ALPHA - default=false)
ValidatingAdmissionPolicy=true|false (ALPHA - default=false)
VolumeCapacityPriority=true|false (ALPHA - default=false)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (BETA - default=true)
WindowsHostNetwork=true|false (ALPHA - default=true)
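The flag takes the comma-separated key=value syntax sketched below; the two gate names are simply picked from the list above, and the same syntax applies to the `--feature-gates` flag of the other components in this reference:

```shell
# Sketch: turn on two alpha gates from the list above.
# The rest of the kube-apiserver command line is omitted.
kube-apiserver \
  --feature-gates=PodSchedulingReadiness=true,ValidatingAdmissionPolicy=true
```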

@@ -634,20 +648,6 @@ kube-apiserver [flags]

The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.

- ---identity-lease-duration-seconds int     Default: 3600 - - -

The duration of kube-apiserver lease in seconds, must be a positive number. (In use when the APIServerIdentity feature gate is enabled.)

- - - ---identity-lease-renew-interval-seconds int     Default: 10 - - -

The interval of kube-apiserver renewing its lease in seconds, must be a positive number. (In use when the APIServerIdentity feature gate is enabled.)

- - --kubelet-certificate-authority string @@ -715,14 +715,7 @@ kube-apiserver [flags] --logging-format string     Default: "text" -

Sets the log format. Permitted formats: "text".
Non-default formats don't honor these flags: --add-dir-header, --alsologtostderr, --log-backtrace-at, --log-dir, --log-file, --log-file-max-size, --logtostderr, --one-output, --skip-headers, --skip-log-headers, --stderrthreshold, --vmodule.
Non-default choices are currently alpha and subject to change without warning.

- - - ---master-service-namespace string     Default: "default" - - -

DEPRECATED: the namespace from which the Kubernetes master services should be injected into pods.

+

Sets the log format. Permitted formats: "text".

@@ -1002,7 +995,7 @@ kube-apiserver [flags] --storage-media-type string     Default: "application/vnd.kubernetes.protobuf" -

The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting.

+

The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. Supported media types: [application/json, application/yaml, application/vnd.kubernetes.protobuf]

diff --git a/content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md b/content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md index c28e641e1f742..0d448987d0638 100644 --- a/content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md +++ b/content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md @@ -288,6 +288,13 @@ kube-controller-manager [flags]

The number of garbage collector workers that are allowed to sync concurrently.

+ +--concurrent-horizontal-pod-autoscaler-syncs int32     Default: 5 + + +

The number of horizontal pod autoscaler objects that are allowed to sync concurrently. A larger number means more responsive horizontal pod autoscaler processing, but also more CPU (and network) load.
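A sketch of raising the worker count above its default of 5; the exact value is an arbitrary example, not a recommendation:

```shell
# Sketch: double the HPA sync workers, trading extra CPU and network
# load for more responsive HorizontalPodAutoscaler processing.
kube-controller-manager \
  --concurrent-horizontal-pod-autoscaler-syncs=10
```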

+ + --concurrent-namespace-syncs int32     Default: 10 @@ -446,7 +453,7 @@ kube-controller-manager [flags] --feature-gates <comma-separated 'key=True|False' pairs> -

A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
APIServerIdentity=true|false (ALPHA - default=false)
APIServerTracing=true|false (ALPHA - default=false)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AnyVolumeDataSource=true|false (BETA - default=true)
AppArmor=true|false (BETA - default=true)
CPUManager=true|false (BETA - default=true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
CPUManagerPolicyOptions=true|false (BETA - default=true)
CSIMigrationAzureFile=true|false (BETA - default=true)
CSIMigrationPortworx=true|false (BETA - default=false)
CSIMigrationRBD=true|false (ALPHA - default=false)
CSIMigrationvSphere=true|false (BETA - default=true)
CSINodeExpandSecret=true|false (ALPHA - default=false)
CSIVolumeHealth=true|false (ALPHA - default=false)
ContainerCheckpoint=true|false (ALPHA - default=false)
ContextualLogging=true|false (ALPHA - default=false)
CronJobTimeZone=true|false (BETA - default=true)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
CustomResourceValidationExpressions=true|false (BETA - default=true)
DelegateFSGroupToCSIDriver=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DisableCloudProviders=true|false (ALPHA - default=false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
DownwardAPIHugePages=true|false (BETA - default=true)
EndpointSliceTerminatingCondition=true|false (BETA - default=true)
ExpandedDNSConfig=true|false (ALPHA - default=false)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GRPCContainerProbe=true|false (BETA - default=true)
GracefulNodeShutdown=true|false (BETA - default=true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
HPAContainerMetrics=true|false (ALPHA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HonorPVReclaimPolicy=true|false (ALPHA - default=false)
IPTablesOwnershipCleanup=true|false (ALPHA - default=false)
InTreePluginAWSUnregister=true|false (ALPHA - default=false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
InTreePluginGCEUnregister=true|false (ALPHA - default=false)
InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
InTreePluginRBDUnregister=true|false (ALPHA - default=false)
InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
JobMutableNodeSchedulingDirectives=true|false (BETA - default=true)
JobPodFailurePolicy=true|false (ALPHA - default=false)
JobReadyPods=true|false (BETA - default=true)
JobTrackingWithFinalizers=true|false (BETA - default=true)
KMSv2=true|false (ALPHA - default=false)
KubeletCredentialProviders=true|false (BETA - default=true)
KubeletInUserNamespace=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
KubeletTracing=true|false (ALPHA - default=false)
LegacyServiceAccountTokenNoAutoGeneration=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (BETA - default=true)
LogarithmicScaleDown=true|false (BETA - default=true)
LoggingAlphaOptions=true|false (ALPHA - default=false)
LoggingBetaOptions=true|false (BETA - default=true)
MatchLabelKeysInPodTopologySpread=true|false (ALPHA - default=false)
MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
MemoryManager=true|false (BETA - default=true)
MemoryQoS=true|false (ALPHA - default=false)
MinDomainsInPodTopologySpread=true|false (BETA - default=false)
MixedProtocolLBService=true|false (BETA - default=true)
MultiCIDRRangeAllocator=true|false (ALPHA - default=false)
NetworkPolicyStatus=true|false (ALPHA - default=false)
NodeInclusionPolicyInPodTopologySpread=true|false (ALPHA - default=false)
NodeOutOfServiceVolumeDetach=true|false (ALPHA - default=false)
NodeSwap=true|false (ALPHA - default=false)
OpenAPIEnums=true|false (BETA - default=true)
OpenAPIV3=true|false (BETA - default=true)
PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
PodDeletionCost=true|false (BETA - default=true)
PodDisruptionConditions=true|false (ALPHA - default=false)
PodHasNetworkCondition=true|false (ALPHA - default=false)
ProbeTerminationGracePeriod=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
ProxyTerminatingEndpoints=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
ReadWriteOncePod=true|false (ALPHA - default=false)
RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RetroactiveDefaultStorageClass=true|false (ALPHA - default=false)
RotateKubeletServerCertificate=true|false (BETA - default=true)
SELinuxMountReadWriteOncePod=true|false (ALPHA - default=false)
SeccompDefault=true|false (BETA - default=true)
ServerSideFieldValidation=true|false (BETA - default=true)
ServiceIPStaticSubrange=true|false (BETA - default=true)
ServiceInternalTrafficPolicy=true|false (BETA - default=true)
SizeMemoryBackedVolumes=true|false (BETA - default=true)
StatefulSetAutoDeletePVC=true|false (ALPHA - default=false)
StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
TopologyAwareHints=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
UserNamespacesStatelessPodsSupport=true|false (ALPHA - default=false)
VolumeCapacityPriority=true|false (ALPHA - default=false)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (BETA - default=true)
WindowsHostProcessContainers=true|false (BETA - default=true)

+

A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
APISelfSubjectReview=true|false (ALPHA - default=false)
APIServerIdentity=true|false (BETA - default=true)
APIServerTracing=true|false (ALPHA - default=false)
AggregatedDiscoveryEndpoint=true|false (ALPHA - default=false)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AnyVolumeDataSource=true|false (BETA - default=true)
AppArmor=true|false (BETA - default=true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
CPUManagerPolicyOptions=true|false (BETA - default=true)
CSIMigrationPortworx=true|false (BETA - default=false)
CSIMigrationRBD=true|false (ALPHA - default=false)
CSINodeExpandSecret=true|false (ALPHA - default=false)
CSIVolumeHealth=true|false (ALPHA - default=false)
ComponentSLIs=true|false (ALPHA - default=false)
ContainerCheckpoint=true|false (ALPHA - default=false)
ContextualLogging=true|false (ALPHA - default=false)
CronJobTimeZone=true|false (BETA - default=true)
CrossNamespaceVolumeDataSource=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
CustomResourceValidationExpressions=true|false (BETA - default=true)
DisableCloudProviders=true|false (ALPHA - default=false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
DownwardAPIHugePages=true|false (BETA - default=true)
DynamicResourceAllocation=true|false (ALPHA - default=false)
EventedPLEG=true|false (ALPHA - default=false)
ExpandedDNSConfig=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GRPCContainerProbe=true|false (BETA - default=true)
GracefulNodeShutdown=true|false (BETA - default=true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
HPAContainerMetrics=true|false (ALPHA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HonorPVReclaimPolicy=true|false (ALPHA - default=false)
IPTablesOwnershipCleanup=true|false (ALPHA - default=false)
InTreePluginAWSUnregister=true|false (ALPHA - default=false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
InTreePluginGCEUnregister=true|false (ALPHA - default=false)
InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
InTreePluginRBDUnregister=true|false (ALPHA - default=false)
InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
JobMutableNodeSchedulingDirectives=true|false (BETA - default=true)
JobPodFailurePolicy=true|false (BETA - default=true)
JobReadyPods=true|false (BETA - default=true)
KMSv2=true|false (ALPHA - default=false)
KubeletInUserNamespace=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
KubeletTracing=true|false (ALPHA - default=false)
LegacyServiceAccountTokenTracking=true|false (ALPHA - default=false)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
LogarithmicScaleDown=true|false (BETA - default=true)
LoggingAlphaOptions=true|false (ALPHA - default=false)
LoggingBetaOptions=true|false (BETA - default=true)
MatchLabelKeysInPodTopologySpread=true|false (ALPHA - default=false)
MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
MemoryManager=true|false (BETA - default=true)
MemoryQoS=true|false (ALPHA - default=false)
MinDomainsInPodTopologySpread=true|false (BETA - default=false)
MinimizeIPTablesRestore=true|false (ALPHA - default=false)
MultiCIDRRangeAllocator=true|false (ALPHA - default=false)
NetworkPolicyStatus=true|false (ALPHA - default=false)
NodeInclusionPolicyInPodTopologySpread=true|false (BETA - default=true)
NodeOutOfServiceVolumeDetach=true|false (BETA - default=true)
NodeSwap=true|false (ALPHA - default=false)
OpenAPIEnums=true|false (BETA - default=true)
OpenAPIV3=true|false (BETA - default=true)
PDBUnhealthyPodEvictionPolicy=true|false (ALPHA - default=false)
PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
PodDeletionCost=true|false (BETA - default=true)
PodDisruptionConditions=true|false (BETA - default=true)
PodHasNetworkCondition=true|false (ALPHA - default=false)
PodSchedulingReadiness=true|false (ALPHA - default=false)
ProbeTerminationGracePeriod=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
ProxyTerminatingEndpoints=true|false (BETA - default=true)
QOSReserved=true|false (ALPHA - default=false)
ReadWriteOncePod=true|false (ALPHA - default=false)
RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RetroactiveDefaultStorageClass=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
SELinuxMountReadWriteOncePod=true|false (ALPHA - default=false)
SeccompDefault=true|false (BETA - default=true)
ServerSideFieldValidation=true|false (BETA - default=true)
SizeMemoryBackedVolumes=true|false (BETA - default=true)
StatefulSetAutoDeletePVC=true|false (ALPHA - default=false)
StatefulSetStartOrdinal=true|false (ALPHA - default=false)
StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
TopologyAwareHints=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
TopologyManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
TopologyManagerPolicyBetaOptions=true|false (BETA - default=false)
TopologyManagerPolicyOptions=true|false (ALPHA - default=false)
UserNamespacesStatelessPodsSupport=true|false (ALPHA - default=false)
ValidatingAdmissionPolicy=true|false (ALPHA - default=false)
VolumeCapacityPriority=true|false (ALPHA - default=false)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (BETA - default=true)
WindowsHostNetwork=true|false (ALPHA - default=true)

@@ -558,7 +565,7 @@ kube-controller-manager [flags] --leader-elect-renew-deadline duration     Default: 10s -

The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.

+

The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than the lease duration. This is only applicable if leader election is enabled.

@@ -607,7 +614,7 @@ kube-controller-manager [flags] --logging-format string     Default: "text" -

Sets the log format. Permitted formats: "text".
Non-default formats don't honor these flags: --add-dir-header, --alsologtostderr, --log-backtrace-at, --log-dir, --log-file, --log-file-max-size, --logtostderr, --one-output, --skip-headers, --skip-log-headers, --stderrthreshold, --vmodule.
Non-default choices are currently alpha and subject to change without warning.

+

Sets the log format. Permitted formats: "text".

@@ -722,13 +729,6 @@ kube-controller-manager [flags]

If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]

- ---pod-eviction-timeout duration     Default: 5m0s - - -

The grace period for deleting pods on failed nodes.

- - --profiling     Default: true diff --git a/content/en/docs/reference/command-line-tools-reference/kube-proxy.md b/content/en/docs/reference/command-line-tools-reference/kube-proxy.md index a94a190732899..15b1aa9d2f4c4 100644 --- a/content/en/docs/reference/command-line-tools-reference/kube-proxy.md +++ b/content/en/docs/reference/command-line-tools-reference/kube-proxy.md @@ -53,7 +53,7 @@ kube-proxy [flags] --alsologtostderr -

log to standard error as well as files

+

log to standard error as well as files (no effect when -logtostderr=true)

@@ -144,7 +144,7 @@ kube-proxy [flags] --feature-gates <comma-separated 'key=True|False' pairs> -

A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
APIServerIdentity=true|false (ALPHA - default=false)
APIServerTracing=true|false (ALPHA - default=false)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AnyVolumeDataSource=true|false (BETA - default=true)
AppArmor=true|false (BETA - default=true)
CPUManager=true|false (BETA - default=true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
CPUManagerPolicyOptions=true|false (BETA - default=true)
CSIMigrationAzureFile=true|false (BETA - default=true)
CSIMigrationPortworx=true|false (BETA - default=false)
CSIMigrationRBD=true|false (ALPHA - default=false)
CSIMigrationvSphere=true|false (BETA - default=true)
CSINodeExpandSecret=true|false (ALPHA - default=false)
CSIVolumeHealth=true|false (ALPHA - default=false)
ContainerCheckpoint=true|false (ALPHA - default=false)
CronJobTimeZone=true|false (BETA - default=true)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
CustomResourceValidationExpressions=true|false (BETA - default=true)
DelegateFSGroupToCSIDriver=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DisableCloudProviders=true|false (ALPHA - default=false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
DownwardAPIHugePages=true|false (BETA - default=true)
EndpointSliceTerminatingCondition=true|false (BETA - default=true)
ExpandedDNSConfig=true|false (ALPHA - default=false)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GRPCContainerProbe=true|false (BETA - default=true)
GracefulNodeShutdown=true|false (BETA - default=true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
HPAContainerMetrics=true|false (ALPHA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HonorPVReclaimPolicy=true|false (ALPHA - default=false)
IPTablesOwnershipCleanup=true|false (ALPHA - default=false)
InTreePluginAWSUnregister=true|false (ALPHA - default=false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
InTreePluginGCEUnregister=true|false (ALPHA - default=false)
InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
InTreePluginRBDUnregister=true|false (ALPHA - default=false)
InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
JobMutableNodeSchedulingDirectives=true|false (BETA - default=true)
JobPodFailurePolicy=true|false (ALPHA - default=false)
JobReadyPods=true|false (BETA - default=true)
JobTrackingWithFinalizers=true|false (BETA - default=true)
KMSv2=true|false (ALPHA - default=false)
KubeletCredentialProviders=true|false (BETA - default=true)
KubeletInUserNamespace=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
KubeletTracing=true|false (ALPHA - default=false)
LegacyServiceAccountTokenNoAutoGeneration=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (BETA - default=true)
LogarithmicScaleDown=true|false (BETA - default=true)
MatchLabelKeysInPodTopologySpread=true|false (ALPHA - default=false)
MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
MemoryManager=true|false (BETA - default=true)
MemoryQoS=true|false (ALPHA - default=false)
MinDomainsInPodTopologySpread=true|false (BETA - default=false)
MixedProtocolLBService=true|false (BETA - default=true)
MultiCIDRRangeAllocator=true|false (ALPHA - default=false)
NetworkPolicyStatus=true|false (ALPHA - default=false)
NodeInclusionPolicyInPodTopologySpread=true|false (ALPHA - default=false)
NodeOutOfServiceVolumeDetach=true|false (ALPHA - default=false)
NodeSwap=true|false (ALPHA - default=false)
OpenAPIEnums=true|false (BETA - default=true)
OpenAPIV3=true|false (BETA - default=true)
PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
PodDeletionCost=true|false (BETA - default=true)
PodDisruptionConditions=true|false (ALPHA - default=false)
PodHasNetworkCondition=true|false (ALPHA - default=false)
ProbeTerminationGracePeriod=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
ProxyTerminatingEndpoints=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
ReadWriteOncePod=true|false (ALPHA - default=false)
RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RetroactiveDefaultStorageClass=true|false (ALPHA - default=false)
RotateKubeletServerCertificate=true|false (BETA - default=true)
SELinuxMountReadWriteOncePod=true|false (ALPHA - default=false)
SeccompDefault=true|false (BETA - default=true)
ServerSideFieldValidation=true|false (BETA - default=true)
ServiceIPStaticSubrange=true|false (BETA - default=true)
ServiceInternalTrafficPolicy=true|false (BETA - default=true)
SizeMemoryBackedVolumes=true|false (BETA - default=true)
StatefulSetAutoDeletePVC=true|false (ALPHA - default=false)
StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
TopologyAwareHints=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
UserNamespacesStatelessPodsSupport=true|false (ALPHA - default=false)
VolumeCapacityPriority=true|false (ALPHA - default=false)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (BETA - default=true)
WindowsHostProcessContainers=true|false (BETA - default=true)This parameter is ignored if a config file is specified by --config.

+

A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
APISelfSubjectReview=true|false (ALPHA - default=false)
APIServerIdentity=true|false (BETA - default=true)
APIServerTracing=true|false (ALPHA - default=false)
AggregatedDiscoveryEndpoint=true|false (ALPHA - default=false)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AnyVolumeDataSource=true|false (BETA - default=true)
AppArmor=true|false (BETA - default=true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
CPUManagerPolicyOptions=true|false (BETA - default=true)
CSIMigrationPortworx=true|false (BETA - default=false)
CSIMigrationRBD=true|false (ALPHA - default=false)
CSINodeExpandSecret=true|false (ALPHA - default=false)
CSIVolumeHealth=true|false (ALPHA - default=false)
ComponentSLIs=true|false (ALPHA - default=false)
ContainerCheckpoint=true|false (ALPHA - default=false)
CronJobTimeZone=true|false (BETA - default=true)
CrossNamespaceVolumeDataSource=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
CustomResourceValidationExpressions=true|false (BETA - default=true)
DisableCloudProviders=true|false (ALPHA - default=false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
DownwardAPIHugePages=true|false (BETA - default=true)
DynamicResourceAllocation=true|false (ALPHA - default=false)
EventedPLEG=true|false (ALPHA - default=false)
ExpandedDNSConfig=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GRPCContainerProbe=true|false (BETA - default=true)
GracefulNodeShutdown=true|false (BETA - default=true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
HPAContainerMetrics=true|false (ALPHA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HonorPVReclaimPolicy=true|false (ALPHA - default=false)
IPTablesOwnershipCleanup=true|false (ALPHA - default=false)
InTreePluginAWSUnregister=true|false (ALPHA - default=false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
InTreePluginGCEUnregister=true|false (ALPHA - default=false)
InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
InTreePluginRBDUnregister=true|false (ALPHA - default=false)
InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
JobMutableNodeSchedulingDirectives=true|false (BETA - default=true)
JobPodFailurePolicy=true|false (BETA - default=true)
JobReadyPods=true|false (BETA - default=true)
KMSv2=true|false (ALPHA - default=false)
KubeletInUserNamespace=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
KubeletTracing=true|false (ALPHA - default=false)
LegacyServiceAccountTokenTracking=true|false (ALPHA - default=false)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
LogarithmicScaleDown=true|false (BETA - default=true)
MatchLabelKeysInPodTopologySpread=true|false (ALPHA - default=false)
MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
MemoryManager=true|false (BETA - default=true)
MemoryQoS=true|false (ALPHA - default=false)
MinDomainsInPodTopologySpread=true|false (BETA - default=false)
MinimizeIPTablesRestore=true|false (ALPHA - default=false)
MultiCIDRRangeAllocator=true|false (ALPHA - default=false)
NetworkPolicyStatus=true|false (ALPHA - default=false)
NodeInclusionPolicyInPodTopologySpread=true|false (BETA - default=true)
NodeOutOfServiceVolumeDetach=true|false (BETA - default=true)
NodeSwap=true|false (ALPHA - default=false)
OpenAPIEnums=true|false (BETA - default=true)
OpenAPIV3=true|false (BETA - default=true)
PDBUnhealthyPodEvictionPolicy=true|false (ALPHA - default=false)
PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
PodDeletionCost=true|false (BETA - default=true)
PodDisruptionConditions=true|false (BETA - default=true)
PodHasNetworkCondition=true|false (ALPHA - default=false)
PodSchedulingReadiness=true|false (ALPHA - default=false)
ProbeTerminationGracePeriod=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
ProxyTerminatingEndpoints=true|false (BETA - default=true)
QOSReserved=true|false (ALPHA - default=false)
ReadWriteOncePod=true|false (ALPHA - default=false)
RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RetroactiveDefaultStorageClass=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
SELinuxMountReadWriteOncePod=true|false (ALPHA - default=false)
SeccompDefault=true|false (BETA - default=true)
ServerSideFieldValidation=true|false (BETA - default=true)
SizeMemoryBackedVolumes=true|false (BETA - default=true)
StatefulSetAutoDeletePVC=true|false (ALPHA - default=false)
StatefulSetStartOrdinal=true|false (ALPHA - default=false)
StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
TopologyAwareHints=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
TopologyManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
TopologyManagerPolicyBetaOptions=true|false (BETA - default=false)
TopologyManagerPolicyOptions=true|false (ALPHA - default=false)
UserNamespacesStatelessPodsSupport=true|false (ALPHA - default=false)
ValidatingAdmissionPolicy=true|false (ALPHA - default=false)
VolumeCapacityPriority=true|false (ALPHA - default=false)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (BETA - default=true)
WindowsHostNetwork=true|false (ALPHA - default=true)
This parameter is ignored if a config file is specified by --config.

@@ -168,6 +168,13 @@ kube-proxy [flags]

If non-empty, will use this string as identification instead of the actual hostname.

+ +--iptables-localhost-nodeports     Default: true + + +

If false, kube-proxy disables the legacy behavior of allowing NodePort services to be accessed via localhost. This only applies to iptables mode and IPv4.
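A sketch of switching the legacy behavior off (only meaningful for the iptables proxy mode and IPv4; other kube-proxy flags are omitted):

```shell
# Sketch: stop exposing NodePort services on localhost.
kube-proxy \
  --iptables-localhost-nodeports=false
```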

+ + --iptables-masquerade-bit int32     Default: 14 @@ -284,21 +291,21 @@ kube-proxy [flags] --log_dir string -

If non-empty, write log files in this directory

+

If non-empty, write log files in this directory (no effect when -logtostderr=true)

--log_file string -

If non-empty, use this log file

+

If non-empty, use this log file (no effect when -logtostderr=true)

--log_file_max_size uint     Default: 1800 -

Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.

+

Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited.

@@ -347,7 +354,7 @@ kube-proxy [flags] --one_output -

If true, only write logs to their native severity level (vs also writing to each lower severity level)

+

If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)

@@ -382,7 +389,7 @@ kube-proxy [flags] --proxy-mode ProxyMode -

Which proxy mode to use: 'iptables' (Linux-only), 'ipvs' (Linux-only), 'kernelspace' (Windows-only), or 'userspace' (Linux/Windows, deprecated). The default value is 'iptables' on Linux and 'userspace' on Windows(will be 'kernelspace' in a future release).This parameter is ignored if a config file is specified by --config.

+

Which proxy mode to use: on Linux this can be 'iptables' (default) or 'ipvs'. On Windows the only supported value is 'kernelspace'. This parameter is ignored if a config file is specified by --config.
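As a sketch, selecting IPVS mode on a Linux node; note that the flag is ignored when a configuration file is passed via --config, in which case the mode comes from that file:

```shell
# Sketch: run kube-proxy in IPVS mode on Linux ('kernelspace' is the
# Windows value). Ignored if --config is used.
kube-proxy --proxy-mode=ipvs
```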

@@ -396,7 +403,7 @@ kube-proxy [flags] --show-hidden-metrics-for-version string -

The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.This parameter is ignored if a config file is specified by --config.

+

The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful; other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that. This parameter is ignored if a config file is specified by --config.

@@ -410,21 +417,14 @@ kube-proxy [flags] --skip_log_headers -

If true, avoid headers when opening log files

+

If true, avoid headers when opening log files (no effect when -logtostderr=true)

--stderrthreshold int     Default: 2 -

logs at or above this threshold go to stderr

- - - ---udp-timeout duration     Default: 250ms - - -

How long an idle UDP connection will be kept open (e.g. '250ms', '2s'). Must be greater than 0. Only applicable for proxy-mode=userspace

+

logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false)

diff --git a/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md b/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md index df865f2982402..d0032be6a1640 100644 --- a/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md +++ b/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md @@ -159,7 +159,7 @@ kube-scheduler [flags] --feature-gates <comma-separated 'key=True|False' pairs> -

A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
APIServerIdentity=true|false (ALPHA - default=false)
APIServerTracing=true|false (ALPHA - default=false)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AnyVolumeDataSource=true|false (BETA - default=true)
AppArmor=true|false (BETA - default=true)
CPUManager=true|false (BETA - default=true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
CPUManagerPolicyOptions=true|false (BETA - default=true)
CSIMigrationAzureFile=true|false (BETA - default=true)
CSIMigrationPortworx=true|false (BETA - default=false)
CSIMigrationRBD=true|false (ALPHA - default=false)
CSIMigrationvSphere=true|false (BETA - default=true)
CSINodeExpandSecret=true|false (ALPHA - default=false)
CSIVolumeHealth=true|false (ALPHA - default=false)
ContainerCheckpoint=true|false (ALPHA - default=false)
ContextualLogging=true|false (ALPHA - default=false)
CronJobTimeZone=true|false (BETA - default=true)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
CustomResourceValidationExpressions=true|false (BETA - default=true)
DelegateFSGroupToCSIDriver=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DisableCloudProviders=true|false (ALPHA - default=false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
DownwardAPIHugePages=true|false (BETA - default=true)
EndpointSliceTerminatingCondition=true|false (BETA - default=true)
ExpandedDNSConfig=true|false (ALPHA - default=false)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GRPCContainerProbe=true|false (BETA - default=true)
GracefulNodeShutdown=true|false (BETA - default=true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
HPAContainerMetrics=true|false (ALPHA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HonorPVReclaimPolicy=true|false (ALPHA - default=false)
IPTablesOwnershipCleanup=true|false (ALPHA - default=false)
InTreePluginAWSUnregister=true|false (ALPHA - default=false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
InTreePluginGCEUnregister=true|false (ALPHA - default=false)
InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
InTreePluginRBDUnregister=true|false (ALPHA - default=false)
InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
JobMutableNodeSchedulingDirectives=true|false (BETA - default=true)
JobPodFailurePolicy=true|false (ALPHA - default=false)
JobReadyPods=true|false (BETA - default=true)
JobTrackingWithFinalizers=true|false (BETA - default=true)
KMSv2=true|false (ALPHA - default=false)
KubeletCredentialProviders=true|false (BETA - default=true)
KubeletInUserNamespace=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
KubeletTracing=true|false (ALPHA - default=false)
LegacyServiceAccountTokenNoAutoGeneration=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (BETA - default=true)
LogarithmicScaleDown=true|false (BETA - default=true)
LoggingAlphaOptions=true|false (ALPHA - default=false)
LoggingBetaOptions=true|false (BETA - default=true)
MatchLabelKeysInPodTopologySpread=true|false (ALPHA - default=false)
MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
MemoryManager=true|false (BETA - default=true)
MemoryQoS=true|false (ALPHA - default=false)
MinDomainsInPodTopologySpread=true|false (BETA - default=false)
MixedProtocolLBService=true|false (BETA - default=true)
MultiCIDRRangeAllocator=true|false (ALPHA - default=false)
NetworkPolicyStatus=true|false (ALPHA - default=false)
NodeInclusionPolicyInPodTopologySpread=true|false (ALPHA - default=false)
NodeOutOfServiceVolumeDetach=true|false (ALPHA - default=false)
NodeSwap=true|false (ALPHA - default=false)
OpenAPIEnums=true|false (BETA - default=true)
OpenAPIV3=true|false (BETA - default=true)
PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
PodDeletionCost=true|false (BETA - default=true)
PodDisruptionConditions=true|false (ALPHA - default=false)
PodHasNetworkCondition=true|false (ALPHA - default=false)
ProbeTerminationGracePeriod=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
ProxyTerminatingEndpoints=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
ReadWriteOncePod=true|false (ALPHA - default=false)
RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RetroactiveDefaultStorageClass=true|false (ALPHA - default=false)
RotateKubeletServerCertificate=true|false (BETA - default=true)
SELinuxMountReadWriteOncePod=true|false (ALPHA - default=false)
SeccompDefault=true|false (BETA - default=true)
ServerSideFieldValidation=true|false (BETA - default=true)
ServiceIPStaticSubrange=true|false (BETA - default=true)
ServiceInternalTrafficPolicy=true|false (BETA - default=true)
SizeMemoryBackedVolumes=true|false (BETA - default=true)
StatefulSetAutoDeletePVC=true|false (ALPHA - default=false)
StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
TopologyAwareHints=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
UserNamespacesStatelessPodsSupport=true|false (ALPHA - default=false)
VolumeCapacityPriority=true|false (ALPHA - default=false)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (BETA - default=true)
WindowsHostProcessContainers=true|false (BETA - default=true)

+

A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
APISelfSubjectReview=true|false (ALPHA - default=false)
APIServerIdentity=true|false (BETA - default=true)
APIServerTracing=true|false (ALPHA - default=false)
AggregatedDiscoveryEndpoint=true|false (ALPHA - default=false)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AnyVolumeDataSource=true|false (BETA - default=true)
AppArmor=true|false (BETA - default=true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
CPUManagerPolicyOptions=true|false (BETA - default=true)
CSIMigrationPortworx=true|false (BETA - default=false)
CSIMigrationRBD=true|false (ALPHA - default=false)
CSINodeExpandSecret=true|false (ALPHA - default=false)
CSIVolumeHealth=true|false (ALPHA - default=false)
ComponentSLIs=true|false (ALPHA - default=false)
ContainerCheckpoint=true|false (ALPHA - default=false)
ContextualLogging=true|false (ALPHA - default=false)
CronJobTimeZone=true|false (BETA - default=true)
CrossNamespaceVolumeDataSource=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
CustomResourceValidationExpressions=true|false (BETA - default=true)
DisableCloudProviders=true|false (ALPHA - default=false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
DownwardAPIHugePages=true|false (BETA - default=true)
DynamicResourceAllocation=true|false (ALPHA - default=false)
EventedPLEG=true|false (ALPHA - default=false)
ExpandedDNSConfig=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GRPCContainerProbe=true|false (BETA - default=true)
GracefulNodeShutdown=true|false (BETA - default=true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
HPAContainerMetrics=true|false (ALPHA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HonorPVReclaimPolicy=true|false (ALPHA - default=false)
IPTablesOwnershipCleanup=true|false (ALPHA - default=false)
InTreePluginAWSUnregister=true|false (ALPHA - default=false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
InTreePluginGCEUnregister=true|false (ALPHA - default=false)
InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
InTreePluginRBDUnregister=true|false (ALPHA - default=false)
InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
JobMutableNodeSchedulingDirectives=true|false (BETA - default=true)
JobPodFailurePolicy=true|false (BETA - default=true)
JobReadyPods=true|false (BETA - default=true)
KMSv2=true|false (ALPHA - default=false)
KubeletInUserNamespace=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
KubeletTracing=true|false (ALPHA - default=false)
LegacyServiceAccountTokenTracking=true|false (ALPHA - default=false)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
LogarithmicScaleDown=true|false (BETA - default=true)
LoggingAlphaOptions=true|false (ALPHA - default=false)
LoggingBetaOptions=true|false (BETA - default=true)
MatchLabelKeysInPodTopologySpread=true|false (ALPHA - default=false)
MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
MemoryManager=true|false (BETA - default=true)
MemoryQoS=true|false (ALPHA - default=false)
MinDomainsInPodTopologySpread=true|false (BETA - default=false)
MinimizeIPTablesRestore=true|false (ALPHA - default=false)
MultiCIDRRangeAllocator=true|false (ALPHA - default=false)
NetworkPolicyStatus=true|false (ALPHA - default=false)
NodeInclusionPolicyInPodTopologySpread=true|false (BETA - default=true)
NodeOutOfServiceVolumeDetach=true|false (BETA - default=true)
NodeSwap=true|false (ALPHA - default=false)
OpenAPIEnums=true|false (BETA - default=true)
OpenAPIV3=true|false (BETA - default=true)
PDBUnhealthyPodEvictionPolicy=true|false (ALPHA - default=false)
PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
PodDeletionCost=true|false (BETA - default=true)
PodDisruptionConditions=true|false (BETA - default=true)
PodHasNetworkCondition=true|false (ALPHA - default=false)
PodSchedulingReadiness=true|false (ALPHA - default=false)
ProbeTerminationGracePeriod=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
ProxyTerminatingEndpoints=true|false (BETA - default=true)
QOSReserved=true|false (ALPHA - default=false)
ReadWriteOncePod=true|false (ALPHA - default=false)
RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RetroactiveDefaultStorageClass=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
SELinuxMountReadWriteOncePod=true|false (ALPHA - default=false)
SeccompDefault=true|false (BETA - default=true)
ServerSideFieldValidation=true|false (BETA - default=true)
SizeMemoryBackedVolumes=true|false (BETA - default=true)
StatefulSetAutoDeletePVC=true|false (ALPHA - default=false)
StatefulSetStartOrdinal=true|false (ALPHA - default=false)
StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
TopologyAwareHints=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
TopologyManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
TopologyManagerPolicyBetaOptions=true|false (BETA - default=false)
TopologyManagerPolicyOptions=true|false (ALPHA - default=false)
UserNamespacesStatelessPodsSupport=true|false (ALPHA - default=false)
ValidatingAdmissionPolicy=true|false (ALPHA - default=false)
VolumeCapacityPriority=true|false (ALPHA - default=false)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (BETA - default=true)
WindowsHostNetwork=true|false (ALPHA - default=true)
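For illustration, here is a minimal sketch of how two of the gates listed above could be passed to kube-scheduler through the `--feature-gates` flag, using the `key=value` syntax this flag expects. The static Pod wrapper, image tag and kubeconfig path are assumptions added only to make the snippet self-contained:

```yaml
# Hypothetical kube-scheduler static Pod; only the --feature-gates flag usage is the point.
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: registry.k8s.io/kube-scheduler:v1.26.0        # assumed image tag
    command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf        # assumed path
    # Gate names are taken from the list above; multiple gates are comma-separated.
    - --feature-gates=MinDomainsInPodTopologySpread=true,PodSchedulingReadiness=true
```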

@@ -222,7 +222,7 @@ kube-scheduler [flags] --leader-elect-renew-deadline duration     Default: 10s -

The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.

+

The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than the lease duration. This is only applicable if leader election is enabled.
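As a hedged sketch, the same constraint can also be expressed through the leader election section of a KubeSchedulerConfiguration file; the durations below are illustrative, chosen only so that the renew deadline stays below the lease duration as required above:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
leaderElection:
  leaderElect: true
  leaseDuration: 15s    # how long an acquired leadership lease remains valid
  renewDeadline: 10s    # must be less than leaseDuration
  retryPeriod: 2s       # wait between attempts to acquire or renew the lease
```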

@@ -278,7 +278,7 @@ kube-scheduler [flags] --logging-format string     Default: "text" -

Sets the log format. Permitted formats: "text".
Non-default formats don't honor these flags: --add-dir-header, --alsologtostderr, --log-backtrace-at, --log-dir, --log-file, --log-file-max-size, --logtostderr, --one-output, --skip-headers, --skip-log-headers, --stderrthreshold, --vmodule.
Non-default choices are currently alpha and subject to change without warning.

+

Sets the log format. Permitted formats: "text".

diff --git a/content/en/docs/reference/config-api/_index.md b/content/en/docs/reference/config-api/_index.md index 9c05466727aee..4941431f85092 100644 --- a/content/en/docs/reference/config-api/_index.md +++ b/content/en/docs/reference/config-api/_index.md @@ -2,4 +2,3 @@ title: Configuration APIs weight: 130 --- - diff --git a/content/en/docs/reference/config-api/apiserver-admission.v1.md b/content/en/docs/reference/config-api/apiserver-admission.v1.md new file mode 100644 index 0000000000000..a4c70ac9f0f09 --- /dev/null +++ b/content/en/docs/reference/config-api/apiserver-admission.v1.md @@ -0,0 +1,301 @@ +--- +title: kube-apiserver Admission (v1) +content_type: tool-reference +package: admission.k8s.io/v1 +auto_generated: true +--- + + +## Resource Types + + +- [AdmissionReview](#admission-k8s-io-v1-AdmissionReview) + + + +## `AdmissionReview` {#admission-k8s-io-v1-AdmissionReview} + + + +

AdmissionReview describes an admission review request/response.

+ + + + + + + + + + + + + + + + + +
FieldDescription
apiVersion
string
admission.k8s.io/v1
kind
string
AdmissionReview
request
+AdmissionRequest +
+

Request describes the attributes for the admission request.

+
response
+AdmissionResponse +
+

Response describes the attributes for the admission response.

+
+ +## `AdmissionRequest` {#admission-k8s-io-v1-AdmissionRequest} + + +**Appears in:** + +- [AdmissionReview](#admission-k8s-io-v1-AdmissionReview) + + +

AdmissionRequest describes the admission.Attributes for the admission request.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
uid [Required]
+k8s.io/apimachinery/pkg/types.UID +
+

UID is an identifier for the individual request/response. It allows us to distinguish instances of requests which are +otherwise identical (parallel requests, requests when earlier requests did not modify, etc.). +The UID is meant to track the round trip (request/response) between the KAS and the WebHook, not the user request. +It is suitable for correlating log entries between the webhook and apiserver, for either auditing or debugging.

+
kind [Required]
+meta/v1.GroupVersionKind +
+

Kind is the fully-qualified type of object being submitted (for example, v1.Pod or autoscaling.v1.Scale)

+
resource [Required]
+meta/v1.GroupVersionResource +
+

Resource is the fully-qualified resource being requested (for example, v1.pods)

+
subResource
+string +
+

SubResource is the subresource being requested, if any (for example, "status" or "scale")

+
requestKind
+meta/v1.GroupVersionKind +
+

RequestKind is the fully-qualified type of the original API request (for example, v1.Pod or autoscaling.v1.Scale). +If this is specified and differs from the value in "kind", an equivalent match and conversion was performed.

+

For example, if deployments can be modified via apps/v1 and apps/v1beta1, and a webhook registered a rule of +apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] and matchPolicy: Equivalent, +an API request to apps/v1beta1 deployments would be converted and sent to the webhook +with kind: {group:"apps", version:"v1", kind:"Deployment"} (matching the rule the webhook registered for), +and requestKind: {group:"apps", version:"v1beta1", kind:"Deployment"} (indicating the kind of the original API request).

+

See documentation for the "matchPolicy" field in the webhook configuration type for more details.

+
requestResource
+meta/v1.GroupVersionResource +
+

RequestResource is the fully-qualified resource of the original API request (for example, v1.pods). +If this is specified and differs from the value in "resource", an equivalent match and conversion was performed.

+

For example, if deployments can be modified via apps/v1 and apps/v1beta1, and a webhook registered a rule of +apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] and matchPolicy: Equivalent, +an API request to apps/v1beta1 deployments would be converted and sent to the webhook +with resource: {group:"apps", version:"v1", resource:"deployments"} (matching the resource the webhook registered for), +and requestResource: {group:"apps", version:"v1beta1", resource:"deployments"} (indicating the resource of the original API request).

+

See documentation for the "matchPolicy" field in the webhook configuration type.

+
requestSubResource
+string +
+

RequestSubResource is the name of the subresource of the original API request, if any (for example, "status" or "scale") +If this is specified and differs from the value in "subResource", an equivalent match and conversion was performed. +See documentation for the "matchPolicy" field in the webhook configuration type.

+
name
+string +
+

Name is the name of the object as presented in the request. On a CREATE operation, the client may omit name and +rely on the server to generate the name. If that is the case, this field will contain an empty string.

+
namespace
+string +
+

Namespace is the namespace associated with the request (if any).

+
operation [Required]
+Operation +
+

Operation is the operation being performed. This may be different than the operation +requested. e.g. a patch can result in either a CREATE or UPDATE Operation.

+
userInfo [Required]
+authentication/v1.UserInfo +
+

UserInfo is information about the requesting user

+
object
+k8s.io/apimachinery/pkg/runtime.RawExtension +
+

Object is the object from the incoming request.

+
oldObject
+k8s.io/apimachinery/pkg/runtime.RawExtension +
+

OldObject is the existing object. Only populated for DELETE and UPDATE requests.

+
dryRun
+bool +
+

DryRun indicates that modifications will definitely not be persisted for this request. +Defaults to false.

+
options
+k8s.io/apimachinery/pkg/runtime.RawExtension +
+

Options is the operation option structure of the operation being performed. +e.g. meta.k8s.io/v1.DeleteOptions or meta.k8s.io/v1.CreateOptions. This may be +different than the options the caller provided. e.g. for a patch request the performed +Operation might be a CREATE, in which case the Options will be a +meta.k8s.io/v1.CreateOptions even though the caller provided meta.k8s.io/v1.PatchOptions.

+
+ +## `AdmissionResponse` {#admission-k8s-io-v1-AdmissionResponse} + + +**Appears in:** + +- [AdmissionReview](#admission-k8s-io-v1-AdmissionReview) + + +

AdmissionResponse describes an admission response.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
uid [Required]
+k8s.io/apimachinery/pkg/types.UID +
+

UID is an identifier for the individual request/response. +This must be copied over from the corresponding AdmissionRequest.

+
allowed [Required]
+bool +
+

Allowed indicates whether or not the admission request was permitted.

+
status
+meta/v1.Status +
+

Result contains extra details into why an admission request was denied. +This field IS NOT consulted in any way if "Allowed" is "true".

+
patch
+[]byte +
+

The patch body. Currently we only support "JSONPatch" which implements RFC 6902.

+
patchType
+PatchType +
+

The type of Patch. Currently we only allow "JSONPatch".

+
auditAnnotations
+map[string]string +
+

AuditAnnotations is an unstructured key value map set by remote admission controller (e.g. error=image-blacklisted). +MutatingAdmissionWebhook and ValidatingAdmissionWebhook admission controller will prefix the keys with +admission webhook name (e.g. imagepolicy.example.com/error=image-blacklisted). AuditAnnotations will be provided by +the admission webhook to add additional context to the audit log for this request.

+
warnings
+[]string +
+

warnings is a list of warning messages to return to the requesting API client. +Warning messages describe a problem the client making the API request should correct or be aware of. +Limit warnings to 120 characters if possible. +Warnings over 256 characters and large numbers of warnings may be truncated.

+
+ +## `Operation` {#admission-k8s-io-v1-Operation} + +(Alias of `string`) + +**Appears in:** + +- [AdmissionRequest](#admission-k8s-io-v1-AdmissionRequest) + + +

Operation is the type of resource operation being checked for admission control

+ + + + +## `PatchType` {#admission-k8s-io-v1-PatchType} + +(Alias of `string`) + +**Appears in:** + +- [AdmissionResponse](#admission-k8s-io-v1-AdmissionResponse) + + +

PatchType is the type of patch being used to represent the mutated object

+ + + + \ No newline at end of file diff --git a/content/en/docs/reference/config-api/apiserver-audit.v1.md b/content/en/docs/reference/config-api/apiserver-audit.v1.md index 30cdd12dca95e..ffef0b7f2b01f 100644 --- a/content/en/docs/reference/config-api/apiserver-audit.v1.md +++ b/content/en/docs/reference/config-api/apiserver-audit.v1.md @@ -72,14 +72,14 @@ For non-resource requests, this is the lower-cased HTTP method.

user [Required]
-authentication/v1.UserInfo +authentication/v1.UserInfo

Authenticated user information.

impersonatedUser
-authentication/v1.UserInfo +authentication/v1.UserInfo

Impersonated user information.

@@ -117,7 +117,7 @@ Does not apply for List-type requests, or non-resource requests.

responseStatus
-meta/v1.Status +meta/v1.Status

The response status, populated even when the ResponseObject is not a Status type. @@ -145,14 +145,14 @@ at Response Level.

requestReceivedTimestamp
-meta/v1.MicroTime +meta/v1.MicroTime

Time the request reached the apiserver.

stageTimestamp
-meta/v1.MicroTime +meta/v1.MicroTime

Time the request reached current audit stage.

@@ -189,7 +189,7 @@ should be short. Annotations are included in the Metadata level.

metadata
-meta/v1.ListMeta +meta/v1.ListMeta No description provided. @@ -224,7 +224,7 @@ categories are logged.

metadata
-meta/v1.ObjectMeta +meta/v1.ObjectMeta

ObjectMeta is included for interoperability with API infrastructure.

@@ -279,7 +279,7 @@ in a rule will override the global default.

metadata
-meta/v1.ListMeta +meta/v1.ListMeta No description provided. diff --git a/content/en/docs/reference/config-api/client-authentication.v1.md b/content/en/docs/reference/config-api/client-authentication.v1.md index 0c7784a8b3d88..0a3fab1a5c493 100644 --- a/content/en/docs/reference/config-api/client-authentication.v1.md +++ b/content/en/docs/reference/config-api/client-authentication.v1.md @@ -108,6 +108,15 @@ If empty, system roots should be used.

cluster.

+disable-compression
+bool + + +

DisableCompression allows client to opt-out of response compression for all requests to the server. This is useful +to speed up requests (specifically lists) when client-server network bandwidth is ample, by saving time on +compression (server-side) and decompression (client-side): https://github.com/kubernetes/kubernetes/issues/112296.

+ + config
k8s.io/apimachinery/pkg/runtime.RawExtension @@ -197,7 +206,7 @@ itself should at least be protected via file permissions.

expirationTimestamp
-meta/v1.Time +meta/v1.Time

ExpirationTimestamp indicates a time when the provided credentials expire.

diff --git a/content/en/docs/reference/config-api/client-authentication.v1beta1.md b/content/en/docs/reference/config-api/client-authentication.v1beta1.md index 15029d106efe6..09aa4dcc8753e 100644 --- a/content/en/docs/reference/config-api/client-authentication.v1beta1.md +++ b/content/en/docs/reference/config-api/client-authentication.v1beta1.md @@ -108,6 +108,15 @@ If empty, system roots should be used.

cluster.

+disable-compression
+bool + + +

DisableCompression allows client to opt-out of response compression for all requests to the server. This is useful +to speed up requests (specifically lists) when client-server network bandwidth is ample, by saving time on +compression (server-side) and decompression (client-side): https://github.com/kubernetes/kubernetes/issues/112296.

+ + config
k8s.io/apimachinery/pkg/runtime.RawExtension @@ -197,7 +206,7 @@ itself should at least be protected via file permissions.

expirationTimestamp
-meta/v1.Time +meta/v1.Time

ExpirationTimestamp indicates a time when the provided credentials expire.

diff --git a/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md b/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md index f420623559cfa..0eaa8f14ade34 100644 --- a/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md +++ b/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md @@ -29,7 +29,7 @@ auto_generated: true metadata
-meta/v1.ObjectMeta +meta/v1.ObjectMeta

Standard object's metadata. diff --git a/content/en/docs/reference/config-api/kube-proxy-config.v1alpha1.md b/content/en/docs/reference/config-api/kube-proxy-config.v1alpha1.md index 6d6c0b13e6fd4..6dfcb913e9fdd 100644 --- a/content/en/docs/reference/config-api/kube-proxy-config.v1alpha1.md +++ b/content/en/docs/reference/config-api/kube-proxy-config.v1alpha1.md @@ -136,14 +136,6 @@ the range [-1000, 1000]

in order to proxy service traffic. If unspecified (0-0) then ports will be randomly chosen.

-udpIdleTimeout [Required]
-meta/v1.Duration - - -

udpIdleTimeout is how long an idle UDP connection will be kept open (e.g. '250ms', '2s'). -Must be greater than 0. Only applicable for proxyMode=userspace.

- - conntrack [Required]
KubeProxyConntrackConfiguration @@ -325,6 +317,14 @@ the pure iptables proxy mode. Values must be within the range [0, 31].

masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode.

+localhostNodePorts [Required]
+bool + + +

LocalhostNodePorts tells kube-proxy to allow service NodePorts to be accessed via +localhost (iptables mode only)

+ + syncPeriod [Required]
meta/v1.Duration @@ -511,16 +511,12 @@ Windows

ProxyMode represents modes used by the Kubernetes proxy server.

-

Currently, three modes of proxy are available in Linux platform: 'userspace' (older, going to be EOL), 'iptables' -(newer, faster), 'ipvs'(newest, better in performance and scalability).

-

Two modes of proxy are available in Windows platform: 'userspace'(older, stable) and 'kernelspace' (newer, faster).

-

In Linux platform, if proxy mode is blank, use the best-available proxy (currently iptables, but may change in the -future). If the iptables proxy is selected, regardless of how, but the system's kernel or iptables versions are -insufficient, this always falls back to the userspace proxy. IPVS mode will be enabled when proxy mode is set to 'ipvs', -and the fall back path is firstly iptables and then userspace.

-

In Windows platform, if proxy mode is blank, use the best-available proxy (currently userspace, but may change in the -future). If winkernel proxy is selected, regardless of how, but the Windows kernel can't support this mode of proxy, -this always falls back to the userspace proxy.

+

Currently, two modes of proxy are available on Linux platforms: 'iptables' and 'ipvs'. +One mode of proxy is available on Windows platforms: 'kernelspace'.

+

If the proxy mode is unspecified, the best-available proxy mode will be used (currently this +is iptables on Linux and kernelspace on Windows). If the selected proxy mode cannot be +used (due to lack of kernel support, missing userspace components, etc) then kube-proxy +will exit with an error.

@@ -535,10 +531,12 @@ this always falls back to the userspace proxy.

- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration) -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration) - [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + - [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration) @@ -595,10 +593,12 @@ client.

**Appears in:** -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration) - [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + - [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration) @@ -637,6 +637,8 @@ enableProfiling is true.

- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration) + - [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration) diff --git a/content/en/docs/reference/config-api/kube-scheduler-config.v1.md b/content/en/docs/reference/config-api/kube-scheduler-config.v1.md index ed03a74a53399..876122ef5410f 100644 --- a/content/en/docs/reference/config-api/kube-scheduler-config.v1.md +++ b/content/en/docs/reference/config-api/kube-scheduler-config.v1.md @@ -144,7 +144,7 @@ at least "minFeasibleNodesToFind" feasible nodes no matter what the va Example: if the cluster size is 500 nodes and the value of this flag is 30, then scheduler stops finding further feasible nodes once it finds 150 feasible ones. When the value is 0, default percentage (5%--50% based on the size of the cluster) of the -nodes will be scored.

+nodes will be scored. It is overridden by profile level PercentageofNodesToScore.

podInitialBackoffSeconds [Required]
@@ -202,7 +202,7 @@ with the extender. These extenders are shared by all scheduler profiles.

addedAffinity
-core/v1.NodeAffinity +core/v1.NodeAffinity

AddedAffinity is applied to all Pods additionally to the NodeAffinity @@ -301,7 +301,7 @@ The default strategy is LeastAllocated with an equal "cpu" and "m defaultConstraints
-[]core/v1.TopologySpreadConstraint +[]core/v1.TopologySpreadConstraint

DefaultConstraints defines topology spread constraints to be applied to @@ -635,6 +635,21 @@ If SchedulerName matches with the pod's "spec.schedulerName", then the is scheduled with this profile.

+percentageOfNodesToScore [Required]
+int32 + + +

PercentageOfNodesToScore is the percentage of all nodes that once found feasible +for running a pod, the scheduler stops its search for more feasible nodes in +the cluster. This helps improve scheduler's performance. Scheduler always tries to find +at least "minFeasibleNodesToFind" feasible nodes no matter what the value of this flag is. +Example: if the cluster size is 500 nodes and the value of this flag is 30, +then scheduler stops finding further feasible nodes once it finds 150 feasible ones. +When the value is 0, default percentage (5%--50% based on the size of the cluster) of the +nodes will be scored. It will override global PercentageOfNodesToScore. If it is empty, +global PercentageOfNodesToScore will be used.

+ + plugins [Required]
Plugins @@ -787,6 +802,13 @@ be invoked before default plugins, default plugins must be disabled and re-enabl +preEnqueue [Required]
+PluginSet + + +

PreEnqueue is a list of plugins that should be invoked before adding pods to the scheduling queue.

+ + queueSort [Required]
PluginSet @@ -1166,12 +1188,12 @@ enableProfiling is true.

**Appears in:** -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) - - [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) - [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) +

LeaderElectionConfiguration defines the configuration of leader election clients for components that can run with leader election enabled.

diff --git a/content/en/docs/reference/config-api/kube-scheduler-config.v1beta2.md b/content/en/docs/reference/config-api/kube-scheduler-config.v1beta2.md index 8a4c735b32647..edf1071e18a05 100644 --- a/content/en/docs/reference/config-api/kube-scheduler-config.v1beta2.md +++ b/content/en/docs/reference/config-api/kube-scheduler-config.v1beta2.md @@ -218,7 +218,7 @@ with the extender. These extenders are shared by all scheduler profiles.

addedAffinity
-core/v1.NodeAffinity +core/v1.NodeAffinity

AddedAffinity is applied to all Pods additionally to the NodeAffinity @@ -317,7 +317,7 @@ The default strategy is LeastAllocated with an equal "cpu" and "m defaultConstraints
-[]core/v1.TopologySpreadConstraint +[]core/v1.TopologySpreadConstraint

DefaultConstraints defines topology spread constraints to be applied to @@ -803,6 +803,13 @@ be invoked before default plugins, default plugins must be disabled and re-enabl +preEnqueue [Required]
+PluginSet + + +

PreEnqueue is a list of plugins that should be invoked before adding pods to the scheduling queue.

+ + queueSort [Required]
PluginSet diff --git a/content/en/docs/reference/config-api/kube-scheduler-config.v1beta3.md b/content/en/docs/reference/config-api/kube-scheduler-config.v1beta3.md index c9c2d9651bef0..1f67ffce6c466 100644 --- a/content/en/docs/reference/config-api/kube-scheduler-config.v1beta3.md +++ b/content/en/docs/reference/config-api/kube-scheduler-config.v1beta3.md @@ -202,7 +202,7 @@ with the extender. These extenders are shared by all scheduler profiles.

addedAffinity
-core/v1.NodeAffinity +core/v1.NodeAffinity

AddedAffinity is applied to all Pods additionally to the NodeAffinity @@ -301,7 +301,7 @@ The default strategy is LeastAllocated with an equal "cpu" and "m defaultConstraints
-[]core/v1.TopologySpreadConstraint +[]core/v1.TopologySpreadConstraint

DefaultConstraints defines topology spread constraints to be applied to @@ -787,6 +787,13 @@ be invoked before default plugins, default plugins must be disabled and re-enabl +preEnqueue [Required]
+PluginSet + + +

PreEnqueue is a list of plugins that should be invoked before adding pods to the scheduling queue.

+ + queueSort [Required]
PluginSet diff --git a/content/en/docs/reference/config-api/kubeadm-config.v1beta2.md b/content/en/docs/reference/config-api/kubeadm-config.v1beta2.md index 7bd46c2fad2b8..dca15f101f9dc 100644 --- a/content/en/docs/reference/config-api/kubeadm-config.v1beta2.md +++ b/content/en/docs/reference/config-api/kubeadm-config.v1beta2.md @@ -5,6 +5,7 @@ package: kubeadm.k8s.io/v1beta2 auto_generated: true ---

Overview

+

Package v1beta2 has been DEPRECATED by v1beta3.

Package v1beta2 defines the v1beta2 version of the kubeadm configuration file format. This version improves on the v1beta1 format by fixing some minor issues and adding a few new fields.

A list of changes since v1beta1:

@@ -15,7 +16,7 @@ This version improves on the v1beta1 format by fixing some minor issues and addi
  • The JSON "omitempty" tag of the "taints" field (inside NodeRegistrationOptions) is removed.
  • See the Kubernetes 1.15 changelog for further details.

    -

    Migration from old kubeadm config versions

    +

    Migration from old kubeadm config versions

    Please convert your v1beta1 configuration files to v1beta2 using the "kubeadm config migrate" command of kubeadm v1.15.x (conversion from older releases of kubeadm config files requires older release of kubeadm as well e.g.

      @@ -75,16 +76,16 @@ use it to customize the node name, the CRI socket to use or any other settings t node only (e.g. the node ip).

    • -

      apiServer, that represents the endpoint of the instance of the API server to be deployed on this node; +

      localAPIEndpoint, that represents the endpoint of the instance of the API server to be deployed on this node; use it e.g. to customize the API server advertise address.

    apiVersion: kubeadm.k8s.io/v1beta2
     kind: ClusterConfiguration
     networking:
    -    ...
    +  ...
     etcd:
    -    ...
    +  ...
     apiServer:
       extraArgs:
         ...
    @@ -109,7 +110,7 @@ components by adding customized setting or overriding kubeadm default settings.<
     
     
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
     kind: KubeProxyConfiguration
    -  ...
    + ...
     

    The KubeProxyConfiguration type should be used to change the configuration passed to kube-proxy instances deployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.

    See https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/ or @@ -117,7 +118,7 @@ https://pkg.go.dev/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration for kube proxy official documentation.

    apiVersion: kubelet.config.k8s.io/v1beta1
     kind: KubeletConfiguration
    -  ...
    + ...
     

    The KubeletConfiguration type should be used to change the configurations that will be passed to all kubelet instances deployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.

    See https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ or @@ -228,18 +229,18 @@ configuration types to be used during a kubeadm init run.

    When executing kubeadm join with the --config option, the JoinConfiguration type should be provided.

    apiVersion: kubeadm.k8s.io/v1beta2
     kind: JoinConfiguration
    -  ...
    + ...
     

    The JoinConfiguration type should be used to configure runtime settings, that in case of kubeadm join are the discovery method used for accessing the cluster info and all the setting which are specific to the node where kubeadm is executed, including:

    • -

      NodeRegistration, that holds fields that relate to registering the new node to the cluster; +

      nodeRegistration, that holds fields that relate to registering the new node to the cluster; use it to customize the node name, the CRI socket to use or any other settings that should apply to this node only (e.g. the node IP).

    • -

      APIEndpoint, that represents the endpoint of the instance of the API server to be eventually deployed on this node.

      +

      apiEndpoint, that represents the endpoint of the instance of the API server to be eventually deployed on this node.

    @@ -637,7 +638,7 @@ for, so other administrators can know its purpose.

    expires [Required]
    -meta/v1.Time +meta/v1.Time

    expires specifies the timestamp when this token expires. Defaults to being set @@ -948,7 +949,7 @@ Kubeadm has no knowledge of where certificate files live and they must be suppli []string -

    endpoints of etcd members.

    +

    endpoints of etcd members. Required for external etcd.

    caFile [Required]
    @@ -1050,7 +1051,7 @@ from which to load cluster information.

    pathType [Required]
    -core/v1.HostPathType +core/v1.HostPathType

    pathType is the type of the HostPath.

    @@ -1274,7 +1275,7 @@ be annotated to the Node API object, for later re-use.

    taints [Required]
    -[]core/v1.Taint +[]core/v1.Taint

    taints specifies the taints the Node API object should be registered with. diff --git a/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md b/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md index c631b359fabd3..8abeb61fe3572 100644 --- a/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md +++ b/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md @@ -137,23 +137,23 @@ configuration types to be used during a kubeadm init run.

    apiVersion: kubeadm.k8s.io/v1beta3
     kind: InitConfiguration
     bootstrapTokens:
    -- token: "9a08jv.c0izixklcxtmnze7"
    -  description: "kubeadm bootstrap token"
    -  ttl: "24h"
    -- token: "783bde.3f89s0fje9f38fhf"
    -  description: "another bootstrap token"
    -  usages:
    -  - authentication
    -  - signing
    -  groups:
    -  - system:bootstrappers:kubeadm:default-node-token
    +  - token: "9a08jv.c0izixklcxtmnze7"
    +    description: "kubeadm bootstrap token"
    +    ttl: "24h"
    +  - token: "783bde.3f89s0fje9f38fhf"
    +    description: "another bootstrap token"
    +    usages:
    +      - authentication
    +      - signing
    +    groups:
    +      - system:bootstrappers:kubeadm:default-node-token
     nodeRegistration:
       name: "ec2-10-100-0-1"
       criSocket: "/var/run/dockershim.sock"
       taints:
    -  - key: "kubeadmNode"
    -    value: "someValue"
    -    effect: "NoSchedule"
    +    - key: "kubeadmNode"
    +      value: "someValue"
    +      effect: "NoSchedule"
       kubeletExtraArgs:
         v: 4
       ignorePreflightErrors:
    @@ -177,9 +177,9 @@ configuration types to be used during a kubeadm init run.

    extraArgs: listen-client-urls: "http://10.100.0.1:2379" serverCertSANs: - - "ec2-10-100-0-1.compute-1.amazonaws.com" + - "ec2-10-100-0-1.compute-1.amazonaws.com" peerCertSANs: - - "10.100.0.1" + - "10.100.0.1" # external: # endpoints: # - "10.100.0.1:2379" @@ -197,33 +197,33 @@ configuration types to be used during a kubeadm init run.

    extraArgs: authorization-mode: "Node,RBAC" extraVolumes: - - name: "some-volume" - hostPath: "/etc/some-path" - mountPath: "/etc/some-pod-path" - readOnly: false - pathType: File + - name: "some-volume" + hostPath: "/etc/some-path" + mountPath: "/etc/some-pod-path" + readOnly: false + pathType: File certSANs: - - "10.100.1.1" - - "ec2-10-100-0-1.compute-1.amazonaws.com" + - "10.100.1.1" + - "ec2-10-100-0-1.compute-1.amazonaws.com" timeoutForControlPlane: 4m0s controllerManager: extraArgs: "node-cidr-mask-size": "20" extraVolumes: - - name: "some-volume" - hostPath: "/etc/some-path" - mountPath: "/etc/some-pod-path" - readOnly: false - pathType: File + - name: "some-volume" + hostPath: "/etc/some-path" + mountPath: "/etc/some-pod-path" + readOnly: false + pathType: File scheduler: extraArgs: address: "10.100.0.1" extraVolumes: - - name: "some-volume" - hostPath: "/etc/some-path" - mountPath: "/etc/some-pod-path" - readOnly: false - pathType: File + - name: "some-volume" + hostPath: "/etc/some-path" + mountPath: "/etc/some-pod-path" + readOnly: false + pathType: File certificatesDir: "/etc/kubernetes/pki" imageRepository: "registry.k8s.io" clusterName: "example-cluster" @@ -264,6 +264,109 @@ node only (e.g. the node ip).

    +## `BootstrapToken` {#BootstrapToken} + + +**Appears in:** + +- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration) + + +

    BootstrapToken describes one bootstrap token, stored as a Secret in the cluster

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    FieldDescription
    token [Required]
    +BootstrapTokenString +
    +

    token is used for establishing bidirectional trust between nodes and control-planes. +Used for joining nodes in the cluster.

    +
    description
    +string +
    +

    description sets a human-friendly message why this token exists and what it's used +for, so other administrators can know its purpose.

    +
    ttl
    +meta/v1.Duration +
    +

    ttl defines the time to live for this token. Defaults to 24h. +expires and ttl are mutually exclusive.

    +
    expires
    +meta/v1.Time +
    +

    expires specifies the timestamp when this token expires. Defaults to being set +dynamically at runtime based on the ttl. expires and ttl are mutually exclusive.

    +
    usages
    +[]string +
    +

    usages describes the ways in which this token can be used. Can by default be used +for establishing bidirectional trust, but that can be changed here.

    +
    groups
    +[]string +
    +

    groups specifies the extra groups that this token will authenticate as when/if +used for authentication

    +
    + +## `BootstrapTokenString` {#BootstrapTokenString} + + +**Appears in:** + +- [BootstrapToken](#BootstrapToken) + + +

BootstrapTokenString is a token of the format abcdef.abcdef0123456789 that is used +for both validation of the practical reachability of the API server from a joining node's point +of view and as an authentication method for the node in the bootstrap phase of +"kubeadm join". This token is, and should be, short-lived.

    + + + + + + + + + + + + + + +
    FieldDescription
    - [Required]
    +string +
    + No description provided.
    - [Required]
    +string +
    + No description provided.
    + + + ## `ClusterConfiguration` {#kubeadm-k8s-io-v1beta3-ClusterConfiguration} @@ -641,7 +744,7 @@ information will be fetched.

    caCertHashes specifies a set of public key pins to verify when token-based discovery is used. The root CA found during discovery must match one of these values. Specifying an empty set disables root CA pinning, which can be unsafe. -Each hash is specified as ":", where the only currently supported type is +Each hash is specified as <type>:<value>, where the only currently supported type is "sha256". This is a hex-encoded SHA-256 hash of the Subject Public Key Info (SPKI) object in DER-encoded ASN.1. These hashes can be calculated using, for example, OpenSSL.

    @@ -933,7 +1036,7 @@ file from which to load cluster information.

    pathType
    -core/v1.HostPathType +core/v1.HostPathType

    pathType is the type of the hostPath.

    @@ -1156,12 +1259,11 @@ This information will be annotated to the Node API object, for later re-use

    taints [Required]
    -[]core/v1.Taint +[]core/v1.Taint

    taints specifies the taints the Node API object should be registered with. -If this field is unset, i.e. nil, in the kubeadm init process it will be defaulted -with a control-plane taint for control-plane nodes. +If this field is unset, i.e. nil, it will be defaulted with a control-plane taint for control-plane nodes. If you don't want to taint your control-plane node, set this field to an empty list, i.e. taints: [] in the YAML file. This field is solely used for Node registration.

    @@ -1173,7 +1275,7 @@ i.e. taints: [] in the YAML file. This field is solely used for Nod

    kubeletExtraArgs passes through extra arguments to the kubelet. The arguments here are passed to the kubelet command line via the environment file kubeadm writes at runtime for the kubelet to source. -This overrides the generic base-level configuration in the 'kubelet-config-1.X' ConfigMap. +This overrides the generic base-level configuration in the kubelet-config ConfigMap. Flags have higher priority when parsing. These values are local and specific to the node kubeadm is executing on. A key in this map is the flag name as it appears on the command line except without leading dash(es).

    @@ -1188,13 +1290,13 @@ the current node is registered.

    imagePullPolicy
    -core/v1.PullPolicy +core/v1.PullPolicy

    imagePullPolicy specifies the policy for image pulling during kubeadm "init" and "join" operations. The value of this field must be one of "Always", "IfNotPresent" or "Never". -If this field is unset kubeadm will default it to "IfNotPresent", or pull the required +If this field is not set, kubeadm will default it to "IfNotPresent", or pull the required images if not present on the host.

    @@ -1236,107 +1338,4 @@ first alpha-numerically.

    - - - - -## `BootstrapToken` {#BootstrapToken} - - -**Appears in:** - -- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration) - - -

    BootstrapToken describes one bootstrap token, stored as a Secret in the cluster

    - - - - - - - - - - - - - - - - - - - - - - - - - - -
    FieldDescription
    token [Required]
    -BootstrapTokenString -
    -

    token is used for establishing bidirectional trust between nodes and control-planes. -Used for joining nodes in the cluster.

    -
    description
    -string -
    -

    description sets a human-friendly message why this token exists and what it's used -for, so other administrators can know its purpose.

    -
    ttl
    -meta/v1.Duration -
    -

    ttl defines the time to live for this token. Defaults to 24h. -expires and ttl are mutually exclusive.

    -
    expires
    -meta/v1.Time -
    -

    expires specifies the timestamp when this token expires. Defaults to being set -dynamically at runtime based on the ttl. expires and ttl are mutually exclusive.

    -
    usages
    -[]string -
    -

    usages describes the ways in which this token can be used. Can by default be used -for establishing bidirectional trust, but that can be changed here.

    -
    groups
    -[]string -
    -

    groups specifies the extra groups that this token will authenticate as when/if -used for authentication

    -
    - -## `BootstrapTokenString` {#BootstrapTokenString} - - -**Appears in:** - -- [BootstrapToken](#BootstrapToken) - - -

    BootstrapTokenString is a token of the format abcdef.abcdef0123456789 that is used -for both validation of the practically of the API server from a joining node's point -of view and as an authentication method for the node in the bootstrap phase of -"kubeadm join". This token is and should be short-lived.

    - - - - - - - - - - - - - - -
    FieldDescription
    - [Required]
    -string -
    - No description provided.
    - [Required]
    -string -
    - No description provided.
    \ No newline at end of file + \ No newline at end of file diff --git a/content/en/docs/reference/config-api/kubeconfig.v1.md b/content/en/docs/reference/config-api/kubeconfig.v1.md new file mode 100644 index 0000000000000..42cf3bd7cc9c6 --- /dev/null +++ b/content/en/docs/reference/config-api/kubeconfig.v1.md @@ -0,0 +1,602 @@ +--- +title: kubeconfig (v1) +content_type: tool-reference +package: v1 +auto_generated: true +--- + +## Resource Types + + +- [Config](#Config) + + + +## `AuthInfo` {#AuthInfo} + + +**Appears in:** + +- [NamedAuthInfo](#NamedAuthInfo) + + +

AuthInfo contains information that describes identity information. This is used to tell the kubernetes cluster who you are.

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    FieldDescription
    client-certificate
    +string +
    +

    ClientCertificate is the path to a client cert file for TLS.

    +
    client-certificate-data
    +[]byte +
    +

    ClientCertificateData contains PEM-encoded data from a client cert file for TLS. Overrides ClientCertificate

    +
    client-key
    +string +
    +

    ClientKey is the path to a client key file for TLS.

    +
    client-key-data
    +[]byte +
    +

    ClientKeyData contains PEM-encoded data from a client key file for TLS. Overrides ClientKey

    +
    token
    +string +
    +

    Token is the bearer token for authentication to the kubernetes cluster.

    +
    tokenFile
    +string +
    +

    TokenFile is a pointer to a file that contains a bearer token (as described above). If both Token and TokenFile are present, Token takes precedence.

    +
    as
    +string +
    +

    Impersonate is the username to impersonate. The name matches the flag.

    +
    as-uid
    +string +
    +

    ImpersonateUID is the uid to impersonate.

    +
    as-groups
    +[]string +
    +

    ImpersonateGroups is the groups to impersonate.

    +
    as-user-extra
    +map[string][]string +
    +

    ImpersonateUserExtra contains additional information for impersonated user.

    +
    username
    +string +
    +

    Username is the username for basic authentication to the kubernetes cluster.

    +
    password
    +string +
    +

    Password is the password for basic authentication to the kubernetes cluster.

    +
    auth-provider
    +AuthProviderConfig +
    +

    AuthProvider specifies a custom authentication plugin for the kubernetes cluster.

    +
    exec
    +ExecConfig +
    +

    Exec specifies a custom exec-based authentication plugin for the kubernetes cluster.

    +
    extensions
    +[]NamedExtension +
    +

    Extensions holds additional information. This is useful for extenders so that reads and writes don't clobber unknown fields

    +
    + +## `AuthProviderConfig` {#AuthProviderConfig} + + +**Appears in:** + +- [AuthInfo](#AuthInfo) + + +

    AuthProviderConfig holds the configuration for a specified auth provider.

    + + + + + + + + + + + + + + +
    FieldDescription
    name [Required]
    +string +
    + No description provided.
    config [Required]
    +map[string]string +
    + No description provided.
    + +## `Cluster` {#Cluster} + + +**Appears in:** + +- [NamedCluster](#NamedCluster) + + +

    Cluster contains information about how to communicate with a kubernetes cluster

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    FieldDescription
    server [Required]
    +string +
    +

    Server is the address of the kubernetes cluster (https://hostname:port).

    +
    tls-server-name
    +string +
    +

    TLSServerName is used to check server certificate. If TLSServerName is empty, the hostname used to contact the server is used.

    +
    insecure-skip-tls-verify
    +bool +
    +

    InsecureSkipTLSVerify skips the validity check for the server's certificate. This will make your HTTPS connections insecure.

    +
    certificate-authority
    +string +
    +

    CertificateAuthority is the path to a cert file for the certificate authority.

    +
    certificate-authority-data
    +[]byte +
    +

    CertificateAuthorityData contains PEM-encoded certificate authority certificates. Overrides CertificateAuthority

    +
    proxy-url
    +string +
    +

    ProxyURL is the URL to the proxy to be used for all requests made by this +client. URLs with "http", "https", and "socks5" schemes are supported. If +this configuration is not provided or the empty string, the client +attempts to construct a proxy configuration from http_proxy and +https_proxy environment variables. If these environment variables are not +set, the client does not attempt to proxy requests.

    +

    socks5 proxying does not currently support spdy streaming endpoints (exec, +attach, port forward).

    +
    disable-compression
    +bool +
    +

    DisableCompression allows client to opt-out of response compression for all requests to the server. This is useful +to speed up requests (specifically lists) when client-server network bandwidth is ample, by saving time on +compression (server-side) and decompression (client-side): https://github.com/kubernetes/kubernetes/issues/112296.

    +
    extensions
    +[]NamedExtension +
    +

    Extensions holds additional information. This is useful for extenders so that reads and writes don't clobber unknown fields

    +
    + +## `Context` {#Context} + + +**Appears in:** + +- [NamedContext](#NamedContext) + + +

    Context is a tuple of references to a cluster (how do I communicate with a kubernetes cluster), a user (how do I identify myself), and a namespace (what subset of resources do I want to work with)

    + + + + + + + + + + + + + + + + + + + + +
    FieldDescription
    cluster [Required]
    +string +
    +

    Cluster is the name of the cluster for this context

    +
    user [Required]
    +string +
    +

    AuthInfo is the name of the authInfo for this context

    +
    namespace
    +string +
    +

    Namespace is the default namespace to use on unspecified requests

    +
    extensions
    +[]NamedExtension +
    +

    Extensions holds additional information. This is useful for extenders so that reads and writes don't clobber unknown fields

    +
    + +## `ExecConfig` {#ExecConfig} + + +**Appears in:** + +- [AuthInfo](#AuthInfo) + + +

    ExecConfig specifies a command to provide client credentials. The command is exec'd +and outputs structured stdout holding credentials.

    +

    See the client.authentication.k8s.io API group for specifications of the exact input +and output format

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    FieldDescription
    command [Required]
    +string +
    +

    Command to execute.

    +
    args
    +[]string +
    +

    Arguments to pass to the command when executing it.

    +
    env
    +[]ExecEnvVar +
    +

Env defines additional environment variables to expose to the process. These +are unioned with the host's environment, as well as variables client-go uses +to pass arguments to the plugin.

    +
    apiVersion [Required]
    +string +
    +

    Preferred input version of the ExecInfo. The returned ExecCredentials MUST use +the same encoding version as the input.

    +
    installHint [Required]
    +string +
    +

    This text is shown to the user when the executable doesn't seem to be +present. For example, brew install foo-cli might be a good InstallHint for +foo-cli on Mac OS systems.

    +
    provideClusterInfo [Required]
    +bool +
    +

    ProvideClusterInfo determines whether or not to provide cluster information, +which could potentially contain very large CA data, to this exec plugin as a +part of the KUBERNETES_EXEC_INFO environment variable. By default, it is set +to false. Package k8s.io/client-go/tools/auth/exec provides helper methods for +reading this environment variable.

    +
    interactiveMode
    +ExecInteractiveMode +
    +

    InteractiveMode determines this plugin's relationship with standard input. Valid +values are "Never" (this exec plugin never uses standard input), "IfAvailable" (this +exec plugin wants to use standard input if it is available), or "Always" (this exec +plugin requires standard input to function). See ExecInteractiveMode values for more +details.

    +

    If APIVersion is client.authentication.k8s.io/v1alpha1 or +client.authentication.k8s.io/v1beta1, then this field is optional and defaults +to "IfAvailable" when unset. Otherwise, this field is required.

    +
    + +## `ExecEnvVar` {#ExecEnvVar} + + +**Appears in:** + +- [ExecConfig](#ExecConfig) + + +

    ExecEnvVar is used for setting environment variables when executing an exec-based +credential plugin.

    + + + + + + + + + + + + + + +
    FieldDescription
    name [Required]
    +string +
    + No description provided.
    value [Required]
    +string +
    + No description provided.
    + +## `ExecInteractiveMode` {#ExecInteractiveMode} + +(Alias of `string`) + +**Appears in:** + +- [ExecConfig](#ExecConfig) + + +

    ExecInteractiveMode is a string that describes an exec plugin's relationship with standard input.

    + + + + +## `NamedAuthInfo` {#NamedAuthInfo} + + +**Appears in:** + +- [Config](#Config) + + +

    NamedAuthInfo relates nicknames to auth information

    + + + + + + + + + + + + + + +
    FieldDescription
    name [Required]
    +string +
    +

    Name is the nickname for this AuthInfo

    +
    user [Required]
    +AuthInfo +
    +

    AuthInfo holds the auth information

    +
    + +## `NamedCluster` {#NamedCluster} + + +**Appears in:** + +- [Config](#Config) + + +

    NamedCluster relates nicknames to cluster information

    + + + + + + + + + + + + + + +
    FieldDescription
    name [Required]
    +string +
    +

    Name is the nickname for this Cluster

    +
    cluster [Required]
    +Cluster +
    +

    Cluster holds the cluster information

    +
    + +## `NamedContext` {#NamedContext} + + +**Appears in:** + +- [Config](#Config) + + +

    NamedContext relates nicknames to context information

    + + + + + + + + + + + + + + +
    FieldDescription
    name [Required]
    +string +
    +

    Name is the nickname for this Context

    +
    context [Required]
    +Context +
    +

    Context holds the context information

    +
    + +## `NamedExtension` {#NamedExtension} + + +**Appears in:** + +- [Config](#Config) + +- [AuthInfo](#AuthInfo) + +- [Cluster](#Cluster) + +- [Context](#Context) + +- [Preferences](#Preferences) + + +

    NamedExtension relates nicknames to extension information

    + + + + + + + + + + + + + + +
    FieldDescription
    name [Required]
    +string +
    +

    Name is the nickname for this Extension

    +
    extension [Required]
    +k8s.io/apimachinery/pkg/runtime.RawExtension +
    +

    Extension holds the extension information

    +
    + +## `Preferences` {#Preferences} + + +**Appears in:** + +- [Config](#Config) + + + + + + + + + + + + + + + +
    FieldDescription
    colors
    +bool +
    + No description provided.
    extensions
    +[]NamedExtension +
    +

    Extensions holds additional information. This is useful for extenders so that reads and writes don't clobber unknown fields

    +
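Putting the types above together, here is a hedged sketch of a complete v1 kubeconfig that also uses the new disable-compression field; the names, server address and file paths are placeholders:

```yaml
apiVersion: v1
kind: Config
current-context: example-context
preferences: {}
clusters:
- name: example-cluster                                  # NamedCluster: nickname plus Cluster
  cluster:
    server: https://203.0.113.10:6443                    # placeholder address
    certificate-authority: /etc/kubernetes/pki/ca.crt    # placeholder path
    disable-compression: true                            # the new field documented above
contexts:
- name: example-context                                  # NamedContext: cluster + user (+ namespace)
  context:
    cluster: example-cluster
    user: example-user
    namespace: default
users:
- name: example-user                                     # NamedAuthInfo: nickname plus AuthInfo
  user:
    client-certificate: /etc/kubernetes/pki/admin.crt    # placeholder path
    client-key: /etc/kubernetes/pki/admin.key            # placeholder path
```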
    \ No newline at end of file diff --git a/content/en/docs/reference/config-api/kubelet-config.v1.md b/content/en/docs/reference/config-api/kubelet-config.v1.md new file mode 100644 index 0000000000000..abaf48ec4bb3b --- /dev/null +++ b/content/en/docs/reference/config-api/kubelet-config.v1.md @@ -0,0 +1,379 @@ +--- +title: Kubelet Configuration (v1) +content_type: tool-reference +package: kubelet.config.k8s.io/v1 +auto_generated: true +--- + + +## Resource Types + + +- [CredentialProviderConfig](#kubelet-config-k8s-io-v1-CredentialProviderConfig) + + + +## `CredentialProviderConfig` {#kubelet-config-k8s-io-v1-CredentialProviderConfig} + + + +

    CredentialProviderConfig is the configuration containing information about +each exec credential provider. Kubelet reads this configuration from disk and enables +each provider as specified by the CredentialProvider type.

    + + + + + + + + + + + + + + +
    FieldDescription
    apiVersion
    string
    kubelet.config.k8s.io/v1
    kind
    string
    CredentialProviderConfig
    providers [Required]
    +[]CredentialProvider +
    +

    providers is a list of credential provider plugins that will be enabled by the kubelet. +Multiple providers may match against a single image, in which case credentials +from all providers will be returned to the kubelet. If multiple providers are called +for a single image, the results are combined. If providers return overlapping +auth keys, the value from the provider earlier in this list is used.

    +
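As a sketch (the provider name, image patterns, and environment variable are placeholders for whatever plugin you deploy), a CredentialProviderConfig file passed to the kubelet via its `--image-credential-provider-config` flag might look like this:

```yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
- name: example-credential-provider   # must match the executable name in the plugin bin directory
  matchImages:
  - "*.dkr.ecr.*.amazonaws.com"       # illustrative registry patterns
  - "registry.example.com:8080/path"
  defaultCacheDuration: "12h"
  apiVersion: credentialprovider.kubelet.k8s.io/v1
  args:
  - get-credentials                   # hypothetical plugin argument
  env:
  - name: EXAMPLE_PROFILE             # hypothetical variable for the plugin process
    value: default
```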
    + +## `CredentialProvider` {#kubelet-config-k8s-io-v1-CredentialProvider} + + +**Appears in:** + +- [CredentialProviderConfig](#kubelet-config-k8s-io-v1-CredentialProviderConfig) + + +

    CredentialProvider represents an exec plugin to be invoked by the kubelet. The plugin is only +invoked when an image being pulled matches the images handled by the plugin (see matchImages).

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    FieldDescription
    name [Required]
    +string +
    +

    name is the required name of the credential provider. It must match the name of the +provider executable as seen by the kubelet. The executable must be in the kubelet's +bin directory (set by the --image-credential-provider-bin-dir flag).

    +
    matchImages [Required]
    +[]string +
    +

    matchImages is a required list of strings used to match against images in order to +determine if this provider should be invoked. If one of the strings matches the +requested image from the kubelet, the plugin will be invoked and given a chance +to provide credentials. Images are expected to contain the registry domain +and URL path.

    +

Each entry in matchImages is a pattern which can optionally contain a port and a path. +Globs can be used in the domain, but not in the port or the path. Globs are supported +as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'. +Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match +a single subdomain segment, so *.io does not match *.k8s.io.

    +

    A match exists between an image and a matchImage when all of the below are true:

    +
      +
    • Both contain the same number of domain parts and each part matches.
    • +
    • The URL path of an imageMatch must be a prefix of the target image URL path.
    • +
    • If the imageMatch contains a port, then the port must match in the image as well.
    • +
    +

    Example values of matchImages:

    +
      +
    • 123456789.dkr.ecr.us-east-1.amazonaws.com
    • +
    • *.azurecr.io
    • +
    • gcr.io
    • +
• *.*.registry.io
    • +
    • registry.io:8080/path
    • +
    +
    defaultCacheDuration [Required]
    +meta/v1.Duration +
    +

    defaultCacheDuration is the default duration the plugin will cache credentials in-memory +if a cache duration is not provided in the plugin response. This field is required.

    +
    apiVersion [Required]
    +string +
    +

    Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse +MUST use the same encoding version as the input. Current supported values are:

    +
      +
    • credentialprovider.kubelet.k8s.io/v1
    • +
    +
    args
    +[]string +
    +

    Arguments to pass to the command when executing it.

    +
    env
    +[]ExecEnvVar +
    +

Env defines additional environment variables to expose to the process. These +are unioned with the host's environment, as well as variables client-go uses +to pass arguments to the plugin.

    +
    + +## `ExecEnvVar` {#kubelet-config-k8s-io-v1-ExecEnvVar} + + +**Appears in:** + +- [CredentialProvider](#kubelet-config-k8s-io-v1-CredentialProvider) + + +

    ExecEnvVar is used for setting environment variables when executing an exec-based +credential plugin.

    + + + + + + + + + + + + + + +
    FieldDescription
    name [Required]
    +string +
    + No description provided.
    value [Required]
    +string +
    + No description provided.
    + + + + +## `FormatOptions` {#FormatOptions} + + +**Appears in:** + +- [LoggingConfiguration](#LoggingConfiguration) + + +

    FormatOptions contains options for the different logging formats.

    + + + + + + + + + + + +
    FieldDescription
    json [Required]
    +JSONOptions +
    +

    [Alpha] JSON contains options for logging format "json". +Only available when the LoggingAlphaOptions feature gate is enabled.

    +
    + +## `JSONOptions` {#JSONOptions} + + +**Appears in:** + +- [FormatOptions](#FormatOptions) + + +

    JSONOptions contains options for logging format "json".

    + + + + + + + + + + + + + + +
    FieldDescription
    splitStream [Required]
    +bool +
    +

    [Alpha] SplitStream redirects error messages to stderr while +info messages go to stdout, with buffering. The default is to write +both to stdout, without buffering. Only available when +the LoggingAlphaOptions feature gate is enabled.

    +
    infoBufferSize [Required]
    +k8s.io/apimachinery/pkg/api/resource.QuantityValue +
    +

    [Alpha] InfoBufferSize sets the size of the info stream when +using split streams. The default is zero, which disables buffering. +Only available when the LoggingAlphaOptions feature gate is enabled.

    +
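For instance, a kubelet configuration that opts into these alpha JSON options might include a stanza like the following; this is only a sketch and assumes the LoggingAlphaOptions feature gate is enabled on the kubelet:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  LoggingAlphaOptions: true
logging:
  format: json
  options:
    json:
      splitStream: true       # errors to stderr, info to stdout
      infoBufferSize: "1Mi"   # buffer size for the info stream
```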
    + +## `LogFormatFactory` {#LogFormatFactory} + + + +

    LogFormatFactory provides support for a certain additional, +non-default log format.

    + + + + +## `LoggingConfiguration` {#LoggingConfiguration} + + +**Appears in:** + +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) + + +

    LoggingConfiguration contains logging options.

    + + + + + + + + + + + + + + + + + + + + + + + +
    FieldDescription
    format [Required]
    +string +
    +

Format flag specifies the structure of log messages. +The default value of format is text.

    +
    flushFrequency [Required]
    +time.Duration +
    +

    Maximum number of nanoseconds (i.e. 1s = 1000000000) between log +flushes. Ignored if the selected logging backend writes log +messages without buffering.

    +
    verbosity [Required]
    +VerbosityLevel +
    +

Verbosity is the threshold that determines which log messages are +logged. Default is zero, which logs only the most important +messages. Higher values enable additional messages. Error messages +are always logged.

    +
    vmodule [Required]
    +VModuleConfiguration +
    +

    VModule overrides the verbosity threshold for individual files. +Only supported for "text" log format.

    +
    options [Required]
    +FormatOptions +
    +

    [Alpha] Options holds additional parameters that are specific +to the different logging formats. Only the options for the selected +format get used, but all of them get validated. +Only available when the LoggingAlphaOptions feature gate is enabled.

    +
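Tying these fields together, a kubelet configuration fragment using this structure could look roughly like the following sketch (the file pattern and verbosity values are illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
logging:
  format: text                 # the default format
  verbosity: 3                 # global verbosity threshold
  vmodule:                     # per-file overrides, text format only
  - filePattern: "kubelet*"
    verbosity: 5
```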
    + +## `TracingConfiguration` {#TracingConfiguration} + + +**Appears in:** + +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) + + +

    TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.

    + + + + + + + + + + + + + + +
    FieldDescription
    endpoint
    +string +
    +

Endpoint of the collector this component will report traces to. +The connection is insecure, and does not currently support TLS. +Recommended to leave this unset; the endpoint then falls back to the OTLP gRPC default, localhost:4317.

    +
    samplingRatePerMillion
    +int32 +
    +

SamplingRatePerMillion is the number of samples to collect per million spans. +Recommended to leave this unset. If unset, the sampler respects its parent span's sampling +rate, but otherwise never samples.

    +
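As a sketch (assuming a kubelet version where the KubeletTracing feature gate is available and enabled), the tracing block of a kubelet configuration could look like this; the endpoint shown is simply the OTLP gRPC default:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KubeletTracing: true
tracing:
  endpoint: localhost:4317       # OTLP gRPC collector endpoint
  samplingRatePerMillion: 100    # sample roughly 0.01% of spans
```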
    + +## `VModuleConfiguration` {#VModuleConfiguration} + +(Alias of `[]k8s.io/component-base/logs/api/v1.VModuleItem`) + +**Appears in:** + +- [LoggingConfiguration](#LoggingConfiguration) + + +

    VModuleConfiguration is a collection of individual file names or patterns +and the corresponding verbosity threshold.

    + + + + +## `VerbosityLevel` {#VerbosityLevel} + +(Alias of `uint32`) + +**Appears in:** + +- [LoggingConfiguration](#LoggingConfiguration) + + + +

    VerbosityLevel represents a klog or logr verbosity threshold.

    + + diff --git a/content/en/docs/reference/config-api/kubelet-config.v1beta1.md b/content/en/docs/reference/config-api/kubelet-config.v1beta1.md index 2d415c617aa9a..a11c179a58aa3 100644 --- a/content/en/docs/reference/config-api/kubelet-config.v1beta1.md +++ b/content/en/docs/reference/config-api/kubelet-config.v1beta1.md @@ -547,6 +547,16 @@ that topology manager requests and hint providers generate. Valid values include Default: "container"

    +topologyManagerPolicyOptions
    +map[string]string + + +

TopologyManagerPolicyOptions is a set of key=value pairs that allows you to set extra options +to fine-tune the behaviour of the topology manager policies. +Requires both the "TopologyManager" and "TopologyManagerPolicyOptions" feature gates to be enabled. +Default: nil

    + + qosReserved
    map[string]string @@ -645,7 +655,7 @@ Default: true

    cpuCFSQuotaPeriod is the CPU CFS quota period value, cpu.cfs_period_us. -The value must be between 1 us and 1 second, inclusive. +The value must be between 1 ms and 1 second, inclusive. Requires the CustomCPUCFSQuotaPeriod feature gate to be enabled. Default: "100ms"

    @@ -1145,12 +1155,12 @@ Default: false

    when setting the cgroupv2 memory.high value to enforce MemoryQoS. Decreasing this factor will set lower high limit for container cgroups and put heavier reclaim pressure while increasing will put less reclaim pressure. -See http://kep.k8s.io/2570 for more details. +See https://kep.k8s.io/2570 for more details. Default: 0.8

    registerWithTaints
    -[]core/v1.Taint +[]core/v1.Taint

    registerWithTaints are an array of taints to add to a node object when @@ -1172,7 +1182,7 @@ Default: true

    Tracing specifies the versioned configuration for OpenTelemetry tracing clients. -See http://kep.k8s.io/2832 for more details.

    +See https://kep.k8s.io/2832 for more details.

    localStorageCapacityIsolation
    @@ -1210,7 +1220,7 @@ It exists in the kubeletconfig API group because it is classified as a versioned source
    -core/v1.NodeConfigSource +core/v1.NodeConfigSource

    source is the source that we are serializing.

    @@ -1571,7 +1581,7 @@ and groups corresponding to the Organization in the client certificate.

    No description provided. limits [Required]
    -core/v1.ResourceList +core/v1.ResourceList No description provided. diff --git a/content/en/docs/reference/config-api/kubelet-credentialprovider.v1.md b/content/en/docs/reference/config-api/kubelet-credentialprovider.v1.md new file mode 100644 index 0000000000000..1608442710841 --- /dev/null +++ b/content/en/docs/reference/config-api/kubelet-credentialprovider.v1.md @@ -0,0 +1,169 @@ +--- +title: Kubelet CredentialProvider (v1) +content_type: tool-reference +package: credentialprovider.kubelet.k8s.io/v1 +auto_generated: true +--- + + +## Resource Types + + +- [CredentialProviderRequest](#credentialprovider-kubelet-k8s-io-v1-CredentialProviderRequest) +- [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1-CredentialProviderResponse) + + + +## `CredentialProviderRequest` {#credentialprovider-kubelet-k8s-io-v1-CredentialProviderRequest} + + + +

    CredentialProviderRequest includes the image that the kubelet requires authentication for. +Kubelet will pass this request object to the plugin via stdin. In general, plugins should +prefer responding with the same apiVersion they were sent.

    + + + + + + + + + + + + + + +
    FieldDescription
    apiVersion
    string
    credentialprovider.kubelet.k8s.io/v1
    kind
    string
    CredentialProviderRequest
    image [Required]
    +string +
    +

    image is the container image that is being pulled as part of the +credential provider plugin request. Plugins may optionally parse the image +to extract any information required to fetch credentials.

    +
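The structure of that request is small. It is shown here as YAML for readability; the kubelet actually writes it to the plugin's stdin as JSON, and the image value is illustrative:

```yaml
apiVersion: credentialprovider.kubelet.k8s.io/v1
kind: CredentialProviderRequest
image: registry.example.com/org/app:v1.2.3   # image the kubelet is about to pull
```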
    + +## `CredentialProviderResponse` {#credentialprovider-kubelet-k8s-io-v1-CredentialProviderResponse} + + + +

    CredentialProviderResponse holds credentials that the kubelet should use for the specified +image provided in the original request. Kubelet will read the response from the plugin via stdout. +This response should be set to the same apiVersion as CredentialProviderRequest.

    + + + + + + + + + + + + + + + + + + + + +
    FieldDescription
    apiVersion
    string
    credentialprovider.kubelet.k8s.io/v1
    kind
    string
    CredentialProviderResponse
    cacheKeyType [Required]
    +PluginCacheKeyType +
    +

cacheKeyType indicates the type of caching key to use based on the image provided +in the request. There are three valid values for the cache key type: Image, Registry, and +Global. If an invalid value is specified, the response will NOT be used by the kubelet.

    +
    cacheDuration
    +meta/v1.Duration +
    +

    cacheDuration indicates the duration the provided credentials should be cached for. +The kubelet will use this field to set the in-memory cache duration for credentials +in the AuthConfig. If null, the kubelet will use defaultCacheDuration provided in +CredentialProviderConfig. If set to 0, the kubelet will not cache the provided AuthConfig.

    +
    auth
    +map[string]k8s.io/kubelet/pkg/apis/credentialprovider/v1.AuthConfig +
    +

    auth is a map containing authentication information passed into the kubelet. +Each key is a match image string (more on this below). The corresponding authConfig value +should be valid for all images that match against this key. A plugin should set +this field to null if no valid credentials can be returned for the requested image.

    +

Each key in the map is a pattern which can optionally contain a port and a path. +Globs can be used in the domain, but not in the port or the path. Globs are supported +as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'. +Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match +a single subdomain segment, so *.io does not match *.k8s.io.

    +

    The kubelet will match images against the key when all of the below are true:

    +
      +
    • Both contain the same number of domain parts and each part matches.
    • +
    • The URL path of an imageMatch must be a prefix of the target image URL path.
    • +
    • If the imageMatch contains a port, then the port must match in the image as well.
    • +
    +

    When multiple keys are returned, the kubelet will traverse all keys in reverse order so that:

    +
      +
    • longer keys come before shorter keys with the same prefix
    • +
    • non-wildcard keys come before wildcard keys with the same prefix.
    • +
    +

    For any given match, the kubelet will attempt an image pull with the provided credentials, +stopping after the first successfully authenticated pull.

    +

    Example keys:

    +
      +
    • 123456789.dkr.ecr.us-east-1.amazonaws.com
    • +
    • *.azurecr.io
    • +
    • gcr.io
    • +
• *.*.registry.io
    • +
    • registry.io:8080/path
    • +
    +
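Putting the fields above together, a plugin's reply might look roughly like the following sketch (shown as YAML for readability; the plugin writes JSON to stdout, and the registry key and credentials are placeholders):

```yaml
apiVersion: credentialprovider.kubelet.k8s.io/v1
kind: CredentialProviderResponse
cacheKeyType: Registry         # cache credentials per registry host
cacheDuration: "6h"            # overrides defaultCacheDuration for this response
auth:
  "*.example.com":             # match-image key; same glob rules as described above
    username: example-user     # placeholder credentials
    password: example-password
```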
    + +## `AuthConfig` {#credentialprovider-kubelet-k8s-io-v1-AuthConfig} + + +**Appears in:** + +- [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1-CredentialProviderResponse) + + +

    AuthConfig contains authentication information for a container registry. +Only username/password based authentication is supported today, but more authentication +mechanisms may be added in the future.

    + + + + + + + + + + + + + + +
    FieldDescription
    username [Required]
    +string +
    +

username is the username used for authenticating to the container registry. +An empty username is valid.

    +
    password [Required]
    +string +
    +

password is the password used for authenticating to the container registry. +An empty password is valid.

    +
    + +## `PluginCacheKeyType` {#credentialprovider-kubelet-k8s-io-v1-PluginCacheKeyType} + +(Alias of `string`) + +**Appears in:** + +- [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1-CredentialProviderResponse) + + + + + \ No newline at end of file diff --git a/content/en/docs/reference/glossary/feature-gates.md b/content/en/docs/reference/glossary/feature-gates.md new file mode 100644 index 0000000000000..410581ee0ced4 --- /dev/null +++ b/content/en/docs/reference/glossary/feature-gates.md @@ -0,0 +1,23 @@ +--- +title: Feature gate +id: feature-gate +date: 2023-01-12 +full_link: /docs/reference/command-line-tools-reference/feature-gates/ +short_description: > + A way to control whether or not a particular Kubernetes feature is enabled. + +aka: +tags: +- fundamental +- operation +--- + +Feature gates are a set of keys (opaque string values) that you can use to control which +Kubernetes features are enabled in your cluster. + + + +You can turn these features on or off using the `--feature-gates` command line flag on each Kubernetes component. +Each Kubernetes component lets you enable or disable a set of feature gates that are relevant to that component. +The Kubernetes documentation lists all current +[feature gates](/docs/reference/command-line-tools-reference/feature-gates/) and what they control. diff --git a/content/en/docs/reference/glossary/istio.md b/content/en/docs/reference/glossary/istio.md index fbf29f421c952..7dfea5de9e1ce 100644 --- a/content/en/docs/reference/glossary/istio.md +++ b/content/en/docs/reference/glossary/istio.md @@ -2,7 +2,7 @@ title: Istio id: istio date: 2018-04-12 -full_link: https://istio.io/docs/concepts/what-is-istio/ +full_link: https://istio.io/latest/about/service-mesh/#what-is-istio short_description: > An open platform (not Kubernetes-specific) that provides a uniform way to integrate microservices, manage traffic flow, enforce policies, and aggregate telemetry data. @@ -17,4 +17,3 @@ tags: Adding Istio does not require changing application code. It is a layer of infrastructure between a service and the network, which when combined with service deployments, is commonly referred to as a service mesh. Istio's control plane abstracts away the underlying cluster management platform, which may be Kubernetes, Mesosphere, etc. - diff --git a/content/en/docs/reference/glossary/kops.md b/content/en/docs/reference/glossary/kops.md index 0a3da419694f6..3a1ea5628cbda 100644 --- a/content/en/docs/reference/glossary/kops.md +++ b/content/en/docs/reference/glossary/kops.md @@ -1,32 +1,29 @@ --- -title: Kops +title: kOps (Kubernetes Operations) id: kops date: 2018-04-12 -full_link: /docs/getting-started-guides/kops/ +full_link: /docs/setup/production-environment/kops/ short_description: > - A CLI tool that helps you create, destroy, upgrade and maintain production-grade, highly available, Kubernetes clusters. + kOps will not only help you create, destroy, upgrade and maintain production-grade, highly available, Kubernetes cluster, but it will also provision the necessary cloud infrastructure. aka: tags: - tool - operation --- - A CLI tool that helps you create, destroy, upgrade and maintain production-grade, highly available, Kubernetes clusters. + +`kOps` will not only help you create, destroy, upgrade and maintain production-grade, highly available, Kubernetes cluster, but it will also provision the necessary cloud infrastructure. {{< note >}} -kops has general availability support only for AWS. 
-Support for using kops with GCE and VMware vSphere are in alpha. +AWS (Amazon Web Services) is currently officially supported, with DigitalOcean, GCE and OpenStack in beta support, and Azure in alpha. {{< /note >}} -`kops` provisions your cluster with: - +`kOps` is an automated provisioning system: * Fully automated installation - * DNS-based cluster identification - * Self-healing: everything runs in Auto-Scaling Groups - * Limited OS support (Debian preferred, Ubuntu 16.04 supported, early support for CentOS & RHEL) - * High availability (HA) support - * The ability to directly provision, or to generate Terraform manifests - -You can also build your own cluster using {{< glossary_tooltip term_id="kubeadm" >}} as a building block. `kops` builds on the kubeadm work. + * Uses DNS to identify clusters + * Self-healing: everything runs in Auto-Scaling Groups + * Multiple OS support (Amazon Linux, Debian, Flatcar, RHEL, Rocky and Ubuntu) + * High-Availability support + * Can directly provision, or generate terraform manifests diff --git a/content/en/docs/reference/kubernetes-api/workload-resources/job-v1.md b/content/en/docs/reference/kubernetes-api/workload-resources/job-v1.md index cefcb950865c1..42cec53db7cfc 100644 --- a/content/en/docs/reference/kubernetes-api/workload-resources/job-v1.md +++ b/content/en/docs/reference/kubernetes-api/workload-resources/job-v1.md @@ -140,7 +140,8 @@ JobSpec describes how the job execution will look like. - **podFailurePolicy.rules.action** (string), required - Specifies the action taken on a pod failure when the requirements are satisfied. Possible values are: - FailJob: indicates that the pod's job is marked as Failed and all + Specifies the action taken on a pod failure when the requirements are satisfied. Possible values are: + - FailJob: indicates that the pod's job is marked as Failed and all running pods are terminated. - Ignore: indicates that the counter towards the .backoffLimit is not incremented and a replacement pod is created. @@ -176,7 +177,8 @@ JobSpec describes how the job execution will look like. - **podFailurePolicy.rules.onExitCodes.operator** (string), required - Represents the relationship between the container exit code(s) and the specified values. Containers completed with success (exit code 0) are excluded from the requirement check. Possible values are: - In: the requirement is satisfied if at least one container exit code + Represents the relationship between the container exit code(s) and the specified values. Containers completed with success (exit code 0) are excluded from the requirement check. Possible values are: + - In: the requirement is satisfied if at least one container exit code (might be multiple if there are multiple containers not restricted by the 'containerName' field) is in the set of specified values. - NotIn: the requirement is satisfied if at least one container exit code diff --git a/content/en/docs/reference/kubernetes-api/workload-resources/pod-v1.md b/content/en/docs/reference/kubernetes-api/workload-resources/pod-v1.md index 2bf88d6e43327..3b4dbd6ef8486 100644 --- a/content/en/docs/reference/kubernetes-api/workload-resources/pod-v1.md +++ b/content/en/docs/reference/kubernetes-api/workload-resources/pod-v1.md @@ -219,9 +219,12 @@ PodSpec is a description of a pod. - **topologySpreadConstraints.whenUnsatisfiable** (string), required - WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. 
- ScheduleAnyway tells the scheduler to schedule the pod in any location, + WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. + - DoNotSchedule (default) tells the scheduler not to schedule it. + - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. + A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it *more* imbalanced. It's a required field. diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md index 75974454588c5..46cf104bf9753 100644 --- a/content/en/docs/reference/labels-annotations-taints/_index.md +++ b/content/en/docs/reference/labels-annotations-taints/_index.md @@ -155,6 +155,32 @@ This label has been deprecated. Please use `kubernetes.io/arch` instead. This label has been deprecated. Please use `kubernetes.io/os` instead. +### kube-aggregator.kubernetes.io/automanaged {#kube-aggregator-kubernetesio-automanaged} + +Example: `kube-aggregator.kubernetes.io/automanaged: "onstart"` + +Used on: APIService + +The `kube-apiserver` sets this label on any APIService object that the API server has created automatically. The label marks how the control plane should manage that APIService. You should not add, modify, or remove this label by yourself. + +{{< note >}} +Automanaged APIService objects are deleted by kube-apiserver when it has no built-in or custom resource API corresponding to the API group/version of the APIService. +{{< /note >}} + +There are two possible values: +- `onstart`: The APIService should be reconciled when an API server starts up, but not otherwise. +- `true`: The API server should reconcile this APIService continuously. + +### service.alpha.kubernetes.io/tolerate-unready-endpoints (deprecated) + +Used on: StatefulSet + +This annotation on a Service denotes if the Endpoints controller should go ahead and create Endpoints for unready Pods. +Endpoints of these Services retain their DNS records and continue receiving +traffic for the Service from the moment the kubelet starts all containers in the pod +and marks it _Running_, til the kubelet stops all containers and deletes the pod from +the API server. + ### kubernetes.io/hostname {#kubernetesiohostname} Example: `kubernetes.io/hostname: "ip-172-20-114-199.ec2.internal"` @@ -294,6 +320,50 @@ See [topology.kubernetes.io/zone](#topologykubernetesiozone). {{< note >}} Starting in v1.17, this label is deprecated in favor of [topology.kubernetes.io/zone](#topologykubernetesiozone). {{< /note >}} +### pv.kubernetes.io/bind-completed {#pv-kubernetesiobind-completed} + +Example: `pv.kubernetes.io/bind-completed: "yes"` + +Used on: PersistentVolumeClaim + +When this annotation is set on a PersistentVolumeClaim (PVC), that indicates that the lifecycle +of the PVC has passed through initial binding setup. 
When present, that information changes +how the control plane interprets the state of PVC objects. +The value of this annotation does not matter to Kubernetes. + +### pv.kubernetes.io/bound-by-controller {#pv-kubernetesioboundby-controller} + +Example: `pv.kubernetes.io/bound-by-controller: "yes"` + +Used on: PersistentVolume, PersistentVolumeClaim + +If this annotation is set on a PersistentVolume or PersistentVolumeClaim, it indicates that a storage binding +(PersistentVolume → PersistentVolumeClaim, or PersistentVolumeClaim → PersistentVolume) was installed +by the {{< glossary_tooltip text="controller" term_id="controller" >}}. +If the annotation isn't set, and there is a storage binding in place, the absence of that annotation means that +the binding was done manually. The value of this annotation does not matter. + +### pv.kubernetes.io/provisioned-by {#pv-kubernetesiodynamically-provisioned} + +Example: `pv.kubernetes.io/provisioned-by: "kubernetes.io/rbd"` + +Used on: PersistentVolume + +This annotation is added to a PersistentVolume(PV) that has been dynamically provisioned by Kubernetes. +Its value is the name of volume plugin that created the volume. It serves both user (to show where a PV +comes from) and Kubernetes (to recognize dynamically provisioned PVs in its decisions). + +### pv.kubernetes.io/migrated-to {#pv-kubernetesio-migratedto} + +Example: `pv.kubernetes.io/migrated-to: pd.csi.storage.gke.io` + +Used on: PersistentVolume, PersistentVolumeClaim + +It is added to a PersistentVolume(PV) and PersistentVolumeClaim(PVC) that is supposed to be +dynamically provisioned/deleted by its corresponding CSI driver through the `CSIMigration` feature gate. +When this annotation is set, the Kubernetes components will "stand-down" and the `external-provisioner` +will act on the objects. + ### statefulset.kubernetes.io/pod-name {#statefulsetkubernetesiopod-name} Example: @@ -377,6 +447,25 @@ Used on: PersistentVolumeClaim This annotation will be added to dynamic provisioning required PVC. +### volume.kubernetes.io/selected-node + +Used on: PersistentVolumeClaim + +This annotation is added to a PVC that is triggered by a scheduler to be dynamically provisioned. Its value is the name of the selected node. + +### volumes.kubernetes.io/controller-managed-attach-detach + +Used on: Node + +If a node has set the annotation `volumes.kubernetes.io/controller-managed-attach-detach` +on itself, then its storage attach and detach operations are being managed +by the _volume attach/detach_ +{{< glossary_tooltip text="controller" term_id="controller" >}} running within the +{{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}. + +The value of the annotation isn't important; if this annotation exists on a node, +then storage attaches and detaches are controller managed. + ### node.kubernetes.io/windows-build {#nodekubernetesiowindows-build} Example: `node.kubernetes.io/windows-build: "10.0.17763"` @@ -769,6 +858,16 @@ created from a VolumeSnapshot. Refer to [Converting the volume mode of a Snapshot](/docs/concepts/storage/volume-snapshots/#convert-volume-mode) and the [Kubernetes CSI Developer Documentation](https://kubernetes-csi.github.io/docs/) for more information. +### scheduler.alpha.kubernetes.io/critical-pod (deprecated) + +Example: `scheduler.alpha.kubernetes.io/critical-pod: ""` + +Used on: Pod + +This annotation lets Kubernetes control plane know about a pod being a critical pod so that the descheduler will not remove this pod. 
+ +{{< note >}} Starting in v1.16, this annotation was removed in favor of [Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/). {{< /note >}} + ## Annotations used for audit diff --git a/content/en/docs/reference/networking/virtual-ips.md b/content/en/docs/reference/networking/virtual-ips.md index 4022317a82952..af3899a703d2e 100644 --- a/content/en/docs/reference/networking/virtual-ips.md +++ b/content/en/docs/reference/networking/virtual-ips.md @@ -14,7 +14,6 @@ mechanism for {{< glossary_tooltip term_id="service" text="Services">}} of `type` other than [`ExternalName`](/docs/concepts/services-networking/service/#externalname). - A question that pops up every now and then is why Kubernetes relies on proxying to forward inbound traffic to backends. What about other approaches? For example, would it be possible to configure DNS records that @@ -39,15 +38,13 @@ network proxying service on a computer. Although the `kube-proxy` executable su `cleanup` function, this function is not an official feature and thus is only available to use as-is. - -Some of the details in this reference refer to an example: the back end Pods for a stateless +Some of the details in this reference refer to an example: the backend Pods for a stateless image-processing workload, running with three replicas. Those replicas are fungible—frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that, nor should they need to keep track of the set of backends themselves. - ## Proxy modes @@ -87,7 +84,7 @@ to verify that backend Pods are working OK, so that kube-proxy in iptables mode only sees backends that test out as healthy. Doing this means you avoid having traffic sent via kube-proxy to a Pod that's known to have failed. -{{< figure src="/images/docs/services-iptables-overview.svg" title="Services overview diagram for iptables proxy" class="diagram-medium" >}} +{{< figure src="/images/docs/services-iptables-overview.svg" title="Virtual IP mechanism for Services, using iptables mode" class="diagram-medium" >}} #### Example {#packet-processing-iptables} @@ -111,6 +108,91 @@ redirected to the backend without rewriting the client IP address. This same basic flow executes when traffic comes in through a node-port or through a load-balancer, though in those cases the client IP address does get altered. +#### Optimizing iptables mode performance + +In large clusters (with tens of thousands of Pods and Services), the +iptables mode of kube-proxy may take a long time to update the rules +in the kernel when Services (or their EndpointSlices) change. You can adjust the syncing +behavior of kube-proxy via options in the [`iptables` section](/docs/reference/config-api/kube-proxy-config.v1alpha1/#kubeproxy-config-k8s-io-v1alpha1-KubeProxyIPTablesConfiguration) +of the +kube-proxy [configuration file](/docs/reference/config-api/kube-proxy-config.v1alpha1/) +(which you specify via `kube-proxy --config `): + +```yaml +... +iptables: + minSyncPeriod: 1s + syncPeriod: 30s +... +``` + +##### `minSyncPeriod` + +The `minSyncPeriod` parameter sets the minimum duration between +attempts to resynchronize iptables rules with the kernel. If it is +`0s`, then kube-proxy will always immediately synchronize the rules +every time any Service or Endpoint changes. This works fine in very +small clusters, but it results in a lot of redundant work when lots of +things change in a small time period. 
For example, if you have a +Service backed by a Deployment with 100 pods, and you delete the +Deployment, then with `minSyncPeriod: 0s`, kube-proxy would end up +removing the Service's Endpoints from the iptables rules one by one, +for a total of 100 updates. With a larger `minSyncPeriod`, multiple +Pod deletion events would get aggregated together, so kube-proxy might +instead end up making, say, 5 updates, each removing 20 endpoints, +which will be much more efficient in terms of CPU, and result in the +full set of changes being synchronized faster. + +The larger the value of `minSyncPeriod`, the more work that can be +aggregated, but the downside is that each individual change may end up +waiting up to the full `minSyncPeriod` before being processed, meaning +that the iptables rules spend more time being out-of-sync with the +current apiserver state. + +The default value of `1s` is a good compromise for small and medium +clusters. In large clusters, it may be necessary to set it to a larger +value. (Especially, if kube-proxy's +`sync_proxy_rules_duration_seconds` metric indicates an average +time much larger than 1 second, then bumping up `minSyncPeriod` may +make updates more efficient.) + +##### `syncPeriod` + +The `syncPeriod` parameter controls a handful of synchronization +operations that are not directly related to changes in individual +Services and Endpoints. In particular, it controls how quickly +kube-proxy notices if an external component has interfered with +kube-proxy's iptables rules. In large clusters, kube-proxy also only +performs certain cleanup operations once every `syncPeriod` to avoid +unnecessary work. + +For the most part, increasing `syncPeriod` is not expected to have much +impact on performance, but in the past, it was sometimes useful to set +it to a very large value (eg, `1h`). This is no longer recommended, +and is likely to hurt functionality more than it improves performance. + +##### Experimental performance improvements {#minimize-iptables-restore} + +{{< feature-state for_k8s_version="v1.26" state="alpha" >}} + +In Kubernetes 1.26, some new performance improvements were made to the +iptables proxy mode, but they are not enabled by default (and should +probably not be enabled in production clusters yet). To try them out, +enable the `MinimizeIPTablesRestore` [feature +gate](/docs/reference/command-line-tools-reference/feature-gates/) for +kube-proxy with `--feature-gates=MinimizeIPTablesRestore=true,…`. + +If you enable that feature gate and you were previously overriding +`minSyncPeriod`, you should try removing that override and letting +kube-proxy use the default value (`1s`) or at least a smaller value +than you were using before. + +If you notice kube-proxy's +`sync_proxy_rules_iptables_restore_failures_total` or +`sync_proxy_rules_iptables_partial_restore_failures_total` metrics +increasing after enabling this feature, that likely indicates you are +encountering bugs in the feature and you should file a bug report. + ### IPVS proxy mode {#proxy-mode-ipvs} In `ipvs` mode, kube-proxy watches Kubernetes Services and EndpointSlices, @@ -147,7 +229,7 @@ kernel modules are available. If the IPVS kernel modules are not detected, then falls back to running in iptables proxy mode. 
{{< /note >}} -{{< figure src="/images/docs/services-ipvs-overview.svg" title="Services overview diagram for IPVS proxy" class="diagram-medium" >}} +{{< figure src="/images/docs/services-ipvs-overview.svg" title="Virtual IP address mechanism for Services, using IPVS mode" class="diagram-medium" >}} ## Session affinity @@ -276,9 +358,11 @@ should have seen the node's health check failing and fully removed the node from ## {{% heading "whatsnext" %}} To learn more about Services, -read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/). +read [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/). You can also: -* Read about [Services](/docs/concepts/services-networking/service/) -* Read the [API reference](/docs/reference/kubernetes-api/service-resources/service-v1/) for the Service API \ No newline at end of file +* Read about [Services](/docs/concepts/services-networking/service/) as a concept +* Read about [Ingresses](/docs/concepts/services-networking/ingress/) as a concept +* Read the [API reference](/docs/reference/kubernetes-api/service-resources/service-v1/) for the Service API + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs.md index 46063a427e75f..db92db3f73189 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs.md @@ -17,6 +17,10 @@ Commands related to handling kubernetes certificates Commands related to handling kubernetes certificates +``` +kubeadm certs [flags] +``` + ### Options diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md index 9be11c4331d77..541d9892a1527 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md @@ -55,7 +55,7 @@ kubeadm config images list [flags] - + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_pull.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_pull.md index 56083e9edea28..3a78f26031234 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_pull.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_pull.md @@ -48,7 +48,7 @@ kubeadm config images pull [flags] - + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md index ac2897751e849..ad919d2e16ede 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md @@ -55,6 +55,7 @@ kubelet-finalize Updates settings relevant to the kubelet after TLS addon Install required addons for passing conformance tests /coredns Install the CoreDNS addon to a Kubernetes cluster /kube-proxy Install the kube-proxy addon to a Kubernetes cluster +show-join-command Show the join command for control-plane and worker node ``` @@ -138,7 +139,7 @@ kubeadm init [flags] - + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_all.md 
b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_all.md index ec2adcb93c2d0..dafa56360ac4e 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_all.md @@ -58,11 +58,18 @@ kubeadm init phase addon all [flags] + + + + + + + - + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_coredns.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_coredns.md index 2293447d0fc4c..3225abfba1d82 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_coredns.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_coredns.md @@ -37,11 +37,18 @@ kubeadm init phase addon coredns [flags] + + + + + + + - + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_kube-proxy.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_kube-proxy.md index edfddfa4b3e42..bfc3a74a5b91a 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_kube-proxy.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_kube-proxy.md @@ -58,6 +58,13 @@ kubeadm init phase addon kube-proxy [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_bootstrap-token.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_bootstrap-token.md index 769dd1903de0d..27b722f289b80 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_bootstrap-token.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_bootstrap-token.md @@ -47,6 +47,13 @@ kubeadm init phase bootstrap-token [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_all.md index 18eee93ce36a5..a6723674affc4 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_all.md @@ -65,6 +65,13 @@ kubeadm init phase certs all [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-etcd-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-etcd-client.md index f78bf3d7c5f5e..416c59933ee12 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-etcd-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-etcd-client.md @@ -48,6 +48,13 @@ kubeadm init phase certs apiserver-etcd-client [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-kubelet-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-kubelet-client.md index 1b6bc016c68f7..e4128aedead89 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-kubelet-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-kubelet-client.md @@ -48,6 +48,13 @@ 
kubeadm init phase certs apiserver-kubelet-client [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver.md index 9e22d779d54a9..ff6d9adc00586 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver.md @@ -69,6 +69,13 @@ kubeadm init phase certs apiserver [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_ca.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_ca.md index 54f9c74f74f07..7f333a5da4766 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_ca.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_ca.md @@ -48,6 +48,13 @@ kubeadm init phase certs ca [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-ca.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-ca.md index f7236ba5c8291..3c72fcdf6a52c 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-ca.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-ca.md @@ -48,6 +48,13 @@ kubeadm init phase certs etcd-ca [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-healthcheck-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-healthcheck-client.md index 0c0389c18597c..708e244f2bbb3 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-healthcheck-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-healthcheck-client.md @@ -48,6 +48,13 @@ kubeadm init phase certs etcd-healthcheck-client [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-peer.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-peer.md index c2b863f843d48..54c17d5196124 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-peer.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-peer.md @@ -50,6 +50,13 @@ kubeadm init phase certs etcd-peer [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-server.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-server.md index 1770f38815b82..96eeba4003f5a 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-server.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-server.md @@ -50,6 +50,13 @@ kubeadm init phase certs etcd-server [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-ca.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-ca.md index 22cc9f5ddce5b..1c425e7a2f000 100644 --- 
a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-ca.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-ca.md @@ -48,6 +48,13 @@ kubeadm init phase certs front-proxy-ca [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-client.md index e3d9602901bbb..12867c61a67f5 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-client.md @@ -48,6 +48,13 @@ kubeadm init phase certs front-proxy-client [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_all.md index 6df0ea58f6d2b..e168a6bb87732 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_all.md @@ -101,7 +101,7 @@ kubeadm init phase control-plane all [flags] - + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_apiserver.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_apiserver.md index 178efcf9ed13f..3f69982f8f19a 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_apiserver.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_apiserver.md @@ -83,7 +83,7 @@ kubeadm init phase control-plane apiserver [flags] - + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd_local.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd_local.md index b5c57f4d2f747..d5c168024f4da 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd_local.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd_local.md @@ -56,6 +56,13 @@ kubeadm init phase etcd local [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_admin.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_admin.md index cd01b778df362..dc6264e2abe60 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_admin.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_admin.md @@ -65,6 +65,13 @@ kubeadm init phase kubeconfig admin [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md index 52de2366f344a..28be5441026d6 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md @@ -65,6 +65,13 @@ kubeadm init phase kubeconfig all [flags] + + + + + + + diff --git 
a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_controller-manager.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_controller-manager.md index c2a63d91c197e..5c1563f6375fb 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_controller-manager.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_controller-manager.md @@ -65,6 +65,13 @@ kubeadm init phase kubeconfig controller-manager [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md index 4ce731fc17e15..19b8cfe6728d8 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md @@ -67,6 +67,13 @@ kubeadm init phase kubeconfig kubelet [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_scheduler.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_scheduler.md index 86588c83caff0..580f99d255ac2 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_scheduler.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_scheduler.md @@ -65,6 +65,13 @@ kubeadm init phase kubeconfig scheduler [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_all.md index 680986bebe0b6..62278d5c12319 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_all.md @@ -51,6 +51,13 @@ kubeadm init phase kubelet-finalize all [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_experimental-cert-rotation.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_experimental-cert-rotation.md index a6cd628cec53a..93c521157bb5a 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_experimental-cert-rotation.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_experimental-cert-rotation.md @@ -44,6 +44,13 @@ kubeadm init phase kubelet-finalize experimental-cert-rotation [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-start.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-start.md index 156c101fcb917..2dd93c707d6be 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-start.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-start.md @@ -51,6 +51,13 @@ kubeadm init phase kubelet-start [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_mark-control-plane.md 
b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_mark-control-plane.md index 8e88958204f3f..685dfdcab50e3 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_mark-control-plane.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_mark-control-plane.md @@ -47,6 +47,13 @@ kubeadm init phase mark-control-plane [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md index 03f02c1251feb..21fc3f7feae26 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md @@ -44,6 +44,13 @@ kubeadm init phase preflight [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_show-join-command.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_show-join-command.md new file mode 100644 index 0000000000000..23abc5671cdc6 --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_show-join-command.md @@ -0,0 +1,65 @@ + + + +Show the join command for control-plane and worker node + +### Synopsis + + +Show the join command for control-plane and worker node + +``` +kubeadm init phase show-join-command [flags] +``` + +### Options + +
    ++++ + + + + + + + + + + +
    -h, --help

    help for show-join-command

    + + + +### Options inherited from parent commands + + ++++ + + + + + + + + + + +
    --rootfs string

    [EXPERIMENTAL] The path to the 'real' host root filesystem.

    + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md index f7717958b6437..9915f522ab954 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md @@ -15,7 +15,7 @@ Upload certificates to kubeadm-certs ### Synopsis -This command is not meant to be run on its own. See list of available subcommands. +Upload control plane certificates to the kubeadm-certs Secret ``` kubeadm init phase upload-certs [flags] @@ -44,6 +44,13 @@ kubeadm init phase upload-certs [flags]

    Path to a kubeadm configuration file.

    + +--dry-run + + +

    Don't apply any changes; just output what would be done.

    + + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_all.md index 78867154bc286..2b15abac96987 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_all.md @@ -37,6 +37,13 @@ kubeadm init phase upload-config all [flags]

    Path to a kubeadm configuration file.

    + +--dry-run + + +

    Don't apply any changes; just output what would be done.

    + + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubeadm.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubeadm.md index b6fe788708157..d8f466b40943d 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubeadm.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubeadm.md @@ -46,6 +46,13 @@ kubeadm init phase upload-config kubeadm [flags]

    Path to a kubeadm configuration file.

    + +--dry-run + + +

    Don't apply any changes; just output what would be done.

    + + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubelet.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubelet.md index d0288825bb0be..ae2fd63e83895 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubelet.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubelet.md @@ -44,6 +44,13 @@ kubeadm init phase upload-config kubelet [flags]

    Path to a kubeadm configuration file.

    + +--dry-run + + +

    Don't apply any changes; just output what would be done.

    + + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join.md index b671f2a64eb14..c78cd9c9cccce 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join.md @@ -114,7 +114,7 @@ kubeadm join [api-server-endpoint] [flags] --config string -

    Path to kubeadm config file.

    +

    Path to a kubeadm configuration file.

    diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md index e6cdc8095fa05..4a94a75f49d67 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md @@ -41,7 +41,7 @@ kubeadm join phase control-plane-join all [flags] --config string -

    Path to kubeadm config file.

    +

    Path to a kubeadm configuration file.

    @@ -51,6 +51,13 @@ kubeadm join phase control-plane-join all [flags]

    Create a new control plane instance on this node

    + +--dry-run + + +

    Don't apply any changes; just output what would be done.

    + + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md index 1eadda3fd744d..637e909c3b914 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md @@ -41,7 +41,7 @@ kubeadm join phase control-plane-join etcd [flags] --config string -

    Path to kubeadm config file.

    +

    Path to a kubeadm configuration file.

    @@ -51,6 +51,13 @@ kubeadm join phase control-plane-join etcd [flags]

    Create a new control plane instance on this node

    + +--dry-run + + +

    Don't apply any changes; just output what would be done.

    + + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_mark-control-plane.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_mark-control-plane.md index 08ce1ada2a83b..888b17e17ee42 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_mark-control-plane.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_mark-control-plane.md @@ -34,7 +34,7 @@ kubeadm join phase control-plane-join mark-control-plane [flags] --config string -

    Path to kubeadm config file.

    +

    Path to a kubeadm configuration file.

    @@ -44,6 +44,13 @@ kubeadm join phase control-plane-join mark-control-plane [flags]

    Create a new control plane instance on this node

    + +--dry-run + + +

    Don't apply any changes; just output what would be done.

    + + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_update-status.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_update-status.md index a8b6a3c8ea429..c2e387505ca60 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_update-status.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_update-status.md @@ -41,7 +41,7 @@ kubeadm join phase control-plane-join update-status [flags] --config string -

    Path to kubeadm config file.

    +

    Path to a kubeadm configuration file.

    diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md index 70843944d602d..266906c653d36 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md @@ -55,7 +55,7 @@ kubeadm join phase control-plane-prepare all [api-server-endpoint] [flags] --config string -

    Path to kubeadm config file.

    +

    Path to a kubeadm configuration file.

    @@ -93,6 +93,13 @@ kubeadm join phase control-plane-prepare all [api-server-endpoint] [flags]

    For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.

    + +--dry-run + + +

    Don't apply any changes; just output what would be done.

    + + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md index b9f2357e03e86..ee1234e5a134e 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md @@ -41,7 +41,7 @@ kubeadm join phase control-plane-prepare certs [api-server-endpoint] [flags] --config string -

    Path to kubeadm config file.

    +

    Path to a kubeadm configuration file.

    @@ -79,6 +79,13 @@ kubeadm join phase control-plane-prepare certs [api-server-endpoint] [flags]

    For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.

    + +--dry-run + + +

    Don't apply any changes; just output what would be done.

    + + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_control-plane.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_control-plane.md index 71689c10872ca..b6f19a68a2304 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_control-plane.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_control-plane.md @@ -48,7 +48,7 @@ kubeadm join phase control-plane-prepare control-plane [flags] --config string -

    Path to kubeadm config file.

    +

    Path to a kubeadm configuration file.

    @@ -58,6 +58,13 @@ kubeadm join phase control-plane-prepare control-plane [flags]

    Create a new control plane instance on this node

    + +--dry-run + + +

    Don't apply any changes; just output what would be done.

    + + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_download-certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_download-certs.md index 494212f9a1c53..019ea5cb5cfae 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_download-certs.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_download-certs.md @@ -41,7 +41,7 @@ kubeadm join phase control-plane-prepare download-certs [api-server-endpoint] [f --config string -

    Path to kubeadm config file.

    +

    Path to a kubeadm configuration file.

    @@ -79,6 +79,13 @@ kubeadm join phase control-plane-prepare download-certs [api-server-endpoint] [f

    For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.

    + +--dry-run + + +

    Don't apply any changes; just output what would be done.

    + + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_kubeconfig.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_kubeconfig.md index 2bb7cbf4aa731..d12f102bbb985 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_kubeconfig.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_kubeconfig.md @@ -41,7 +41,7 @@ kubeadm join phase control-plane-prepare kubeconfig [api-server-endpoint] [flags --config string -

    Path to kubeadm config file.

    +

    Path to a kubeadm configuration file.

    @@ -79,6 +79,13 @@ kubeadm join phase control-plane-prepare kubeconfig [api-server-endpoint] [flags

    For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.

    + +--dry-run + + +

    Don't apply any changes; just output what would be done.

    + + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md index 8f76d4227d2b9..1902c7cc2c0b4 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md @@ -34,7 +34,7 @@ kubeadm join phase kubelet-start [api-server-endpoint] [flags] --config string -

    Path to kubeadm config file.

    +

    Path to a kubeadm configuration file.

    @@ -72,6 +72,13 @@ kubeadm join phase kubelet-start [api-server-endpoint] [flags]

    For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.

    + +--dry-run + + +

    Don't apply any changes; just output what would be done.

    + + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_preflight.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_preflight.md index 23000e4fcda89..ecfb735e7efc2 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_preflight.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_preflight.md @@ -62,7 +62,7 @@ kubeadm join phase preflight [api-server-endpoint] [flags] --config string -

    Path to kubeadm config file.

    +

    Path to a kubeadm configuration file.

    @@ -107,6 +107,13 @@ kubeadm join phase preflight [api-server-endpoint] [flags]

    For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.

    + +--dry-run + + +

    Don't apply any changes; just output what would be done.

    + + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md index 7d632e23d4f85..1d979bfcb962b 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md @@ -45,6 +45,13 @@ kubeadm reset [flags]

    The path to the directory where the certificates are stored. If specified, clean this directory.

    + +--cleanup-tmp-dir + + +

    Cleanup the "/etc/kubernetes/tmp" directory

    + + --cri-socket string diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_cleanup-node.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_cleanup-node.md index 9fadcb29bbf82..f62de85dd351a 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_cleanup-node.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_cleanup-node.md @@ -37,6 +37,13 @@ kubeadm reset phase cleanup-node [flags]

    The path to the directory where the certificates are stored. If specified, clean this directory.

    + +--cleanup-tmp-dir + + +

    Cleanup the "/etc/kubernetes/tmp" directory

    + + --cri-socket string @@ -44,6 +51,13 @@ kubeadm reset phase cleanup-node [flags]

    Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.

    + +--dry-run + + +

    Don't apply any changes; just output what would be done.

    + + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_preflight.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_preflight.md index 14298c2f4891e..dd074f8ecfa60 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_preflight.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_preflight.md @@ -30,6 +30,13 @@ kubeadm reset phase preflight [flags] + +--dry-run + + +

    Don't apply any changes; just output what would be done.

    + + -f, --force diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_remove-etcd-member.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_remove-etcd-member.md index 3218bb0d9fe4c..54e7cf0e1b2d0 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_remove-etcd-member.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_remove-etcd-member.md @@ -30,6 +30,13 @@ kubeadm reset phase remove-etcd-member [flags] + +--dry-run + + +

    Don't apply any changes; just output what would be done.

    + + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md index 1302e50d38315..2d1d3aa37dc97 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md @@ -76,7 +76,7 @@ kubeadm upgrade apply [version] --feature-gates string -

    A set of key=value pairs that describe feature gates for various features. Options are:
    PublicKeysECDSA=true|false (ALPHA - default=false)
    RootlessControlPlane=true|false (ALPHA - default=false)
    UnversionedKubeletConfigMap=true|false (default=true)

    +

    A set of key=value pairs that describe feature gates for various features. Options are:
    PublicKeysECDSA=true|false (ALPHA - default=false)
    RootlessControlPlane=true|false (ALPHA - default=false)

    diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md index 28ab989a84490..d235a0652645f 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md @@ -55,7 +55,7 @@ kubeadm upgrade plan [version] [flags] --feature-gates string -

    A set of key=value pairs that describe feature gates for various features. Options are:
    PublicKeysECDSA=true|false (ALPHA - default=false)
    RootlessControlPlane=true|false (ALPHA - default=false)
    UnversionedKubeletConfigMap=true|false (default=true)

    +

    A set of key=value pairs that describe feature gates for various features. Options are:
    PublicKeysECDSA=true|false (ALPHA - default=false)
    RootlessControlPlane=true|false (ALPHA - default=false)

    diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md index d4f22871ed13d..107473aeeeedc 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md @@ -244,7 +244,7 @@ it off regardless. Doing so will disable the ability to use the `--discovery-tok * Fetch the `cluster-info` file from the API Server: ```shell -kubectl -n kube-public get cm cluster-info -o yaml | grep "kubeconfig:" -A11 | grep "apiVersion" -A10 | sed "s/ //" | tee cluster-info.yaml +kubectl -n kube-public get cm cluster-info -o jsonpath='{.data.kubeconfig}' | tee cluster-info.yaml ``` The output is similar to this: diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md index 084b54d7f4a20..3c7c0fa8ccbb5 100644 --- a/content/en/docs/reference/using-api/api-concepts.md +++ b/content/en/docs/reference/using-api/api-concepts.md @@ -83,6 +83,16 @@ namespace (`/apis/GROUP/VERSION/namespaces/NAMESPACE/*`). A namespace-scoped res type will be deleted when its namespace is deleted and access to that resource type is controlled by authorization checks on the namespace scope. +Note: core resources use `/api` instead of `/apis` and omit the GROUP path segment. + +Examples: +* `/api/v1/namespaces` +* `/api/v1/pods` +* `/api/v1/namespaces/my-namespace/pods` +* `/apis/apps/v1/deployments` +* `/apis/apps/v1/namespaces/my-namespace/deployments` +* `/apis/apps/v1/namespaces/my-namespace/deployments/my-deployment` + You can also access collections of resources (for example: listing all Nodes). The following paths are used to retrieve collections and resources: @@ -737,7 +747,7 @@ by default. The `kubectl` tool uses the `--validate` flag to set the level of field validation. Historically `--validate` was used to toggle client-side validation on or off as a boolean flag. Since Kubernetes 1.25, kubectl uses -server-side field validation when sending requests to a serer with this feature +server-side field validation when sending requests to a server with this feature enabled. Validation will fall back to client-side only when it cannot connect to an API server with field validation enabled. It accepts the values `ignore`, `warn`, diff --git a/content/en/docs/reference/using-api/server-side-apply.md b/content/en/docs/reference/using-api/server-side-apply.md index c40b168c94f5f..980ad7020fb69 100644 --- a/content/en/docs/reference/using-api/server-side-apply.md +++ b/content/en/docs/reference/using-api/server-side-apply.md @@ -366,12 +366,26 @@ There are two solutions: First, the user defines a new configuration containing only the `replicas` field: -{{< codenew file="application/ssa/nginx-deployment-replicas-only.yaml" >}} +```yaml +# Save this file as 'nginx-deployment-replicas-only.yaml'. +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment +spec: + replicas: 3 +``` + +{{< note >}} +The YAML file for SSA in this case only contains the fields you want to change. +You are not supposed to provide a fully compliant Deployment manifest if you only +want to modify the `spec.replicas` field using SSA. 
+{{< /note >}} The user applies that configuration using the field manager name `handover-to-hpa`: ```shell -kubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment-replicas-only.yaml \ +kubectl apply -f nginx-deployment-replicas-only.yaml \ --server-side --field-manager=handover-to-hpa \ --validate=false ``` diff --git a/content/en/docs/setup/best-practices/certificates.md b/content/en/docs/setup/best-practices/certificates.md index 91687ff48b55d..20229b8c04d28 100644 --- a/content/en/docs/setup/best-practices/certificates.md +++ b/content/en/docs/setup/best-practices/certificates.md @@ -9,12 +9,12 @@ weight: 50 Kubernetes requires PKI certificates for authentication over TLS. -If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/), the certificates that your cluster requires are automatically generated. -You can also generate your own certificates -- for example, to keep your private keys more secure by not storing them on the API server. +If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/), the certificates +that your cluster requires are automatically generated. +You can also generate your own certificates -- for example, to keep your private keys more secure +by not storing them on the API server. This page explains the certificates that your cluster requires. - - ## How certificates are used by your cluster @@ -33,24 +33,30 @@ Kubernetes requires PKI for the following operations: * Client and server certificates for the [front-proxy](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) {{< note >}} -`front-proxy` certificates are required only if you run kube-proxy to support [an extension API server](/docs/tasks/extend-kubernetes/setup-extension-api-server/). +`front-proxy` certificates are required only if you run kube-proxy to support +[an extension API server](/docs/tasks/extend-kubernetes/setup-extension-api-server/). {{< /note >}} etcd also implements mutual TLS to authenticate clients and peers. ## Where certificates are stored -If you install Kubernetes with kubeadm, most certificates are stored in `/etc/kubernetes/pki`. All paths in this documentation are relative to that directory, with the exception of user account certificates which kubeadm places in `/etc/kubernetes`. +If you install Kubernetes with kubeadm, most certificates are stored in `/etc/kubernetes/pki`. +All paths in this documentation are relative to that directory, with the exception of user account +certificates which kubeadm places in `/etc/kubernetes`. ## Configure certificates manually -If you don't want kubeadm to generate the required certificates, you can create them using a single root CA or by providing all certificates. See [Certificates](/docs/tasks/administer-cluster/certificates/) for details on creating your own certificate authority. -See [Certificate Management with kubeadm](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/) for more on managing certificates. - +If you don't want kubeadm to generate the required certificates, you can create them using a +single root CA or by providing all certificates. See [Certificates](/docs/tasks/administer-cluster/certificates/) +for details on creating your own certificate authority. See +[Certificate Management with kubeadm](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/) +for more on managing certificates. ### Single root CA -You can create a single root CA, controlled by an administrator. 
This root CA can then create multiple intermediate CAs, and delegate all further creation to Kubernetes itself. +You can create a single root CA, controlled by an administrator. This root CA can then create +multiple intermediate CAs, and delegate all further creation to Kubernetes itself. Required CAs: @@ -60,7 +66,8 @@ Required CAs: | etcd/ca.crt,key | etcd-ca | For all etcd-related functions | | front-proxy-ca.crt,key | kubernetes-front-proxy-ca | For the [front-end proxy](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) | -On top of the above CAs, it is also necessary to get a public/private key pair for service account management, `sa.key` and `sa.pub`. +On top of the above CAs, it is also necessary to get a public/private key pair for service account +management, `sa.key` and `sa.pub`. The following example illustrates the CA key and certificate files shown in the previous table: ``` @@ -71,27 +78,30 @@ The following example illustrates the CA key and certificate files shown in the /etc/kubernetes/pki/front-proxy-ca.crt /etc/kubernetes/pki/front-proxy-ca.key ``` + ### All certificates If you don't wish to copy the CA private keys to your cluster, you can generate all certificates yourself. Required certificates: -| Default CN | Parent CA | O (in Subject) | kind | hosts (SAN) | -|-------------------------------|---------------------------|----------------|----------------------------------------|---------------------------------------------| -| kube-etcd | etcd-ca | | server, client | ``, ``, `localhost`, `127.0.0.1` | -| kube-etcd-peer | etcd-ca | | server, client | ``, ``, `localhost`, `127.0.0.1` | -| kube-etcd-healthcheck-client | etcd-ca | | client | | -| kube-apiserver-etcd-client | etcd-ca | system:masters | client | | -| kube-apiserver | kubernetes-ca | | server | ``, ``, ``, `[1]` | -| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | | -| front-proxy-client | kubernetes-front-proxy-ca | | client | | +| Default CN | Parent CA | O (in Subject) | kind | hosts (SAN) | +|-------------------------------|---------------------------|----------------|------------------|-----------------------------------------------------| +| kube-etcd | etcd-ca | | server, client | ``, ``, `localhost`, `127.0.0.1` | +| kube-etcd-peer | etcd-ca | | server, client | ``, ``, `localhost`, `127.0.0.1` | +| kube-etcd-healthcheck-client | etcd-ca | | client | | +| kube-apiserver-etcd-client | etcd-ca | system:masters | client | | +| kube-apiserver | kubernetes-ca | | server | ``, ``, ``, `[1]` | +| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | | +| front-proxy-client | kubernetes-front-proxy-ca | | client | | [1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/) the load balancer stable IP and/or DNS name, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`, `kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`) -where `kind` maps to one or more of the [x509 key usage](https://pkg.go.dev/k8s.io/api/certificates/v1beta1#KeyUsage) types: +where `kind` maps to one or more of the x509 key usage, which is also documented in the +`.spec.usages` of a [CertificateSigningRequest](/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1#CertificateSigningRequest) +type: | kind | Key usage | |--------|---------------------------------------------------------------------------------| @@ -99,15 +109,18 @@ where `kind` maps to one or more 
of the [x509 key usage](https://pkg.go.dev/k8s. | client | digital signature, key encipherment, client auth | {{< note >}} -Hosts/SAN listed above are the recommended ones for getting a working cluster; if required by a specific setup, it is possible to add additional SANs on all the server certificates. +Hosts/SAN listed above are the recommended ones for getting a working cluster; if required by a +specific setup, it is possible to add additional SANs on all the server certificates. {{< /note >}} {{< note >}} For kubeadm users only: -* The scenario where you are copying to your cluster CA certificates without private keys is referred as external CA in the kubeadm documentation. -* If you are comparing the above list with a kubeadm generated PKI, please be aware that `kube-etcd`, `kube-etcd-peer` and `kube-etcd-healthcheck-client` certificates - are not generated in case of external etcd. +* The scenario where you are copying to your cluster CA certificates without private keys is + referred as external CA in the kubeadm documentation. +* If you are comparing the above list with a kubeadm generated PKI, please be aware that + `kube-etcd`, `kube-etcd-peer` and `kube-etcd-healthcheck-client` certificates are not generated + in case of external etcd. {{< /note >}} @@ -116,31 +129,32 @@ For kubeadm users only: Certificates should be placed in a recommended path (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/)). Paths should be specified using the given argument regardless of location. -| Default CN | recommended key path | recommended cert path | command | key argument | cert argument | -|------------------------------|------------------------------|-----------------------------|----------------|------------------------------|-------------------------------------------| -| etcd-ca | etcd/ca.key | etcd/ca.crt | kube-apiserver | | --etcd-cafile | -| kube-apiserver-etcd-client | apiserver-etcd-client.key | apiserver-etcd-client.crt | kube-apiserver | --etcd-keyfile | --etcd-certfile | -| kubernetes-ca | ca.key | ca.crt | kube-apiserver | | --client-ca-file | -| kubernetes-ca | ca.key | ca.crt | kube-controller-manager | --cluster-signing-key-file | --client-ca-file, --root-ca-file, --cluster-signing-cert-file | -| kube-apiserver | apiserver.key | apiserver.crt | kube-apiserver | --tls-private-key-file | --tls-cert-file | -| kube-apiserver-kubelet-client| apiserver-kubelet-client.key | apiserver-kubelet-client.crt| kube-apiserver | --kubelet-client-key | --kubelet-client-certificate | -| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-apiserver | | --requestheader-client-ca-file | -| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-controller-manager | | --requestheader-client-ca-file | -| front-proxy-client | front-proxy-client.key | front-proxy-client.crt | kube-apiserver | --proxy-client-key-file | --proxy-client-cert-file | -| etcd-ca | etcd/ca.key | etcd/ca.crt | etcd | | --trusted-ca-file, --peer-trusted-ca-file | -| kube-etcd | etcd/server.key | etcd/server.crt | etcd | --key-file | --cert-file | -| kube-etcd-peer | etcd/peer.key | etcd/peer.crt | etcd | --peer-key-file | --peer-cert-file | -| etcd-ca | | etcd/ca.crt | etcdctl | | --cacert | -| kube-etcd-healthcheck-client | etcd/healthcheck-client.key | etcd/healthcheck-client.crt | etcdctl | --key | --cert | +| Default CN | recommended key path | recommended cert path | command | key argument | cert argument | 
+|------------------------------|------------------------------|-----------------------------|-------------------------|------------------------------|-------------------------------------------| +| etcd-ca | etcd/ca.key | etcd/ca.crt | kube-apiserver | | --etcd-cafile | +| kube-apiserver-etcd-client | apiserver-etcd-client.key | apiserver-etcd-client.crt | kube-apiserver | --etcd-keyfile | --etcd-certfile | +| kubernetes-ca | ca.key | ca.crt | kube-apiserver | | --client-ca-file | +| kubernetes-ca | ca.key | ca.crt | kube-controller-manager | --cluster-signing-key-file | --client-ca-file, --root-ca-file, --cluster-signing-cert-file | +| kube-apiserver | apiserver.key | apiserver.crt | kube-apiserver | --tls-private-key-file | --tls-cert-file | +| kube-apiserver-kubelet-client| apiserver-kubelet-client.key | apiserver-kubelet-client.crt| kube-apiserver | --kubelet-client-key | --kubelet-client-certificate | +| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-apiserver | | --requestheader-client-ca-file | +| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-controller-manager | | --requestheader-client-ca-file | +| front-proxy-client | front-proxy-client.key | front-proxy-client.crt | kube-apiserver | --proxy-client-key-file | --proxy-client-cert-file | +| etcd-ca | etcd/ca.key | etcd/ca.crt | etcd | | --trusted-ca-file, --peer-trusted-ca-file | +| kube-etcd | etcd/server.key | etcd/server.crt | etcd | --key-file | --cert-file | +| kube-etcd-peer | etcd/peer.key | etcd/peer.crt | etcd | --peer-key-file | --peer-cert-file | +| etcd-ca | | etcd/ca.crt | etcdctl | | --cacert | +| kube-etcd-healthcheck-client | etcd/healthcheck-client.key | etcd/healthcheck-client.crt | etcdctl | --key | --cert | Same considerations apply for the service account key pair: -| private key path | public key path | command | argument | -|------------------------------|-----------------------------|-------------------------|--------------------------------------| -| sa.key | | kube-controller-manager | --service-account-private-key-file | -| | sa.pub | kube-apiserver | --service-account-key-file | +| private key path | public key path | command | argument | +|-------------------|------------------|-------------------------|--------------------------------------| +| sa.key | | kube-controller-manager | --service-account-private-key-file | +| | sa.pub | kube-apiserver | --service-account-key-file | -The following example illustrates the file paths [from the previous tables](/docs/setup/best-practices/certificates/#certificate-paths) you need to provide if you are generating all of your own keys and certificates: +The following example illustrates the file paths [from the previous tables](#certificate-paths) +you need to provide if you are generating all of your own keys and certificates: ``` /etc/kubernetes/pki/etcd/ca.key @@ -170,15 +184,17 @@ The following example illustrates the file paths [from the previous tables](/doc You must manually configure these administrator account and service accounts: -| filename | credential name | Default CN | O (in Subject) | -|-------------------------|----------------------------|--------------------------------|----------------| -| admin.conf | default-admin | kubernetes-admin | system:masters | +| filename | credential name | Default CN | O (in Subject) | +|-------------------------|----------------------------|-------------------------------------|----------------| +| admin.conf | default-admin | kubernetes-admin | system:masters | | kubelet.conf | 
default-auth | system:node:`` (see note) | system:nodes | -| controller-manager.conf | default-controller-manager | system:kube-controller-manager | | -| scheduler.conf | default-scheduler | system:kube-scheduler | | +| controller-manager.conf | default-controller-manager | system:kube-controller-manager | | +| scheduler.conf | default-scheduler | system:kube-scheduler | | {{< note >}} -The value of `` for `kubelet.conf` **must** match precisely the value of the node name provided by the kubelet as it registers with the apiserver. For further details, read the [Node Authorization](/docs/reference/access-authn-authz/node/). +The value of `` for `kubelet.conf` **must** match precisely the value of the node name +provided by the kubelet as it registers with the apiserver. For further details, read the +[Node Authorization](/docs/reference/access-authn-authz/node/). {{< /note >}} 1. For each config, generate an x509 cert/key pair with the given CN and O. @@ -196,7 +212,7 @@ These files are used as follows: | filename | command | comment | |-------------------------|-------------------------|-----------------------------------------------------------------------| -| admin.conf | kubectl | Configures administrator user for the cluster | +| admin.conf | kubectl | Configures administrator user for the cluster | | kubelet.conf | kubelet | One required for each node in the cluster. | | controller-manager.conf | kube-controller-manager | Must be added to manifest in `manifests/kube-controller-manager.yaml` | | scheduler.conf | kube-scheduler | Must be added to manifest in `manifests/kube-scheduler.yaml` | diff --git a/content/en/docs/setup/best-practices/cluster-large.md b/content/en/docs/setup/best-practices/cluster-large.md index 55be39a299305..808a1c47510a3 100644 --- a/content/en/docs/setup/best-practices/cluster-large.md +++ b/content/en/docs/setup/best-practices/cluster-large.md @@ -9,13 +9,13 @@ weight: 10 A cluster is a set of {{< glossary_tooltip text="nodes" term_id="node" >}} (physical or virtual machines) running Kubernetes agents, managed by the {{< glossary_tooltip text="control plane" term_id="control-plane" >}}. -Kubernetes {{< param "version" >}} supports clusters with up to 5000 nodes. More specifically, +Kubernetes {{< param "version" >}} supports clusters with up to 5,000 nodes. More specifically, Kubernetes is designed to accommodate configurations that meet *all* of the following criteria: * No more than 110 pods per node -* No more than 5000 nodes -* No more than 150000 total pods -* No more than 300000 total containers +* No more than 5,000 nodes +* No more than 150,000 total pods +* No more than 300,000 total containers You can scale your cluster by adding or removing nodes. The way you do this depends on how your cluster is deployed. @@ -115,15 +115,15 @@ many nodes, consider the following: ## {{% heading "whatsnext" %}} -`VerticalPodAutoscaler` is a custom resource that you can deploy into your cluster +* `VerticalPodAutoscaler` is a custom resource that you can deploy into your cluster to help you manage resource requests and limits for pods. -Visit [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme) -to learn more about `VerticalPodAutoscaler` and how you can use it to scale cluster +Learn more about [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme) +and how you can use it to scale cluster components, including cluster-critical addons. 
-The [cluster autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#readme) +* The [cluster autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#readme) integrates with a number of cloud providers to help you run the right number of nodes for the level of resource demand in your cluster. -The [addon resizer](https://github.com/kubernetes/autoscaler/tree/master/addon-resizer#readme) -helps you in resizing the addons automatically as your cluster's scale changes. \ No newline at end of file +* The [addon resizer](https://github.com/kubernetes/autoscaler/tree/master/addon-resizer#readme) +helps you in resizing the addons automatically as your cluster's scale changes. diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md index 7984655797d5c..8b23806f6c817 100644 --- a/content/en/docs/setup/production-environment/container-runtimes.md +++ b/content/en/docs/setup/production-environment/container-runtimes.md @@ -56,11 +56,7 @@ For more information, see [Network Plugin Requirements](/docs/concepts/extend-ku ### Forwarding IPv4 and letting iptables see bridged traffic -Verify that the `br_netfilter` module is loaded by running `lsmod | grep br_netfilter`. - -To load it explicitly, run `sudo modprobe br_netfilter`. - -In order for a Linux node's iptables to correctly view bridged traffic, verify that `net.bridge.bridge-nf-call-iptables` is set to 1 in your `sysctl` config. For example: +Execute the below mentioned instructions: ```bash cat <}} @@ -217,6 +225,13 @@ that the CRI integration plugin is disabled by default. You need CRI support enabled to use containerd with Kubernetes. Make sure that `cri` is not included in the`disabled_plugins` list within `/etc/containerd/config.toml`; if you made changes to that file, also restart `containerd`. + +If you experience container crash loops after the initial cluster installation or after +installing a CNI, the containerd configuration provided with the package might contain +incompatible configuration parameters. Consider resetting the containerd configuration +with `containerd config default > /etc/containerd/config.toml` as specified in +[getting-started.md](https://github.com/containerd/containerd/blob/main/docs/getting-started.md#advanced-topics) +and then set the configuration parameters specified above accordingly. {{< /note >}} If you apply this change, make sure to restart containerd: diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md index b40a783264634..01f0d75f7d8f6 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md @@ -12,13 +12,15 @@ card: This page shows how to install the `kubeadm` toolbox. -For information on how to create a cluster with kubeadm once you have performed this installation process, see the [Creating a cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page. +For information on how to create a cluster with kubeadm once you have performed this installation process, +see the [Creating a cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page. ## {{% heading "prerequisites" %}} -* A compatible Linux host. 
The Kubernetes project provides generic instructions for Linux distributions based on Debian and Red Hat, and those distributions without a package manager. +* A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions + based on Debian and Red Hat, and those distributions without a package manager. * 2 GB or more of RAM per machine (any less will leave little room for your apps). * 2 CPUs or more. * Full network connectivity between all machines in the cluster (public or private network is fine). @@ -26,8 +28,6 @@ For information on how to create a cluster with kubeadm once you have performed * Certain ports are open on your machines. See [here](#check-required-ports) for more details. * Swap disabled. You **MUST** disable swap in order for the kubelet to work properly. - - ## Verify the MAC address and product_uuid are unique for every node {#verify-mac-address} @@ -46,9 +46,9 @@ If you have more than one network adapter, and your Kubernetes components are no route, we recommend you add IP route(s) so Kubernetes cluster addresses go via the appropriate adapter. ## Check required ports -These -[required ports](/docs/reference/ports-and-protocols/) -need to be open in order for Kubernetes components to communicate with each other. You can use tools like netcat to check if a port is open. For example: +These [required ports](/docs/reference/networking/ports-and-protocols/) +need to be open in order for Kubernetes components to communicate with each other. +You can use tools like netcat to check if a port is open. For example: ```shell nc 127.0.0.1 6443 diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md index 1baa12b3b7a0e..9509989daf62e 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md @@ -26,15 +26,15 @@ etcd cluster of three members that can be used by kubeadm during cluster creatio ## {{% heading "prerequisites" %}} -* Three hosts that can talk to each other over TCP ports 2379 and 2380. This +- Three hosts that can talk to each other over TCP ports 2379 and 2380. This document assumes these default ports. However, they are configurable through the kubeadm config file. -* Each host must have systemd and a bash compatible shell installed. -* Each host must [have a container runtime, kubelet, and kubeadm installed](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/). -* Each host should have access to the Kubernetes container image registry (`registry.k8s.io`) or list/pull the required etcd image using -`kubeadm config images list/pull`. This guide will set up etcd instances as -[static pods](/docs/tasks/configure-pod-container/static-pod/) managed by a kubelet. -* Some infrastructure to copy files between hosts. For example `ssh` and `scp` +- Each host must have systemd and a bash compatible shell installed. +- Each host must [have a container runtime, kubelet, and kubeadm installed](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/). +- Each host should have access to the Kubernetes container image registry (`registry.k8s.io`) or list/pull the required etcd image using + `kubeadm config images list/pull`. This guide will set up etcd instances as + [static pods](/docs/tasks/configure-pod-container/static-pod/) managed by a kubelet. 
+- Some infrastructure to copy files between hosts. For example `ssh` and `scp` can satisfy this requirement. @@ -42,7 +42,7 @@ etcd cluster of three members that can be used by kubeadm during cluster creatio ## Setting up the cluster The general approach is to generate all certs on one node and only distribute -the *necessary* files to the other nodes. +the _necessary_ files to the other nodes. {{< note >}} kubeadm contains all the necessary cryptographic machinery to generate @@ -59,242 +59,239 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set 1. Configure the kubelet to be a service manager for etcd. {{< note >}}You must do this on every host where etcd should be running.{{< /note >}} - Since etcd was created first, you must override the service priority by creating a new unit file - that has higher precedence than the kubeadm-provided kubelet unit file. + Since etcd was created first, you must override the service priority by creating a new unit file + that has higher precedence than the kubeadm-provided kubelet unit file. - ```sh - cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf - [Service] - ExecStart= - # Replace "systemd" with the cgroup driver of your container runtime. The default value in the kubelet is "cgroupfs". - # Replace the value of "--container-runtime-endpoint" for a different container runtime if needed. - ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock - Restart=always - EOF + ```sh + cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf + [Service] + ExecStart= + # Replace "systemd" with the cgroup driver of your container runtime. The default value in the kubelet is "cgroupfs". + # Replace the value of "--container-runtime-endpoint" for a different container runtime if needed. + ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock + Restart=always + EOF - systemctl daemon-reload - systemctl restart kubelet - ``` + systemctl daemon-reload + systemctl restart kubelet + ``` - Check the kubelet status to ensure it is running. + Check the kubelet status to ensure it is running. - ```sh - systemctl status kubelet - ``` + ```sh + systemctl status kubelet + ``` 1. Create configuration files for kubeadm. - Generate one kubeadm configuration file for each host that will have an etcd - member running on it using the following script. 
- - ```sh - # Update HOST0, HOST1 and HOST2 with the IPs of your hosts - export HOST0=10.0.0.6 - export HOST1=10.0.0.7 - export HOST2=10.0.0.8 - - # Update NAME0, NAME1 and NAME2 with the hostnames of your hosts - export NAME0="infra0" - export NAME1="infra1" - export NAME2="infra2" - - # Create temp directories to store files that will end up on other hosts - mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/ - - HOSTS=(${HOST0} ${HOST1} ${HOST2}) - NAMES=(${NAME0} ${NAME1} ${NAME2}) - - for i in "${!HOSTS[@]}"; do - HOST=${HOSTS[$i]} - NAME=${NAMES[$i]} - cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml - --- - apiVersion: "kubeadm.k8s.io/v1beta3" - kind: InitConfiguration - nodeRegistration: - name: ${NAME} - localAPIEndpoint: - advertiseAddress: ${HOST} - --- - apiVersion: "kubeadm.k8s.io/v1beta3" - kind: ClusterConfiguration - etcd: - local: - serverCertSANs: - - "${HOST}" - peerCertSANs: - - "${HOST}" - extraArgs: - initial-cluster: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380 - initial-cluster-state: new - name: ${NAME} - listen-peer-urls: https://${HOST}:2380 - listen-client-urls: https://${HOST}:2379 - advertise-client-urls: https://${HOST}:2379 - initial-advertise-peer-urls: https://${HOST}:2380 - EOF - done - ``` + Generate one kubeadm configuration file for each host that will have an etcd + member running on it using the following script. + + ```sh + # Update HOST0, HOST1 and HOST2 with the IPs of your hosts + export HOST0=10.0.0.6 + export HOST1=10.0.0.7 + export HOST2=10.0.0.8 + + # Update NAME0, NAME1 and NAME2 with the hostnames of your hosts + export NAME0="infra0" + export NAME1="infra1" + export NAME2="infra2" + + # Create temp directories to store files that will end up on other hosts + mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/ + + HOSTS=(${HOST0} ${HOST1} ${HOST2}) + NAMES=(${NAME0} ${NAME1} ${NAME2}) + + for i in "${!HOSTS[@]}"; do + HOST=${HOSTS[$i]} + NAME=${NAMES[$i]} + cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml + --- + apiVersion: "kubeadm.k8s.io/v1beta3" + kind: InitConfiguration + nodeRegistration: + name: ${NAME} + localAPIEndpoint: + advertiseAddress: ${HOST} + --- + apiVersion: "kubeadm.k8s.io/v1beta3" + kind: ClusterConfiguration + etcd: + local: + serverCertSANs: + - "${HOST}" + peerCertSANs: + - "${HOST}" + extraArgs: + initial-cluster: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380 + initial-cluster-state: new + name: ${NAME} + listen-peer-urls: https://${HOST}:2380 + listen-client-urls: https://${HOST}:2379 + advertise-client-urls: https://${HOST}:2379 + initial-advertise-peer-urls: https://${HOST}:2380 + EOF + done + ``` 1. Generate the certificate authority. - If you already have a CA then the only action that is copying the CA's `crt` and - `key` file to `/etc/kubernetes/pki/etcd/ca.crt` and - `/etc/kubernetes/pki/etcd/ca.key`. After those files have been copied, - proceed to the next step, "Create certificates for each member". + If you already have a CA then the only action that is copying the CA's `crt` and + `key` file to `/etc/kubernetes/pki/etcd/ca.crt` and + `/etc/kubernetes/pki/etcd/ca.key`. After those files have been copied, + proceed to the next step, "Create certificates for each member". - If you do not already have a CA then run this command on `$HOST0` (where you - generated the configuration files for kubeadm). 
+ If you do not already have a CA then run this command on `$HOST0` (where you + generated the configuration files for kubeadm). - ``` - kubeadm init phase certs etcd-ca - ``` + ``` + kubeadm init phase certs etcd-ca + ``` - This creates two files: + This creates two files: - - `/etc/kubernetes/pki/etcd/ca.crt` - - `/etc/kubernetes/pki/etcd/ca.key` + - `/etc/kubernetes/pki/etcd/ca.crt` + - `/etc/kubernetes/pki/etcd/ca.key` 1. Create certificates for each member. - ```sh - kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml - kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml - kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml - kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml - cp -R /etc/kubernetes/pki /tmp/${HOST2}/ - # cleanup non-reusable certificates - find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete - - kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml - kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml - kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml - kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml - cp -R /etc/kubernetes/pki /tmp/${HOST1}/ - find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete - - kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml - kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml - kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml - kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml - # No need to move the certs because they are for HOST0 - - # clean up certs that should not be copied off this host - find /tmp/${HOST2} -name ca.key -type f -delete - find /tmp/${HOST1} -name ca.key -type f -delete - ``` + ```sh + kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml + kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml + kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml + kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml + cp -R /etc/kubernetes/pki /tmp/${HOST2}/ + # cleanup non-reusable certificates + find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete + + kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml + kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml + kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml + kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml + cp -R /etc/kubernetes/pki /tmp/${HOST1}/ + find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete + + kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml + kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml + kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml + kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml + # No need to move the certs because they are for HOST0 + + # clean up certs that should not be copied off this host + find /tmp/${HOST2} -name ca.key -type f -delete + find /tmp/${HOST1} -name ca.key -type f -delete + ``` 1. Copy certificates and kubeadm configs. 
- The certificates have been generated and now they must be moved to their - respective hosts. + The certificates have been generated and now they must be moved to their + respective hosts. - ```sh - USER=ubuntu - HOST=${HOST1} - scp -r /tmp/${HOST}/* ${USER}@${HOST}: - ssh ${USER}@${HOST} - USER@HOST $ sudo -Es - root@HOST $ chown -R root:root pki - root@HOST $ mv pki /etc/kubernetes/ - ``` + ```sh + USER=ubuntu + HOST=${HOST1} + scp -r /tmp/${HOST}/* ${USER}@${HOST}: + ssh ${USER}@${HOST} + USER@HOST $ sudo -Es + root@HOST $ chown -R root:root pki + root@HOST $ mv pki /etc/kubernetes/ + ``` 1. Ensure all expected files exist. - The complete list of required files on `$HOST0` is: - - ``` - /tmp/${HOST0} - └── kubeadmcfg.yaml - --- - /etc/kubernetes/pki - ├── apiserver-etcd-client.crt - ├── apiserver-etcd-client.key - └── etcd - ├── ca.crt - ├── ca.key - ├── healthcheck-client.crt - ├── healthcheck-client.key - ├── peer.crt - ├── peer.key - ├── server.crt - └── server.key - ``` - - On `$HOST1`: - - ``` - $HOME - └── kubeadmcfg.yaml - --- - /etc/kubernetes/pki - ├── apiserver-etcd-client.crt - ├── apiserver-etcd-client.key - └── etcd - ├── ca.crt - ├── healthcheck-client.crt - ├── healthcheck-client.key - ├── peer.crt - ├── peer.key - ├── server.crt - └── server.key - ``` - - On `$HOST2`: - - ``` - $HOME - └── kubeadmcfg.yaml - --- - /etc/kubernetes/pki - ├── apiserver-etcd-client.crt - ├── apiserver-etcd-client.key - └── etcd - ├── ca.crt - ├── healthcheck-client.crt - ├── healthcheck-client.key - ├── peer.crt - ├── peer.key - ├── server.crt - └── server.key - ``` + The complete list of required files on `$HOST0` is: + + ``` + /tmp/${HOST0} + └── kubeadmcfg.yaml + --- + /etc/kubernetes/pki + ├── apiserver-etcd-client.crt + ├── apiserver-etcd-client.key + └── etcd + ├── ca.crt + ├── ca.key + ├── healthcheck-client.crt + ├── healthcheck-client.key + ├── peer.crt + ├── peer.key + ├── server.crt + └── server.key + ``` + + On `$HOST1`: + + ``` + $HOME + └── kubeadmcfg.yaml + --- + /etc/kubernetes/pki + ├── apiserver-etcd-client.crt + ├── apiserver-etcd-client.key + └── etcd + ├── ca.crt + ├── healthcheck-client.crt + ├── healthcheck-client.key + ├── peer.crt + ├── peer.key + ├── server.crt + └── server.key + ``` + + On `$HOST2`: + + ``` + $HOME + └── kubeadmcfg.yaml + --- + /etc/kubernetes/pki + ├── apiserver-etcd-client.crt + ├── apiserver-etcd-client.key + └── etcd + ├── ca.crt + ├── healthcheck-client.crt + ├── healthcheck-client.key + ├── peer.crt + ├── peer.key + ├── server.crt + └── server.key + ``` 1. Create the static pod manifests. - Now that the certificates and configs are in place it's time to create the - manifests. On each host run the `kubeadm` command to generate a static manifest - for etcd. + Now that the certificates and configs are in place it's time to create the + manifests. On each host run the `kubeadm` command to generate a static manifest + for etcd. - ```sh - root@HOST0 $ kubeadm init phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml - root@HOST1 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml - root@HOST2 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml - ``` + ```sh + root@HOST0 $ kubeadm init phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml + root@HOST1 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml + root@HOST2 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml + ``` 1. Optional: Check the cluster health. 
- ```sh - docker run --rm -it \ - --net host \ - -v /etc/kubernetes:/etc/kubernetes registry.k8s.io/etcd:${ETCD_TAG} etcdctl \ - --cert /etc/kubernetes/pki/etcd/peer.crt \ - --key /etc/kubernetes/pki/etcd/peer.key \ - --cacert /etc/kubernetes/pki/etcd/ca.crt \ - --endpoints https://${HOST0}:2379 endpoint health --cluster - ... - https://[HOST0 IP]:2379 is healthy: successfully committed proposal: took = 16.283339ms - https://[HOST1 IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms - https://[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms - ``` - - Set `${ETCD_TAG}` to the version tag of your etcd image. For example `3.4.3-0`. To see the etcd image and tag that kubeadm uses execute `kubeadm config images list --kubernetes-version ${K8S_VERSION}`, where `${K8S_VERSION}` is for example `v1.17.0`. - - Set `${HOST0}`to the IP address of the host you are testing. - - + ```sh + docker run --rm -it \ + --net host \ + -v /etc/kubernetes:/etc/kubernetes registry.k8s.io/etcd:${ETCD_TAG} etcdctl \ + --cert /etc/kubernetes/pki/etcd/peer.crt \ + --key /etc/kubernetes/pki/etcd/peer.key \ + --cacert /etc/kubernetes/pki/etcd/ca.crt \ + --endpoints https://${HOST0}:2379 endpoint health --cluster + ... + https://[HOST0 IP]:2379 is healthy: successfully committed proposal: took = 16.283339ms + https://[HOST1 IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms + https://[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms + ``` + + - Set `${ETCD_TAG}` to the version tag of your etcd image. For example `3.4.3-0`. To see the etcd image and tag that kubeadm uses execute `kubeadm config images list --kubernetes-version ${K8S_VERSION}`, where `${K8S_VERSION}` is for example `v1.17.0`. + - Set `${HOST0}`to the IP address of the host you are testing. ## {{% heading "whatsnext" %}} - Once you have an etcd cluster with 3 working members, you can continue setting up a highly available control plane using the [external etcd method with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/). - diff --git a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md index e1157383f4ec7..16cc1abf021d0 100644 --- a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md +++ b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -37,7 +37,7 @@ Dashboard also provides information on the state of Kubernetes resources in your The Dashboard UI is not deployed by default. 
To deploy it, run the following command: ``` -kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml +kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml ``` ## Accessing the Dashboard UI diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-api.md b/content/en/docs/tasks/administer-cluster/access-cluster-api.md index 6ca4d396da9d7..ed707fb278ad6 100644 --- a/content/en/docs/tasks/administer-cluster/access-cluster-api.md +++ b/content/en/docs/tasks/administer-cluster/access-cluster-api.md @@ -1,6 +1,7 @@ --- title: Access Clusters Using the Kubernetes API content_type: task +weight: 60 --- diff --git a/content/en/docs/tasks/administer-cluster/certificates.md b/content/en/docs/tasks/administer-cluster/certificates.md index dcbd41a7e6c85..3da130ca64a80 100644 --- a/content/en/docs/tasks/administer-cluster/certificates.md +++ b/content/en/docs/tasks/administer-cluster/certificates.md @@ -1,7 +1,7 @@ --- title: Generate Certificates Manually content_type: task -weight: 20 +weight: 30 --- diff --git a/content/en/docs/tasks/administer-cluster/change-default-storage-class.md b/content/en/docs/tasks/administer-cluster/change-default-storage-class.md index a365fd4ffccac..c3194b71805b2 100644 --- a/content/en/docs/tasks/administer-cluster/change-default-storage-class.md +++ b/content/en/docs/tasks/administer-cluster/change-default-storage-class.md @@ -1,6 +1,7 @@ --- title: Change the default StorageClass content_type: task +weight: 90 --- diff --git a/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md b/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md index 457fbd6332b43..ae6c303757aba 100644 --- a/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md +++ b/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md @@ -1,6 +1,7 @@ --- title: Change the Reclaim Policy of a PersistentVolume content_type: task +weight: 100 --- diff --git a/content/en/docs/tasks/administer-cluster/cluster-upgrade.md b/content/en/docs/tasks/administer-cluster/cluster-upgrade.md index 17473ac2895ba..f094d7806c12a 100644 --- a/content/en/docs/tasks/administer-cluster/cluster-upgrade.md +++ b/content/en/docs/tasks/administer-cluster/cluster-upgrade.md @@ -1,6 +1,7 @@ --- title: Upgrade A Cluster content_type: task +weight: 350 --- @@ -99,4 +100,4 @@ release with a newer device plugin API version, device plugins must be upgraded both version before the node is upgraded in order to guarantee that device allocations continue to complete successfully during the upgrade. -Refer to [API compatiblity](docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md/#api-compatibility) and [Kubelet Device Manager API Versions](docs/reference/node/device-plugin-api-versions.md) for more details. +Refer to [API compatibility](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#api-compatibility) and [Kubelet Device Manager API Versions](/docs/reference/node/device-plugin-api-versions/) for more details. 
\ No newline at end of file diff --git a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md index 5268eef369264..542a3a57c2938 100644 --- a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md +++ b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md @@ -5,6 +5,7 @@ reviewers: - jpbetz title: Operating etcd clusters for Kubernetes content_type: task +weight: 270 --- diff --git a/content/en/docs/tasks/administer-cluster/controller-manager-leader-migration.md b/content/en/docs/tasks/administer-cluster/controller-manager-leader-migration.md index 7d0890197bcbb..743e23d0bd1ae 100644 --- a/content/en/docs/tasks/administer-cluster/controller-manager-leader-migration.md +++ b/content/en/docs/tasks/administer-cluster/controller-manager-leader-migration.md @@ -5,6 +5,7 @@ reviewers: title: "Migrate Replicated Control Plane To Use Cloud Controller Manager" linkTitle: "Migrate Replicated Control Plane To Use Cloud Controller Manager" content_type: task +weight: 250 --- diff --git a/content/en/docs/tasks/administer-cluster/coredns.md b/content/en/docs/tasks/administer-cluster/coredns.md index 43a75275b85a6..6da1414d138e1 100644 --- a/content/en/docs/tasks/administer-cluster/coredns.md +++ b/content/en/docs/tasks/administer-cluster/coredns.md @@ -4,6 +4,7 @@ reviewers: title: Using CoreDNS for Service Discovery min-kubernetes-server-version: v1.9 content_type: task +weight: 380 --- diff --git a/content/en/docs/tasks/administer-cluster/cpu-management-policies.md b/content/en/docs/tasks/administer-cluster/cpu-management-policies.md index b077415a05aac..a2e3932b3939d 100644 --- a/content/en/docs/tasks/administer-cluster/cpu-management-policies.md +++ b/content/en/docs/tasks/administer-cluster/cpu-management-policies.md @@ -7,6 +7,7 @@ reviewers: content_type: task min-kubernetes-server-version: v1.26 +weight: 140 --- diff --git a/content/en/docs/tasks/administer-cluster/declare-network-policy.md b/content/en/docs/tasks/administer-cluster/declare-network-policy.md index ac9715b9092ca..4f7933624ec87 100644 --- a/content/en/docs/tasks/administer-cluster/declare-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/declare-network-policy.md @@ -5,6 +5,7 @@ reviewers: title: Declare Network Policy min-kubernetes-server-version: v1.8 content_type: task +weight: 180 --- This document helps you get started using the Kubernetes [NetworkPolicy API](/docs/concepts/services-networking/network-policies/) to declare network policies that govern how pods communicate with each other. 
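To make that goal concrete, a minimal policy of the kind this task builds might look like the sketch below; the `access-nginx` name and the `app: nginx` / `access: "true"` labels are illustrative assumptions rather than values taken from this page.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx            # illustrative name
spec:
  podSelector:
    matchLabels:
      app: nginx                # the pods this policy applies to
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"        # only pods carrying this label may connect
```

Applied with `kubectl apply -f`, a sketch like this limits ingress to the selected pods to traffic from pods labelled `access: "true"` in the same namespace.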
diff --git a/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md b/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md index a3732c68de6fe..b1939d96793ce 100644 --- a/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md +++ b/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md @@ -5,6 +5,7 @@ reviewers: - wlan0 title: Developing Cloud Controller Manager content_type: concept +weight: 190 --- diff --git a/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md b/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md index ab737115aeb78..c4f9e1fbb2859 100644 --- a/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md +++ b/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md @@ -5,6 +5,7 @@ reviewers: title: Customizing DNS Service content_type: task min-kubernetes-server-version: v1.12 +weight: 160 --- @@ -104,7 +105,7 @@ The Corefile configuration includes the following [plugins](https://coredns.io/p * [errors](https://coredns.io/plugins/errors/): Errors are logged to stdout. * [health](https://coredns.io/plugins/health/): Health of CoreDNS is reported to - `http://localhost:8080/health`. In this extended syntax `lameduck` will make theuprocess + `http://localhost:8080/health`. In this extended syntax `lameduck` will make the process unhealthy then wait for 5 seconds before the process is shut down. * [ready](https://coredns.io/plugins/ready/): An HTTP endpoint on port 8181 will return 200 OK, when all plugins that are able to signal readiness have done so. diff --git a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md index 755a6cc717ce9..2e26088a9486f 100644 --- a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md +++ b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md @@ -5,6 +5,7 @@ reviewers: title: Debugging DNS Resolution content_type: task min-kubernetes-server-version: v1.6 +weight: 170 --- diff --git a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md index bdcb35ada240d..3b37a8934021e 100644 --- a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md +++ b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md @@ -1,6 +1,7 @@ --- title: Autoscale the DNS Service in a Cluster content_type: task +weight: 80 --- diff --git a/content/en/docs/tasks/administer-cluster/enable-disable-api.md b/content/en/docs/tasks/administer-cluster/enable-disable-api.md index a10de2c3b43b9..f8da90bddc446 100644 --- a/content/en/docs/tasks/administer-cluster/enable-disable-api.md +++ b/content/en/docs/tasks/administer-cluster/enable-disable-api.md @@ -1,6 +1,7 @@ --- title: Enable Or Disable A Kubernetes API content_type: task +weight: 200 --- @@ -20,7 +21,7 @@ The `runtime-config` command line argument also supports 2 special keys: - `api/legacy`, representing only legacy APIs. Legacy APIs are any APIs that have been explicitly [deprecated](/docs/reference/using-api/deprecation-policy/). -For example, to turning off all API versions except v1, pass `--runtime-config=api/all=false,api/v1=true` +For example, to turn off all API versions except v1, pass `--runtime-config=api/all=false,api/v1=true` to the `kube-apiserver`. 
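On a kubeadm-managed control plane, this flag is typically set in the kube-apiserver static Pod manifest; the excerpt below is a minimal sketch that assumes the usual kubeadm manifest path and leaves every other flag untouched.

```yaml
# Excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm default path);
# only the flag being discussed is shown.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --runtime-config=api/all=false,api/v1=true
    # ... keep the existing flags unchanged ...
```

After the manifest is saved, the kubelet notices the change and recreates the static kube-apiserver Pod with the new flag.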
## {{% heading "whatsnext" %}} diff --git a/content/en/docs/tasks/administer-cluster/encrypt-data.md b/content/en/docs/tasks/administer-cluster/encrypt-data.md index a740b890ac515..c683c5aa9b2c3 100644 --- a/content/en/docs/tasks/administer-cluster/encrypt-data.md +++ b/content/en/docs/tasks/administer-cluster/encrypt-data.md @@ -5,6 +5,7 @@ reviewers: - enj content_type: task min-kubernetes-server-version: 1.13 +weight: 210 --- @@ -34,7 +35,7 @@ encryption configuration file must be the same! Otherwise, the `kube-apiserver` decrypt data stored in the etcd. {{< /caution >}} -## Understanding the encryption at rest configuration. +## Understanding the encryption at rest configuration ```yaml apiVersion: apiserver.config.k8s.io/v1 @@ -92,7 +93,7 @@ the only recourse is to delete that key from the underlying etcd directly. Calls read that resource will fail until it is deleted or a valid decryption key is provided. {{< /caution >}} -### Providers: +### Providers {{< table caption="Providers for Kubernetes encryption at rest" >}} Name | Encryption | Strength | Speed | Key Length | Other Considerations @@ -101,7 +102,7 @@ Name | Encryption | Strength | Speed | Key Length | Other Considerations `secretbox` | XSalsa20 and Poly1305 | Strong | Faster | 32-byte | A newer standard and may not be considered acceptable in environments that require high levels of review. `aesgcm` | AES-GCM with random nonce | Must be rotated every 200k writes | Fastest | 16, 24, or 32-byte | Is not recommended for use except when an automated key rotation scheme is implemented. `aescbc` | AES-CBC with [PKCS#7](https://datatracker.ietf.org/doc/html/rfc2315) padding | Weak | Fast | 32-byte | Not recommended due to CBC's vulnerability to padding oracle attacks. -`kms` | Uses envelope encryption scheme: Data is encrypted by data encryption keys (DEKs) using AES-CBC with [PKCS#7](https://datatracker.ietf.org/doc/html/rfc2315) padding (prior to v1.25), using AES-GCM starting from v1.25, DEKs are encrypted by key encryption keys (KEKs) according to configuration in Key Management Service (KMS) | Strongest | Fast | 32-bytes | The recommended choice for using a third party tool for key management. Simplifies key rotation, with a new DEK generated for each encryption, and KEK rotation controlled by the user. [Configure the KMS provider](/docs/tasks/administer-cluster/kms-provider/) +`kms` | Uses envelope encryption scheme: Data is encrypted by data encryption keys (DEKs) using AES-CBC with [PKCS#7](https://datatracker.ietf.org/doc/html/rfc2315) padding (prior to v1.25), using AES-GCM starting from v1.25, DEKs are encrypted by key encryption keys (KEKs) according to configuration in Key Management Service (KMS) | Strongest | Fast | 32-bytes | The recommended choice for using a third party tool for key management. Simplifies key rotation, with a new DEK generated for each encryption, and KEK rotation controlled by the user. [Configure the KMS provider](/docs/tasks/administer-cluster/kms-provider/). Each provider supports multiple keys - the keys are tried in order for decryption, and if the provider is the first provider, the first key is used for encryption. @@ -217,7 +218,9 @@ program to retrieve the contents of your secret data. 1. Using the `etcdctl` command line, read that Secret out of etcd: - `ETCDCTL_API=3 etcdctl get /registry/secrets/default/secret1 [...] | hexdump -C` + ``` + ETCDCTL_API=3 etcdctl get /registry/secrets/default/secret1 [...] 
| hexdump -C + ``` where `[...]` must be the additional arguments for connecting to the etcd server. @@ -312,8 +315,7 @@ resources: secret: ``` -Then run the following command to force decrypt -all Secrets: +Then run the following command to force decrypt all Secrets: ```shell kubectl get secrets --all-namespaces -o json | kubectl replace -f - diff --git a/content/en/docs/tasks/administer-cluster/extended-resource-node.md b/content/en/docs/tasks/administer-cluster/extended-resource-node.md index 797993f116f67..3e9aae76d6918 100644 --- a/content/en/docs/tasks/administer-cluster/extended-resource-node.md +++ b/content/en/docs/tasks/administer-cluster/extended-resource-node.md @@ -1,26 +1,19 @@ --- title: Advertise Extended Resources for a Node content_type: task +weight: 70 --- - This page shows how to specify extended resources for a Node. Extended resources allow cluster administrators to advertise node-level resources that would otherwise be unknown to Kubernetes. - - - ## {{% heading "prerequisites" %}} - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - - ## Get the names of your Nodes @@ -38,7 +31,7 @@ the Kubernetes API server. For example, suppose one of your Nodes has four dongl attached. Here's an example of a PATCH request that advertises four dongle resources for your Node. -```shell +``` PATCH /api/v1/nodes//status HTTP/1.1 Accept: application/json Content-Type: application/json-patch+json @@ -68,9 +61,9 @@ Replace `` with the name of your Node: ```shell curl --header "Content-Type: application/json-patch+json" \ ---request PATCH \ ---data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \ -http://localhost:8001/api/v1/nodes//status + --request PATCH \ + --data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \ + http://localhost:8001/api/v1/nodes//status ``` {{< note >}} @@ -99,9 +92,9 @@ Once again, the output shows the dongle resource: ```yaml Capacity: - cpu: 2 - memory: 2049008Ki - example.com/dongle: 4 + cpu: 2 + memory: 2049008Ki + example.com/dongle: 4 ``` Now, application developers can create Pods that request a certain @@ -177,9 +170,9 @@ Replace `` with the name of your Node: ```shell curl --header "Content-Type: application/json-patch+json" \ ---request PATCH \ ---data '[{"op": "remove", "path": "/status/capacity/example.com~1dongle"}]' \ -http://localhost:8001/api/v1/nodes//status + --request PATCH \ + --data '[{"op": "remove", "path": "/status/capacity/example.com~1dongle"}]' \ + http://localhost:8001/api/v1/nodes//status ``` Verify that the dongle advertisement has been removed: @@ -190,20 +183,13 @@ kubectl describe node | grep dongle (you should not see any output) - - - ## {{% heading "whatsnext" %}} - ### For application developers -* [Assign Extended Resources to a Container](/docs/tasks/configure-pod-container/extended-resource/) +- [Assign Extended Resources to a Container](/docs/tasks/configure-pod-container/extended-resource/) ### For cluster administrators -* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) -* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) - - - +- [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) +- [Configure Minimum and Maximum CPU Constraints for a 
Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) diff --git a/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md b/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md index a9aaaacd46adc..6121f87098aff 100644 --- a/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md +++ b/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md @@ -5,6 +5,7 @@ reviewers: - piosz title: Guaranteed Scheduling For Critical Add-On Pods content_type: concept +weight: 220 --- diff --git a/content/en/docs/tasks/administer-cluster/ip-masq-agent.md b/content/en/docs/tasks/administer-cluster/ip-masq-agent.md index e923345d1ab17..39b8d30d6e6f5 100644 --- a/content/en/docs/tasks/administer-cluster/ip-masq-agent.md +++ b/content/en/docs/tasks/administer-cluster/ip-masq-agent.md @@ -1,6 +1,7 @@ --- title: IP Masquerade Agent User Guide content_type: task +weight: 230 --- diff --git a/content/en/docs/tasks/administer-cluster/kms-provider.md b/content/en/docs/tasks/administer-cluster/kms-provider.md index 5900be0c4ff34..21e89321e6c2b 100644 --- a/content/en/docs/tasks/administer-cluster/kms-provider.md +++ b/content/en/docs/tasks/administer-cluster/kms-provider.md @@ -4,6 +4,7 @@ reviewers: - enj title: Using a KMS provider for data encryption content_type: task +weight: 370 --- This page shows how to configure a Key Management Service (KMS) provider and plugin to enable secret data encryption. Currently there are two KMS API versions. KMS v1 will continue to work while v2 develops in maturity. If you are not sure which KMS API version to pick, choose v1. diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver.md b/content/en/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver.md index 93942c89187e3..7e15e32ca57f2 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver.md @@ -1,7 +1,7 @@ --- title: Configuring a cgroup driver content_type: task -weight: 10 +weight: 20 --- diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md index 18032fc4b3989..1ad41353c0ad9 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md @@ -78,9 +78,12 @@ etcd-ca Dec 28, 2029 23:36 UTC 9y no front-proxy-ca Dec 28, 2029 23:36 UTC 9y no ``` -The command shows expiration/residual time for the client certificates in the `/etc/kubernetes/pki` folder and for the client certificate embedded in the KUBECONFIG files used by kubeadm (`admin.conf`, `controller-manager.conf` and `scheduler.conf`). +The command shows expiration/residual time for the client certificates in the +`/etc/kubernetes/pki` folder and for the client certificate embedded in the kubeconfig files used +by kubeadm (`admin.conf`, `controller-manager.conf` and `scheduler.conf`). -Additionally, kubeadm informs the user if the certificate is externally managed; in this case, the user should take care of managing certificate renewal manually/using other tools. +Additionally, kubeadm informs the user if the certificate is externally managed; in this case, the +user should take care of managing certificate renewal manually/using other tools. 
{{< warning >}} `kubeadm` cannot manage certificates signed by an external CA. @@ -96,8 +99,10 @@ To repair an expired kubelet client certificate see {{< warning >}} On nodes created with `kubeadm init`, prior to kubeadm version 1.17, there is a -[bug](https://github.com/kubernetes/kubeadm/issues/1753) where you manually have to modify the contents of `kubelet.conf`. After `kubeadm init` finishes, you should update `kubelet.conf` to point to the -rotated kubelet client certificates, by replacing `client-certificate-data` and `client-key-data` with: +[bug](https://github.com/kubernetes/kubeadm/issues/1753) where you manually have to modify the +contents of `kubelet.conf`. After `kubeadm init` finishes, you should update `kubelet.conf` to +point to the rotated kubelet client certificates, by replacing `client-certificate-data` and +`client-key-data` with: ```yaml client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem @@ -107,16 +112,21 @@ client-key: /var/lib/kubelet/pki/kubelet-client-current.pem ## Automatic certificate renewal -kubeadm renews all the certificates during control plane [upgrade](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/). +kubeadm renews all the certificates during control plane +[upgrade](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/). This feature is designed for addressing the simplest use cases; -if you don't have specific requirements on certificate renewal and perform Kubernetes version upgrades regularly (less than 1 year in between each upgrade), kubeadm will take care of keeping your cluster up to date and reasonably secure. +if you don't have specific requirements on certificate renewal and perform Kubernetes version +upgrades regularly (less than 1 year in between each upgrade), kubeadm will take care of keeping +your cluster up to date and reasonably secure. {{< note >}} It is a best practice to upgrade your cluster frequently in order to stay secure. {{< /note >}} -If you have more complex requirements for certificate renewal, you can opt out from the default behavior by passing `--certificate-renewal=false` to `kubeadm upgrade apply` or to `kubeadm upgrade node`. +If you have more complex requirements for certificate renewal, you can opt out from the default +behavior by passing `--certificate-renewal=false` to `kubeadm upgrade apply` or to `kubeadm +upgrade node`. {{< warning >}} Prior to kubeadm version 1.17 there is a [bug](https://github.com/kubernetes/kubeadm/issues/1818) @@ -145,14 +155,18 @@ If you are running an HA cluster, this command needs to be executed on all the c {{< /warning >}} {{< note >}} -`certs renew` uses the existing certificates as the authoritative source for attributes (Common Name, Organization, SAN, etc.) instead of the kubeadm-config ConfigMap. It is strongly recommended to keep them both in sync. +`certs renew` uses the existing certificates as the authoritative source for attributes (Common +Name, Organization, SAN, etc.) instead of the `kubeadm-config` ConfigMap. It is strongly recommended +to keep them both in sync. {{< /note >}} `kubeadm certs renew` provides the following options: -The Kubernetes certificates normally reach their expiration date after one year. +- The Kubernetes certificates normally reach their expiration date after one year. -- `--csr-only` can be used to renew certificates with an external CA by generating certificate signing requests (without actually renewing certificates in place); see next paragraph for more information. 
+- `--csr-only` can be used to renew certificates with an external CA by generating certificate + signing requests (without actually renewing certificates in place); see next paragraph for more + information. - It's also possible to renew a single certificate instead of all. @@ -161,19 +175,24 @@ The Kubernetes certificates normally reach their expiration date after one year. This section provides more details about how to execute manual certificate renewal using the Kubernetes certificates API. {{< caution >}} -These are advanced topics for users who need to integrate their organization's certificate infrastructure into a kubeadm-built cluster. If the default kubeadm configuration satisfies your needs, you should let kubeadm manage certificates instead. +These are advanced topics for users who need to integrate their organization's certificate +infrastructure into a kubeadm-built cluster. If the default kubeadm configuration satisfies your +needs, you should let kubeadm manage certificates instead. {{< /caution >}} ### Set up a signer The Kubernetes Certificate Authority does not work out of the box. -You can configure an external signer such as [cert-manager](https://cert-manager.io/docs/configuration/ca/), or you can use the built-in signer. +You can configure an external signer such as [cert-manager](https://cert-manager.io/docs/configuration/ca/), +or you can use the built-in signer. The built-in signer is part of [`kube-controller-manager`](/docs/reference/command-line-tools-reference/kube-controller-manager/). -To activate the built-in signer, you must pass the `--cluster-signing-cert-file` and `--cluster-signing-key-file` flags. +To activate the built-in signer, you must pass the `--cluster-signing-cert-file` and +`--cluster-signing-key-file` flags. -If you're creating a new cluster, you can use a kubeadm [configuration file](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3): +If you're creating a new cluster, you can use a kubeadm +[configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/): ```yaml apiVersion: kubeadm.k8s.io/v1beta3 @@ -186,7 +205,8 @@ controllerManager: ### Create certificate signing requests (CSR) -See [Create CertificateSigningRequest](/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificatesigningrequest) for creating CSRs with the Kubernetes API. +See [Create CertificateSigningRequest](/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificatesigningrequest) +for creating CSRs with the Kubernetes API. ## Renew certificates with external CA @@ -194,7 +214,8 @@ This section provide more details about how to execute manual certificate renewa To better integrate with external CAs, kubeadm can also produce certificate signing requests (CSRs). A CSR represents a request to a CA for a signed certificate for a client. -In kubeadm terms, any certificate that would normally be signed by an on-disk CA can be produced as a CSR instead. A CA, however, cannot be produced as a CSR. +In kubeadm terms, any certificate that would normally be signed by an on-disk CA can be produced +as a CSR instead. A CA, however, cannot be produced as a CSR. ### Create certificate signing requests (CSR) @@ -216,7 +237,8 @@ when issuing a certificate. * In `cfssl` you specify [usages in the config file](https://github.com/cloudflare/cfssl/blob/master/doc/cmd/cfssl.txt#L170). 
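Whichever signer you use, the signing step itself can be as simple as the sketch below using a plain openssl CA; the CSR path, output path, validity period, and usages are assumptions to adapt to your own CA and to the certificate being issued.

```shell
# Hypothetical example: sign a kubeadm-produced client CSR with an offline CA
# (requires bash for the <(...) process substitution).
openssl x509 -req -days 365 \
  -in /etc/kubernetes/pki/apiserver-etcd-client.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out /etc/kubernetes/pki/apiserver-etcd-client.crt \
  -extfile <(printf "keyUsage = critical, digitalSignature, keyEncipherment\nextendedKeyUsage = clientAuth")
```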
-After a certificate is signed using your preferred method, the certificate and the private key must be copied to the PKI directory (by default `/etc/kubernetes/pki`). +After a certificate is signed using your preferred method, the certificate and the private key +must be copied to the PKI directory (by default `/etc/kubernetes/pki`). ## Certificate authority (CA) rotation {#certificate-authority-rotation} @@ -304,8 +326,8 @@ Instead, you can use the [`kubeadm kubeconfig user`](/docs/reference/setup-tools command to generate kubeconfig files for additional users. The command accepts a mixture of command line flags and [kubeadm configuration](/docs/reference/config-api/kubeadm-config.v1beta3/) options. -The generated kubeconfig will be written to stdout and can be piped to a file -using `kubeadm kubeconfig user ... > somefile.conf`. +The generated kubeconfig will be written to stdout and can be piped to a file using +`kubeadm kubeconfig user ... > somefile.conf`. Example configuration file that can be used with `--config`: diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md index 0e5a48b49ec25..ec372fe231b24 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md @@ -3,7 +3,7 @@ reviewers: - sig-cluster-lifecycle title: Reconfiguring a kubeadm cluster content_type: task -weight: 10 +weight: 30 --- diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md index 9f2c4154b42d3..3df3a729b8727 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md @@ -3,7 +3,7 @@ reviewers: - sig-cluster-lifecycle title: Upgrading kubeadm clusters content_type: task -weight: 20 +weight: 40 --- diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md b/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md index e40dad68e6377..21c39c84d5f14 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md @@ -2,17 +2,14 @@ title: Upgrading Windows nodes min-kubernetes-server-version: 1.17 content_type: task -weight: 40 +weight: 50 --- {{< feature-state for_k8s_version="v1.18" state="beta" >}} -This page explains how to upgrade a Windows node [created with kubeadm](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes). - - - +This page explains how to upgrade a Windows node created with kubeadm. ## {{% heading "prerequisites" %}} @@ -21,9 +18,6 @@ This page explains how to upgrade a Windows node [created with kubeadm](/docs/ta cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade). You will want to upgrade the control plane nodes before upgrading your Windows nodes. - - - ## Upgrading worker nodes @@ -81,7 +75,8 @@ upgrade the control plane nodes before upgrading your Windows nodes. ``` {{< note >}} -If you are running kube-proxy in a HostProcess container within a Pod, and not as a Windows Service, you can upgrade kube-proxy by applying a newer version of your kube-proxy manifests. 
+If you are running kube-proxy in a HostProcess container within a Pod, and not as a Windows Service, +you can upgrade kube-proxy by applying a newer version of your kube-proxy manifests. {{< /note >}} ### Uncordon the node @@ -94,6 +89,3 @@ bring the node back online by marking it schedulable: kubectl uncordon ``` - - - diff --git a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md index 091488e792bcb..b16961d46e173 100644 --- a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md +++ b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md @@ -4,6 +4,7 @@ reviewers: - dawnchen title: Set Kubelet parameters via a config file content_type: task +weight: 330 --- @@ -53,7 +54,7 @@ the threshold values respectively. ## Start a Kubelet process configured via the config file {{< note >}} -If you use kubeadm to initialize your cluster, use the kubelet-config while creating your cluster with `kubeadmin init`. +If you use kubeadm to initialize your cluster, use the kubelet-config while creating your cluster with `kubeadm init`. See [configuring kubelet using kubeadm](/docs/setup/production-environment/tools/kubeadm/kubelet-integration/) for details. {{< /note >}} diff --git a/content/en/docs/tasks/administer-cluster/kubelet-credential-provider.md b/content/en/docs/tasks/administer-cluster/kubelet-credential-provider.md index 3da341dbccc0a..ae3d381a86e2c 100644 --- a/content/en/docs/tasks/administer-cluster/kubelet-credential-provider.md +++ b/content/en/docs/tasks/administer-cluster/kubelet-credential-provider.md @@ -6,6 +6,7 @@ reviewers: description: Configure the kubelet's image credential provider plugin content_type: task min-kubernetes-server-version: v1.26 +weight: 120 --- {{< feature-state for_k8s_version="v1.26" state="stable" >}} @@ -82,8 +83,8 @@ providers: # # A match exists between an image and a matchImage when all of the below are true: # - Both contain the same number of domain parts and each part matches. - # - The URL path of an imageMatch must be a prefix of the target image URL path. - # - If the imageMatch contains a port, then the port must match in the image as well. + # - The URL path of an matchImages must be a prefix of the target image URL path. + # - If the matchImages contains a port, then the port must match in the image as well. # # Example values of matchImages: # - 123456789.dkr.ecr.us-east-1.amazonaws.com @@ -142,7 +143,7 @@ A match exists between an image name and a `matchImage` entry when all of the be * Both contain the same number of domain parts and each part matches. * The URL path of match image must be a prefix of the target image URL path. -* If the imageMatch contains a port, then the port must match in the image as well. +* If the matchImages contains a port, then the port must match in the image as well. 
Some example values of `matchImages` patterns are: diff --git a/content/en/docs/tasks/administer-cluster/kubelet-in-userns.md b/content/en/docs/tasks/administer-cluster/kubelet-in-userns.md index 90e4a11f63171..3ed95f98c6ca0 100644 --- a/content/en/docs/tasks/administer-cluster/kubelet-in-userns.md +++ b/content/en/docs/tasks/administer-cluster/kubelet-in-userns.md @@ -2,6 +2,7 @@ title: Running Kubernetes Node Components as a Non-root User content_type: task min-kubernetes-server-version: 1.22 +weight: 300 --- diff --git a/content/en/docs/tasks/administer-cluster/limit-storage-consumption.md b/content/en/docs/tasks/administer-cluster/limit-storage-consumption.md index c982a9cb7cc40..9bd8e81771a74 100644 --- a/content/en/docs/tasks/administer-cluster/limit-storage-consumption.md +++ b/content/en/docs/tasks/administer-cluster/limit-storage-consumption.md @@ -1,6 +1,7 @@ --- title: Limit Storage Consumption content_type: task +weight: 240 --- diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/_index.md b/content/en/docs/tasks/administer-cluster/manage-resources/_index.md index a98b234728126..797b69e0a3e86 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/_index.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/_index.md @@ -1,4 +1,4 @@ --- title: Manage Memory, CPU, and API Resources -weight: 20 +weight: 40 --- diff --git a/content/en/docs/tasks/administer-cluster/memory-manager.md b/content/en/docs/tasks/administer-cluster/memory-manager.md index 55d61c3313c74..33d7b643fa476 100644 --- a/content/en/docs/tasks/administer-cluster/memory-manager.md +++ b/content/en/docs/tasks/administer-cluster/memory-manager.md @@ -7,6 +7,7 @@ reviewers: content_type: task min-kubernetes-server-version: v1.21 +weight: 410 --- diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/_index.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/_index.md index b10f75dd9ce71..8d46e32ff0c6c 100644 --- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/_index.md +++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/_index.md @@ -1,6 +1,6 @@ --- title: "Migrating from dockershim" -weight: 10 +weight: 20 content_type: task no_list: true --- @@ -16,12 +16,12 @@ installations. Our [Dockershim Removal FAQ](/blog/2022/02/17/dockershim-faq/) is to understand the problem better. Dockershim was removed from Kubernetes with the release of v1.24. -If you use Docker Engine via dockershim as your container runtime, and wish to upgrade to v1.24, +If you use Docker Engine via dockershim as your container runtime and wish to upgrade to v1.24, it is recommended that you either migrate to another runtime or find an alternative means to obtain Docker Engine support. -Check out [container runtimes](/docs/setup/production-environment/container-runtimes/) +Check out the [container runtimes](/docs/setup/production-environment/container-runtimes/) section to know your options. Make sure to [report issues](https://github.com/kubernetes/kubernetes/issues) you encountered -with the migration. So the issue can be fixed in a timely manner and your cluster would be +with the migration so the issues can be fixed in a timely manner and your cluster would be ready for dockershim removal. 
Your cluster might have more than one kind of node, although this is not a common @@ -37,11 +37,11 @@ These tasks will help you to migrate: ## {{% heading "whatsnext" %}} * Check out [container runtimes](/docs/setup/production-environment/container-runtimes/) - to understand your options for a container runtime. + to understand your options for an alternative. * There is a [GitHub issue](https://github.com/kubernetes/kubernetes/issues/106917) - to track discussion about the deprecation and removal of dockershim. -* If you found a defect or other technical concern relating to migrating away from dockershim, + to track the discussion about the deprecation and removal of dockershim. +* If you find a defect or other technical concern relating to migrating away from dockershim, you can [report an issue](https://github.com/kubernetes/kubernetes/issues/new/choose) to the Kubernetes project. diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd.md index 5b6afe04e5abe..671322d569f9b 100644 --- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd.md +++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd.md @@ -1,6 +1,6 @@ --- title: "Changing the Container Runtime on a Node from Docker Engine to containerd" -weight: 8 +weight: 10 content_type: task --- diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md index e66e636eca115..267d614ef9dc0 100644 --- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md +++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md @@ -3,7 +3,7 @@ title: Check whether dockershim removal affects you content_type: task reviewers: - SergeyKanzhelev -weight: 20 +weight: 50 --- diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use.md index c4247f085a2b8..8e04dd7a6c5d3 100644 --- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use.md +++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use.md @@ -3,7 +3,7 @@ title: Find Out What Container Runtime is Used on a Node content_type: task reviewers: - SergeyKanzhelev -weight: 10 +weight: 30 --- diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd.md index b9bdcd9a2dbbc..9bbba039e0d9c 100644 --- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd.md +++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd.md @@ -1,6 +1,6 @@ --- title: "Migrate Docker Engine nodes from dockershim to cri-dockerd" -weight: 9 +weight: 20 content_type: task --- diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md 
b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md index 496f25fa0c268..ab6f340beab74 100644 --- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md +++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md @@ -3,7 +3,7 @@ title: Migrating telemetry and security agents from dockershim content_type: task reviewers: - SergeyKanzhelev -weight: 70 +weight: 60 --- diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors.md index 34e2b112efce2..5dd0453648d7e 100644 --- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors.md +++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors.md @@ -4,7 +4,7 @@ content_type: task reviewers: - mikebrow - divya-mohan0209 -weight: 10 +weight: 40 --- @@ -129,7 +129,8 @@ cat << EOF | tee /etc/cni/net.d/10-containerd-net.conflist }, { "type": "portmap", - "capabilities": {"portMappings": true} + "capabilities": {"portMappings": true}, + "externalSetMarkChain": "KUBE-MARK-MASQ" } ] } diff --git a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md index e22cf651606b5..3fa2f64098cd8 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md +++ b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md @@ -4,6 +4,7 @@ reviewers: - janetkuo title: Namespaces Walkthrough content_type: task +weight: 260 --- diff --git a/content/en/docs/tasks/administer-cluster/namespaces.md b/content/en/docs/tasks/administer-cluster/namespaces.md index cceaf646bc017..6af713b25c756 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces.md +++ b/content/en/docs/tasks/administer-cluster/namespaces.md @@ -4,6 +4,7 @@ reviewers: - janetkuo title: Share a Cluster with Namespaces content_type: task +weight: 340 --- diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/_index.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/_index.md index 31d4f7b5aee4c..1a570a2bc2554 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/_index.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/_index.md @@ -1,4 +1,4 @@ --- title: Install a Network Policy Provider -weight: 30 +weight: 50 --- diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md index 40733c4c96810..0cf26dcf8caff 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md @@ -3,7 +3,7 @@ reviewers: - caseydavenport title: Use Calico for NetworkPolicy content_type: task -weight: 10 +weight: 20 --- diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md index 9a496d39a644a..ebafa8527ab27 100644 --- 
a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md @@ -4,7 +4,7 @@ reviewers: - aanm title: Use Cilium for NetworkPolicy content_type: task -weight: 20 +weight: 30 --- diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy.md index 673118e312b51..6ae0a5cd6f017 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy.md @@ -3,7 +3,7 @@ reviewers: - murali-reddy title: Use Kube-router for NetworkPolicy content_type: task -weight: 30 +weight: 40 --- diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md index 6a57d8cc0b2cf..999d2135c2b81 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md @@ -3,7 +3,7 @@ reviewers: - chrismarino title: Romana for NetworkPolicy content_type: task -weight: 40 +weight: 50 --- diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md index fcbc9c40458f6..631d3e6ba5718 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md @@ -3,7 +3,7 @@ reviewers: - bboreham title: Weave Net for NetworkPolicy content_type: task -weight: 50 +weight: 60 --- diff --git a/content/en/docs/tasks/administer-cluster/nodelocaldns.md b/content/en/docs/tasks/administer-cluster/nodelocaldns.md index 11f044e962b5f..2f0a16d8d7b22 100644 --- a/content/en/docs/tasks/administer-cluster/nodelocaldns.md +++ b/content/en/docs/tasks/administer-cluster/nodelocaldns.md @@ -5,6 +5,7 @@ reviewers: - sftim title: Using NodeLocal DNSCache in Kubernetes Clusters content_type: task +weight: 390 --- diff --git a/content/en/docs/tasks/administer-cluster/quota-api-object.md b/content/en/docs/tasks/administer-cluster/quota-api-object.md index ad38f102d4854..f26ebaf23bde1 100644 --- a/content/en/docs/tasks/administer-cluster/quota-api-object.md +++ b/content/en/docs/tasks/administer-cluster/quota-api-object.md @@ -1,6 +1,7 @@ --- title: Configure Quotas for API Objects content_type: task +weight: 130 --- diff --git a/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md b/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md index e1effd8f05275..3f3d9e06ba46e 100644 --- a/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md +++ b/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md @@ -5,6 +5,7 @@ reviewers: title: Reconfigure a Node's Kubelet in a Live Cluster content_type: task min-kubernetes-server-version: v1.11 +weight: 280 --- diff --git a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md index f39122790328f..8a12831e9d0fc 100644 --- 
a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md +++ b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md @@ -6,6 +6,7 @@ reviewers: title: Reserve Compute Resources for System Daemons content_type: task min-kubernetes-server-version: 1.8 +weight: 290 --- @@ -133,6 +134,7 @@ with `.slice` appended. {{< feature-state for_k8s_version="v1.17" state="stable" >}} **Kubelet Flag**: `--reserved-cpus=0-3` +**KubeletConfiguration Flag**: `reservedSystemCpus: 0-3` `reserved-cpus` is meant to define an explicit CPU set for OS system daemons and kubernetes system daemons. `reserved-cpus` is for systems that do not intend to diff --git a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md index b1a7e565480c4..13264bf6ef602 100644 --- a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md +++ b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md @@ -5,6 +5,7 @@ reviewers: - wlan0 title: Cloud Controller Manager Administration content_type: concept +weight: 110 --- diff --git a/content/en/docs/tasks/administer-cluster/safely-drain-node.md b/content/en/docs/tasks/administer-cluster/safely-drain-node.md index f2ffbacac4392..456fd02c7d452 100644 --- a/content/en/docs/tasks/administer-cluster/safely-drain-node.md +++ b/content/en/docs/tasks/administer-cluster/safely-drain-node.md @@ -7,6 +7,7 @@ reviewers: title: Safely Drain a Node content_type: task min-kubernetes-server-version: 1.5 +weight: 310 --- @@ -65,9 +66,16 @@ kubectl get nodes Next, tell Kubernetes to drain the node: ```shell -kubectl drain +kubectl drain --ignore-daemonsets ``` +If there are pods managed by a DaemonSet, you will need to specify +`--ignore-daemonsets` with `kubectl` to successfully drain the node. The `kubectl drain` subcommand on its own does not actually drain +a node of its DaemonSet pods: +the DaemonSet controller (part of the control plane) immediately replaces missing Pods with +new equivalent Pods. The DaemonSet controller also creates Pods that ignore unschedulable +taints, which allows the new Pods to launch onto a node that you are draining. + Once it returns (without giving an error), you can power down the node (or equivalently, if on a cloud platform, delete the virtual machine backing the node). 
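Taken together, a typical maintenance pass might look like this sketch (the node name `node-1` is illustrative):

```shell
kubectl get nodes                           # identify the node to take down
kubectl drain node-1 --ignore-daemonsets    # evict pods while respecting PodDisruptionBudgets
# ... perform the maintenance or upgrade on node-1 ...
kubectl uncordon node-1                     # mark the node schedulable again afterwards
```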
If you leave the node in the cluster during the maintenance operation, you need to run diff --git a/content/en/docs/tasks/administer-cluster/securing-a-cluster.md b/content/en/docs/tasks/administer-cluster/securing-a-cluster.md index d864bb1d32e6f..5ef8b086bed5c 100644 --- a/content/en/docs/tasks/administer-cluster/securing-a-cluster.md +++ b/content/en/docs/tasks/administer-cluster/securing-a-cluster.md @@ -5,6 +5,7 @@ reviewers: - enj title: Securing a Cluster content_type: task +weight: 320 --- diff --git a/content/en/docs/tasks/administer-cluster/sysctl-cluster.md b/content/en/docs/tasks/administer-cluster/sysctl-cluster.md index 367901b390e40..a66ca9319b013 100644 --- a/content/en/docs/tasks/administer-cluster/sysctl-cluster.md +++ b/content/en/docs/tasks/administer-cluster/sysctl-cluster.md @@ -3,6 +3,7 @@ title: Using sysctls in a Kubernetes Cluster reviewers: - sttts content_type: task +weight: 400 --- diff --git a/content/en/docs/tasks/administer-cluster/topology-manager.md b/content/en/docs/tasks/administer-cluster/topology-manager.md index b02b2531b600f..7dac6b425624a 100644 --- a/content/en/docs/tasks/administer-cluster/topology-manager.md +++ b/content/en/docs/tasks/administer-cluster/topology-manager.md @@ -10,6 +10,7 @@ reviewers: content_type: task min-kubernetes-server-version: v1.18 +weight: 150 --- diff --git a/content/en/docs/tasks/administer-cluster/use-cascading-deletion.md b/content/en/docs/tasks/administer-cluster/use-cascading-deletion.md index 15968e0c3ce0f..5a5ad45ebf944 100644 --- a/content/en/docs/tasks/administer-cluster/use-cascading-deletion.md +++ b/content/en/docs/tasks/administer-cluster/use-cascading-deletion.md @@ -1,6 +1,7 @@ --- title: Use Cascading Deletion in a Cluster content_type: task +weight: 360 --- diff --git a/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md b/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md index 19f99ab4c8dcf..e672779f75c13 100644 --- a/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md +++ b/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md @@ -2,6 +2,7 @@ title: Verify Signed Kubernetes Artifacts content_type: task min-kubernetes-server-version: v1.26 +weight: 420 --- diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md index 37dda8581e057..dfa20164948db 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md @@ -42,11 +42,11 @@ characters. ### Use source files -1. Store the credentials in files with the values encoded in base64: +1. Store the credentials in files: ```shell - echo -n 'admin' | base64 > ./username.txt - echo -n 'S!B\*d$zDsb=' | base64 > ./password.txt + echo -n 'admin' > ./username.txt + echo -n 'S!B\*d$zDsb=' > ./password.txt ``` The `-n` flag ensures that the generated files do not have an extra newline character at the end of the text. 
This is important because when `kubectl` @@ -199,4 +199,4 @@ kubectl delete secret db-user-pass - Read more about the [Secret concept](/docs/concepts/configuration/secret/) - Learn how to [manage Secrets using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/) -- Learn how to [manage Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/) \ No newline at end of file +- Learn how to [manage Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/) diff --git a/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md b/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md index 4576b0f02b8a0..c952ab361cbbc 100644 --- a/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md +++ b/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md @@ -78,9 +78,9 @@ unless the Pod's grace period expires. For more details, see [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/). {{< note >}} -Kubernetes only sends the preStop event when a Pod is *terminated*. -This means that the preStop hook is not invoked when the Pod is *completed*. -This limitation is tracked in [issue #55087](https://github.com/kubernetes/kubernetes/issues/55807). +Kubernetes only sends the preStop event when a Pod or a container in the Pod is *terminated*. +This means that the preStop hook is not invoked when the Pod is *completed*. +About this limitation, please see [Container hooks](/docs/concepts/containers/container-lifecycle-hooks/#container-hooks) for the detail. {{< /note >}} diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index 308c078136236..19d0a9cfa33e9 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -388,9 +388,24 @@ to 1 second. Minimum value is 1. * `successThreshold`: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup Probes. Minimum value is 1. -* `failureThreshold`: When a probe fails, Kubernetes will -try `failureThreshold` times before giving up. Giving up in case of liveness probe means restarting the container. In case of readiness probe the Pod will be marked Unready. -Defaults to 3. Minimum value is 1. +* `failureThreshold`: After a probe fails `failureThreshold` times in a row, Kubernetes + considers that the overall check has failed: the container is _not_ ready / healthy / + live. + For the case of a startup or liveness probe, if at least `failureThreshold` probes have + failed, Kubernetes treats the container as unhealthy and triggers a restart for that + specific container. The kubelet takes the setting of `terminationGracePeriodSeconds` + for that container into account. + For a failed readiness probe, the kubelet continues running the container that failed + checks, and also continues to run more probes; because the check failed, the kubelet + sets the `Ready` [condition](/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions) + on the Pod to `false`. 
+* `terminationGracePeriodSeconds`: configure a grace period for the kubelet to wait + between triggering a shut down of the failed container, and then forcing the + container runtime to stop that container. + The default is to inherit the Pod-level value for `terminationGracePeriodSeconds` + (30 seconds if not specified), and the minimum value is 1. + See [probe-level `terminationGracePeriodSeconds`](#probe-level-terminationgraceperiodseconds) + for more detail. {{< note >}} Before Kubernetes 1.20, the field `timeoutSeconds` was not respected for exec probes: diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md index 77940781f04a7..5b783d2dcca01 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md +++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md @@ -47,11 +47,12 @@ kubectl get pods/ -o yaml ``` In the output, you see a field `spec.serviceAccountName`. -Kubernetes [automatically](/docs/user-guide/working-with-resources/#resources-are-automatically-modified) +Kubernetes [automatically](/docs/concepts/overview/working-with-objects/object-management/) sets that value if you don't specify it when you create a Pod. An application running inside a Pod can access the Kubernetes API using -automatically mounted service account credentials. See [accessing the Cluster](/docs/user-guide/accessing-the-cluster/#accessing-the-api-from-a-pod) to learn more. +automatically mounted service account credentials. +See [accessing the Cluster](/docs/tasks/access-application-cluster/access-cluster/) to learn more. When a Pod authenticates as a ServiceAccount, its level of access depends on the [authorization plugin and policy](/docs/reference/access-authn-authz/authorization/#authorization-modules) @@ -62,7 +63,8 @@ in use. If you don't want the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} to automatically mount a ServiceAccount's API credentials, you can opt out of the default behavior. -You can opt out of automounting API credentials on `/var/run/secrets/kubernetes.io/serviceaccount/token` for a service account by setting `automountServiceAccountToken: false` on the ServiceAccount: +You can opt out of automounting API credentials on `/var/run/secrets/kubernetes.io/serviceaccount/token` +for a service account by setting `automountServiceAccountToken: false` on the ServiceAccount: For example: diff --git a/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md index 3b6bec6def564..1ee34aa225721 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md +++ b/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md @@ -45,7 +45,7 @@ restarts. Here is the configuration file for the Pod: The output looks like this: - ```shell + ```console NAME READY STATUS RESTARTS AGE redis 1/1 Running 0 13s ``` @@ -73,7 +73,7 @@ restarts. Here is the configuration file for the Pod: The output is similar to this: - ```shell + ```console USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND redis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379 root 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash @@ -91,7 +91,7 @@ restarts. Here is the configuration file for the Pod: 1. In your original terminal, watch for changes to the Redis Pod. 
Eventually, you will see something like this: - ```shell + ```console NAME READY STATUS RESTARTS AGE redis 1/1 Running 0 13s redis 0/1 Completed 0 6m diff --git a/content/en/docs/tasks/configure-pod-container/create-hostprocess-pod.md b/content/en/docs/tasks/configure-pod-container/create-hostprocess-pod.md index cad26cf29a374..24b8efea5a8cd 100644 --- a/content/en/docs/tasks/configure-pod-container/create-hostprocess-pod.md +++ b/content/en/docs/tasks/configure-pod-container/create-hostprocess-pod.md @@ -228,7 +228,7 @@ To run HostProcess containers as a local user; A local usergroup must first be c and the name of that local usergroup must be specified in the `runAsUserName` field in the deployment. Prior to initializing the HostProcess container, a new **ephemeral** local user account to be created and joined to the specified usergroup, from which the container is run. This provides a number a benefits including eliminating the need to manage passwords for local user accounts. -passwords for local user accounts. An initial HostProcess container running as a service account can be used to +An initial HostProcess container running as a service account can be used to prepare the user groups for later HostProcess containers. {{< note >}} @@ -269,4 +269,4 @@ For more information please check out the [windows-host-process-containers-base- - HostProcess containers fail to start with `failed to create user process token: failed to logon user: Access is denied.: unknown` Ensure containerd is running as `LocalSystem` or `LocalService` service accounts. User accounts (even Administrator accounts) do not have permissions to create logon tokens for any of the supported [user accounts](#choosing-a-user-account). - \ No newline at end of file + diff --git a/content/en/docs/tasks/configure-pod-container/enforce-standards-admission-controller.md b/content/en/docs/tasks/configure-pod-container/enforce-standards-admission-controller.md index 393d546623857..802f38a651a87 100644 --- a/content/en/docs/tasks/configure-pod-container/enforce-standards-admission-controller.md +++ b/content/en/docs/tasks/configure-pod-container/enforce-standards-admission-controller.md @@ -52,6 +52,9 @@ plugins: # Array of namespaces to exempt. namespaces: [] ``` +{{< note >}} +The above manifest needs to be specified via the `--admission-control-config-file` to kube-apiserver. +{{< /note >}} {{< note >}} `pod-security.admission.config.k8s.io/v1` configuration requires v1.25+. diff --git a/content/en/docs/tasks/configure-pod-container/security-context.md b/content/en/docs/tasks/configure-pod-container/security-context.md index d5399ca9f58d2..7e1a04e9a439e 100644 --- a/content/en/docs/tasks/configure-pod-container/security-context.md +++ b/content/en/docs/tasks/configure-pod-container/security-context.md @@ -470,8 +470,7 @@ The more files and directories in the volume, the longer that relabelling takes. In Kubernetes 1.25, the kubelet loses track of volume labels after restart. In other words, then kubelet may refuse to start Pods with errors similar to "conflicting SELinux labels of volume", while there are no conflicting labels in Pods. Make sure -nodes are -[fully drained](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) +nodes are [fully drained](/docs/tasks/administer-cluster/safely-drain-node/) before restarting kubelet. 
{{< /note >}} @@ -519,4 +518,5 @@ kubectl delete pod security-context-demo-4 * [AllowPrivilegeEscalation design document](https://git.k8s.io/design-proposals-archive/auth/no-new-privs.md) * For more information about security mechanisms in Linux, see -[Overview of Linux Kernel Security Features](https://www.linux.com/learn/overview-linux-kernel-security-features) (Note: Some information is out of date) + [Overview of Linux Kernel Security Features](https://www.linux.com/learn/overview-linux-kernel-security-features) + (Note: Some information is out of date) diff --git a/content/en/docs/tasks/configure-pod-container/static-pod.md b/content/en/docs/tasks/configure-pod-container/static-pod.md index 23191e1ffe688..99c5b7ee0fc1f 100644 --- a/content/en/docs/tasks/configure-pod-container/static-pod.md +++ b/content/en/docs/tasks/configure-pod-container/static-pod.md @@ -117,7 +117,7 @@ Similar to how [filesystem-hosted manifests](#configuration-files) work, the kub refetches the manifest on a schedule. If there are changes to the list of static Pods, the kubelet applies them. -To use this approach: +To use this approach: 1. Create a YAML file and store it on a web server so that you can pass the URL of that file to the kubelet. @@ -225,6 +225,18 @@ crictl ps CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 89db4553e1eeb docker.io/library/nginx@sha256:... 19 seconds ago Running web 1 34533c6729106 ``` +Once you identify the right container, you can get the logs for that container with `crictl`: + +```shell +# Run these commands on the node where the container is running +crictl logs +``` +```console +10.240.0.48 - - [16/Nov/2022:12:45:49 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.47.0" "-" +10.240.0.48 - - [16/Nov/2022:12:45:50 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.47.0" "-" +10.240.0.48 - - [16/Nov/2022:12:45:51 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.47.0" "-" +``` +To learn more about how to debug using `crictl`, see [_Debugging Kubernetes nodes with crictl_](https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/). ## Dynamic addition and removal of static pods @@ -232,7 +244,7 @@ The running kubelet periodically scans the configured directory (`/etc/kubernete ```shell # This assumes you are using filesystem-hosted static Pod configuration -# Run these commands on the node where the kubelet is running +# Run these commands on the node where the container is running # mv /etc/kubernetes/manifests/static-web.yaml /tmp sleep 20 @@ -246,3 +258,12 @@ crictl ps CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID f427638871c35 docker.io/library/nginx@sha256:... 19 seconds ago Running web 1 34533c6729106 ``` +## {{% heading "whatsnext" %}} + +* [Generate static Pod manifests for control plane components](/docs/reference/setup-tools/kubeadm/implementation-details/#generate-static-pod-manifests-for-control-plane-components) +* [Generate static Pod manifest for local etcd](/docs/reference/setup-tools/kubeadm/implementation-details/#generate-static-pod-manifest-for-local-etcd) +* [Debugging Kubernetes nodes with `crictl`](/docs/tasks/debug/debug-cluster/crictl/) +* [Learn more about `crictl`](https://github.com/kubernetes-sigs/cri-tools). +* [Map `docker` CLI commands to `crictl`](/docs/reference/tools/map-crictl-dockercli/).
+* [Set up etcd instances as static pods managed by a kubelet](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) + diff --git a/content/en/docs/tasks/debug/_index.md b/content/en/docs/tasks/debug/_index.md index da024f4af915a..0d990ec949dfd 100644 --- a/content/en/docs/tasks/debug/_index.md +++ b/content/en/docs/tasks/debug/_index.md @@ -43,14 +43,32 @@ and command-line interfaces (CLIs), such as [`kubectl`](/docs/reference/kubectl/ ## Help! My question isn't covered! I need help now! -### Stack Overflow +### Stack Exchange, Stack Overflow, or Server Fault {#stack-exchange} -Someone else from the community may have already asked a similar question or may -be able to help with your problem. The Kubernetes team will also monitor +If you have questions related to *software development* for your containerized app, +you can ask those on [Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes). + +If you have Kubernetes questions related to *cluster management* or *configuration*, +you can ask those on +[Server Fault](https://serverfault.com/questions/tagged/kubernetes). + +There are also several more specific Stack Exchange network sites which might +be the right place to ask Kubernetes questions in areas such as +[DevOps](https://devops.stackexchange.com/questions/tagged/kubernetes), +[Software Engineering](https://softwareengineering.stackexchange.com/questions/tagged/kubernetes), +or [InfoSec](https://security.stackexchange.com/questions/tagged/kubernetes). + +Someone else from the community may have already asked a similar question or +may be able to help with your problem. + +The Kubernetes team will also monitor [posts tagged Kubernetes](https://stackoverflow.com/questions/tagged/kubernetes). -If there aren't any existing questions that help, **please [ensure that your question is on-topic on Stack Overflow](https://stackoverflow.com/help/on-topic) -and that you read through the guidance on [how to ask a new question](https://stackoverflow.com/help/how-to-ask)**, -before [asking a new one](https://stackoverflow.com/questions/ask?tags=kubernetes)! +If there aren't any existing questions that help, **please ensure that your question +is [on-topic on Stack Overflow](https://stackoverflow.com/help/on-topic), +[Server Fault](https://serverfault.com/help/on-topic), or the Stack Exchange +Network site you're asking on**, and read through the guidance on +[how to ask a new question](https://stackoverflow.com/help/how-to-ask), +before asking a new one! 
### Slack diff --git a/content/en/docs/tasks/debug/debug-cluster/_index.md b/content/en/docs/tasks/debug/debug-cluster/_index.md index 29fb9a06ae71e..3278fdfa7d4ce 100644 --- a/content/en/docs/tasks/debug/debug-cluster/_index.md +++ b/content/en/docs/tasks/debug/debug-cluster/_index.md @@ -323,6 +323,7 @@ This is an incomplete list of things that could go wrong, and how to adjust your [monitoring resource usage](/docs/tasks/debug/debug-cluster/resource-usage-monitoring/) * Use Node Problem Detector to [monitor node health](/docs/tasks/debug/debug-cluster/monitor-node-health/) +* Use `kubectl debug node` to [debug Kubernetes nodes](/docs/tasks/debug/debug-cluster/kubectl-node-debug) * Use `crictl` to [debug Kubernetes nodes](/docs/tasks/debug/debug-cluster/crictl/) * Get more information about [Kubernetes auditing](/docs/tasks/debug/debug-cluster/audit/) * Use `telepresence` to [develop and debug services locally](/docs/tasks/debug/debug-cluster/local-debugging/) diff --git a/content/en/docs/tasks/debug/debug-cluster/kubectl-node-debug.md b/content/en/docs/tasks/debug/debug-cluster/kubectl-node-debug.md new file mode 100644 index 0000000000000..98d1a7182cd45 --- /dev/null +++ b/content/en/docs/tasks/debug/debug-cluster/kubectl-node-debug.md @@ -0,0 +1,109 @@ +--- +title: Debugging Kubernetes Nodes With Kubectl +content_type: task +min-kubernetes-server-version: 1.20 +--- + + +This page shows how to debug a [node](/docs/concepts/architecture/nodes/) +running on the Kubernetes cluster using `kubectl debug` command. + +## {{% heading "prerequisites" %}} + + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +You need to have permission to create Pods and to assign those new Pods to arbitrary nodes. +You also need to be authorized to create Pods that access filesystems from the host. + + + + +## Debugging a Node using `kubectl debug node` + +Use the `kubectl debug node` command to deploy a Pod to a Node that you want to troubleshoot. +This command is helpful in scenarios where you can't access your Node by using an SSH connection. +When the Pod is created, the Pod opens an interactive shell on the Node. +To create an interactive shell on a Node named “mynode”, run: + +```shell +kubectl debug node/mynode -it --image=ubuntu +``` + +```console +Creating debugging pod node-debugger-mynode-pdx84 with container debugger on node mynode. +If you don't see a command prompt, try pressing enter. +root@mynode:/# +``` + +The debug command helps to gather information and troubleshoot issues. Commands +that you might use include `ip`, `ifconfig`, `nc`, `ping`, and `ps` and so on. You can also +install other tools, such as `mtr`, `tcpdump`, and `curl`, from the respective package manager. + +{{< note >}} + +The debug commands may differ based on the image the debugging pod is using and +these commands might need to be installed. + +{{< /note >}} + +The debugging Pod can access the root filesystem of the Node, mounted at `/host` in the Pod. +If you run your kubelet in a filesystem namespace, +the debugging Pod sees the root for that namespace, not for the entire node. For a typical Linux node, +you can look at the following paths to find relevant logs: + +`/host/var/log/kubelet.log` +: Logs from the `kubelet`, responsible for running containers on the node. + +`/host/var/log/kube-proxy.log` +: Logs from `kube-proxy`, which is responsible for directing traffic to Service endpoints. + +`/host/var/log/containerd.log` +: Logs from the `containerd` process running on the node. 
+ +`/host/var/log/syslog` +: Shows general messages and information regarding the system. + +`/host/var/log/kern.log` +: Shows kernel logs. + +When creating a debugging session on a Node, keep in mind that: + +* `kubectl debug` automatically generates the name of the new pod, based on + the name of the node. +* The root filesystem of the Node will be mounted at `/host`. +* Although the container runs in the host IPC, Network, and PID namespaces, + the pod isn't privileged. This means that reading some process information might fail + because access to that information is restricted to superusers. For example, `chroot /host` will fail. + If you need a privileged pod, create it manually. + +## {{% heading "cleanup" %}} + +When you finish using the debugging Pod, delete it: + +```shell +kubectl get pods +``` + +```none +NAME READY STATUS RESTARTS AGE +node-debugger-mynode-pdx84 0/1 Completed 0 8m1s +``` + +```shell +# Change the pod name accordingly +kubectl delete pod node-debugger-mynode-pdx84 --now +``` + +```none +pod "node-debugger-mynode-pdx84" deleted +``` + +{{< note >}} + +The `kubectl debug node` command won't work if the Node is down (disconnected +from the network, or kubelet dies and won't restart, etc.). +Check [debugging a down/unreachable node ](/docs/tasks/debug/debug-cluster/#example-debugging-a-down-unreachable-node) +in that case. + +{{< /note >}} \ No newline at end of file diff --git a/content/en/docs/tasks/debug/debug-cluster/monitor-node-health.md b/content/en/docs/tasks/debug/debug-cluster/monitor-node-health.md index 34b4e0ed7d5fc..8592ada9c2e20 100644 --- a/content/en/docs/tasks/debug/debug-cluster/monitor-node-health.md +++ b/content/en/docs/tasks/debug/debug-cluster/monitor-node-health.md @@ -12,8 +12,8 @@ weight: 20 *Node Problem Detector* is a daemon for monitoring and reporting about a node's health. You can run Node Problem Detector as a `DaemonSet` or as a standalone daemon. Node Problem Detector collects information about node problems from various daemons -and reports these conditions to the API server as [NodeCondition](/docs/concepts/architecture/nodes/#condition) -and [Event](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#event-v1-core). +and reports these conditions to the API server as Node [Condition](/docs/concepts/architecture/nodes/#condition)s +or as [Event](/docs/reference/kubernetes-api/cluster-resources/event-v1)s. To learn how to install and use Node Problem Detector, see [Node Problem Detector project documentation](https://github.com/kubernetes/node-problem-detector). @@ -26,16 +26,13 @@ To learn how to install and use Node Problem Detector, see ## Limitations -* Node Problem Detector only supports file based kernel log. - Log tools such as `journald` are not supported. - * Node Problem Detector uses the kernel log format for reporting kernel issues. To learn how to extend the kernel log format, see [Add support for another log format](#support-other-log-format). ## Enabling Node Problem Detector Some cloud providers enable Node Problem Detector as an {{< glossary_tooltip text="Addon" term_id="addons" >}}. -You can also enable Node Problem Detector with `kubectl` or by creating an Addon pod. +You can also enable Node Problem Detector with `kubectl` or by creating an Addon DaemonSet. 
### Using kubectl to enable Node Problem Detector {#using-kubectl} @@ -68,7 +65,7 @@ directory `/etc/kubernetes/addons/node-problem-detector` on a control plane node ## Overwrite the configuration -The [default configuration](https://github.com/kubernetes/node-problem-detector/tree/v0.1/config) +The [default configuration](https://github.com/kubernetes/node-problem-detector/tree/v0.8.12/config) is embedded when building the Docker image of Node Problem Detector. However, you can use a [`ConfigMap`](/docs/tasks/configure-pod-container/configure-pod-configmap/) @@ -100,54 +97,59 @@ This approach only applies to a Node Problem Detector started with `kubectl`. Overwriting a configuration is not supported if a Node Problem Detector runs as a cluster Addon. The Addon manager does not support `ConfigMap`. -## Kernel Monitor +## Problem Daemons + +A problem daemon is a sub-daemon of the Node Problem Detector. It monitors specific kinds of node +problems and reports them to the Node Problem Detector. +There are several types of supported problem daemons. -*Kernel Monitor* is a system log monitor daemon supported in the Node Problem Detector. -Kernel monitor watches the kernel log and detects known kernel issues following predefined rules. +- A `SystemLogMonitor` type of daemon monitors the system logs and reports problems and metrics + according to predefined rules. You can customize the configurations for different log sources + such as [filelog](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/kernel-monitor-filelog.json), + [kmsg](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/kernel-monitor.json), + [kernel](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/kernel-monitor-counter.json), + [abrt](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/abrt-adaptor.json), + and [systemd](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/systemd-monitor-counter.json). -The Kernel Monitor matches kernel issues according to a set of predefined rule list in -[`config/kernel-monitor.json`](https://github.com/kubernetes/node-problem-detector/blob/v0.1/config/kernel-monitor.json). The rule list is extensible. You can expand the rule list by overwriting the -configuration. +- A `SystemStatsMonitor` type of daemon collects various health-related system stats as metrics. + You can customize its behavior by updating its + [configuration file](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/system-stats-monitor.json). -### Add new NodeConditions +- A `CustomPluginMonitor` type of daemon invokes and checks various node problems by running + user-defined scripts. You can use different custom plugin monitors to monitor different + problems and customize the daemon behavior by updating the + [configuration file](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/custom-plugin-monitor.json). -To support a new `NodeCondition`, create a condition definition within the `conditions` field in -`config/kernel-monitor.json`, for example: +- A `HealthChecker` type of daemon checks the health of the kubelet and container runtime on a node. -```json -{ - "type": "NodeConditionType", - "reason": "CamelCaseDefaultNodeConditionReason", - "message": "arbitrary default node condition message" -} -``` +### Adding support for other log format {#support-other-log-format} -### Detect new problems +The system log monitor currently supports file-based logs, journald, and kmsg. 
+Additional sources can be added by implementing a new +[log watcher](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/pkg/systemlogmonitor/logwatchers/types/log_watcher.go). -To detect new problems, you can extend the `rules` field in `config/kernel-monitor.json` -with a new rule definition: +### Adding custom plugin monitors -```json -{ - "type": "temporary/permanent", - "condition": "NodeConditionOfPermanentIssue", - "reason": "CamelCaseShortReason", - "message": "regexp matching the issue in the kernel log" -} -``` +You can extend the Node Problem Detector to execute any monitor scripts written in any language by +developing a custom plugin. The monitor scripts must conform to the plugin protocol in exit code +and standard output. For more information, please refer to the +[plugin interface proposal](https://docs.google.com/document/d/1jK_5YloSYtboj-DtfjmYKxfNnUxCAvohLnsH5aGCAYQ/edit#). -### Configure path for the kernel log device {#kernel-log-device-path} +## Exporter -Check your kernel log path location in your operating system (OS) distribution. -The Linux kernel [log device](https://www.kernel.org/doc/Documentation/ABI/testing/dev-kmsg) is usually presented as `/dev/kmsg`. However, the log path location varies by OS distribution. -The `log` field in `config/kernel-monitor.json` represents the log path inside the container. -You can configure the `log` field to match the device path as seen by the Node Problem Detector. +An exporter reports the node problems and/or metrics to certain backends. +The following exporters are supported: -### Add support for another log format {#support-other-log-format} +- **Kubernetes exporter**: this exporter reports node problems to the Kubernetes API server. + Temporary problems are reported as Events and permanent problems are reported as Node Conditions. -Kernel monitor uses the -[`Translator`](https://github.com/kubernetes/node-problem-detector/blob/v0.1/pkg/kernelmonitor/translator/translator.go) plugin to translate the internal data structure of the kernel log. -You can implement a new translator for a new log format. +- **Prometheus exporter**: this exporter reports node problems and metrics locally as Prometheus + (or OpenMetrics) metrics. You can specify the IP address and port for the exporter using command + line arguments. + +- **Stackdriver exporter**: this exporter reports node problems and metrics to the Stackdriver + Monitoring API. The exporting behavior can be customized using a + [configuration file](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/exporter/stackdriver-exporter.json). @@ -160,4 +162,5 @@ Usually this is fine, because: * The kernel log grows relatively slowly. * A resource limit is set for the Node Problem Detector. * Even under high load, the resource usage is acceptable. For more information, see the Node Problem Detector -[benchmark result](https://github.com/kubernetes/node-problem-detector/issues/2#issuecomment-220255629). + [benchmark result](https://github.com/kubernetes/node-problem-detector/issues/2#issuecomment-220255629). 
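To see what Node Problem Detector reports for a particular node, you can inspect that node's conditions and the Events recorded against the Node object. A minimal sketch, assuming a node named `node-1`; the exact condition types (for example `KernelDeadlock`) depend on which problem daemons you have configured:

```shell
# Permanent problems appear as additional Node conditions
kubectl get node node-1 -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'

# Temporary problems are reported as Events associated with the Node object
kubectl get events --field-selector involvedObject.kind=Node,involvedObject.name=node-1
```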
+ diff --git a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md index 99fd7f4823df0..f472a94962616 100644 --- a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md +++ b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md @@ -78,8 +78,8 @@ Removing an old version: If this occurs, switch back to using `served:true` on the old version, migrate the remaining clients to the new version and repeat this step. 1. Ensure the [upgrade of existing objects to the new stored version](#upgrade-existing-objects-to-a-new-stored-version) step has been completed. - 1. Verify that the `storage` is set to `true` for the new version in the `spec.versions` list in the CustomResourceDefinition. - 1. Verify that the old version is no longer listed in the CustomResourceDefinition `status.storedVersions`. + 1. Verify that the `storage` is set to `true` for the new version in the `spec.versions` list in the CustomResourceDefinition. + 1. Verify that the old version is no longer listed in the CustomResourceDefinition `status.storedVersions`. 1. Remove the old version from the CustomResourceDefinition `spec.versions` list. 1. Drop conversion support for the old version in conversion webhooks. @@ -356,7 +356,7 @@ spec: ### Version removal -An older API version cannot be dropped from a CustomResourceDefinition manifest until existing persisted data has been migrated to the newer API version for all clusters that served the older version of the custom resource, and the old version is removed from the `status.storedVersions` of the CustomResourceDefinition. +An older API version cannot be dropped from a CustomResourceDefinition manifest until existing stored data has been migrated to the newer API version for all clusters that served the older version of the custom resource, and the old version is removed from the `status.storedVersions` of the CustomResourceDefinition. ```yaml apiVersion: apiextensions.k8s.io/v1 @@ -1021,18 +1021,29 @@ Example of a response from a webhook indicating a conversion request failed, wit ## Writing, reading, and updating versioned CustomResourceDefinition objects -When an object is written, it is persisted at the version designated as the +When an object is written, it is stored at the version designated as the storage version at the time of the write. If the storage version changes, existing objects are never converted automatically. However, newly-created or updated objects are written at the new storage version. It is possible for an object to have been written at a version that is no longer served. -When you read an object, you specify the version as part of the path. If you -specify a version that is different from the object's persisted version, -Kubernetes returns the object to you at the version you requested, but the -persisted object is neither changed on disk, nor converted in any way -(other than changing the `apiVersion` string) while serving the request. +When you read an object, you specify the version as part of the path. You can request an object at any version that is currently served. +If you specify a version that is different from the object's stored version, +Kubernetes returns the object to you at the version you requested, but the +stored object is not changed on disk. 
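For example, kubectl lets you request a specific served version by using the fully qualified `<plural>.<version>.<group>` form of the resource name. A minimal sketch, assuming a hypothetical CustomResourceDefinition with plural `crontabs`, group `example.com`, served versions `v1beta1` and `v1`, and an existing object named `my-crontab`:

```shell
# Read the same stored object at each served version; the response is
# presented at the requested apiVersion while the stored data is unchanged
kubectl get crontabs.v1beta1.example.com my-crontab -o yaml
kubectl get crontabs.v1.example.com my-crontab -o yaml
```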
+ +What happens to the object that is being returned while serving the read +request depends on what is specified in the CRD's `spec.conversion`: +- if the default `strategy` value `None` is specified, the only modifications + to the object are changing the `apiVersion` string and perhaps [pruning + unknown fields](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#field-pruning) + (depending on the configuration). Note that this is unlikely to lead to good + results if the schemas differ between the storage and requested version. + In particular, you should not use this strategy if the same data is + represented in different fields between versions. +- if [webhook conversion](#webhook-conversion) is specified, then this + mechanism controls the conversion. If you update an existing object, it is rewritten at the version that is currently the storage version. This is the only way that objects can change from @@ -1040,23 +1051,24 @@ one version to another. To illustrate this, consider the following hypothetical series of events: -1. The storage version is `v1beta1`. You create an object. It is persisted in - storage at version `v1beta1` -2. You add version `v1` to your CustomResourceDefinition and designate it as - the storage version. -3. You read your object at version `v1beta1`, then you read the object again at - version `v1`. Both returned objects are identical except for the apiVersion - field. -4. You create a new object. It is persisted in storage at version `v1`. You now - have two objects, one of which is at `v1beta1`, and the other of which is at - `v1`. -5. You update the first object. It is now persisted at version `v1` since that - is the current storage version. +1. The storage version is `v1beta1`. You create an object. It is stored at version `v1beta1` +2. You add version `v1` to your CustomResourceDefinition and designate it as + the storage version. Here the schemas for `v1` and `v1beta1` are identical, + which is typically the case when promoting an API to stable in the + Kubernetes ecosystem. +3. You read your object at version `v1beta1`, then you read the object again at + version `v1`. Both returned objects are identical except for the apiVersion + field. +4. You create a new object. It is stored at version `v1`. You now + have two objects, one of which is at `v1beta1`, and the other of which is at + `v1`. +5. You update the first object. It is now stored at version `v1` since that + is the current storage version. ### Previous storage versions The API server records each version which has ever been marked as the storage -version in the status field `storedVersions`. Objects may have been persisted +version in the status field `storedVersions`. Objects may have been stored at any version that has ever been designated as a storage version. No objects can exist in storage at a version that has never been a storage version. @@ -1067,19 +1079,19 @@ procedure. *Option 1:* Use the Storage Version Migrator -1. Run the [storage Version migrator](https://github.com/kubernetes-sigs/kube-storage-version-migrator) -2. Remove the old version from the CustomResourceDefinition `status.storedVersions` field. +1. Run the [storage Version migrator](https://github.com/kubernetes-sigs/kube-storage-version-migrator) +2. Remove the old version from the CustomResourceDefinition `status.storedVersions` field. *Option 2:* Manually upgrade the existing objects to a new stored version The following is an example procedure to upgrade from `v1beta1` to `v1`. -1. 
Set `v1` as the storage in the CustomResourceDefinition file and apply it - using kubectl. The `storedVersions` is now `v1beta1, v1`. -2. Write an upgrade procedure to list all existing objects and write them with - the same content. This forces the backend to write objects in the current - storage version, which is `v1`. -3. Remove `v1beta1` from the CustomResourceDefinition `status.storedVersions` field. +1. Set `v1` as the storage in the CustomResourceDefinition file and apply it + using kubectl. The `storedVersions` is now `v1beta1, v1`. +2. Write an upgrade procedure to list all existing objects and write them with + the same content. This forces the backend to write objects in the current + storage version, which is `v1`. +3. Remove `v1beta1` from the CustomResourceDefinition `status.storedVersions` field. {{< note >}} The flag `--subresource` is used with the kubectl get, patch, edit, and replace commands to diff --git a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md index 5d3a861911c3b..729d9d4a4f7e2 100644 --- a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md +++ b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md @@ -9,18 +9,7 @@ weight: 10 -You can use a {{< glossary_tooltip text="CronJob" term_id="cronjob" >}} to run {{< glossary_tooltip text="Jobs" term_id="job" >}} -on a time-based schedule. -These automated jobs run like [Cron](https://en.wikipedia.org/wiki/Cron) tasks on a Linux or UNIX system. - -Cron jobs are useful for creating periodic and recurring tasks, like running backups or sending emails. -Cron jobs can also schedule individual tasks for a specific time, such as if you want to schedule a job for a low activity period. - -Cron jobs have limitations and idiosyncrasies. -For example, in certain circumstances, a single cron job can create multiple jobs. -Therefore, jobs should be idempotent. - -For more limitations, see [CronJobs](/docs/concepts/workloads/controllers/cron-jobs). +This page shows how to run automated tasks using Kubernetes {{< glossary_tooltip text="CronJob" term_id="cronjob" >}} object. ## {{% heading "prerequisites" %}} @@ -123,97 +112,3 @@ kubectl delete cronjob hello Deleting the cron job removes all the jobs and pods it created and stops it from creating additional jobs. You can read more about removing jobs in [garbage collection](/docs/concepts/architecture/garbage-collection/). - -## Writing a CronJob Spec {#writing-a-cron-job-spec} - -As with all other Kubernetes objects, a CronJob must have `apiVersion`, `kind`, and `metadata` fields. -For more information about working with Kubernetes objects and their -{{< glossary_tooltip text="manifests" term_id="manifest" >}}, see the -[managing resources](/docs/concepts/cluster-administration/manage-deployment/), -and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents. - -Each manifest for a CronJob also needs a [`.spec`](/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status) section. - -{{< note >}} -If you modify a CronJob, the changes you make will apply to new jobs that start to run after your modification -is complete. Jobs (and their Pods) that have already started continue to run without changes. -That is, the CronJob does _not_ update existing jobs, even if those remain running. -{{< /note >}} - -### Schedule - -The `.spec.schedule` is a required field of the `.spec`. 
-It takes a [Cron](https://en.wikipedia.org/wiki/Cron) format string, such as `0 * * * *` or `@hourly`, -as schedule time of its jobs to be created and executed. - -The format also includes extended "Vixie cron" step values. As explained in the -[FreeBSD manual](https://www.freebsd.org/cgi/man.cgi?crontab%285%29): - -> Step values can be used in conjunction with ranges. Following a range -> with `/` specifies skips of the number's value through the -> range. For example, `0-23/2` can be used in the hours field to specify -> command execution every other hour (the alternative in the V7 standard is -> `0,2,4,6,8,10,12,14,16,18,20,22`). Steps are also permitted after an -> asterisk, so if you want to say "every two hours", just use `*/2`. - -{{< note >}} -A question mark (`?`) in the schedule has the same meaning as an asterisk `*`, that is, -it stands for any of available value for a given field. -{{< /note >}} - -### Job Template - -The `.spec.jobTemplate` is the template for the job, and it is required. -It has exactly the same schema as a [Job](/docs/concepts/workloads/controllers/job/), except that -it is nested and does not have an `apiVersion` or `kind`. -For information about writing a job `.spec`, see [Writing a Job Spec](/docs/concepts/workloads/controllers/job/#writing-a-job-spec). - -### Starting Deadline - -The `.spec.startingDeadlineSeconds` field is optional. -It stands for the deadline in seconds for starting the job if it misses its scheduled time for any reason. -After the deadline, the cron job does not start the job. -Jobs that do not meet their deadline in this way count as failed jobs. -If this field is not specified, the jobs have no deadline. - -If the `.spec.startingDeadlineSeconds` field is set (not null), the CronJob -controller measures the time between when a job is expected to be created and -now. If the difference is higher than that limit, it will skip this execution. - -For example, if it is set to `200`, it allows a job to be created for up to 200 -seconds after the actual schedule. - -### Concurrency Policy - -The `.spec.concurrencyPolicy` field is also optional. -It specifies how to treat concurrent executions of a job that is created by this cron job. -The spec may specify only one of the following concurrency policies: - -* `Allow` (default): The cron job allows concurrently running jobs -* `Forbid`: The cron job does not allow concurrent runs; if it is time for a new job run and the - previous job run hasn't finished yet, the cron job skips the new job run -* `Replace`: If it is time for a new job run and the previous job run hasn't finished yet, the - cron job replaces the currently running job run with a new job run - -Note that concurrency policy only applies to the jobs created by the same cron job. -If there are multiple cron jobs, their respective jobs are always allowed to run concurrently. - -### Suspend - -The `.spec.suspend` field is also optional. -If it is set to `true`, all subsequent executions are suspended. -This setting does not apply to already started executions. -Defaults to false. - -{{< caution >}} -Executions that are suspended during their scheduled time count as missed jobs. -When `.spec.suspend` changes from `true` to `false` on an existing cron job without a -[starting deadline](#starting-deadline), the missed jobs are scheduled immediately. -{{< /caution >}} - -### Jobs History Limits - -The `.spec.successfulJobsHistoryLimit` and `.spec.failedJobsHistoryLimit` fields are optional. 
-These fields specify how many completed and failed jobs should be kept. -By default, they are set to 3 and 1 respectively. Setting a limit to `0` corresponds to keeping -none of the corresponding kind of jobs after they finish. diff --git a/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md b/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md index 643b57cc3b2f0..2b54f2f409482 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md @@ -315,7 +315,7 @@ kind: Deployment metadata: annotations: # ... - # The annotation contains the updated image to nginx 1.11.9, + # The annotation contains the updated image to nginx 1.16.1, # but does not contain the updated replicas to 2 kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"apps/v1","kind":"Deployment", @@ -513,7 +513,7 @@ kind: Deployment metadata: annotations: # ... - # The annotation contains the updated image to nginx 1.11.9, + # The annotation contains the updated image to nginx 1.16.1, # but does not contain the updated replicas to 2 kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"apps/v1","kind":"Deployment", diff --git a/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md b/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md index 2c9c94c70740a..4bd40719ac201 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md @@ -434,7 +434,7 @@ kubectl patch deployment patch-demo --patch '{"spec": {"template": {"spec": {"co The flag `--subresource=[subresource-name]` is used with kubectl commands like get, patch, edit and replace to fetch and update `status` and `scale` subresources of the resources (applicable for kubectl version v1.24 or more). This flag is used with all the API resources -(built-in and CRs) which has `status` or `scale` subresource. Deployment is one of the +(built-in and CRs) that have `status` or `scale` subresource. Deployment is one of the examples which supports these subresources. Here's a manifest for a Deployment that has two replicas: diff --git a/content/en/docs/tasks/run-application/access-api-from-pod.md b/content/en/docs/tasks/run-application/access-api-from-pod.md index d56f624cd561b..41d6ea478e579 100644 --- a/content/en/docs/tasks/run-application/access-api-from-pod.md +++ b/content/en/docs/tasks/run-application/access-api-from-pod.md @@ -42,10 +42,18 @@ securely with the API server. ### Directly accessing the REST API -While running in a Pod, the Kubernetes apiserver is accessible via a Service named -`kubernetes` in the `default` namespace. Therefore, Pods can use the -`kubernetes.default.svc` hostname to query the API server. Official client libraries -do this automatically. +While running in a Pod, your container can create an HTTPS URL for the Kubernetes API +server by fetching the `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT_HTTPS` +environment variables. The API server's in-cluster address is also published to a +Service named `kubernetes` in the `default` namespace so that pods may reference +`kubernetes.default.svc` as a DNS name for the local API server. 
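For instance, a process in the Pod can combine those environment variables with the ServiceAccount credentials that are mounted into the container to call the API server directly. A minimal sketch, assuming the default ServiceAccount token is mounted at the standard path and that `curl` is available in the container image:

```shell
# Run these commands from inside the Pod
APISERVER="https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT_HTTPS}"
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN="$(cat ${SA_DIR}/token)"

# Query the API server's version endpoint, validating its serving certificate
# against the cluster CA bundle that is mounted alongside the token
curl --cacert "${SA_DIR}/ca.crt" --header "Authorization: Bearer ${TOKEN}" "${APISERVER}/version"
```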
+ +{{< note >}} +Kubernetes does not guarantee that the API server has a valid certificate for +the hostname `kubernetes.default.svc`; +however, the control plane **is** expected to present a valid certificate for the +hostname or IP address that `$KUBERNETES_SERVICE_HOST` represents. +{{< /note >}} The recommended way to authenticate to the API server is with a [service account](/docs/tasks/configure-pod-container/configure-service-account/) diff --git a/content/en/docs/tasks/tools/included/_index.md b/content/en/docs/tasks/tools/included/_index.md index 2da0437b8235a..3313378500fa4 100644 --- a/content/en/docs/tasks/tools/included/_index.md +++ b/content/en/docs/tasks/tools/included/_index.md @@ -3,4 +3,8 @@ title: "Tools Included" description: "Snippets to be included in the main kubectl-installs-*.md pages." headless: true toc_hide: true +_build: + list: never + render: never + publishResources: false --- \ No newline at end of file diff --git a/content/en/docs/tasks/tools/included/kubectl-convert-overview.md b/content/en/docs/tasks/tools/included/kubectl-convert-overview.md index b1799d52ea212..681741645265a 100644 --- a/content/en/docs/tasks/tools/included/kubectl-convert-overview.md +++ b/content/en/docs/tasks/tools/included/kubectl-convert-overview.md @@ -4,6 +4,10 @@ description: >- A kubectl plugin that allows you to convert manifests from one version of a Kubernetes API to a different version. headless: true +_build: + list: never + render: never + publishResources: false --- A plugin for Kubernetes command-line tool `kubectl`, which allows you to convert manifests between different API diff --git a/content/en/docs/tasks/tools/included/kubectl-whats-next.md b/content/en/docs/tasks/tools/included/kubectl-whats-next.md index 4b0da49bbcd97..ea77a0a607975 100644 --- a/content/en/docs/tasks/tools/included/kubectl-whats-next.md +++ b/content/en/docs/tasks/tools/included/kubectl-whats-next.md @@ -2,6 +2,10 @@ title: "What's next?" description: "What's next after installing kubectl." headless: true +_build: + list: never + render: never + publishResources: false --- * [Install Minikube](https://minikube.sigs.k8s.io/docs/start/) diff --git a/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md b/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md index 2f4a759e4e613..3c0a77b70e0ba 100644 --- a/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md +++ b/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md @@ -2,6 +2,10 @@ title: "bash auto-completion on Linux" description: "Some optional configuration for bash auto-completion on Linux." headless: true +_build: + list: never + render: never + publishResources: false --- ### Introduction diff --git a/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md b/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md index 47243c575ac61..04db11388510b 100644 --- a/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md +++ b/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md @@ -2,6 +2,10 @@ title: "bash auto-completion on macOS" description: "Some optional configuration for bash auto-completion on macOS." 
headless: true +_build: + list: never + render: never + publishResources: false --- ### Introduction @@ -51,8 +55,7 @@ brew install bash-completion@2 As stated in the output of this command, add the following to your `~/.bash_profile` file: ```bash -export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d" -[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh" +brew_etc="$(brew --prefix)/etc" && [[ -r "${brew_etc}/profile.d/bash_completion.sh" ]] && . "${brew_etc}/profile.d/bash_completion.sh" ``` Reload your shell and verify that bash-completion v2 is correctly installed with `type _init_completion`. diff --git a/content/en/docs/tasks/tools/included/optional-kubectl-configs-fish.md b/content/en/docs/tasks/tools/included/optional-kubectl-configs-fish.md index a64d0e184c223..b98460c554ca3 100644 --- a/content/en/docs/tasks/tools/included/optional-kubectl-configs-fish.md +++ b/content/en/docs/tasks/tools/included/optional-kubectl-configs-fish.md @@ -2,8 +2,16 @@ title: "fish auto-completion" description: "Optional configuration to enable fish shell auto-completion." headless: true +_build: + list: never + render: never + publishResources: false --- +{{< note >}} +Autocomplete for Fish requires kubectl 1.23 or later. +{{< /note >}} + The kubectl completion script for Fish can be generated with the command `kubectl completion fish`. Sourcing the completion script in your shell enables kubectl autocompletion. To do so in all your shell sessions, add the following line to your `~/.config/fish/config.fish` file: diff --git a/content/en/docs/tasks/tools/included/optional-kubectl-configs-pwsh.md b/content/en/docs/tasks/tools/included/optional-kubectl-configs-pwsh.md index 12e5d60c5d29b..66acd343b0c20 100644 --- a/content/en/docs/tasks/tools/included/optional-kubectl-configs-pwsh.md +++ b/content/en/docs/tasks/tools/included/optional-kubectl-configs-pwsh.md @@ -2,6 +2,10 @@ title: "PowerShell auto-completion" description: "Some optional configuration for powershell auto-completion." headless: true +_build: + list: never + render: never + publishResources: false --- The kubectl completion script for PowerShell can be generated with the command `kubectl completion powershell`. diff --git a/content/en/docs/tasks/tools/included/optional-kubectl-configs-zsh.md b/content/en/docs/tasks/tools/included/optional-kubectl-configs-zsh.md index 176bdeeeb12eb..dd6c4fd48ff95 100644 --- a/content/en/docs/tasks/tools/included/optional-kubectl-configs-zsh.md +++ b/content/en/docs/tasks/tools/included/optional-kubectl-configs-zsh.md @@ -2,6 +2,10 @@ title: "zsh auto-completion" description: "Some optional configuration for zsh auto-completion." headless: true +_build: + list: never + render: never + publishResources: false --- The kubectl completion script for Zsh can be generated with the command `kubectl completion zsh`. Sourcing the completion script in your shell enables kubectl autocompletion. diff --git a/content/en/docs/tasks/tools/included/verify-kubectl.md b/content/en/docs/tasks/tools/included/verify-kubectl.md index fbd92e4cb6795..78246912657e6 100644 --- a/content/en/docs/tasks/tools/included/verify-kubectl.md +++ b/content/en/docs/tasks/tools/included/verify-kubectl.md @@ -2,6 +2,10 @@ title: "verify kubectl install" description: "How to verify kubectl." 
headless: true +_build: + list: never + render: never + publishResources: false --- In order for kubectl to find and access a Kubernetes cluster, it needs a diff --git a/content/en/docs/tasks/tools/install-kubectl-windows.md b/content/en/docs/tasks/tools/install-kubectl-windows.md index 240e3807a7cb1..0e7bc7c53e070 100644 --- a/content/en/docs/tasks/tools/install-kubectl-windows.md +++ b/content/en/docs/tasks/tools/install-kubectl-windows.md @@ -56,7 +56,7 @@ The following methods exist for installing kubectl on Windows: - Using PowerShell to automate the verification using the `-eq` operator to get a `True` or `False` result: ```powershell - $($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256) + $(Get-FileHash -Algorithm SHA256 .\kubectl.exe).Hash -eq $(Get-Content .\kubectl.exe.sha256) ``` 1. Append or prepend the `kubectl` binary folder to your `PATH` environment variable. diff --git a/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive.html b/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive.html index fd3db09a42996..153137dc91a04 100644 --- a/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive.html +++ b/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive.html @@ -17,9 +17,6 @@
    -
    - To interact with the Terminal, please use the desktop/tablet version -
    diff --git a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html index 91cb4cad8bf43..8fb8cde395a61 100644 --- a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html @@ -16,9 +16,6 @@
    -
    - The screen is too narrow to interact with the Terminal, please use a desktop/tablet. -
    diff --git a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html index 07771deb76622..7a0a679b53081 100644 --- a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html @@ -70,7 +70,7 @@

    Cluster Diagram

    The Control Plane is responsible for managing the cluster. The Control Plane coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.
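On many clusters (for example, ones bootstrapped with kubeadm or minikube) the control plane components themselves are visible as Pods in the `kube-system` namespace; the exact component names vary by cluster type, so the check below is only a rough illustration:

```shell
# Illustrative check: control plane components such as the API server,
# scheduler and controller manager often show up as Pods in kube-system.
kubectl get pods --namespace kube-system
```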

    -

    A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes control plane. The node should also have tools for handling container operations, such as containerd or Docker. A Kubernetes cluster that handles production traffic should have a minimum of three nodes because if one node goes down, both an etcd member and a control plane instance are lost, and redundancy is compromised. You can mitigate this risk by adding more control plane nodes.

    +

    A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes control plane. The node should also have tools for handling container operations, such as containerd or Docker. A Kubernetes cluster that handles production traffic should have a minimum of three nodes because if one node goes down, both an etcd member and a control plane instance are lost, and redundancy is compromised. You can mitigate this risk by adding more control plane nodes.
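A quick way to see the nodes in a cluster, and the kubelet version each one reports, is sketched below; the node name passed to `kubectl describe` is a placeholder:

```shell
# List every node together with its status, roles and kubelet version.
kubectl get nodes -o wide

# Inspect one node in detail (capacity, conditions, running pods).
# "my-node" is a placeholder; substitute a name from the previous command.
kubectl describe node my-node
```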

    diff --git a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html index 9154fac086db5..5cbc82084d2a3 100644 --- a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html @@ -25,10 +25,6 @@
    -
    - To interact with the Terminal, please use the desktop/tablet version -
    -
    diff --git a/content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html b/content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html index 9e4f35f08107b..03561cc6a55c4 100644 --- a/content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html @@ -18,10 +18,6 @@
    -
    - To interact with the Terminal, please use the desktop/tablet version -
    -
    diff --git a/content/en/docs/tutorials/kubernetes-basics/expose/expose-interactive.html b/content/en/docs/tutorials/kubernetes-basics/expose/expose-interactive.html index bec8c37da165c..4cd3a26f2b69d 100644 --- a/content/en/docs/tutorials/kubernetes-basics/expose/expose-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/expose/expose-interactive.html @@ -16,9 +16,6 @@
    -
    - To interact with the Terminal, please use the desktop/tablet version -
    diff --git a/content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html b/content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html index 890fe67ac52a7..68e4c21d14028 100644 --- a/content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html @@ -16,9 +16,6 @@
    -
    - To interact with the Terminal, please use the desktop/tablet version -
    diff --git a/content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html b/content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html index 4ce801579fe8f..83a216e30ad25 100644 --- a/content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html @@ -16,9 +16,6 @@
    -
    - To interact with the Terminal, please use the desktop/tablet version -
    @@ -37,4 +34,4 @@ - + diff --git a/content/en/docs/tutorials/security/apparmor.md b/content/en/docs/tutorials/security/apparmor.md index 07b9fae3e8a6f..55d632ddb2665 100644 --- a/content/en/docs/tutorials/security/apparmor.md +++ b/content/en/docs/tutorials/security/apparmor.md @@ -3,7 +3,7 @@ reviewers: - stclair title: Restrict a Container's Access to Resources with AppArmor content_type: tutorial -weight: 10 +weight: 30 --- diff --git a/content/en/docs/tutorials/security/cluster-level-pss.md b/content/en/docs/tutorials/security/cluster-level-pss.md index 1748ebb19c754..07273c3be8ee9 100644 --- a/content/en/docs/tutorials/security/cluster-level-pss.md +++ b/content/en/docs/tutorials/security/cluster-level-pss.md @@ -41,56 +41,55 @@ that are most appropriate for your configuration, do the following: 1. Create a cluster with no Pod Security Standards applied: - ```shell - kind create cluster --name psa-wo-cluster-pss --image kindest/node:v1.24.0 - ``` + ```shell + kind create cluster --name psa-wo-cluster-pss --image kindest/node:v1.24.0 + ``` The output is similar to this: - ``` - Creating cluster "psa-wo-cluster-pss" ... - ✓ Ensuring node image (kindest/node:v1.24.0) 🖼 - ✓ Preparing nodes 📦 - ✓ Writing configuration 📜 - ✓ Starting control-plane 🕹️ - ✓ Installing CNI 🔌 - ✓ Installing StorageClass 💾 - Set kubectl context to "kind-psa-wo-cluster-pss" - You can now use your cluster with: - - kubectl cluster-info --context kind-psa-wo-cluster-pss - - Thanks for using kind! 😊 - - ``` + ``` + Creating cluster "psa-wo-cluster-pss" ... + ✓ Ensuring node image (kindest/node:v1.24.0) 🖼 + ✓ Preparing nodes 📦 + ✓ Writing configuration 📜 + ✓ Starting control-plane 🕹️ + ✓ Installing CNI 🔌 + ✓ Installing StorageClass 💾 + Set kubectl context to "kind-psa-wo-cluster-pss" + You can now use your cluster with: + + kubectl cluster-info --context kind-psa-wo-cluster-pss + + Thanks for using kind! 😊 + ``` 1. Set the kubectl context to the new cluster: - ```shell - kubectl cluster-info --context kind-psa-wo-cluster-pss - ``` + ```shell + kubectl cluster-info --context kind-psa-wo-cluster-pss + ``` The output is similar to this: - ``` - Kubernetes control plane is running at https://127.0.0.1:61350 + ``` + Kubernetes control plane is running at https://127.0.0.1:61350 - CoreDNS is running at https://127.0.0.1:61350/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy - - To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. - ``` - -1. Get a list of namespaces in the cluster: - - ```shell - kubectl get ns - ``` - The output is similar to this: - ``` - NAME STATUS AGE - default Active 9m30s - kube-node-lease Active 9m32s - kube-public Active 9m32s - kube-system Active 9m32s - local-path-storage Active 9m26s - ``` + CoreDNS is running at https://127.0.0.1:61350/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy + + To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. + ``` + +1. Get a list of namespaces in the cluster: + + ```shell + kubectl get ns + ``` + The output is similar to this: + ``` + NAME STATUS AGE + default Active 9m30s + kube-node-lease Active 9m32s + kube-public Active 9m32s + kube-system Active 9m32s + local-path-storage Active 9m26s + ``` 1. 
Use `--dry-run=server` to understand what happens when different Pod Security Standards are applied: @@ -100,7 +99,7 @@ that are most appropriate for your configuration, do the following: kubectl label --dry-run=server --overwrite ns --all \ pod-security.kubernetes.io/enforce=privileged ``` - The output is similar to this: + The output is similar to this: ``` namespace/default labeled namespace/kube-node-lease labeled @@ -113,7 +112,7 @@ that are most appropriate for your configuration, do the following: kubectl label --dry-run=server --overwrite ns --all \ pod-security.kubernetes.io/enforce=baseline ``` - The output is similar to this: + The output is similar to this: ``` namespace/default labeled namespace/kube-node-lease labeled @@ -127,11 +126,11 @@ that are most appropriate for your configuration, do the following: ``` 3. Restricted - ```shell + ```shell kubectl label --dry-run=server --overwrite ns --all \ pod-security.kubernetes.io/enforce=restricted ``` - The output is similar to this: + The output is similar to this: ``` namespace/default labeled namespace/kube-node-lease labeled @@ -179,72 +178,72 @@ following: 1. Create a configuration file that can be consumed by the Pod Security Admission Controller to implement these Pod Security Standards: - ``` - mkdir -p /tmp/pss - cat < /tmp/pss/cluster-level-pss.yaml - apiVersion: apiserver.config.k8s.io/v1 - kind: AdmissionConfiguration - plugins: - - name: PodSecurity - configuration: - apiVersion: pod-security.admission.config.k8s.io/v1 - kind: PodSecurityConfiguration - defaults: - enforce: "baseline" - enforce-version: "latest" - audit: "restricted" - audit-version: "latest" - warn: "restricted" - warn-version: "latest" - exemptions: - usernames: [] - runtimeClasses: [] - namespaces: [kube-system] - EOF - ``` - - {{< note >}} - `pod-security.admission.config.k8s.io/v1` configuration requires v1.25+. - For v1.23 and v1.24, use [v1beta1](https://v1-24.docs.kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/). - For v1.22, use [v1alpha1](https://v1-22.docs.kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/). - {{< /note >}} + ``` + mkdir -p /tmp/pss + cat < /tmp/pss/cluster-level-pss.yaml + apiVersion: apiserver.config.k8s.io/v1 + kind: AdmissionConfiguration + plugins: + - name: PodSecurity + configuration: + apiVersion: pod-security.admission.config.k8s.io/v1 + kind: PodSecurityConfiguration + defaults: + enforce: "baseline" + enforce-version: "latest" + audit: "restricted" + audit-version: "latest" + warn: "restricted" + warn-version: "latest" + exemptions: + usernames: [] + runtimeClasses: [] + namespaces: [kube-system] + EOF + ``` + + {{< note >}} + `pod-security.admission.config.k8s.io/v1` configuration requires v1.25+. + For v1.23 and v1.24, use [v1beta1](https://v1-24.docs.kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/). + For v1.22, use [v1alpha1](https://v1-22.docs.kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/). + {{< /note >}} 1. 
Configure the API server to consume this file during cluster creation: - ``` - cat < /tmp/pss/cluster-config.yaml - kind: Cluster - apiVersion: kind.x-k8s.io/v1alpha4 - nodes: - - role: control-plane - kubeadmConfigPatches: - - | - kind: ClusterConfiguration - apiServer: - extraArgs: - admission-control-config-file: /etc/config/cluster-level-pss.yaml - extraVolumes: - - name: accf - hostPath: /etc/config - mountPath: /etc/config - readOnly: false - pathType: "DirectoryOrCreate" - extraMounts: - - hostPath: /tmp/pss - containerPath: /etc/config - # optional: if set, the mount is read-only. - # default false - readOnly: false - # optional: if set, the mount needs SELinux relabeling. - # default false - selinuxRelabel: false - # optional: set propagation mode (None, HostToContainer or Bidirectional) - # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation - # default None - propagation: None - EOF - ``` + ``` + cat < /tmp/pss/cluster-config.yaml + kind: Cluster + apiVersion: kind.x-k8s.io/v1alpha4 + nodes: + - role: control-plane + kubeadmConfigPatches: + - | + kind: ClusterConfiguration + apiServer: + extraArgs: + admission-control-config-file: /etc/config/cluster-level-pss.yaml + extraVolumes: + - name: accf + hostPath: /etc/config + mountPath: /etc/config + readOnly: false + pathType: "DirectoryOrCreate" + extraMounts: + - hostPath: /tmp/pss + containerPath: /etc/config + # optional: if set, the mount is read-only. + # default false + readOnly: false + # optional: if set, the mount needs SELinux relabeling. + # default false + selinuxRelabel: false + # optional: set propagation mode (None, HostToContainer or Bidirectional) + # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation + # default None + propagation: None + EOF + ``` {{}} If you use Docker Desktop with KinD on macOS, you can @@ -256,56 +255,57 @@ following: these Pod Security Standards: ```shell - kind create cluster --name psa-with-cluster-pss --image kindest/node:v1.24.0 --config /tmp/pss/cluster-config.yaml + kind create cluster --name psa-with-cluster-pss --image kindest/node:v1.24.0 --config /tmp/pss/cluster-config.yaml ``` The output is similar to this: ``` - Creating cluster "psa-with-cluster-pss" ... - ✓ Ensuring node image (kindest/node:v1.24.0) 🖼 - ✓ Preparing nodes 📦 - ✓ Writing configuration 📜 - ✓ Starting control-plane 🕹️ - ✓ Installing CNI 🔌 - ✓ Installing StorageClass 💾 - Set kubectl context to "kind-psa-with-cluster-pss" - You can now use your cluster with: + Creating cluster "psa-with-cluster-pss" ... + ✓ Ensuring node image (kindest/node:v1.24.0) 🖼 + ✓ Preparing nodes 📦 + ✓ Writing configuration 📜 + ✓ Starting control-plane 🕹️ + ✓ Installing CNI 🔌 + ✓ Installing StorageClass 💾 + Set kubectl context to "kind-psa-with-cluster-pss" + You can now use your cluster with: - kubectl cluster-info --context kind-psa-with-cluster-pss + kubectl cluster-info --context kind-psa-with-cluster-pss - Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂 - ``` + Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂 + ``` -1. Point kubectl to the cluster +1. 
Point kubectl to the cluster: ```shell - kubectl cluster-info --context kind-psa-with-cluster-pss - ``` + kubectl cluster-info --context kind-psa-with-cluster-pss + ``` The output is similar to this: - ``` - Kubernetes control plane is running at https://127.0.0.1:63855 - CoreDNS is running at https://127.0.0.1:63855/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy + ``` + Kubernetes control plane is running at https://127.0.0.1:63855 + + CoreDNS is running at https://127.0.0.1:63855/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy - To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. - ``` + To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. + ``` 1. Create the following Pod specification for a minimal configuration in the default namespace: - ``` - cat < /tmp/pss/nginx-pod.yaml - apiVersion: v1 - kind: Pod - metadata: - name: nginx - spec: - containers: - - image: nginx - name: nginx - ports: - - containerPort: 80 - EOF - ``` + ``` + cat < /tmp/pss/nginx-pod.yaml + apiVersion: v1 + kind: Pod + metadata: + name: nginx + spec: + containers: + - image: nginx + name: nginx + ports: + - containerPort: 80 + EOF + ``` 1. Create the Pod in the cluster: ```shell - kubectl apply -f /tmp/pss/nginx-pod.yaml + kubectl apply -f /tmp/pss/nginx-pod.yaml ``` The output is similar to this: ``` @@ -315,9 +315,14 @@ following: ## Clean up -Run `kind delete cluster --name psa-with-cluster-pss` and -`kind delete cluster --name psa-wo-cluster-pss` to delete the clusters you -created. +Now delete the clusters which you created above by running the following command: + +```shell +kind delete cluster --name psa-with-cluster-pss +``` +```shell +kind delete cluster --name psa-wo-cluster-pss +``` ## {{% heading "whatsnext" %}} diff --git a/content/en/docs/tutorials/security/ns-level-pss.md b/content/en/docs/tutorials/security/ns-level-pss.md index d35df5904a5a9..64aaf64832a56 100644 --- a/content/en/docs/tutorials/security/ns-level-pss.md +++ b/content/en/docs/tutorials/security/ns-level-pss.md @@ -1,7 +1,7 @@ --- title: Apply Pod Security Standards at the Namespace Level content_type: tutorial -weight: 10 +weight: 20 --- {{% alert title="Note" %}} @@ -155,7 +155,11 @@ with no warnings. ## Clean up -Run `kind delete cluster --name psa-ns-level` to delete the cluster created. +Now delete the cluster which you created above by running the following command: + +```shell +kind delete cluster --name psa-ns-level +``` ## {{% heading "whatsnext" %}} diff --git a/content/en/docs/tutorials/security/seccomp.md b/content/en/docs/tutorials/security/seccomp.md index 6187d198f1971..3a445afacfe41 100644 --- a/content/en/docs/tutorials/security/seccomp.md +++ b/content/en/docs/tutorials/security/seccomp.md @@ -5,7 +5,7 @@ reviewers: - saschagrunert title: Restrict a Container's Syscalls with seccomp content_type: tutorial -weight: 20 +weight: 40 min-kubernetes-server-version: v1.22 --- @@ -265,6 +265,44 @@ docker exec -it kind-worker bash -c \ } ``` +## Create Pod that uses the container runtime default seccomp profile + +Most container runtimes provide a sane set of default syscalls that are allowed +or not. You can adopt these defaults for your workload by setting the seccomp +type in the security context of a pod or container to `RuntimeDefault`. 
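The example manifest the tutorial references is not shown inline here; as a minimal sketch of what requesting the runtime default profile looks like, a pod-level `securityContext.seccompProfile` with `type: RuntimeDefault` applies it to every container in the Pod (the Pod name and image below are placeholders, not the tutorial's example file):

```shell
# Minimal sketch: ask the container runtime for its default seccomp profile.
# The Pod name and image are placeholders for illustration only.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: runtime-default-demo
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: nginx
EOF
```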
+ +{{< note >}} +If you have the `SeccompDefault` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) +enabled, then Pods use the `RuntimeDefault` seccomp profile whenever +no other seccomp profile is specified. Otherwise, the default is `Unconfined`. +{{< /note >}} + +Here's a manifest for a Pod that requests the `RuntimeDefault` seccomp profile +for all its containers: + +{{< codenew file="pods/security/seccomp/ga/default-pod.yaml" >}} + +Create that Pod: +```shell +kubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/default-pod.yaml +``` + +```shell +kubectl get pod default-pod +``` + +The Pod should be showing as having started successfully: +``` +NAME READY STATUS RESTARTS AGE +default-pod 1/1 Running 0 20s +``` + +Finally, now that you saw that work OK, clean up: + +```shell +kubectl delete pod default-pod --wait --now +``` + ## Create a Pod with a seccomp profile for syscall auditing To start off, apply the `audit.json` profile, which will log all syscalls of the @@ -493,43 +531,6 @@ kubectl delete service fine-pod --wait kubectl delete pod fine-pod --wait --now ``` -## Create Pod that uses the container runtime default seccomp profile - -Most container runtimes provide a sane set of default syscalls that are allowed -or not. You can adopt these defaults for your workload by setting the seccomp -type in the security context of a pod or container to `RuntimeDefault`. - -{{< note >}} -If you have the `SeccompDefault` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) enabled, then Pods use the `RuntimeDefault` seccomp profile whenever -no other seccomp profile is specified. Otherwise, the default is `Unconfined`. -{{< /note >}} - -Here's a manifest for a Pod that requests the `RuntimeDefault` seccomp profile -for all its containers: - -{{< codenew file="pods/security/seccomp/ga/default-pod.yaml" >}} - -Create that Pod: -```shell -kubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/default-pod.yaml -``` - -```shell -kubectl get pod default-pod -``` - -The Pod should be showing as having started successfully: -``` -NAME READY STATUS RESTARTS AGE -default-pod 1/1 Running 0 20s -``` - -Finally, now that you saw that work OK, clean up: - -```shell -kubectl delete pod default-pod --wait --now -``` - ## {{% heading "whatsnext" %}} You can learn more about Linux seccomp: diff --git a/content/en/docs/tutorials/services/connect-applications-service.md b/content/en/docs/tutorials/services/connect-applications-service.md index dfa3023063920..be8202cd97891 100644 --- a/content/en/docs/tutorials/services/connect-applications-service.md +++ b/content/en/docs/tutorials/services/connect-applications-service.md @@ -15,7 +15,12 @@ weight: 20 Now that you have a continuously running, replicated application you can expose it on a network. -Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document elaborates on how you can run reliable services on such a networking model. +Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. 
+Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly +create links between pods or map container ports to host ports. This means that containers within +a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other +without NAT. The rest of this document elaborates on how you can run reliable services on such a +networking model. This tutorial uses a simple nginx web server to demonstrate the concept. @@ -49,16 +54,32 @@ kubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs [map[ip:10.244.2.5]] ``` -You should be able to ssh into any node in your cluster and use a tool such as `curl` to make queries against both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same `containerPort`, and access them from any other pod or node in your cluster using the assigned IP address for the Service. If you want to arrange for a specific port on the host Node to be forwarded to backing Pods, you can - but the networking model should mean that you do not need to do so. +You should be able to ssh into any node in your cluster and use a tool such as `curl` +to make queries against both IPs. Note that the containers are *not* using port 80 on +the node, nor are there any special NAT rules to route traffic to the pod. This means +you can run multiple nginx pods on the same node all using the same `containerPort`, +and access them from any other pod or node in your cluster using the assigned IP +address for the Service. If you want to arrange for a specific port on the host +Node to be forwarded to backing Pods, you can - but the networking model should +mean that you do not need to do so. - -You can read more about the [Kubernetes Networking Model](/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model) if you're curious. +You can read more about the +[Kubernetes Networking Model](/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model) +if you're curious. ## Creating a Service -So we have pods running nginx in a flat, cluster wide, address space. In theory, you could talk to these pods directly, but what happens when a node dies? The pods die with it, and the Deployment will create new ones, with different IPs. This is the problem a Service solves. +So we have pods running nginx in a flat, cluster wide, address space. In theory, +you could talk to these pods directly, but what happens when a node dies? The pods +die with it, and the Deployment will create new ones, with different IPs. This is +the problem a Service solves. -A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service. +A Kubernetes Service is an abstraction which defines a logical set of Pods running +somewhere in your cluster, that all provide the same functionality. When created, +each Service is assigned a unique IP address (also called clusterIP). 
This address +is tied to the lifespan of the Service, and will not change while the Service is alive. +Pods can be configured to talk to the Service, and know that communication to the +Service will be automatically load-balanced out to some pod that is a member of the Service. You can create a Service for your 2 nginx replicas with `kubectl expose`: @@ -112,8 +133,12 @@ Labels: run=my-nginx Annotations: Selector: run=my-nginx Type: ClusterIP +IP Family Policy: SingleStack +IP Families: IPv4 IP: 10.0.162.149 +IPs: 10.0.162.149 Port: 80/TCP +TargetPort: 80/TCP Endpoints: 10.244.2.5:80,10.244.3.4:80 Session Affinity: None Events: @@ -136,10 +161,12 @@ about the [service proxy](/docs/concepts/services-networking/service/#virtual-ip Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. The former works out of the box while the latter requires the [CoreDNS cluster addon](https://releases.k8s.io/{{< param "fullversion" >}}/cluster/addons/dns/coredns). + {{< note >}} -If the service environment variables are not desired (because possible clashing with expected program ones, -too many variables to process, only using DNS, etc) you can disable this mode by setting the `enableServiceLinks` -flag to `false` on the [pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core). +If the service environment variables are not desired (because possible clashing +with expected program ones, too many variables to process, only using DNS, etc) +you can disable this mode by setting the `enableServiceLinks` flag to `false` on +the [pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core). {{< /note >}} @@ -193,7 +220,8 @@ KUBERNETES_SERVICE_PORT_HTTPS=443 ### DNS -Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. You can check if it's running on your cluster: +Kubernetes offers a DNS cluster addon Service that automatically assigns dns names +to other Services. You can check if it's running on your cluster: ```shell kubectl get services kube-dns --namespace=kube-system @@ -204,7 +232,13 @@ kube-dns ClusterIP 10.0.0.10 53/UDP,53/TCP 8m ``` The rest of this section will assume you have a Service with a long lived IP -(my-nginx), and a DNS server that has assigned a name to that IP. Here we use the CoreDNS cluster addon (application name `kube-dns`), so you can talk to the Service from any pod in your cluster using standard methods (e.g. `gethostbyname()`). If CoreDNS isn't running, you can enable it referring to the [CoreDNS README](https://github.com/coredns/deployment/tree/master/kubernetes) or [Installing CoreDNS](/docs/tasks/administer-cluster/coredns/#installing-coredns). Let's run another curl application to test this: +(my-nginx), and a DNS server that has assigned a name to that IP. Here we use +the CoreDNS cluster addon (application name `kube-dns`), so you can talk to the +Service from any pod in your cluster using standard methods (e.g. `gethostbyname()`). +If CoreDNS isn't running, you can enable it referring to the +[CoreDNS README](https://github.com/coredns/deployment/tree/master/kubernetes) +or [Installing CoreDNS](/docs/tasks/administer-cluster/coredns/#installing-coredns). +Let's run another curl application to test this: ```shell kubectl run curl --image=radial/busyboxplus:curl -i --tty @@ -227,13 +261,18 @@ Address 1: 10.0.162.149 ## Securing the Service -Till now we have only accessed the nginx server from within the cluster. 
Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need: +Till now we have only accessed the nginx server from within the cluster. Before +exposing the Service to the internet, you want to make sure the communication +channel is secure. For this, you will need: * Self signed certificates for https (unless you already have an identity certificate) * An nginx server configured to use the certificates * A [secret](/docs/concepts/configuration/secret/) that makes the certificates accessible to pods -You can acquire all these from the [nginx https example](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/). This requires having go and make tools installed. If you don't want to install those, then follow the manual steps later. In short: +You can acquire all these from the +[nginx https example](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/). +This requires having go and make tools installed. If you don't want to install those, +then follow the manual steps later. In short: ```shell make keys KEY=/tmp/nginx.key CERT=/tmp/nginx.crt @@ -272,7 +311,9 @@ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -ou cat /d/tmp/nginx.crt | base64 cat /d/tmp/nginx.key | base64 ``` -Use the output from the previous commands to create a yaml file as follows. The base64 encoded value should all be on a single line. + +Use the output from the previous commands to create a yaml file as follows. +The base64 encoded value should all be on a single line. ```yaml apiVersion: "v1" @@ -296,7 +337,8 @@ NAME TYPE DATA AGE nginxsecret kubernetes.io/tls 2 1m ``` -Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443): +Now modify your nginx replicas to start an https server using the certificate +in the secret, and the Service, to expose both ports (80 and 443): {{< codenew file="service/networking/nginx-secure-app.yaml" >}} @@ -327,9 +369,12 @@ node $ curl -k https://10.244.3.5

    Welcome to nginx!

    ``` -Note how we supplied the `-k` parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time, -so we have to tell curl to ignore the CName mismatch. By creating a Service we linked the CName used in the certificate with the actual DNS name used by pods during Service lookup. -Let's test this from a pod (the same secret is being reused for simplicity, the pod only needs nginx.crt to access the Service): +Note how we supplied the `-k` parameter to curl in the last step, this is because +we don't know anything about the pods running nginx at certificate generation time, +so we have to tell curl to ignore the CName mismatch. By creating a Service we +linked the CName used in the certificate with the actual DNS name used by pods +during Service lookup. Let's test this from a pod (the same secret is being reused +for simplicity, the pod only needs nginx.crt to access the Service): {{< codenew file="service/networking/curlpod.yaml" >}} @@ -391,7 +436,8 @@ $ curl https://: -k

    Welcome to nginx!

    ``` -Let's now recreate the Service to use a cloud load balancer. Change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`: +Let's now recreate the Service to use a cloud load balancer. +Change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`: ```shell kubectl edit svc my-nginx @@ -407,8 +453,8 @@ curl https:// -k Welcome to nginx! ``` -The IP address in the `EXTERNAL-IP` column is the one that is available on the public internet. The `CLUSTER-IP` is only available inside your -cluster/private cloud network. +The IP address in the `EXTERNAL-IP` column is the one that is available on the public internet. +The `CLUSTER-IP` is only available inside your cluster/private cloud network. Note that on AWS, type `LoadBalancer` creates an ELB, which uses a (long) hostname, not an IP. It's too long to fit in the standard `kubectl get svc` diff --git a/content/en/examples/admin/sched/my-scheduler.yaml b/content/en/examples/admin/sched/my-scheduler.yaml index 5addf9e0e6ad3..fa1c65bf9a462 100644 --- a/content/en/examples/admin/sched/my-scheduler.yaml +++ b/content/en/examples/admin/sched/my-scheduler.yaml @@ -30,6 +30,20 @@ roleRef: name: system:volume-scheduler apiGroup: rbac.authorization.k8s.io --- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: my-scheduler-extension-apiserver-authentication-reader + namespace: kube-system +roleRef: + kind: Role + name: extension-apiserver-authentication-reader + apiGroup: rbac.authorization.k8s.io +subjects: +- kind: ServiceAccount + name: my-scheduler + namespace: kube-system +--- apiVersion: v1 kind: ConfigMap metadata: diff --git a/content/en/examples/application/php-apache.yaml b/content/en/examples/application/php-apache.yaml index d29d2b91593f3..a194dce6f958a 100644 --- a/content/en/examples/application/php-apache.yaml +++ b/content/en/examples/application/php-apache.yaml @@ -6,7 +6,6 @@ spec: selector: matchLabels: run: php-apache - replicas: 1 template: metadata: labels: diff --git a/content/en/examples/application/ssa/nginx-deployment-replicas-only.yaml b/content/en/examples/application/ssa/nginx-deployment-replicas-only.yaml deleted file mode 100644 index 0848ba0e218d0..0000000000000 --- a/content/en/examples/application/ssa/nginx-deployment-replicas-only.yaml +++ /dev/null @@ -1,6 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: nginx-deployment -spec: - replicas: 3 diff --git a/content/en/examples/concepts/policy/limit-range/problematic-limit-range.yaml b/content/en/examples/concepts/policy/limit-range/problematic-limit-range.yaml index 5fbfe632c0a77..ee89cb79faf4f 100644 --- a/content/en/examples/concepts/policy/limit-range/problematic-limit-range.yaml +++ b/content/en/examples/concepts/policy/limit-range/problematic-limit-range.yaml @@ -12,3 +12,4 @@ spec: cpu: "1" min: cpu: 100m + type: Container diff --git a/content/en/examples/examples_test.go b/content/en/examples/examples_test.go index 495d8435884ad..670131237dad9 100644 --- a/content/en/examples/examples_test.go +++ b/content/en/examples/examples_test.go @@ -46,6 +46,9 @@ import ( api "k8s.io/kubernetes/pkg/apis/core" "k8s.io/kubernetes/pkg/apis/core/validation" + // "k8s.io/kubernetes/pkg/apis/flowcontrol" + // flowcontrol_validation "k8s.io/kubernetes/pkg/apis/flowcontrol/validation" + "k8s.io/kubernetes/pkg/apis/networking" networking_validation "k8s.io/kubernetes/pkg/apis/networking/validation" @@ -152,9 +155,17 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { 
AllowDownwardAPIHugePages: true, AllowInvalidPodDeletionCost: false, AllowIndivisibleHugePagesValues: true, - AllowWindowsHostProcessField: true, AllowExpandedDNSConfig: true, } + netValidationOptions := networking_validation.NetworkPolicyValidationOptions{ + AllowInvalidLabelValueInSelector: false, + } + pdbValidationOptions := policy_validation.PodDisruptionBudgetValidationOptions{ + AllowInvalidLabelValueInSelector: false, + } + clusterroleValidationOptions := rbac_validation.ClusterRoleValidationOptions{ + AllowInvalidLabelValueInSelector: false, + } // Enable CustomPodDNS for testing // feature.DefaultFeatureGate.Set("CustomPodDNS=true") @@ -245,11 +256,31 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { t.Namespace = api.NamespaceDefault } errors = apps_validation.ValidateStatefulSet(t, podValidationOptions) + case *apps.DaemonSet: + if t.Namespace == "" { + t.Namespace = api.NamespaceDefault + } + errors = apps_validation.ValidateDaemonSet(t, podValidationOptions) + case *apps.Deployment: + if t.Namespace == "" { + t.Namespace = api.NamespaceDefault + } + errors = apps_validation.ValidateDeployment(t, podValidationOptions) + case *apps.ReplicaSet: + if t.Namespace == "" { + t.Namespace = api.NamespaceDefault + } + errors = apps_validation.ValidateReplicaSet(t, podValidationOptions) case *autoscaling.HorizontalPodAutoscaler: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } errors = autoscaling_validation.ValidateHorizontalPodAutoscaler(t) + case *batch.CronJob: + if t.Namespace == "" { + t.Namespace = api.NamespaceDefault + } + errors = batch_validation.ValidateCronJobCreate(t, podValidationOptions) case *batch.Job: if t.Namespace == "" { t.Namespace = api.NamespaceDefault @@ -261,58 +292,31 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { t.ObjectMeta.Name = "skip-for-good" } errors = job.Strategy.Validate(nil, t) - case *apps.DaemonSet: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = apps_validation.ValidateDaemonSet(t, podValidationOptions) - case *apps.Deployment: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = apps_validation.ValidateDeployment(t, podValidationOptions) + // case *flowcontrol.FlowSchema: + // TODO: This is still failing + // errors = flowcontrol_validation.ValidateFlowSchema(t) case *networking.Ingress: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } errors = networking_validation.ValidateIngressCreate(t) case *networking.IngressClass: - /* - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - gv := schema.GroupVersion{ - Group: networking.GroupName, - Version: legacyscheme.Scheme.PrioritizedVersionsForGroup(networking.GroupName)[0].Version, - } - */ errors = networking_validation.ValidateIngressClass(t) - - case *policy.PodSecurityPolicy: - errors = policy_validation.ValidatePodSecurityPolicy(t) - case *apps.ReplicaSet: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = apps_validation.ValidateReplicaSet(t, podValidationOptions) - case *batch.CronJob: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = batch_validation.ValidateCronJobCreate(t, podValidationOptions) case *networking.NetworkPolicy: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = networking_validation.ValidateNetworkPolicy(t) + errors = networking_validation.ValidateNetworkPolicy(t, netValidationOptions) + case *policy.PodSecurityPolicy: + errors = 
policy_validation.ValidatePodSecurityPolicy(t) case *policy.PodDisruptionBudget: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = policy_validation.ValidatePodDisruptionBudget(t) + errors = policy_validation.ValidatePodDisruptionBudget(t, pdbValidationOptions) case *rbac.ClusterRole: // clusterole does not accept namespace - errors = rbac_validation.ValidateClusterRole(t) + errors = rbac_validation.ValidateClusterRole(t, clusterroleValidationOptions) case *rbac.ClusterRoleBinding: // clusterolebinding does not accept namespace errors = rbac_validation.ValidateClusterRoleBinding(t) @@ -383,6 +387,14 @@ func TestExampleObjectSchemas(t *testing.T) { // Please help maintain the alphabeta order in the map cases := map[string]map[string][]runtime.Object{ + "access": { + "endpoints-aggregated": {&rbac.ClusterRole{}}, + }, + "access/certificate-signing-request": { + "clusterrole-approve": {&rbac.ClusterRole{}}, + "clusterrole-create": {&rbac.ClusterRole{}}, + "clusterrole-sign": {&rbac.ClusterRole{}}, + }, "admin": { "namespace-dev": {&api.Namespace{}}, "namespace-prod": {&api.Namespace{}}, @@ -396,6 +408,7 @@ func TestExampleObjectSchemas(t *testing.T) { "dns-horizontal-autoscaler": {&api.ServiceAccount{}, &rbac.ClusterRole{}, &rbac.ClusterRoleBinding{}, &apps.Deployment{}}, "dnsutils": {&api.Pod{}}, }, + // TODO: "admin/konnectivity" is not include yet. "admin/logging": { "fluentd-sidecar-config": {&api.ConfigMap{}}, "two-files-counter-pod": {&api.Pod{}}, @@ -474,10 +487,6 @@ func TestExampleObjectSchemas(t *testing.T) { "application/hpa": { "php-apache": {&autoscaling.HorizontalPodAutoscaler{}}, }, - "application/nginx": { - "nginx-deployment": {&apps.Deployment{}}, - "nginx-svc": {&api.Service{}}, - }, "application/job": { "cronjob": {&batch.CronJob{}}, "job-tmpl": {&batch.Job{}}, @@ -492,6 +501,10 @@ func TestExampleObjectSchemas(t *testing.T) { "redis-pod": {&api.Pod{}}, "redis-service": {&api.Service{}}, }, + "application/mongodb": { + "mongo-deployment": {&apps.Deployment{}}, + "mongo-service": {&api.Service{}}, + }, "application/mysql": { "mysql-configmap": {&api.ConfigMap{}}, "mysql-deployment": {&api.Service{}, &apps.Deployment{}}, @@ -499,6 +512,14 @@ func TestExampleObjectSchemas(t *testing.T) { "mysql-services": {&api.Service{}, &api.Service{}}, "mysql-statefulset": {&apps.StatefulSet{}}, }, + "application/nginx": { + "nginx-deployment": {&apps.Deployment{}}, + "nginx-svc": {&api.Service{}}, + }, + "application/ssa": { + "nginx-deployment": {&apps.Deployment{}}, + "nginx-deployment-no-replicas": {&apps.Deployment{}}, + }, "application/web": { "web": {&api.Service{}, &apps.StatefulSet{}}, "web-parallel": {&api.Service{}, &apps.StatefulSet{}}, @@ -510,9 +531,15 @@ func TestExampleObjectSchemas(t *testing.T) { "application/zookeeper": { "zookeeper": {&api.Service{}, &api.Service{}, &policy.PodDisruptionBudget{}, &apps.StatefulSet{}}, }, + "concepts/policy/limit-range": { + "example-conflict-with-limitrange-cpu": {&api.Pod{}}, + "problematic-limit-range": {&api.LimitRange{}}, + "example-no-conflict-with-limitrange-cpu": {&api.Pod{}}, + }, "configmap": { "configmaps": {&api.ConfigMap{}, &api.ConfigMap{}}, "configmap-multikeys": {&api.ConfigMap{}}, + "configure-pod": {&api.Pod{}}, }, "controllers": { "daemonset": {&apps.DaemonSet{}}, @@ -558,7 +585,9 @@ func TestExampleObjectSchemas(t *testing.T) { "pod-with-affinity-anti-affinity": {&api.Pod{}}, "pod-with-node-affinity": {&api.Pod{}}, "pod-with-pod-affinity": {&api.Pod{}}, + "pod-with-scheduling-gates": {&api.Pod{}}, 
"pod-with-toleration": {&api.Pod{}}, + "pod-without-scheduling-gates": {&api.Pod{}}, "private-reg-pod": {&api.Pod{}}, "share-process-namespace": {&api.Pod{}}, "simple-pod": {&api.Pod{}}, @@ -624,6 +653,11 @@ func TestExampleObjectSchemas(t *testing.T) { "pv-volume": {&api.PersistentVolume{}}, "redis": {&api.Pod{}}, }, + "pods/topology-spread-constraints": { + "one-constraint": {&api.Pod{}}, + "one-constraint-with-nodeaffinity": {&api.Pod{}}, + "two-constraints": {&api.Pod{}}, + }, "policy": { "baseline-psp": {&policy.PodSecurityPolicy{}}, "example-psp": {&policy.PodSecurityPolicy{}}, @@ -633,6 +667,19 @@ func TestExampleObjectSchemas(t *testing.T) { "zookeeper-pod-disruption-budget-maxunavailable": {&policy.PodDisruptionBudget{}}, "zookeeper-pod-disruption-budget-minavailable": {&policy.PodDisruptionBudget{}}, }, + /* TODO: This doesn't work yet. + "priority-and-fairness": { + "health-for-strangers": {&flowcontrol.FlowSchema{}}, + }, + */ + "secret/serviceaccount": { + "mysecretname": {&api.Secret{}}, + }, + "security": { + "podsecurity-baseline": {&api.Namespace{}}, + "podsecurity-privileged": {&api.Namespace{}}, + "podsecurity-restricted": {&api.Namespace{}}, + }, "service": { "nginx-service": {&api.Service{}}, "load-balancer-example": {&apps.Deployment{}}, @@ -664,6 +711,7 @@ func TestExampleObjectSchemas(t *testing.T) { "name-virtual-host-ingress-no-third-host": {&networking.Ingress{}}, "namespaced-params": {&networking.IngressClass{}}, "networkpolicy": {&networking.NetworkPolicy{}}, + "networkpolicy-multiport-egress": {&networking.NetworkPolicy{}}, "network-policy-allow-all-egress": {&networking.NetworkPolicy{}}, "network-policy-allow-all-ingress": {&networking.NetworkPolicy{}}, "network-policy-default-deny-egress": {&networking.NetworkPolicy{}}, diff --git a/content/en/examples/priority-and-fairness/health-for-strangers.yaml b/content/en/examples/priority-and-fairness/health-for-strangers.yaml index c57e2cae37245..5b44c8c987d48 100644 --- a/content/en/examples/priority-and-fairness/health-for-strangers.yaml +++ b/content/en/examples/priority-and-fairness/health-for-strangers.yaml @@ -7,14 +7,14 @@ spec: priorityLevelConfiguration: name: exempt rules: - - nonResourceRules: - - nonResourceURLs: - - "/healthz" - - "/livez" - - "/readyz" - verbs: - - "*" - subjects: - - kind: Group - group: - name: system:unauthenticated + - nonResourceRules: + - nonResourceURLs: + - "/healthz" + - "/livez" + - "/readyz" + verbs: + - "*" + subjects: + - kind: Group + group: + name: "system:unauthenticated" diff --git a/content/en/examples/service/networking/networkpolicy-multiport-egress.yaml b/content/en/examples/service/networking/networkpolicy-multiport-egress.yaml new file mode 100644 index 0000000000000..f4c914bbec7d0 --- /dev/null +++ b/content/en/examples/service/networking/networkpolicy-multiport-egress.yaml @@ -0,0 +1,20 @@ +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: multi-port-egress + namespace: default +spec: + podSelector: + matchLabels: + role: db + policyTypes: + - Egress + egress: + - to: + - ipBlock: + cidr: 10.0.0.0/24 + ports: + - protocol: TCP + port: 32000 + endPort: 32768 + diff --git a/content/en/releases/patch-releases.md b/content/en/releases/patch-releases.md index d3eead78be039..dc80e19baf225 100644 --- a/content/en/releases/patch-releases.md +++ b/content/en/releases/patch-releases.md @@ -78,9 +78,9 @@ releases may also occur in between these. 
| Monthly Patch Release | Cherry Pick Deadline | Target date | | --------------------- | -------------------- | ----------- | -| December 2022 | 2022-12-02 | 2022-12-08 | -| January 2023 | 2023-01-13 | 2023-01-18 | | February 2023 | 2023-02-10 | 2023-02-15 | +| March 2023 | 2023-03-10 | 2023-03-15 | +| April 2023 | 2023-04-07 | 2023-04-12 | ## Detailed Release History for Active Branches diff --git a/content/es/docs/concepts/configuration/configmap.md b/content/es/docs/concepts/configuration/configmap.md index ce16f99aca605..d3fb00cf9037c 100644 --- a/content/es/docs/concepts/configuration/configmap.md +++ b/content/es/docs/concepts/configuration/configmap.md @@ -204,7 +204,7 @@ Cuando un ConfigMap está siendo utilizado en un {{< glossary_tooltip text="volu El {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} comprueba si el ConfigMap montado está actualizado cada periodo de sincronización. Sin embargo, el {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} utiliza su caché local para obtener el valor actual del ConfigMap. El tipo de caché es configurable usando el campo `ConfigMapAndSecretChangeDetectionStrategy` en el -[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go). +[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubelet/config/v1beta1/types.go). Un ConfigMap puede ser propagado por vista (default), ttl-based, o simplemente redirigiendo todas las consultas directamente a la API. Como resultado, el retraso total desde el momento que el ConfigMap es actualizado hasta el momento diff --git a/content/es/docs/concepts/configuration/secret.md b/content/es/docs/concepts/configuration/secret.md index 969078a67a4e6..1025ebd78519d 100644 --- a/content/es/docs/concepts/configuration/secret.md +++ b/content/es/docs/concepts/configuration/secret.md @@ -520,7 +520,7 @@ Cuando se actualiza un Secret que ya se está consumiendo en un volumen, las cla Kubelet está verificando si el Secret montado esta actualizado en cada sincronización periódica. Sin embargo, está usando su caché local para obtener el valor actual del Secret. El tipo de caché es configurable usando el (campo `ConfigMapAndSecretChangeDetectionStrategy` en -[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go)). +[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubelet/config/v1beta1/types.go)). Puede ser propagado por el reloj (default), ttl-based, o simplemente redirigiendo todas las solicitudes a kube-apiserver directamente. 
Como resultado, el retraso total desde el momento en que se actualiza el Secret hasta el momento en que se proyectan las nuevas claves en el Pod puede ser tan largo como el periodo de sincronización de kubelet + retraso de diff --git a/content/es/docs/concepts/security/overview.md b/content/es/docs/concepts/security/overview.md index 9bee65b8c6407..d07fa1e46452b 100644 --- a/content/es/docs/concepts/security/overview.md +++ b/content/es/docs/concepts/security/overview.md @@ -52,6 +52,7 @@ Proveedor IaaS | Link | Alibaba Cloud | https://www.alibabacloud.com/trust-center | Amazon Web Services | https://aws.amazon.com/security/ | Google Cloud Platform | https://cloud.google.com/security/ | +Huawei Cloud | https://www.huaweicloud.com/securecenter/overallsafety.html | IBM Cloud | https://www.ibm.com/cloud/security | Microsoft Azure | https://docs.microsoft.com/en-us/azure/security/azure-security | Oracle Cloud Infrastructure | https://www.oracle.com/security/ | diff --git a/content/es/docs/concepts/storage/storage-capacity.md b/content/es/docs/concepts/storage/storage-capacity.md index e7292328446fb..2df4481a5bb55 100644 --- a/content/es/docs/concepts/storage/storage-capacity.md +++ b/content/es/docs/concepts/storage/storage-capacity.md @@ -46,7 +46,7 @@ En ese caso, el planificador sólo considera los nodos para el Pod que tienen su Para los volúmenes con el modo de enlace de volumen `Immediate`, el controlador de almacenamiento decide dónde crear el volumen, independientemente de los pods que usarán el volumen. Luego, el planificador programa los pods en los nodos donde el volumen está disponible después de que se haya creado. -Para los [volúmenes efímeros de CSI](/docs/concepts/storage/volumes/#csi), +Para los [volúmenes efímeros de CSI](/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volumes), la planificación siempre ocurre sin considerar la capacidad de almacenamiento. Esto se basa en la suposición de que este tipo de volumen solo lo utilizan controladores CSI especiales que son locales a un nodo y no necesitan allí recursos importantes. ## Replanificación diff --git a/content/fr/docs/concepts/architecture/nodes.md b/content/fr/docs/concepts/architecture/nodes.md index 8fba2050530a9..e64b7c28ae664 100644 --- a/content/fr/docs/concepts/architecture/nodes.md +++ b/content/fr/docs/concepts/architecture/nodes.md @@ -13,7 +13,7 @@ Un nœud est une machine de travail dans Kubernetes, connue auparavant sous le n Un nœud peut être une machine virtuelle ou une machine physique, selon le cluster. Chaque nœud contient les services nécessaires à l'exécution de [pods](/docs/concepts/workloads/pods/pod/) et est géré par les composants du master. Les services sur un nœud incluent le [container runtime](/docs/concepts/overview/components/#node-components), kubelet et kube-proxy. -Consultez la section [Le Nœud Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) dans le document de conception de l'architecture pour plus de détails. +Consultez la section [Le Nœud Kubernetes](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node) dans le document de conception de l'architecture pour plus de détails. 
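Both of the translated ConfigMap and Secret passages above point at the kubelet's `ConfigMapAndSecretChangeDetectionStrategy` setting; as a hedged sketch (the field name and values come from the v1beta1 kubelet configuration API, and the file path is only an example), it is set in the kubelet configuration file like this:

```shell
# Example fragment of a kubelet configuration file; Watch is the default
# strategy, with Cache and Get as the alternatives.
cat <<EOF > /tmp/kubelet-config-fragment.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
configMapAndSecretChangeDetectionStrategy: Watch
EOF
```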
diff --git a/content/fr/docs/concepts/configuration/secret.md b/content/fr/docs/concepts/configuration/secret.md index e381b3d531cc9..bad71e79d7c47 100644 --- a/content/fr/docs/concepts/configuration/secret.md +++ b/content/fr/docs/concepts/configuration/secret.md @@ -563,7 +563,7 @@ Le programme dans un conteneur est responsable de la lecture des secrets des fic Lorsqu'un secret déjà consommé dans un volume est mis à jour, les clés projetées sont finalement mises à jour également. Kubelet vérifie si le secret monté est récent à chaque synchronisation périodique. Cependant, il utilise son cache local pour obtenir la valeur actuelle du Secret. -Le type de cache est configurable à l'aide de le champ `ConfigMapAndSecretChangeDetectionStrategy` dans la structure [KubeletConfiguration](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go). +Le type de cache est configurable à l'aide de le champ `ConfigMapAndSecretChangeDetectionStrategy` dans la structure [KubeletConfiguration](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubelet/config/v1beta1/types.go). Il peut être soit propagé via watch (par défaut), basé sur ttl, ou simplement redirigé toutes les requêtes vers directement kube-apiserver. Par conséquent, le délai total entre le moment où le secret est mis à jour et le moment où de nouvelles clés sont projetées sur le pod peut être aussi long que la période de synchronisation du kubelet + le délai de propagation du cache, où le délai de propagation du cache dépend du type de cache choisi (cela équivaut au delai de propagation du watch, ttl du cache, ou bien zéro). diff --git a/content/fr/docs/concepts/storage/persistent-volumes.md b/content/fr/docs/concepts/storage/persistent-volumes.md index f4fc315a20f70..ad88ec14caadb 100644 --- a/content/fr/docs/concepts/storage/persistent-volumes.md +++ b/content/fr/docs/concepts/storage/persistent-volumes.md @@ -203,7 +203,7 @@ Cependant, le chemin particulier spécifié dans la partie `volumes` du template ### Redimensionnement des PVC -{{< feature-state for_k8s_version="v1.11" state="beta" >}} +{{< feature-state for_k8s_version="v1.24" state="stable" >}} La prise en charge du redimensionnement des PersistentVolumeClaims (PVCs) est désormais activée par défaut. Vous pouvez redimensionner les types de volumes suivants: diff --git a/content/fr/docs/concepts/workloads/controllers/replicaset.md b/content/fr/docs/concepts/workloads/controllers/replicaset.md index 820c8bb42d317..3204d352777e5 100644 --- a/content/fr/docs/concepts/workloads/controllers/replicaset.md +++ b/content/fr/docs/concepts/workloads/controllers/replicaset.md @@ -258,7 +258,7 @@ curl -X DELETE 'localhost:8080/apis/extensions/v1beta1/namespaces/default/repli ### Supprimer juste un ReplicaSet -Vous pouvez supprimer un ReplicaSet sans affecter ses pods à l’aide de [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) avec l'option `--cascade=false`. +Vous pouvez supprimer un ReplicaSet sans affecter ses pods à l’aide de [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) avec l'option `--cascade=orphan`. Lorsque vous utilisez l'API REST ou la bibliothèque `client-go`, vous devez définir `propagationPolicy` sur `Orphan`. 
Par exemple : ```shell diff --git a/content/fr/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/fr/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md index 9983bde03c73f..19149f43a4e5e 100644 --- a/content/fr/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md +++ b/content/fr/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md @@ -1,6 +1,6 @@ --- -title: Création d'un Cluster a master unique avec kubeadm -description: Création d'un Cluster a master unique avec kubeadm +title: Création d'un Cluster à master unique avec kubeadm +description: Création d'un Cluster à master unique avec kubeadm content_type: task weight: 30 --- @@ -9,7 +9,7 @@ weight: 30 **kubeadm** vous aide à démarrer un cluster Kubernetes minimum, viable et conforme aux meilleures pratiques. Avec kubeadm, votre cluster -doit passer les [tests de Conformance Kubernetes](https://kubernetes.io/blog/2017/10/software-conformance-certification). +doit passer les [tests de Conformité Kubernetes](https://kubernetes.io/blog/2017/10/software-conformance-certification). Kubeadm prend également en charge d'autres fonctions du cycle de vie, telles que les mises à niveau, la rétrogradation et la gestion des [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/). @@ -676,7 +676,7 @@ si le master est irrécupérable, votre cluster peut perdre ses données et peut partir de zéro. L'ajout du support HA (plusieurs serveurs etcd, plusieurs API servers, etc.) à kubeadm est encore en cours de developpement. -   Contournement: régulièrement [sauvegarder etcd](https://coreos.com/etcd/docs/latest/admin_guide.html). +   Contournement: régulièrement [sauvegarder etcd](https://etcd.io/docs/v3.5/op-guide/recovery/). le répertoire des données etcd configuré par kubeadm se trouve dans `/var/lib/etcd` sur le master. ## Diagnostic {#troubleshooting} diff --git a/content/fr/docs/tasks/configure-pod-container/assign-cpu-resource.md b/content/fr/docs/tasks/configure-pod-container/assign-cpu-resource.md index 297cbd700ea21..2d9af18838152 100644 --- a/content/fr/docs/tasks/configure-pod-container/assign-cpu-resource.md +++ b/content/fr/docs/tasks/configure-pod-container/assign-cpu-resource.md @@ -97,7 +97,7 @@ resources: cpu: 500m ``` -Utilisez `kubectl top` pour récupérer les métriques du pod : +Utilisez `kubectl top` pour récupérer les métriques du Pod : ```shell kubectl top pod cpu-demo --namespace=cpu-example diff --git a/content/fr/includes/partner-script.js b/content/fr/includes/partner-script.js deleted file mode 100644 index 78103493f61fc..0000000000000 --- a/content/fr/includes/partner-script.js +++ /dev/null @@ -1,1609 +0,0 @@ -;(function () { - var partners = [ - { - type: 0, - name: 'Sysdig', - logo: 'sys_dig', - link: 'https://sysdig.com/blog/monitoring-kubernetes-with-sysdig-cloud/', - blurb: "Sysdig est la société de renseignements sur les conteneurs. Sysdig a créé la seule plate-forme unifiée pour la surveillance, la sécurité et le dépannage dans une architecture compatible avec les microservices. " - }, - { - type: 0, - name: 'Puppet', - logo: 'puppet', - link: 'https://puppet.com/blog/announcing-kream-and-new-kubernetes-helm-and-docker-modules', - blurb: "Nous avons développé des outils et des produits pour que votre adoption de Kubernetes soit aussi efficace que possible, et qu'elle couvre l'ensemble du cycle de vos flux de travail, du développement à la production. 
Et maintenant, Puppet Pipelines for Containers est votre tableau de bord complet DevOps pour Kubernetes. " - }, - { - type: 0, - name: 'Citrix', - logo: 'citrix', - link: 'https://www.citrix.com/networking/microservices.html', - blurb: "Netscaler CPX offre aux développeurs d'applications toutes les fonctionnalités dont ils ont besoin pour équilibrer leurs microservices et leurs applications conteneurisées avec Kubernetes." - }, - { - type: 0, - name: 'Cockroach Labs', - logo: 'cockroach_labs', - link: 'https://www.cockroachlabs.com/blog/running-cockroachdb-on-kubernetes/', - blurb: 'CockroachDB est une base de données SQL distribuée dont le modèle de réplication et de capacité de survie intégré se combine à Kubernetes pour simplifier réellement les données.' - }, - { - type: 2, - name: 'Weaveworks', - logo: 'weave_works', - link: ' https://weave.works/kubernetes', - blurb: 'Weaveworks permet aux développeurs et aux équipes de développement / développement de connecter, déployer, sécuriser, gérer et dépanner facilement les microservices dans Kubernetes.' - }, - { - type: 0, - name: 'Intel', - logo: 'intel', - link: 'https://tectonic.com/press/intel-coreos-collaborate-on-openstack-with-kubernetes.html', - blurb: "Activer GIFEE (l'infrastructure de Google pour tous les autres), pour exécuter les déploiements OpenStack sur Kubernetes." - }, - { - type: 3, - name: 'Platform9', - logo: 'platform9', - link: 'https://platform9.com/products/kubernetes/', - blurb: "Platform9 est la société open source en tant que service qui exploite tout le bien de Kubernetes et le fournit sous forme de service géré." - }, - { - type: 0, - name: 'Datadog', - logo: 'datadog', - link: 'http://docs.datadoghq.com/integrations/kubernetes/', - blurb: 'Observabilité totale pour les infrastructures et applications dynamiques. Inclut des alertes de précision, des analyses et des intégrations profondes de Kubernetes. ' - }, - { - type: 0, - name: 'AppFormix', - logo: 'appformix', - link: 'http://www.appformix.com/solutions/appformix-for-kubernetes/', - blurb: "AppFormix est un service d'optimisation des performances d'infrastructure cloud aidant les entreprises à rationaliser leurs opérations cloud sur n'importe quel cloud Kubernetes. " - }, - { - type: 0, - name: 'Crunchy', - logo: 'crunchy', - link: 'http://info.crunchydata.com/blog/advanced-crunchy-containers-for-postgresql', - blurb: 'Crunchy PostgreSQL Container Suite est un ensemble de conteneurs permettant de gérer PostgreSQL avec des microservices DBA exploitant Kubernetes et Helm.' - }, - { - type: 0, - name: 'Aqua', - logo: 'aqua', - link: 'http://blog.aquasec.com/security-best-practices-for-kubernetes-deployment', - blurb: "Sécurité complète et automatisée pour vos conteneurs s'exécutant sur Kubernetes." - }, - { - type: 0, - name: 'Distelli', - logo: 'distelli', - link: 'https://www.distelli.com/', - blurb: "Pipeline de vos référentiels sources vers vos clusters Kubernetes sur n'importe quel cloud." - }, - { - type: 0, - name: 'Nuage networks', - logo: 'nuagenetworks', - link: 'https://github.com/nuagenetworks/nuage-kubernetes', - blurb: "La plate-forme Nuage SDN fournit une mise en réseau à base de règles entre les pods Kubernetes et les environnements autres que Kubernetes avec une surveillance de la visibilité et de la sécurité." 
- }, - { - type: 0, - name: 'Sematext', - logo: 'sematext', - link: 'https://sematext.com/kubernetes/', - blurb: 'Journalisation et surveillance: collecte et traitement automatiques des métriques, des événements et des journaux pour les pods à découverte automatique et les noeuds Kubernetes.' - }, - { - type: 0, - name: 'Diamanti', - logo: 'diamanti', - link: 'https://www.diamanti.com/products/', - blurb: "Diamanti déploie des conteneurs à performances garanties en utilisant Kubernetes dans la première appliance hyperconvergée spécialement conçue pour les applications conteneurisées." - }, - { - type: 0, - name: 'Aporeto', - logo: 'aporeto', - link: 'https://aporeto.com/trireme', - blurb: "Aporeto sécurise par défaut les applications natives en nuage sans affecter la vélocité des développeurs et fonctionne à toute échelle, sur n'importe quel nuage." - }, - { - type: 2, - name: 'Giant Swarm', - logo: 'giantswarm', - link: 'https://giantswarm.io', - blurb: "Giant Swarm vous permet de créer et d'utiliser simplement et rapidement des clusters Kubernetes à la demande, sur site ou dans le cloud. Contactez Garm Swarm pour en savoir plus sur le meilleur moyen d'exécuter des applications natives en nuage où que vous soyez." - }, - { - type: 3, - name: 'Giant Swarm', - logo: 'giantswarm', - link: 'https://giantswarm.io/product/', - blurb: "Giant Swarm vous permet de créer et d'utiliser simplement et rapidement des clusters Kubernetes à la demande, sur site ou dans le cloud. Contactez Garm Swarm pour en savoir plus sur le meilleur moyen d'exécuter des applications natives en nuage où que vous soyez." - }, - { - type: 3, - name: 'Hasura', - logo: 'hasura', - link: 'https://hasura.io', - blurb: "Hasura est un PaaS basé sur Kubernetes et un BaaS basé sur Postgres qui accélère le développement d'applications avec des composants prêts à l'emploi." - }, - { - type: 3, - name: 'Mirantis', - logo: 'mirantis', - link: 'https://www.mirantis.com/software/kubernetes/', - blurb: 'Mirantis - Plateforme Cloud Mirantis' - }, - { - type: 2, - name: 'Mirantis', - logo: 'mirantis', - link: 'https://content.mirantis.com/Containerizing-OpenStack-on-Kubernetes-Video-Landing-Page.html', - blurb: "Mirantis construit et gère des clouds privés avec des logiciels open source tels que OpenStack, déployés sous forme de conteneurs orchestrés par Kubernetes." - }, - { - type: 0, - name: 'Kubernetic', - logo: 'kubernetic', - link: 'https://kubernetic.com/', - blurb: 'Kubernetic est un client Kubernetes Desktop qui simplifie et démocratise la gestion de clusters pour DevOps.' - }, - { - type: 1, - name: 'Reactive Ops', - logo: 'reactive_ops', - link: 'https://www.reactiveops.com/the-kubernetes-experts/', - blurb: "ReactiveOps a écrit l'automatisation des meilleures pratiques pour l'infrastructure sous forme de code sur GCP & AWS utilisant Kubernetes, vous aidant ainsi à construire et à maintenir une infrastructure de classe mondiale pour une fraction du prix d'une embauche interne." - }, - { - type: 2, - name: 'Livewyer', - logo: 'livewyer', - link: 'https://livewyer.io/services/kubernetes-experts/', - blurb: "Les experts de Kubernetes qui implémentent des applications intégrées et permettent aux équipes informatiques de tirer le meilleur parti de la technologie conteneurisée." 
- }, - { - type: 2, - name: 'Samsung SDS', - logo: 'samsung_sds', - link: 'http://www.samsungsdsa.com/cloud-infrastructure_kubernetes', - blurb: "L'équipe Cloud Native Computing de Samsung SDS propose des conseils d'experts couvrant tous les aspects techniques liés à la création de services destinés à un cluster Kubernetes." - }, - { - type: 2, - name: 'Container Solutions', - logo: 'container_solutions', - link: 'http://container-solutions.com/resources/kubernetes/', - blurb: 'Container Solutions est une société de conseil en logiciels haut de gamme qui se concentre sur les infrastructures programmables. Elle offre notre expertise en développement, stratégie et opérations logicielles pour vous aider à innover à grande vitesse et à grande échelle.' - }, - { - type: 4, - name: 'Container Solutions', - logo: 'container_solutions', - link: 'http://container-solutions.com/resources/kubernetes/', - blurb: 'Container Solutions est une société de conseil en logiciels haut de gamme qui se concentre sur les infrastructures programmables. Elle offre notre expertise en développement, stratégie et opérations logicielles pour vous aider à innover à grande vitesse et à grande échelle.' - }, - { - type: 2, - name: 'Jetstack', - logo: 'jetstack', - link: 'https://www.jetstack.io/', - blurb: "Jetstack est une organisation entièrement centrée sur Kubernetes. Ils vous aideront à tirer le meilleur parti de Kubernetes grâce à des services professionnels spécialisés et à des outils open source. Entrez en contact et accélérez votre projet." - }, - { - type: 0, - name: 'Tigera', - logo: 'tigera', - link: 'http://docs.projectcalico.org/latest/getting-started/kubernetes/', - blurb: "Tigera crée des solutions de réseautage en nuage natif hautes performances et basées sur des règles pour Kubernetes." - }, - { - type: 1, - name: 'Harbur', - logo: 'harbur', - link: 'https://harbur.io/', - blurb: "Basé à Barcelone, Harbur est un cabinet de conseil qui aide les entreprises à déployer des solutions d'auto-guérison basées sur les technologies de conteneur" - }, - { - type: 0, - name: 'Spotinst', - logo: 'spotinst', - link: 'http://blog.spotinst.com/2016/08/04/elastigroup-kubernetes-minions-steroids/', - blurb: "Votre Kubernetes à 80% de moins. Exécutez des charges de travail K8s sur des instances ponctuelles avec une disponibilité totale pour économiser au moins 80% de la mise à l'échelle automatique de vos Kubernetes avec une efficacité maximale dans des environnements hétérogènes." - }, - { - type: 2, - name: 'InwinSTACK', - logo: 'inwinstack', - link: 'http://www.inwinstack.com/index.php/en/solutions-en/', - blurb: "Notre service de conteneur exploite l'infrastructure basée sur OpenStack et son moteur Magnum d'orchestration de conteneur pour gérer les clusters Kubernetes." - }, - { - type: 4, - name: 'InwinSTACK', - logo: 'inwinstack', - link: 'http://www.inwinstack.com/index.php/en/solutions-en/', - blurb: "Notre service de conteneur exploite l'infrastructure basée sur OpenStack et son moteur Magnum d'orchestration de conteneur pour gérer les clusters Kubernetes." - }, - { - type: 3, - name: 'InwinSTACK', - logo: 'inwinstack', - link: 'https://github.com/inwinstack/kube-ansible', - blurb: 'inwinSTACK - être-ansible' - }, - { - type: 1, - name: 'Semantix', - logo: 'semantix', - link: 'http://www.semantix.com.br/', - blurb: "Semantix est une entreprise qui travaille avec l’analyse de données et les systèmes distribués. Kubernetes est utilisé pour orchestrer des services pour nos clients." 
- }, - { - type: 0, - name: 'ASM Technologies Limited', - logo: 'asm', - link: 'http://www.asmtech.com/', - blurb: "Notre portefeuille de chaînes logistiques technologiques permet à vos logiciels d'être accessibles, viables et disponibles plus efficacement." - }, - { - type: 1, - name: 'InfraCloud Technologies', - logo: 'infracloud', - link: 'http://blog.infracloud.io/state-of-kubernetes/', - blurb: "InfraCloud Technologies est une société de conseil en logiciels qui fournit des services dans les conteneurs, le cloud et le développement." - }, - { - type: 0, - name: 'SignalFx', - logo: 'signalfx', - link: 'https://github.com/signalfx/integrations/tree/master/kubernetes', - blurb: "Obtenez une visibilité en temps réel sur les métriques et les alertes les plus intelligentes pour les architectures actuelles, y compris une intégration poussée avec Kubernetes" - }, - { - type: 0, - name: 'NATS', - logo: 'nats', - link: 'https://github.com/pires/kubernetes-nats-cluster', - blurb: "NATS est un système de messagerie natif en nuage simple, sécurisé et évolutif." - }, - { - type: 2, - name: 'RX-M', - logo: 'rxm', - link: 'http://rx-m.com/training/kubernetes-training/', - blurb: 'Services de formation et de conseil Kubernetes Dev, DevOps et Production neutres sur le marché.' - }, - { - type: 4, - name: 'RX-M', - logo: 'rxm', - link: 'http://rx-m.com/training/kubernetes-training/', - blurb: 'Services de formation et de conseil Kubernetes Dev, DevOps et Production neutres sur le marché.' - }, - { - type: 1, - name: 'Emerging Technology Advisors', - logo: 'eta', - link: 'https://www.emergingtechnologyadvisors.com/services/kubernetes.html', - blurb: "ETA aide les entreprises à concevoir, mettre en œuvre et gérer des applications évolutives utilisant Kubernetes sur un cloud public ou privé." - }, - { - type: 0, - name: 'CloudPlex.io', - logo: 'cloudplex', - link: 'http://www.cloudplex.io', - blurb: "CloudPlex permet aux équipes d'exploitation de déployer, d'orchestrer, de gérer et de surveiller de manière visuelle l'infrastructure, les applications et les services dans un cloud public ou privé." - }, - { - type: 2, - name: 'Kumina', - logo: 'kumina', - link: 'https://www.kumina.nl/managed_kubernetes', - blurb: "Kumina combine la puissance de Kubernetes à plus de 10 ans d'expérience dans les opérations informatiques. Nous créons, construisons et prenons en charge des solutions Kubernetes entièrement gérées sur votre choix d’infrastructure. Nous fournissons également des services de conseil et de formation." - }, - { - type: 0, - name: 'CA Technologies', - logo: 'ca', - link: 'https://docops.ca.com/ca-continuous-delivery-director/integrations/en/plug-ins/kubernetes-plug-in', - blurb: "Le plug-in Kubernetes de CA Continuous Delivery Director orchestre le déploiement d'applications conteneurisées dans un pipeline de version de bout en bout." - }, - { - type: 0, - name: 'CoScale', - logo: 'coscale', - link: 'http://www.coscale.com/blog/how-to-monitor-your-kubernetes-cluster', - blurb: "Surveillance complète de la pile de conteneurs et de microservices orchestrés par Kubernetes. Propulsé par la détection des anomalies pour trouver les problèmes plus rapidement." - }, - { - type: 2, - name: 'Supergiant.io', - logo: 'supergiant', - link: 'https://supergiant.io/blog/supergiant-packing-algorithm-unique-save-money', - blurb: 'Supergiant autoscales hardware pour Kubernetes. 
Open-source, il facilite le déploiement, la gestion et la montée en charge des applications haute disponibilité, distribuées et à haute disponibilité. ' - }, - { - type: 0, - name: 'Avi Networks', - logo: 'avinetworks', - link: 'https://kb.avinetworks.com/avi-vantage-openshift-installation-guide/', - blurb: "La structure des services applicatifs élastiques d'Avis fournit un réseau L4-7 évolutif, riche en fonctionnalités et intégré pour les environnements K8S." - }, - { - type: 1, - name: 'Codecrux web technologies pvt ltd', - logo: 'codecrux', - link: 'http://codecrux.com/kubernetes/', - blurb: "Chez CodeCrux, nous aidons votre organisation à tirer le meilleur parti de Containers et de Kubernetes, quel que soit le stade où vous vous trouvez" - }, - { - type: 0, - name: 'Greenqloud', - logo: 'qstack', - link: 'https://www.qstack.com/application-orchestration/', - blurb: "Qstack fournit des clusters Kubernetes sur site auto-réparables avec une interface utilisateur intuitive pour la gestion de l'infrastructure et de Kubernetes." - }, - { - type: 1, - name: 'StackOverdrive.io', - logo: 'stackoverdrive', - link: 'http://www.stackoverdrive.net/kubernetes-consulting/', - blurb: "StackOverdrive aide les organisations de toutes tailles à tirer parti de Kubernetes pour l’orchestration et la gestion par conteneur." - }, - { - type: 0, - name: 'StackIQ, Inc.', - logo: 'stackiq', - link: 'https://www.stackiq.com/kubernetes/', - blurb: "Avec Stacki et la palette Stacki pour Kubernetes, vous pouvez passer du métal nu aux conteneurs en un seul passage très rapidement et facilement." - }, - { - type: 0, - name: 'Cobe', - logo: 'cobe', - link: 'https://cobe.io/product-page/', - blurb: 'Gérez les clusters Kubernetes avec un modèle direct et interrogeable qui capture toutes les relations et les données de performance dans un contexte entièrement visualisé.' - }, - { - type: 0, - name: 'Datawire', - logo: 'datawire', - link: 'http://www.datawire.io', - blurb: "Les outils open source de Datawires permettent à vos développeurs de microservices d’être extrêmement productifs sur Kubernetes, tout en laissant les opérateurs dormir la nuit." - }, - { - type: 0, - name: 'Mashape, Inc.', - logo: 'kong', - link: 'https://getkong.org/install/kubernetes/', - blurb: "Kong est une couche d'API open source évolutive qui s'exécute devant toute API RESTful et peut être provisionnée à un cluster Kubernetes." - }, - { - type: 0, - name: 'F5 Networks', - logo: 'f5networks', - link: 'http://github.com/f5networks', - blurb: "Nous avons une intégration de LB dans Kubernetes." - }, - { - type: 1, - name: 'Lovable Tech', - logo: 'lovable', - link: 'http://lovable.tech/', - blurb: "Des ingénieurs, des concepteurs et des consultants stratégiques de classe mondiale vous aident à expédier une technologie Web et mobile attrayante." - }, - { - type: 0, - name: 'StackState', - logo: 'stackstate', - link: 'http://stackstate.com/platform/container-monitoring', - blurb: "Analyse opérationnelle entre les équipes et les outils. Inclut la visualisation de la topologie, l'analyse des causes premières et la détection des anomalies pour Kubernetes." - }, - { - type: 1, - name: 'INEXCCO INC', - logo: 'inexcco', - link: 'https://www.inexcco.com/', - blurb: "Fort talent pour DevOps et Cloud travaillant avec plusieurs clients sur des implémentations de kubernetes et de helm." 
- }, - { - type: 2, - name: 'Bitnami', - logo: 'bitnami', - link: 'http://bitnami.com/kubernetes', - blurb: "Bitnami propose à Kubernetes un catalogue d'applications et de blocs de construction d'applications fiables, à jour et faciles à utiliser." - }, - { - type: 1, - name: 'Nebulaworks', - logo: 'nebulaworks', - link: 'http://www.nebulaworks.com/container-platforms', - blurb: "Nebulaworks fournit des services destinés à aider l'entreprise à adopter des plates-formes de conteneurs modernes et des processus optimisés pour permettre l'innovation à grande échelle." - }, - { - type: 1, - name: 'EASYNUBE', - logo: 'easynube', - link: 'http://easynube.co.uk/devopsnube/', - blurb: "EasyNube fournit l'architecture, la mise en œuvre et la gestion d'applications évolutives à l'aide de Kubernetes et Openshift." - }, - { - type: 1, - name: 'Opcito Technologies', - logo: 'opcito', - link: 'http://www.opcito.com/kubernetes/', - blurb: "Opcito est une société de conseil en logiciels qui utilise Kubernetes pour aider les organisations à concevoir, concevoir et déployer des applications hautement évolutives." - }, - { - type: 0, - name: 'code by Dell EMC', - logo: 'codedellemc', - link: 'https://blog.codedellemc.com', - blurb: "Respecté en tant que chef de file de la persistance du stockage pour les applications conteneurisées. Contribution importante au K8 et à l'écosystème." - }, - { - type: 0, - name: 'Instana', - logo: 'instana', - link: 'https://www.instana.com/supported-technologies/', - blurb: "Instana surveille les performances des applications, de l'infrastructure, des conteneurs et des services déployés sur un cluster Kubernetes." - }, - { - type: 0, - name: 'Netsil', - logo: 'netsil', - link: 'https://netsil.com/kubernetes/', - blurb: "Générez une carte de topologie d'application découverte automatiquement en temps réel! Surveillez les pods et les espaces de noms Kubernetes sans aucune instrumentation de code." - }, - { - type: 2, - name: 'Treasure Data', - logo: 'treasuredata', - link: 'https://fluentd.treasuredata.com/kubernetes-logging/', - blurb: "Fluentd Enterprise apporte une journalisation intelligente et sécurisée à Kubernetes, ainsi que des intégrations avec des serveurs tels que Splunk, Kafka ou AWS S3." - }, - { - type: 2, - name: 'Kenzan', - logo: 'Kenzan', - link: 'http://kenzan.com/?ref=kubernetes', - blurb: "Nous fournissons des services de conseil personnalisés en nous basant sur Kubernetes. Cela concerne le développement de la plate-forme, les pipelines de distribution et le développement d'applications au sein de Kubernetes." - }, - { - type: 2, - name: 'New Context', - logo: 'newcontext', - link: 'https://www.newcontext.com/devsecops-infrastructure-automation-orchestration/', - blurb: "Nouveau contexte construit et optimise les implémentations et les migrations Kubernetes sécurisées, de la conception initiale à l'automatisation et à la gestion de l'infrastructure." - }, - { - type: 2, - name: 'Banzai', - logo: 'banzai', - link: 'https://banzaicloud.com/platform/', - blurb: "Banzai Cloud apporte le cloud natif à l'entreprise et simplifie la transition vers les microservices sur Kubernetes." - }, - { - type: 3, - name: 'Kublr', - logo: 'kublr', - link: 'http://kublr.com', - blurb: "Kublr - Accélérez et contrôlez le déploiement, la mise à l'échelle, la surveillance et la gestion de vos applications conteneurisées." 
- }, - { - type: 1, - name: 'ControlPlane', - logo: 'controlplane', - link: 'https://control-plane.io', - blurb: "Nous sommes un cabinet de conseil basé à Londres, spécialisé dans la sécurité et la livraison continue. Nous offrons des services de conseil et de formation." - }, - { - type: 3, - name: 'Nirmata', - logo: 'nirmata', - link: 'https://www.nirmata.com/', - blurb: 'Nirmata - Nirmata Managed Kubernetes' - }, - { - type: 2, - name: 'Nirmata', - logo: 'nirmata', - link: 'https://www.nirmata.com/', - blurb: "Nirmata est une plate-forme logicielle qui aide les équipes de DevOps à fournir des solutions de gestion de conteneurs basées sur Kubernetes, de qualité professionnelle et indépendantes des fournisseurs de cloud." - }, - { - type: 3, - name: 'TenxCloud', - logo: 'tenxcloud', - link: 'https://tenxcloud.com', - blurb: 'TenxCloud - Moteur de conteneur TenxCloud (TCE)' - }, - { - type: 2, - name: 'TenxCloud', - logo: 'tenxcloud', - link: 'https://www.tenxcloud.com/', - blurb: "Fondé en octobre 2014, TenxCloud est l'un des principaux fournisseurs de services d'informatique en nuage de conteneurs en Chine, couvrant notamment la plate-forme cloud PaaS pour conteneurs, la gestion de micro-services, DevOps, les tests de développement, AIOps, etc. Fournir des produits et des solutions PaaS de cloud privé aux clients des secteurs de la finance, de l’énergie, des opérateurs, de la fabrication, de l’éducation et autres." - }, - { - type: 0, - name: 'Twistlock', - logo: 'twistlock', - link: 'https://www.twistlock.com/', - blurb: "La sécurité à l'échelle Kubernetes: Twistlock vous permet de déployer sans crainte, en vous assurant que vos images et vos conteneurs sont exempts de vulnérabilités et protégés au moment de l'exécution." - }, - { - type: 0, - name: 'Endocode AG', - logo: 'endocode', - link: 'https://endocode.com/kubernetes/', - blurb: 'Endocode pratique et enseigne la méthode open source. Noyau à cluster - Dev to Ops. Nous proposons des formations, des services et une assistance Kubernetes. ' - }, - { - type: 2, - name: 'Accenture', - logo: 'accenture', - link: 'https://www.accenture.com/us-en/service-application-containers', - blurb: 'Architecture, mise en œuvre et exploitation de solutions Kubernetes de classe mondiale pour les clients cloud.' - }, - { - type: 1, - name: 'Biarca', - logo: 'biarca', - link: 'http://biarca.io/', - blurb: "Biarca est un fournisseur de services cloud et des domaines d’intervention clés. Les domaines d’intervention clés de Biarca incluent les services d’adoption en nuage, les services d’infrastructure, les services DevOps et les services d’application. Biarca s'appuie sur Kubernetes pour fournir des solutions conteneurisées." - }, - { - type: 2, - name: 'Claranet', - logo: 'claranet', - link: 'http://www.claranet.co.uk/hosting/google-cloud-platform-consulting-managed-services', - blurb: "Claranet aide les utilisateurs à migrer vers le cloud et à tirer pleinement parti du nouveau monde qu’il offre. Nous consultons, concevons, construisons et gérons de manière proactive l'infrastructure et les outils d'automatisation appropriés pour permettre aux clients d'atteindre cet objectif." - }, - { - type: 1, - name: 'CloudKite', - logo: 'cloudkite', - link: 'https://cloudkite.io/', - blurb: "CloudKite.io aide les entreprises à créer et à maintenir des logiciels hautement automatisés, résilients et extrêmement performants sur Kubernetes." 
- }, - { - type: 2, - name: 'CloudOps', - logo: 'CloudOps', - link: 'https://www.cloudops.com/services/docker-and-kubernetes-workshops/', - blurb: "CloudOps vous met au contact de l'écosystème K8s via un atelier / laboratoire. Obtenez des K8 prêts à l'emploi dans les nuages ​​de votre choix avec nos services gérés." - }, - { - type: 2, - name: 'Ghostcloud', - logo: 'ghostcloud', - link: 'https://www.ghostcloud.cn/ecos-kubernetes', - blurb: "EcOS est un PaaS / CaaS de niveau entreprise basé sur Docker et Kubernetes, ce qui facilite la configuration, le déploiement et la gestion des applications conteneurisées." - }, - { - type: 3, - name: 'Ghostcloud', - logo: 'ghostcloud', - link: 'https://www.ghostcloud.cn/ecos-kubernetes', - blurb: "EcOS est un PaaS / CaaS de niveau entreprise basé sur Docker et Kubernetes, ce qui facilite la configuration, le déploiement et la gestion des applications conteneurisées." - }, - { - type: 2, - name: 'Contino', - logo: 'contino', - link: 'https://www.contino.io/', - blurb: "Nous aidons les entreprises à adopter DevOps, les conteneurs et le cloud computing. Contino est un cabinet de conseil mondial qui permet aux organisations réglementées d’accélérer l’innovation en adoptant des approches modernes de la fourniture de logiciels." - }, - { - type: 2, - name: 'Booz Allen Hamilton', - logo: 'boozallenhamilton', - link: 'https://www.boozallen.com/', - blurb: "Booz Allen collabore avec des clients des secteurs public et privé pour résoudre leurs problèmes les plus difficiles en combinant conseil, analyse, opérations de mission, technologie, livraison de systèmes, cybersécurité, ingénierie et expertise en innovation." - }, - { - type: 1, - name: 'BigBinary', - logo: 'bigbinary', - link: 'http://blog.bigbinary.com/categories/Kubernetes', - blurb: "Fournisseur de solutions numériques pour les clients fédéraux et commerciaux, comprenant DevSecOps, des plates-formes cloud, une stratégie de transformation, des solutions cognitives et l'UX." - }, - { - type: 0, - name: 'CloudPerceptions', - logo: 'cloudperceptions', - link: 'https://www.meetup.com/Triangle-Kubernetes-Meetup/files/', - blurb: "Solution de sécurité des conteneurs pour les petites et moyennes entreprises qui envisagent d'exécuter Kubernetes sur une infrastructure partagée." - }, - { - type: 2, - name: 'Creationline, Inc.', - logo: 'creationline', - link: 'https://www.creationline.com/ci', - blurb: 'Solution totale pour la gestion des ressources informatiques par conteneur.' - }, - { - type: 0, - name: 'DataCore Software', - logo: 'datacore', - link: 'https://www.datacore.com/solutions/virtualization/containerization', - blurb: "DataCore fournit à Kubernetes un stockage de blocs universel hautement disponible et hautement performant, ce qui améliore radicalement la vitesse de déploiement." - }, - { - type: 0, - name: 'Elastifile', - logo: 'elastifile', - link: 'https://www.elastifile.com/stateful-containers', - blurb: "La structure de données multi-cloud d’Elastifile offre un stockage persistant défini par logiciel et hautement évolutif, conçu pour le logiciel Kubernetes." - }, - { - type: 0, - name: 'GitLab', - logo: 'gitlab', - link: 'https://about.gitlab.com/2016/11/14/idea-to-production/', - blurb: "Avec GitLab et Kubernetes, vous pouvez déployer un pipeline CI / CD complet avec plusieurs environnements, des déploiements automatiques et une surveillance automatique." 
- }, - { - type: 0, - name: 'Gravitational, Inc.', - logo: 'gravitational', - link: 'https://gravitational.com/telekube/', - blurb: "Telekube associe Kubernetes à Teleport, notre serveur SSH moderne, afin que les opérateurs puissent gérer à distance une multitude de déploiements d'applications K8." - }, - { - type: 0, - name: 'Hitachi Data Systems', - logo: 'hitachi', - link: 'https://www.hds.com/en-us/products-solutions/application-solutions/unified-compute-platform-with-kubernetes-orchestration.html', - blurb: "Créez les applications dont vous avez besoin pour conduire votre entreprise - DÉVELOPPEZ ET DÉPLOYEZ DES APPLICATIONS PLUS RAPIDEMENT ET PLUS FIABLES." - }, - { - type: 1, - name: 'Infosys Technologies', - logo: 'infosys', - link: 'https://www.infosys.com', - blurb: "Monolithique à microservices sur openshift est une offre que nous développons dans le cadre de la pratique open source." - }, - { - type: 0, - name: 'JFrog', - logo: 'jfrog', - link: 'https://www.jfrog.com/use-cases/12584/', - blurb: "Vous pouvez utiliser Artifactory pour stocker et gérer toutes les images de conteneur de votre application, les déployer sur Kubernetes et configurer un pipeline de construction, de test et de déploiement à l'aide de Jenkins et d'Artifactory. Une fois qu'une image est prête à être déployée, Artifactory peut déclencher un déploiement de mise à jour propagée dans un cluster Kubernetes sans interruption - automatiquement!" - }, - { - type: 0, - name: 'Navops by Univa', - logo: 'navops', - link: 'https://www.navops.io', - blurb: "Navops est une suite de produits qui permet aux entreprises de tirer pleinement parti de Kubernetes et permet de gérer rapidement et efficacement des conteneurs à grande échelle." - }, - { - type: 0, - name: 'NeuVector', - logo: 'neuvector', - link: 'http://neuvector.com/solutions-for-kubernetes-security/', - blurb: "NeuVector fournit une solution de sécurité réseau intelligente pour les conteneurs et les applications, intégrée et optimisée pour Kubernetes." - }, - { - type: 1, - name: 'OpsZero', - logo: 'opszero', - link: 'https://www.opszero.com/kubernetes.html', - blurb: 'opsZero fournit DevOps pour les startups. Nous construisons et entretenons votre infrastructure Kubernetes et Cloud pour accélérer votre cycle de publication. ' - }, - { - type: 1, - name: 'Shiwaforce.com Ltd.', - logo: 'shiwaforce', - link: 'https://www.shiwaforce.com/en/', - blurb: "Shiwaforce.com est le partenaire agile de la transformation numérique. Nos solutions suivent les changements de l'entreprise rapidement, facilement et à moindre coût." - }, - { - type: 1, - name: 'SoftServe', - logo: 'softserve', - link: 'https://www.softserveinc.com/en-us/blogs/kubernetes-travis-ci/', - blurb: "SoftServe permet à ses clients d’adopter des modèles de conception d’applications modernes et de bénéficier de grappes Kubernetes entièrement intégrées, hautement disponibles et économiques, à n’importe quelle échelle." - }, - { - type: 1, - name: 'Solinea', - logo: 'solinea', - link: 'https://www.solinea.com/cloud-consulting-services/container-microservices-offerings', - blurb: "Solinea est un cabinet de conseil en transformation numérique qui permet aux entreprises de créer des solutions innovantes en adoptant l'informatique en nuage native." 
- }, - { - type: 1, - name: 'Sphere Software, LLC', - logo: 'spheresoftware', - link: 'https://sphereinc.com/kubernetes/', - blurb: "L'équipe d'experts de Sphere Software permet aux clients de concevoir et de mettre en œuvre des applications évolutives à l'aide de Kubernetes dans Google Cloud, AWS et Azure." - }, - { - type: 1, - name: 'Altoros', - logo: 'altoros', - link: 'https://www.altoros.com/container-orchestration-tools-enablement.html', - blurb: "Déploiement et configuration de Kubernetes, Optimisation de solutions existantes, formation des développeurs à l'utilisation de Kubernetes, assistance." - }, - { - type: 0, - name: 'Cloudbase Solutions', - logo: 'cloudbase', - link: 'https://cloudbase.it/kubernetes', - blurb: "Cloudbase Solutions assure l'interopérabilité multi-cloud de Kubernetes pour les déploiements Windows et Linux basés sur des technologies open source." - }, - { - type: 0, - name: 'Codefresh', - logo: 'codefresh', - link: 'https://codefresh.io/kubernetes-deploy/', - blurb: 'Codefresh est une plate-forme complète DevOps conçue pour les conteneurs et Kubernetes. Avec les pipelines CI / CD, la gestion des images et des intégrations profondes dans Kubernetes et Helm. ' - }, - { - type: 0, - name: 'NetApp', - logo: 'netapp', - link: 'http://netapp.io/2016/12/23/introducing-trident-dynamic-persistent-volume-provisioner-kubernetes/', - blurb: "Provisionnement dynamique et prise en charge du stockage persistant." - }, - { - type: 0, - name: 'OpenEBS', - logo: 'OpenEBS', - link: 'https://openebs.io/', - blurb: "OpenEBS est un stockage conteneurisé de conteneurs étroitement intégré à Kubernetes et basé sur le stockage en bloc distribué et la conteneurisation du contrôle du stockage. OpenEBS dérive de l’intention des K8 et d’autres codes YAML ou JSON, tels que les SLA de qualité de service par conteneur, les stratégies de réplication et de hiérarchisation, etc. OpenEBS est conforme à l'API EBS." - }, - { - type: 3, - name: 'Google Kubernetes Engine', - logo: 'google', - link: 'https://cloud.google.com/kubernetes-engine/', - blurb: "Google - Moteur Google Kubernetes" - }, - { - type: 1, - name: 'Superorbital', - logo: 'superorbital', - link: 'https://superorbit.al/workshops/kubernetes/', - blurb: "Aider les entreprises à naviguer dans les eaux Cloud Native grâce au conseil et à la formation Kubernetes." - }, - { - type: 3, - name: 'Apprenda', - logo: 'apprenda', - link: 'https://apprenda.com/kismatic/', - blurb: 'Apprenda - Kismatic Enterprise Toolkit (KET)' - }, - { - type: 3, - name: 'Red Hat', - logo: 'redhat', - link: 'https://www.openshift.com', - blurb: "Red Hat - OpenShift Online et OpenShift Container Platform" - }, - { - type: 3, - name: 'Rancher', - logo: 'rancher', - link: 'http://rancher.com/kubernetes/', - blurb: 'Rancher Inc. - Rancher Kubernetes' - }, - { - type: 3, - name: 'Canonical', - logo: 'canonical', - link: 'https://www.ubuntu.com/kubernetes', - blurb: "La distribution canonique de Kubernetes vous permet d’exploiter à la demande des grappes Kubernetes sur n’importe quel infrastructure de cloud public ou privée majeure." - }, - { - type: 2, - name: 'Canonical', - logo: 'canonical', - link: 'https://www.ubuntu.com/kubernetes', - blurb: 'Canonical Ltd. 
- Distribution canonique de Kubernetes' - }, - { - type: 3, - name: 'Cisco', - logo: 'cisco', - link: 'https://www.cisco.com', - blurb: 'Cisco Systems - Plateforme de conteneur Cisco' - }, - { - type: 3, - name: 'Cloud Foundry', - logo: 'cff', - link: 'https://www.cloudfoundry.org/container-runtime/', - blurb: "Cloud Foundry - Durée d'exécution du conteneur Cloud Foundry" - }, - { - type: 3, - name: 'IBM', - logo: 'ibm', - link: 'https://www.ibm.com/cloud/container-service', - blurb: 'IBM - Service IBM Cloud Kubernetes' - }, - { - type: 2, - name: 'IBM', - logo: 'ibm', - link: 'https://www.ibm.com/cloud/container-service/', - blurb: "Le service de conteneur IBM Cloud combine Docker et Kubernetes pour fournir des outils puissants, des expériences utilisateur intuitives, ainsi qu'une sécurité et une isolation intégrées pour permettre la livraison rapide d'applications tout en tirant parti des services de cloud computing, notamment des capacités cognitives de Watson." - }, - { - type: 3, - name: 'Samsung', - logo: 'samsung_sds', - link: 'https://github.com/samsung-cnct/kraken', - blurb: "Samsung SDS - Kraken" - }, - { - type: 3, - name: 'IBM', - logo: 'ibm', - link: 'https://www.ibm.com/cloud-computing/products/ibm-cloud-private/', - blurb: 'IBM - IBM Cloud Private' - }, - { - type: 3, - name: 'Kinvolk', - logo: 'kinvolk', - link: 'https://github.com/kinvolk/kube-spawn', - blurb: "Kinvolk - cube-spawn" - }, - { - type: 3, - name: 'Heptio', - logo: 'heptio', - link: 'https://aws.amazon.com/quickstart/architecture/heptio-kubernetes', - blurb: 'Heptio - AWS-Quickstart' - }, - { - type: 2, - name: 'Heptio', - logo: 'heptio', - link: 'http://heptio.com', - blurb: "Heptio aide les entreprises de toutes tailles à se rapprocher de la communauté dynamique de Kubernetes." - }, - { - type: 3, - name: 'StackPointCloud', - logo: 'stackpoint', - link: 'https://stackpoint.io', - blurb: 'StackPointCloud - StackPointCloud' - }, - { - type: 2, - name: 'StackPointCloud', - logo: 'stackpoint', - link: 'https://stackpoint.io', - blurb: 'StackPointCloud propose une large gamme de plans de support pour les clusters Kubernetes gérés construits via son plan de contrôle universel pour Kubernetes Anywhere.' - }, - { - type: 3, - name: 'Caicloud', - logo: 'caicloud', - link: 'https://caicloud.io/products/compass', - blurb: 'Caicloud - Compass' - }, - { - type: 2, - name: 'Caicloud', - logo: 'caicloud', - link: 'https://caicloud.io/', - blurb: "Fondée par d'anciens membres de Googlers et les premiers contributeurs de Kubernetes, Caicloud s'appuie sur Kubernetes pour fournir des produits de conteneur qui ont servi avec succès les entreprises Fortune 500, et utilise également Kubernetes comme véhicule pour offrir une expérience d'apprentissage en profondeur ultra-rapide." 
- }, - { - type: 3, - name: 'Alibaba', - logo: 'alibaba', - link: 'https://www.aliyun.com/product/containerservice?spm=5176.8142029.388261.219.3836dbccRpJ5e9', - blurb: 'Alibaba Cloud - Alibaba Cloud Container Service' - }, - { - type: 3, - name: 'Tencent', - logo: 'tencent', - link: 'https://cloud.tencent.com/product/ccs?lang=en', - blurb: 'Tencent Cloud - Tencent Cloud Container Service' - }, - { - type: 3, - name: 'Huawei', - logo: 'huawei', - link: 'http://www.huaweicloud.com/product/cce.html', - blurb: 'Huawei - Huawei Cloud Container Engine' - }, - { - type: 2, - name: 'Huawei', - logo: 'huawei', - link: 'http://developer.huawei.com/ict/en/site-paas', - blurb: "FusionStage est un produit Platform as a Service de niveau entreprise, dont le cœur est basé sur la technologie de conteneur open source traditionnelle, notamment Kubernetes et Docker." - }, - { - type: 3, - name: 'Google', - logo: 'google', - link: 'https://github.com/kubernetes/kubernetes/tree/master/cluster', - blurb: "Google - kube-up.sh sur Google Compute Engine" - }, - { - type: 3, - name: 'Poseidon', - logo: 'poseidon', - link: 'https://typhoon.psdn.io/', - blurb: 'Poséidon - Typhon' - }, - { - type: 3, - name: 'Netease', - logo: 'netease', - link: 'https://www.163yun.com/product/container-service-dedicated', - blurb: 'Netease - Netease Container Service Dedicated' - }, - { - type: 2, - name: 'Loodse', - logo: 'loodse', - link: 'https://loodse.com', - blurb: "Loodse propose des formations et des conseils sur Kubernetes, et organise régulièrement des événements liés à l’Europe." - }, - { - type: 4, - name: 'Loodse', - logo: 'loodse', - link: 'https://loodse.com', - blurb: "Loodse propose des formations et des conseils sur Kubernetes, et organise régulièrement des événements liés à l’Europe." - }, - { - type: 4, - name: 'LF Training', - logo: 'lf-training', - link: 'https://training.linuxfoundation.org/', - blurb: "Le programme de formation de la Linux Foundation associe les connaissances de base étendues aux possibilités de mise en réseau dont les participants ont besoin pour réussir dans leur carrière." - }, - { - type: 3, - name: 'Loodse', - logo: 'loodse', - link: 'https://loodse.com', - blurb: 'Pilots - Moteur de conteneur Kubermatic' - }, - { - type: 1, - name: 'LTI', - logo: 'lti', - link: 'https://www.lntinfotech.com/', - blurb: "LTI aide les entreprises à concevoir, développer et prendre en charge des applications natives de cloud évolutives utilisant Docker et Kubernetes pour un cloud privé ou public." 
- }, - { - type: 3, - name: 'Microsoft', - logo: 'microsoft', - link: 'https://github.com/Azure/acs-engine', - blurb: 'Microsoft - Azure acs-engine' - }, - { - type: 3, - name: 'Microsoft', - logo: 'microsoft', - link: 'https://docs.microsoft.com/en-us/azure/aks/', - blurb: 'Microsoft - Azure Container Service AKS' - }, - { - type: 3, - name: 'Oracle', - logo: 'oracle', - link: 'http://www.wercker.com/product', - blurb: 'Oracle - Oracle Container Engine' - }, - { - type: 3, - name: 'Oracle', - logo: 'oracle', - link: 'https://github.com/oracle/terraform-kubernetes-installer', - blurb: "Oracle - Programme d'installation Oracle Terraform Kubernetes" - }, - { - type: 3, - name: 'Mesosphere', - logo: 'mesosphere', - link: 'https://mesosphere.com/kubernetes/', - blurb: 'Mésosphère - Kubernetes sur DC / OS' - }, - { - type: 3, - name: 'Appscode', - logo: 'appscode', - link: 'https://appscode.com/products/cloud-deployment/', - blurb: 'Appscode - Pharmer' - }, - { - type: 3, - name: 'SAP', - logo: 'sap', - link: 'https://cloudplatform.sap.com/index.html', - blurb: 'SAP - Cloud Platform - Gardener (pas encore publié)' - }, - { - type: 3, - name: 'Oracle', - logo: 'oracle', - link: 'https://www.oracle.com/linux/index.html', - blurb: 'Oracle - Oracle Linux Container Services à utiliser avec Kubernetes' - }, - { - type: 3, - name: 'CoreOS', - logo: 'coreos', - link: 'https://github.com/kubernetes-incubator/bootkube', - blurb: 'CoreOS - bootkube' - }, - { - type: 2, - name: 'CoreOS', - logo: 'coreos', - link: 'https://coreos.com/', - blurb: 'Tectonic est le produit Kubernetes destiné aux entreprises, conçu par CoreOS. Il ajoute des fonctionnalités clés pour vous permettre de gérer, mettre à jour et contrôler les clusters en production. ' - }, - { - type: 3, - name: 'Weaveworks', - logo: 'weave_works', - link: '/docs/setup/independent/create-cluster-kubeadm/', - blurb: Weaveworks - kubeadm - }, - { - type: 3, - name: 'Joyent', - logo: 'joyent', - link: 'https://github.com/joyent/triton-kubernetes', - blurb: 'Joyent - Triton Kubernetes' - }, - { - type: 3, - name: 'Wise2c', - logo: 'wise2c', - link: 'http://www.wise2c.com/solution', - blurb: "Technologie Wise2C - WiseCloud" - }, - { - type: 2, - name: 'Wise2c', - logo: 'wise2c', - link: 'http://www.wise2c.com', - blurb: "Utilisation de Kubernetes pour fournir au secteur financier une solution de diffusion continue informatique et de gestion de conteneur de niveau entreprise." - }, - { - type: 3, - name: 'Docker', - logo: 'docker', - link: 'https://www.docker.com/enterprise-edition', - blurb: 'Docker - Docker Enterprise Edition' - }, - { - type: 3, - name: 'Daocloud', - logo: 'daocloud', - link: 'http://www.daocloud.io/dce', - blurb: 'DaoCloud - DaoCloud Enterprise' - }, - { - type: 2, - name: 'Daocloud', - logo: 'daocloud', - link: 'http://www.daocloud.io/dce', - blurb: "Nous fournissons une plate-forme d’application native en nuage de niveau entreprise prenant en charge Kubernetes et Docker Swarm." - }, - { - type: 4, - name: 'Daocloud', - logo: 'daocloud', - link: 'http://www.daocloud.io/dce', - blurb: "Nous fournissons une plate-forme d’application native en nuage de niveau entreprise prenant en charge Kubernetes et Docker Swarm." 
- }, - { - type: 3, - name: 'SUSE', - logo: 'suse', - link: 'https://www.suse.com/products/caas-platform/', - blurb: 'SUSE - Plateforme SUSE CaaS (conteneur en tant que service)' - }, - { - type: 3, - name: 'Pivotal', - logo: 'pivotal', - link: 'https://cloud.vmware.com/pivotal-container-service', - blurb: 'Pivotal / VMware - Service de conteneur Pivotal (PKS)' - }, - { - type: 3, - name: 'VMware', - logo: 'vmware', - link: 'https://cloud.vmware.com/pivotal-container-service', - blurb: 'Pivotal / VMware - Service de conteneur Pivotal (PKS)' - }, - { - type: 3, - name: 'Alauda', - logo: 'alauda', - link: 'http://www.alauda.cn/product/detail/id/68.html', - blurb: 'Alauda - Alauda EE' - }, - { - type: 4, - name: 'Alauda', - logo: 'alauda', - link: 'http://www.alauda.cn/product/detail/id/68.html', - blurb: "Alauda fournit aux offres Kubernetes-Centric Enterprise Platform-as-a-Service un objectif précis: fournir des fonctionnalités Cloud Native et les meilleures pratiques DevOps aux clients professionnels de tous les secteurs en Chine." - }, - { - type: 2, - name: 'Alauda', - logo: 'alauda', - link: 'www.alauda.io', - blurb: "Alauda fournit aux offres Kubernetes-Centric Enterprise Platform-as-a-Service un objectif précis: fournir des fonctionnalités Cloud Native et les meilleures pratiques DevOps aux clients professionnels de tous les secteurs en Chine." - }, - { - type: 3, - name: 'EasyStack', - logo: 'easystack', - link: 'https://easystack.cn/eks/', - blurb: 'EasyStack - Service EasyStack Kubernetes (ECS)' - }, - { - type: 3, - name: 'CoreOS', - logo: 'coreos', - link: 'https://coreos.com/tectonic/', - blurb: 'CoreOS - Tectonique' - }, - { - type: 0, - name: 'GoPaddle', - logo: 'gopaddle', - link: 'https://gopaddle.io', - blurb: "goPaddle est une plate-forme DevOps pour les développeurs Kubernetes. Il simplifie la création et la maintenance du service Kubernetes grâce à la conversion de source en image, à la gestion des versions et des versions, à la gestion d'équipe, aux contrôles d'accès et aux journaux d'audit, à la fourniture en un seul clic de grappes Kubernetes sur plusieurs clouds à partir d'une console unique." - }, - { - type: 0, - name: 'Vexxhost', - logo: 'vexxhost', - link: 'https://vexxhost.com/public-cloud/container-services/kubernetes/', - blurb: "VEXXHOST offre un service de gestion de conteneurs haute performance optimisé par Kubernetes et OpenStack Magnum." - }, - { - type: 1, - name: 'Component Soft', - logo: 'componentsoft', - link: 'https://www.componentsoft.eu/?p=3925', - blurb: "Component Soft propose des formations, des conseils et une assistance autour des technologies de cloud ouvert telles que Kubernetes, Docker, Openstack et Ceph." - }, - { - type: 0, - name: 'Datera', - logo: 'datera', - link: 'http://www.datera.io/kubernetes/', - blurb: "Datera fournit un stockage de blocs élastiques autogéré de haute performance avec un provisionnement en libre-service pour déployer Kubernetes à grande échelle." - }, - { - type: 0, - name: 'Containership', - logo: 'containership', - link: 'https://containership.io/', - blurb: "Containership est une offre kubernetes gérée indépendamment du cloud qui prend en charge le provisionnement automatique de plus de 14 fournisseurs de cloud." 
- }, - { - type: 0, - name: 'Pure Storage', - logo: 'pure_storage', - link: 'https://hub.docker.com/r/purestorage/k8s/', - blurb: "Notre pilote flexvol et notre provisioning dynamique permettent aux périphériques de stockage FlashArray / Flashblade d'être utilisés en tant que stockage persistant de première classe à partir de Kubernetes." - }, - { - type: 0, - name: 'Elastisys', - logo: 'elastisys', - link: 'https://elastisys.com/kubernetes/', - blurb: "Mise à l'échelle automatique prédictive - détecte les variations de charge de travail récurrentes, les pics de trafic irréguliers, etc. Utilise les K8 dans n’importe quel cloud public ou privé." - }, - { - type: 0, - name: 'Portworx', - logo: 'portworx', - link: 'https://portworx.com/use-case/kubernetes-storage/', - blurb: "Avec Portworx, vous pouvez gérer n'importe quelle base de données ou service avec état sur toute infrastructure utilisant Kubernetes. Vous obtenez une couche de gestion de données unique pour tous vos services avec état, quel que soit leur emplacement." - }, - { - type: 1, - name: 'Object Computing, Inc.', - logo: 'objectcomputing', - link: 'https://objectcomputing.com/services/software-engineering/devops/kubernetes-services', - blurb: "Notre gamme de services de conseil DevOps comprend le support, le développement et la formation de Kubernetes." - }, - { - type: 1, - name: 'Isotoma', - logo: 'isotoma', - link: 'https://www.isotoma.com/blog/2017/10/24/containerisation-tips-for-using-kubernetes-with-aws/', - blurb: "Basés dans le nord de l'Angleterre, les partenaires Amazon qui fournissent des solutions Kubernetes sur AWS pour la réplication et le développement natif." - }, - { - type: 1, - name: 'Servian', - logo: 'servian', - link: 'https://www.servian.com/cloud-and-technology/', - blurb: "Basé en Australie, Servian fournit des services de conseil, de conseil et de gestion pour la prise en charge des cas d'utilisation de kubernètes centrés sur les applications et les données." - }, - { - type: 1, - name: 'Redzara', - logo: 'redzara', - link: 'http://redzara.com/cloud-service', - blurb: "Redzara possède une vaste et approfondie expérience dans l'automatisation du Cloud, franchissant à présent une étape gigantesque en fournissant une offre de services de conteneur et des services à ses clients." - }, - { - type: 0, - name: 'Dataspine', - logo: 'dataspine', - link: 'http://dataspine.xyz/', - blurb: "Dataspine est en train de créer une plate-forme de déploiement sécurisée, élastique et sans serveur pour les charges de travail ML / AI de production au-dessus des k8s." - }, - { - type: 1, - name: 'CloudBourne', - logo: 'cloudbourne', - link: 'https://cloudbourne.com/kubernetes-enterprise-hybrid-cloud/', - blurb: "Vous voulez optimiser l'automatisation de la construction, du déploiement et de la surveillance avec Kubernetes? Nous pouvons aider." - }, - { - type: 0, - name: 'CloudBourne', - logo: 'cloudbourne', - link: 'https://cloudbourne.com/', - blurb: "Notre plate-forme cloud hybride AppZ peut vous aider à atteindre vos objectifs de transformation numérique en utilisant les puissants Kubernetes." - }, - { - type: 3, - name: 'BoCloud', - logo: 'bocloud', - link: 'http://www.bocloud.com.cn/en/index.html', - blurb: 'BoCloud - BeyondcentContainer' - }, - { - type: 2, - name: 'Naitways', - logo: 'naitways', - link: 'https://www.naitways.com/', - blurb: "Naitways est un opérateur (AS57119), un intégrateur et un fournisseur de services cloud (le nôtre!). 
Nous visons à fournir des services à valeur ajoutée grâce à notre maîtrise de l’ensemble de la chaîne de valeur (infrastructure, réseau, compétences humaines). Le cloud privé et public est disponible via Kubernetes, qu'il soit géré ou non." - }, - { - type: 2, - name: 'Kinvolk', - logo: 'kinvolk', - link: 'https://kinvolk.io/kubernetes/', - blurb: 'Kinvolk offre un support technique et opérationnel à Kubernetes, du cluster au noyau. Les entreprises leaders dans le cloud font confiance à Kinvolk pour son expertise approfondie de Linux. ' - }, - { - type: 1, - name: 'Cascadeo Corporation', - logo: 'cascadeo', - link: 'http://www.cascadeo.com/', - blurb: "Cascadeo conçoit, implémente et gère des charges de travail conteneurisées avec Kubernetes, tant pour les applications existantes que pour les projets de développement en amont." - }, - { - type: 1, - name: 'Elastisys AB', - logo: 'elastisys', - link: 'https://elastisys.com/services/#kubernetes', - blurb: "Nous concevons, construisons et exploitons des clusters Kubernetes. Nous sommes des experts des infrastructures Kubernetes hautement disponibles et auto-optimisées." - }, - { - type: 1, - name: 'Greenfield Guild', - logo: 'greenfield', - link: 'http://greenfieldguild.com/', - blurb: "La guilde Greenfield construit des solutions open source de qualité et offre une formation et une assistance pour Kubernetes dans tous les environnements." - }, - { - type: 1, - name: 'PolarSeven', - logo: 'polarseven', - link: 'https://polarseven.com/what-we-do/kubernetes/', - blurb: "Pour démarrer avec Kubernetes (K8), nos consultants PolarSeven peuvent vous aider à créer un environnement dockerized entièrement fonctionnel pour exécuter et déployer vos applications." - }, - { - type: 1, - name: 'Kloia', - logo: 'kloia', - link: 'https://kloia.com/kubernetes/', - blurb: 'Kloia est une société de conseil en développement et en microservices qui aide ses clients à faire migrer leur environnement vers des plates-formes cloud afin de créer des environnements plus évolutifs et sécurisés. Nous utilisons Kubernetes pour fournir à nos clients des solutions complètes tout en restant indépendantes du cloud. ' - }, - { - type: 0, - name: 'Bluefyre', - logo: 'bluefyre', - link: 'https://www.bluefyre.io', - blurb: "Bluefyre offre une plate-forme de sécurité d'abord destinée aux développeurs, native de Kubernetes. Bluefyre aide votre équipe de développement à envoyer du code sécurisé sur Kubernetes plus rapidement!" - }, - { - type: 0, - name: 'Harness', - logo: 'harness', - link: 'https://harness.io/harness-continuous-delivery/secret-sauce/smart-automation/', - blurb: "Harness propose une livraison continue, car un service assurera une prise en charge complète des applications conteneurisées et des clusters Kubernetes." - }, - { - type: 0, - name: 'VMware - Wavefront', - logo: 'wavefront', - link: 'https://www.wavefront.com/solutions/container-monitoring/', - blurb: "La plate-forme Wavefront fournit des analyses et une surveillance basées sur des mesures pour Kubernetes et des tableaux de bord de conteneurs pour DevOps et des équipes de développeurs, offrant une visibilité sur les services de haut niveau ainsi que sur des mesures de conteneurs granulaires." - }, - { - type: 0, - name: 'Bloombase, Inc.', - logo: 'bloombase', - link: 'https://www.bloombase.com/go/kubernetes', - blurb: "Bloombase fournit un cryptage de données au repos avec une bande passante élevée et une défense en profondeur pour verrouiller les joyaux de la couronne Kubernetes à grande échelle." 
- }, - { - type: 0, - name: 'Kasten', - logo: 'kasten', - link: 'https://kasten.io/product/', - blurb: "Kasten fournit des solutions d'entreprise spécialement conçues pour gérer la complexité opérationnelle de la gestion des données dans les environnements en nuage." - }, - { - type: 0, - name: 'Humio', - logo: 'humio', - link: 'https://humio.com', - blurb: "Humio est une base de données d'agrégation de journaux. Nous proposons une intégration Kubernetes qui vous donnera un aperçu de vos journaux à travers des applications et des instances." - }, - { - type: 0, - name: 'Outcold Solutions LLC', - logo: 'outcold', - link: 'https://www.outcoldsolutions.com/#monitoring-kubernetes', - blurb: 'Puissantes applications Splunk certifiées pour la surveillance OpenShift, Kubernetes et Docker.' - }, - { - type: 0, - name: 'SysEleven GmbH', - logo: 'syseleven', - link: 'http://www.syseleven.de/', - blurb: "Clients d'entreprise ayant besoin d'opérations à toute épreuve (portails d'entreprise et de commerce électronique à haute performance)" - }, - { - type: 0, - name: 'Landoop', - logo: 'landoop', - link: 'http://lenses.stream', - blurb: 'Lenses for Apache Kafka, to deploy, manage and operate with confidence data streaming pipelines and topologies at scale with confidence and native Kubernetes integration.' - }, - { - type: 0, - name: 'Redis Labs', - logo: 'redis', - link: 'https://redislabs.com/blog/getting-started-with-kubernetes-and-redis-using-redis-enterprise/', - blurb: "Redis Enterprise étend Redis open source et fournit une mise à l'échelle linéaire stable et de haute performance requise pour la création de microservices sur la plateforme Kubernetes." - }, - { - type: 3, - name: 'Diamanti', - logo: 'diamanti', - link: 'https://diamanti.com/', - blurb: 'Diamanti - Diamanti-D10' - }, - { - type: 3, - name: 'Eking', - logo: 'eking', - link: 'http://www.eking-tech.com/', - blurb: 'Hainan eKing Technology Co. - eKing Cloud Container Platform' - }, - { - type: 3, - name: 'Harmony Cloud', - logo: 'harmony', - link: 'http://harmonycloud.cn/products/rongqiyun/', - blurb: 'Harmonycloud - Harmonycloud Container Platform' - }, - { - type: 3, - name: 'Woqutech', - logo: 'woqutech', - link: 'http://woqutech.com/product_qfusion.html', - blurb: 'Woqutech - QFusion' - }, - { - type: 3, - name: 'Baidu', - logo: 'baidu', - link: 'https://cloud.baidu.com/product/cce.html', - blurb: 'Baidu Cloud - Baidu Cloud Container Engine' - }, - { - type: 3, - name: 'ZTE', - logo: 'zte', - link: 'https://sdnfv.zte.com.cn/en/home', - blurb: 'ZTE - TECS OpenPalette' - }, - { - type: 1, - name: 'Automatic Server AG', - logo: 'asag', - link: 'http://www.automatic-server.com/paas.html', - blurb: 'Nous installons et exploitons Kubernetes dans de grandes entreprises, créons des flux de travail de déploiement et aidons à la migration.' - }, - { - type: 1, - name: 'Circulo Siete', - logo: 'circulo', - link: 'https://circulosiete.com/consultoria/kubernetes/', - blurb: 'Notre entreprise basée au Mexique propose des formations, des conseils et une assistance pour la migration de vos charges de travail vers Kubernetes, Cloud Native Microservices & Devops.' - }, - { - type: 1, - name: 'DevOpsGuru', - logo: 'devopsguru', - link: 'http://devopsguru.ca/workshop', - blurb: 'DevOpsGuru travaille avec les petites entreprises pour passer du physique au virtuel en conteneurisé.' - }, - { - type: 1, - name: 'EIN Intelligence Co., Ltd', - logo: 'ein', - link: 'https://ein.io', - blurb: 'Startups et entreprises agiles en Corée du Sud.' 
- }, - { - type: 0, - name: 'GuardiCore', - logo: 'guardicore', - link: 'https://www.guardicore.com/', - blurb: 'GuardiCore a fourni une visibilité au niveau des processus et une application des stratégies réseau sur les actifs conteneurisés sur la plateforme Kubernetes.' - }, - { - type: 0, - name: 'Hedvig', - logo: 'hedvig', - link: 'https://www.hedviginc.com/blog/provisioning-hedvig-storage-with-kubernetes', - blurb: 'Hedvig est un stockage défini par logiciel qui utilise NFS ou iSCSI pour les volumes persistants afin de provisionner le stockage partagé pour les pods et les conteneurs.' - }, - { - type: 0, - name: 'Hewlett Packard Enterprise', - logo: 'hpe', - link: ' https://www.hpe.com/us/en/storage/containers.html', - blurb: 'Stockage permanent qui rend les données aussi faciles à gérer que les conteneurs: provisioning dynamique, performances et protection basées sur des stratégies, qualité de service, etc.' - }, - { - type: 0, - name: 'JetBrains', - logo: 'jetbrains', - link: 'https://blog.jetbrains.com/teamcity/2017/10/teamcity-kubernetes-support-plugin/', - blurb: "Exécutez des agents de génération de cloud TeamCity dans un cluster Kubernetes. Fournit un support Helm en tant qu'étape de construction." - }, - { - type: 2, - name: 'Opensense', - logo: 'opensense', - link: 'http://www.opensense.fr/en/kubernetes-en/', - blurb: 'Nous fournissons des services Kubernetes (intégration, exploitation, formation) ainsi que le développement de microservices bancaires basés sur notre expérience étendue en matière de cloud de conteneurs, de microservices, de gestion de données et du secteur financier.' - }, - { - type: 2, - name: 'SAP SE', - logo: 'sap', - link: 'https://cloudplatform.sap.com', - blurb: "SAP Cloud Platform fournit des fonctionnalités en mémoire et des services métier uniques pour la création et l'extension d'applications. Avec Open Source Project Project, SAP utilise la puissance de Kubernetes pour offrir une expérience ouverte, robuste et multi-cloud à ses clients. Vous pouvez utiliser des principes de conception natifs en nuage simples et modernes et exploiter les compétences dont votre organisation dispose déjà pour fournir des applications agiles et transformatives, tout en s'intégrant aux dernières fonctionnalités de SAP Leonardo." - }, - { - type: 1, - name: 'Mobilise Cloud Services Limited', - logo: 'mobilise', - link: 'https://www.mobilise.cloud/en/services/serverless-application-delivery/', - blurb: 'Mobilize aide les organisations à adopter Kubernetes et à les intégrer à leurs outils CI / CD.' - }, - { - type: 3, - name: 'AWS', - logo: 'aws', - link: 'https://aws.amazon.com/eks/', - blurb: 'Amazon Elastic Container Service pour Kubernetes (Amazon EKS) est un service géré qui facilite l’exécution de Kubernetes sur AWS sans avoir à installer ni à utiliser vos propres clusters Kubernetes.' - }, - { - type: 3, - name: 'Kontena', - logo: 'kontena', - link: 'https://pharos.sh', - blurb: 'Kontena Pharos - La distribution simple, solide et certifiée Kubernetes qui fonctionne.' - }, - { - type: 2, - name: 'NTTData', - logo: 'nttdata', - link: 'http://de.nttdata.com/altemista-cloud', - blurb: 'NTT DATA, membre du groupe NTT, apporte la puissance du plus important fournisseur d’infrastructures au monde dans la communauté mondiale des K8.' 
- }, - { - type: 2, - name: 'OCTO', - logo: 'octo', - link: 'https://www.octo.academy/fr/formation/275-kubernetes-utiliser-architecturer-et-administrer-une-plateforme-de-conteneurs', - blurb: "La technologie OCTO fournit des services de formation, d'architecture, de conseil technique et de livraison, notamment des conteneurs et des Kubernetes." - }, - { - type: 0, - name: 'Logdna', - logo: 'logdna', - link: 'https://logdna.com/kubernetes', - blurb: 'Identifiez instantanément les problèmes de production avec LogDNA, la meilleure plate-forme de journalisation que vous utiliserez jamais. Commencez avec seulement 2 commandes kubectl.' - } - ] - - var kcspContainer = document.getElementById('kcspContainer') - var distContainer = document.getElementById('distContainer') - var ktpContainer = document.getElementById('ktpContainer') - var isvContainer = document.getElementById('isvContainer') - var servContainer = document.getElementById('servContainer') - - var sorted = partners.sort(function (a, b) { - if (a.name > b.name) return 1 - if (a.name < b.name) return -1 - return 0 - }) - - sorted.forEach(function (obj) { - var box = document.createElement('div') - box.className = 'partner-box' - - var img = document.createElement('img') - img.src = '/images/square-logos/' + obj.logo + '.png' - - var div = document.createElement('div') - - var p = document.createElement('p') - p.textContent = obj.blurb - - var link = document.createElement('a') - link.href = obj.link - link.target = '_blank' - link.textContent = 'Learn more' - - div.appendChild(p) - div.appendChild(link) - - box.appendChild(img) - box.appendChild(div) - - var container; - if (obj.type === 0) { - container = isvContainer; - } else if (obj.type === 1) { - container = servContainer; - } else if (obj.type === 2) { - container = kcspContainer; - } else if (obj.type === 3) { - container = distContainer; - } else if (obj.type === 4) { - container = ktpContainer; - } - - container.appendChild(box) - }) -})(); diff --git a/content/fr/partners/_index.html b/content/fr/partners/_index.html index 415b46b34f6ad..164c8f6fc4d8a 100644 --- a/content/fr/partners/_index.html +++ b/content/fr/partners/_index.html @@ -8,85 +8,48 @@ ---
    -    Kubernetes travaille avec des partenaires pour créer une base de code forte et dynamique prenant en charge un large éventail de plates-formes complémentaires.
    -    Fournisseurs de services certifiés Kubernetes
    -    Des fournisseurs de services aguerris ayant une expérience approfondie dans l'aide aux entreprises pour l'adoption de Kubernetes.
    -    Intéressé à devenir un KCSP?
    -    Distributions Kubernetes, plates-formes hébergées et installateurs certifiés
    -    La conformité logicielle garantit que chaque version de Kubernetes du fournisseur prend en charge les API requises.
    -    Intéressé à devenir Agréé Kubernetes?
    -    Formateurs Partenaires sur Kubernetes
    -    Fournisseurs de formation sélectionnés ayant une expérience approfondie de la formation aux technologies cloud natives.
    -    Intéressé à devenir un KTP?
    +    Kubernetes travaille avec des partenaires pour créer une base de code forte et dynamique prenant en charge un large éventail de plates-formes complémentaires.
    +    Fournisseurs de services certifiés Kubernetes
    +    Des fournisseurs de services aguerris ayant une expérience approfondie dans l'aide aux entreprises pour l'adoption de Kubernetes.
    +    Intéressé par devenir un KCSP ?
    +    Distributions Kubernetes, plates-formes hébergées et installateurs certifiés
    +    La conformité logicielle garantit que chaque version de Kubernetes du fournisseur prend en charge les API requises.
    +    Intéressé par devenir Agréé Kubernetes ?
    +    Formateurs Partenaires sur Kubernetes
    +    Fournisseurs de formation sélectionnés ayant une expérience approfondie de la formation aux technologies cloud natives.
    +    Intéressé par devenir un KTP ?
    +    {{< cncf-landscape helpers=true >}}
    - diff --git a/content/hi/docs/reference/glossary/aggregation-layer.md b/content/hi/docs/reference/glossary/aggregation-layer.md index 02f4ffd714dd5..1ecbaf52b0c75 100644 --- a/content/hi/docs/reference/glossary/aggregation-layer.md +++ b/content/hi/docs/reference/glossary/aggregation-layer.md @@ -13,8 +13,8 @@ tags: - operation --- -एग्रीगेशन लेयर आपको अपने क्लस्टर में अतिरिक्त कुबेरनेट्स-शैली API स्थापित करने देता है।``` +एग्रीगेशन लेयर आपको अपने क्लस्टर में अतिरिक्त कुबेरनेट्स-शैली API स्थापित करने देता है। -जब आपने {{< glossary_tooltip text="कुबेरनेट्स API सर्वर" term_id="kube-apiserver" >}} को [अतिरिक्त API का समर्थन](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) करने के लिए कॉन्फ़िगर किया हो, आप कुबेरनेट्स एपीआई में URL पथ का "दावा" करने के लिए `APIService` ऑब्जेक्ट जोड़ सकते हैं। +जब आपने {{< glossary_tooltip text="कुबेरनेट्स API सर्वर" term_id="kube-apiserver" >}} को [अतिरिक्त API का समर्थन](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) करने के लिए कॉन्फ़िगर किया हो, आप कुबेरनेट्स API में URL पाथ का "दावा" करने के लिए `APIService` ऑब्जेक्ट जोड़ सकते हैं। diff --git a/content/hi/docs/reference/glossary/application-architect.md b/content/hi/docs/reference/glossary/application-architect.md new file mode 100644 index 0000000000000..7be9c115f16f9 --- /dev/null +++ b/content/hi/docs/reference/glossary/application-architect.md @@ -0,0 +1,18 @@ +--- +title: एप्लीकेशन आर्किटेक्ट (Application Architect) +id: application-architect +date: 2018-04-12 +full_link: +short_description: > + किसी एप्लिकेशन के उच्च-स्तरीय रचना के लिए जिम्मेदार व्यक्ति। + +aka: +tags: + - user-type +--- + +किसी एप्लिकेशन के उच्च-स्तरीय रचना के लिए जिम्मेदार व्यक्ति। + + + +एक आर्किटेक्ट यह सुनिश्चित करता है कि एक ऐप का अमल इसे अपने आसपास के घटकों के साथ एक स्केलेबल, रखरखाव योग्य तरीके से बातचीत करने की अनुमति देता है। आसपास के घटकों में डेटाबेस, लॉगिंग इन्फ्रास्ट्रक्चर और अन्य माइक्रोसर्विसेज शामिल हैं। diff --git a/content/hi/docs/reference/glossary/application-developer.md b/content/hi/docs/reference/glossary/application-developer.md new file mode 100644 index 0000000000000..9f0ea06b02604 --- /dev/null +++ b/content/hi/docs/reference/glossary/application-developer.md @@ -0,0 +1,17 @@ +--- +title: एप्लिकेशन डेवलपर (Application Developer) +id: application-developer +date: 2018-04-12 +full_link: +short_description: > + एक व्यक्ति जो कुबेरनेट्स क्लस्टर में चलने वाले एप्लिकेशन लिखता है। +aka: +tags: + - user-type +--- + +एक व्यक्ति जो कुबेरनेट्स क्लस्टर में चलने वाले एप्लिकेशन लिखता है। + + + +एप्लिकेशन डेवलपर्स एप्लिकेशन के एक हिस्से पर ध्यान केंद्रित करते हैं। उनके फोकस का पैमाना आकारस्वरूप काफी भिन्न हो सकता है। diff --git a/content/hi/docs/reference/glossary/cidr.md b/content/hi/docs/reference/glossary/cidr.md new file mode 100644 index 0000000000000..4d03824ddcd30 --- /dev/null +++ b/content/hi/docs/reference/glossary/cidr.md @@ -0,0 +1,17 @@ +--- +title: सीआईडीआर (CIDR) +id: cidr +date: 2019-11-12 +full_link: +short_description: > + सीआईडीआर IP पतों के ब्लॉक का वर्णन करने के लिए एक संकेतन है और विभिन्न नेटवर्किंग कॉन्फ़िगरेशन में इसका भारी उपयोग किया जाता है। +aka: +tags: + - networking +--- + +सीआईडीआर (क्लासलेस इंटर-डोमेन रौटिंग) IP पतों के ब्लॉक का वर्णन करने के लिए एक संकेतन है और विभिन्न नेटवर्किंग कॉन्फ़िगरेशन में इसका भारी उपयोग किया जाता है। + + + +कुबेरनेट्स के संदर्भ में, प्रत्येक {{}} को आरंभिक पते के माध्यम से IP पतों की एक श्रृंखला और सीआईडीआर का उपयोग करके एक सबनेट मास्क सौंपा गया है। यह प्रत्येक {{}} को एक अद्वितीय IP पता निर्दिष्ट करने की अनुमति नोड्स को देता है। 
हालाँकि यह मूल रूप से IPv4 के लिए एक अवधारणा थी, IPv6 को शामिल करने के लिए सीआईडीआर का विस्तार किया गया है | diff --git a/content/hi/docs/reference/glossary/cla.md b/content/hi/docs/reference/glossary/cla.md new file mode 100644 index 0000000000000..8b22b14d5a270 --- /dev/null +++ b/content/hi/docs/reference/glossary/cla.md @@ -0,0 +1,19 @@ +--- +title: सीएलए (CLA/Contributor License Agreement) +id: cla +date: 2018-04-12 +full_link: https://github.com/kubernetes/community/blob/master/CLA.md +short_description: > + शर्तें जिसके तहत एक योगदानकर्ता अपने योगदान के लिए एक ओपन सोर्स प्रोजेक्ट को लाइसेंस देता है। + + +aka: +tags: +- community +--- + शर्तें जिसके तहत एक {{< glossary_tooltip text="योगदानकर्ता" term_id="contributor" >}} अपने योगदान के लिए एक ओपन सोर्स प्रोजेक्ट को लाइसेंस देता है। + + + + +सीएलए योगदान सामग्री और बौद्धिक संपदा से जुड़े कानूनी विवादों को सुलझाने में मदद करते है| diff --git a/content/hi/docs/reference/glossary/cloud-provider.md b/content/hi/docs/reference/glossary/cloud-provider.md new file mode 100644 index 0000000000000..d17f3196b2f51 --- /dev/null +++ b/content/hi/docs/reference/glossary/cloud-provider.md @@ -0,0 +1,22 @@ +--- +title: क्लाउड प्रदाता (Cloud Provider) +id: cloud-provider +date: 2018-04-12 +short_description: > + एक संगठन जो क्लाउड कंप्यूटिंग प्लेटफॉर्म प्रदान करता है। + +aka: + - क्लाउड सेवा प्रदाता (Cloud Service Provider) +tags: + - community +--- + +एक व्यवसाय या अन्य संगठन जो क्लाउड कंप्यूटिंग प्लेटफॉर्म प्रदान करता हैं। + + + +क्लाउड प्रदाता, जिन्हें कभी-कभी क्लाउड सेवा प्रदाता (CSP) भी कहा जाता है, क्लाउड कंप्यूटिंग प्लेटफॉर्म या सेवाएं प्रदान करते हैं। + +कई क्लाउड प्रदाता प्रबंधित अवसंरचना प्रदान करते हैं (जिन्हें Infrastructure as a Service या IaaS भी कहा जाता है)। प्रबंधित अवसंरचना के साथ क्लाउड प्रदाता सर्वर, स्टोरेज और नेटवर्किंग प्रदान करने के लिए जिम्मेदार है जबकि आप उसके ऊपरी लेयर्स का प्रबंधन करते हैं जैसे कि कुबेरनेट्स क्लस्टर चलाना। + +आप कुबेरनेट्स को एक प्रबंधित सेवा के रूप में भी पा सकते हैं; कई बार इसे Platform as a Service, या PaaS भी कहा जाता है। प्रबंधित कुबेरनेट्स के साथ, आपका क्लाउड प्रदाता कुबेरनेट्स कंट्रोल प्लेन{{< glossary_tooltip text="नोड" term_id="node" >}} और जिस अवसंरचना पर वे भरोसा करते हैं: नेटवर्किंग, स्टोरेज, और संभवतः अन्य तत्व जैसे लोड बैलेंसर्स के लिए जिम्मेदार है। diff --git a/content/hi/docs/reference/glossary/data-plane.md b/content/hi/docs/reference/glossary/data-plane.md new file mode 100644 index 0000000000000..d8ca88614b8a2 --- /dev/null +++ b/content/hi/docs/reference/glossary/data-plane.md @@ -0,0 +1,14 @@ +--- +title: डेटा प्लेन (Data Plane) +id: data-plane +date: 2019-05-12 +full_link: +short_description: > + वह परत जो CPU, मेमोरी, नेटवर्क और स्टोरेज जैसी क्षमता प्रदान करता है ताकि कंटेनर चल सकें और नेटवर्क से जुड़ सकें। + +aka: +tags: + - fundamental +--- + +वह परत जो CPU, मेमोरी, नेटवर्क और स्टोरेज जैसी क्षमता प्रदान करता है ताकि कंटेनर चल सकें और नेटवर्क से जुड़ सकें। diff --git a/content/hi/docs/reference/glossary/dockershim.md b/content/hi/docs/reference/glossary/dockershim.md new file mode 100644 index 0000000000000..11f86b59913ae --- /dev/null +++ b/content/hi/docs/reference/glossary/dockershim.md @@ -0,0 +1,18 @@ +--- +title: डॉकरशिम (Dockershim) +id: dockershim +date: 2022-04-15 +full_link: /dockershim +short_description: > + कुबेरनेट्स संस्करण 1.23 और पहले का एक घटक, जो कुबेरनेट्स सिस्टम घटकों को डॉकर इंजन के साथ संचार करने की अनुमति देता है। + +aka: +tags: + - fundamental +--- + +डॉकरशिम कुबेरनेट्स संस्करण 1.23 और पहले का एक घटक है। यह क्यूबलेट को {{}} के साथ संचार करने के लिए अनुमति 
देता है। + + + +संस्करण 1.24 से शुरू होकर, डॉकरशिम को कुबेरनेट्स से हटा दिया गया है। अधिक जानकारी के लिए, [डॉकरशिम FAQ](/dockershim) देखें। diff --git a/content/hi/docs/reference/glossary/etcd.md b/content/hi/docs/reference/glossary/etcd.md new file mode 100644 index 0000000000000..c1fe297287004 --- /dev/null +++ b/content/hi/docs/reference/glossary/etcd.md @@ -0,0 +1,20 @@ +--- +title: etcd +id: etcd +date: 2018-04-12 +full_link: /docs/tasks/administer-cluster/configure-upgrade-etcd/ +short_description: > + सभी क्लस्टर डेटा के लिए कुबेरनेट्स के बैकिंग स्टोर के रूप में उपयोग किया जाने वाला सुसंगत और उच्च उपलब्धता की-वैल्यू स्टोर। + +aka: +tags: +- architecture +- storage +--- + सभी क्लस्टर डेटा के लिए कुबेरनेट्स के बैकिंग स्टोर के रूप में उपयोग किया जाने वाला सुसंगत और उच्च उपलब्धता की-वैल्यू स्टोर। + + + +यदि आपका कुबेरनेट्स क्लस्टर बैकिंग स्टोर के रूप में etcd का उपयोग करता है, तो सुनिश्चित करें कि आपके पास उन डेटा के लिए एक [बैकअप](/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster) योजना है। + +आप आधिकारिक [प्रलेखन](https://etcd.io/docs/) में etcd के बारे में गहन जानकारी प्राप्त कर सकते हैं। diff --git a/content/hi/docs/reference/glossary/helm-chart.md b/content/hi/docs/reference/glossary/helm-chart.md new file mode 100644 index 0000000000000..9d4eabbda2d1b --- /dev/null +++ b/content/hi/docs/reference/glossary/helm-chart.md @@ -0,0 +1,18 @@ +--- +title: हेल्म चार्ट (Helm Chart) +id: helm-chart +date: 2018-04-12 +full_link: https://helm.sh/docs/topics/charts/ +short_description: > + हेल्म चार्ट(Helm Chart) पूर्व-कॉन्फ़िगर(pre-configured) किए गए कुबेरनेट्स संसाधनों का एक पैकेज है जिसे हेल्म टूल के माध्यम से प्रबंधित किया जा सकता है। +aka: +tags: + - tool +--- + +हेल्म चार्ट(Helm Chart) पूर्व-कॉन्फ़िगर (pre-configured) किए गए कुबेरनेट्स संसाधनों का एक पैकेज है जिसे हेल्म टूल के माध्यम से प्रबंधित किया जा सकता है। + + + +चार्ट्स कुबेरनेट्स एप्लिकेशन बनाने और साझा करने के लिए एक पुनरुत्पादनीय तरीका प्रदान करते हैं। +एक एकल चार्ट का उपयोग कुछ सरल, जैसे कि मेमकैच्ड पॉड (Memcached Pod), या फिर कुछ जटिल, जैसे HTTP सर्वर, डेटाबेस, कैश (cache) आदि के साथ एक फुल वेब ऐप स्टैक को डिप्लॉय करने के लिए किया जा सकता है। diff --git a/content/hi/docs/reference/glossary/istio.md b/content/hi/docs/reference/glossary/istio.md new file mode 100644 index 0000000000000..0170abbe4d1db --- /dev/null +++ b/content/hi/docs/reference/glossary/istio.md @@ -0,0 +1,19 @@ +--- +title: Istio +id: istio +date: 2018-04-12 +full_link: https://istio.io/docs/concepts/what-is-istio/ +short_description: > + एक ओपन प्लैटफ़ॉर्म (कुबेरनेट्स-विशिष्ट नहीं) जो माइक्रोसर्विसेज को एकीकृत करने, ट्रैफ़िक प्रवाह को प्रबंधित करने, नीतियों को लागू करने और टेलीमेट्री डेटा को एकत्र करने का एक समान तरीका प्रदान करता हैं। +aka: +tags: +- networking +- architecture +- extension +--- + + एक ओपन प्लैटफ़ॉर्म (कुबेरनेट्स-विशिष्ट नहीं) जो माइक्रोसर्विसेज को एकीकृत करने, ट्रैफ़िक प्रवाह को प्रबंधित करने, नीतियों को लागू करने और टेलीमेट्री डेटा को एकत्र करने का एक समान तरीका प्रदान करता हैं। + + + +Istio को जोड़ने के लिए एप्लिकेशन कोड बदलने की आवश्यकता नहीं है। यह एक सर्विस और नेटवर्क के बीच बुनियादी ढांचे की एक परत है, जिसे जब सर्विस डिप्लॉयमेंट के साथ जोड़ा जाता है, तो इसे आमतौर पर सर्विस मैश (Service Mesh) के रूप में भी जाना जाता है। Istio का कंट्रोल प्लेन अंतर्निहित क्लस्टर प्रबंधन प्लैटफ़ॉर्म को अलग कर देता है, जो कुबेरनेट्स, Mesosphere आदि हो सकते हैं। diff --git a/content/hi/docs/reference/glossary/operator-pattern.md b/content/hi/docs/reference/glossary/operator-pattern.md new file mode 100644 index 
0000000000000..57179224f9706 --- /dev/null +++ b/content/hi/docs/reference/glossary/operator-pattern.md @@ -0,0 +1,19 @@ +--- +title: ऑपरेटर पैटर्न +id: operator-pattern +date: 2019-05-21 +full_link: /docs/concepts/extend-kubernetes/operator/ +short_description: > + कस्टम संसाधन का प्रबंधन करने के लिए उपयोग किया जाने वाला एक विशेष नियंत्रक + +aka: +tags: +- architecture +--- +[ऑपरेटर पैटर्न](/docs/concepts/extend-kubernetes/operator/) एक सिस्टम रचना है जो {{< glossary_tooltip text="नियंत्रक" term_id="controller" >}} +को एक या अधिक कस्टम संसाधनों से जोड़ता है। + + +आप अंतर्निहित नियंत्रक, जो स्वयं कुबेरनेट्स का हिस्सा हैं, का उपयोग करने से परे, अपने क्लस्टर में नियंत्रकों को जोड़कर कुबेरनेट्स की कार्यक्षमता का विस्तार कर सकते हैं। + +यदि कोई चालू एप्लिकेशन नियंत्रक के रूप में कार्य करता है और उसके पास कंट्रोल प्लेन में परिभाषित कस्टम संसाधन पर कार्य करने के लिए API अभिगम है, तो यह ऑपरेटर पैटर्न का एक उदाहरण है। diff --git a/content/hi/docs/reference/glossary/pod-disruption.md b/content/hi/docs/reference/glossary/pod-disruption.md new file mode 100644 index 0000000000000..ca08d1dc78b0a --- /dev/null +++ b/content/hi/docs/reference/glossary/pod-disruption.md @@ -0,0 +1,22 @@ +--- +title: पॉड विघटन (Pod Disruption) +id: pod-disruption +date: 2021-05-12 +full_link: /docs/concepts/workloads/pods/disruptions/ +short_description: > + पॉड विघटन वह प्रक्रिया है जिसके द्वारा नोड्स पर पॉड्स को स्वेच्छा से या अनैच्छिक रूप से समाप्त कर दिया जाता है। + +aka: +related: + - pod + - container +tags: + - operation +--- + +[पॉड विघटन](/docs/concepts/workloads/pods/disruptions/) वह प्रक्रिया है जिसके द्वारा नोड्स पर पॉड्स को स्वेच्छा से या अनैच्छिक रूप से समाप्त कर दिया जाता है। + + + +स्वैच्छिक विघटन एप्लीकेशन मालिक या फिर क्लस्टर प्रशासक अभिप्रायपूर्वक चालू करते है। +अनैच्छिक विघटन अनजाने में होते है और वो अपरिहार्य वजह से उत्पन्न हो सकते हैं जैसे कि नोड्स के पास संसाधन ख़तम हो जाना या आकस्मिक विलोपन। diff --git a/content/hi/docs/reference/glossary/pod.md b/content/hi/docs/reference/glossary/pod.md new file mode 100644 index 0000000000000..1496b7ab31f4d --- /dev/null +++ b/content/hi/docs/reference/glossary/pod.md @@ -0,0 +1,19 @@ +--- +title: पॉड (Pod) +id: pod +date: 2018-04-12 +full_link: /docs/concepts/workloads/pods/ +short_description: > + पॉड आपके क्लस्टर में चल रहे कंटेनरों के समूह का प्रतिनिधित्व करता है। + +aka: +tags: + - core-object + - fundamental +--- + +सबसे छोटी और सरल कुबेरनेट्स वस्तु। पॉड आपके क्लस्टर में चल रहे {{< glossary_tooltip text="कंटेनरों" term_id="container" >}} के समूह का प्रतिनिधित्व करता है। + + + +एक पॉड आमतौर पर एक प्राथमिक कंटेनर चलाने के लिए स्थापित किया जाता है। यह वैकल्पिक साइडकार कंटेनर भी चला सकता है जो लॉगिंग जैसे पूरक सुविधाओं को जोड़ता है। पॉड्स को आमतौर पर एक {{< glossary_tooltip text="डिप्लॉयमेंट" term_id="deployment" >}} द्वारा प्रबंधित किया जाता है। diff --git a/content/hi/docs/setup/production-environment/_index.md b/content/hi/docs/setup/production-environment/_index.md index cf1f2f753f4f3..03e641cb6475d 100644 --- a/content/hi/docs/setup/production-environment/_index.md +++ b/content/hi/docs/setup/production-environment/_index.md @@ -59,12 +59,12 @@ no_list: true विवरण के लिए [एक बाहरी लोड बैलेंसर बनाना](/docs/tasks/access-application-cluster/create-external-load-balancer/) देखें। - *अलग और बैकअप etcd सेवा*: अतिरिक्त सुरक्षा और उपलब्धता के लिए etcd सेवाएं या तो अन्य कंट्रोल प्लेन सेवाओं के समान मशीनों पर चल सकती हैं या अलग मशीनों पर चल सकती हैं। क्योंकि etcd क्लस्टर कॉन्फ़िगरेशन डेटा संग्रहीत करता है, etcd डेटाबेस का बैकअप नियमित रूप से किया जाना चाहिए ताकि यह 
सुनिश्चित हो सके कि यदि आवश्यक हो तो आप उस डेटाबेस की मरम्मत कर सकते हैं। etcd को कॉन्फ़िगर करने और उपयोग करने के विवरण के लिए [etcd अक्सर पूछे जाने वाले प्रश्न](https://etcd.io/docs/v3.5/faq/) देखें। -विवरण के लिए [कुबेरनेट्स के लिए ऑपरेटिंग etcd क्लस्टर](/docs/tasks/administer-cluster/configure-upgrade-etcd/) और [kubeadm के साथ एक उच्च उपलब्धता etcd क्लस्टर स्थापित करें](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) देखें। +विवरण के लिए [कुबेरनेट्स के लिए ऑपरेटिंग etcd क्लस्टर](/docs/tasks/administer-cluster/configure-upgrade-etcd/) और [क्यूबएडीएम के साथ एक उच्च उपलब्धता etcd क्लस्टर स्थापित करें](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) देखें। - *मल्टीपल कण्ट्रोल प्लेन सिस्टम बनाएं*: उच्च उपलब्धता के लिए, कण्ट्रोल प्लेन एक मशीन तक सीमित नहीं होना चाहिए। यदि कण्ट्रोल प्लेन सेवाएं एक init सेवा (जैसे systemd) द्वारा चलाई जाती हैं, तो प्रत्येक सेवा को कम से कम तीन मशीनों पर चलना चाहिए। हालाँकि, कुबेरनेट्स में पॉड्स के रूप में कण्ट्रोल प्लेन सेवाएं चलाना सुनिश्चित करता है कि आपके द्वारा अनुरोधित सेवाओं की प्रतिकृति संख्या हमेशा उपलब्ध रहेगी। अनुसूचक फॉल्ट सहने वाला होना चाहिए, लेकिन अत्यधिक उपलब्ध नहीं होना चाहिए। कुबेरनेट्स सेवाओं के नेता चुनाव करने के लिए कुछ डिप्लॉयमेंट उपकरण [राफ्ट](https://raft.github.io/) सर्वसम्मति एल्गोरिथ्म की स्थापना करते हैं। यदि प्राथमिक चला जाता है, तो दूसरी सेवा स्वयं को चुनती है और कार्यभार संभालती है। - *कई क्षेत्रों में विस्तार करना*: यदि अपने क्लस्टर को हर समय उपलब्ध रखना महत्वपूर्ण है, तो एक ऐसा क्लस्टर बनाने पर विचार करें, जो कई डेटा केंद्रों पर चलता हो, जिसे क्लाउड वातावरण में ज़ोन के रूप में संदर्भित किया जाता है। ज़ोन(zone) के समूहों को रीजन(region) कहा जाता है। एक ही क्षेत्र में कई क्षेत्रों में एक क्लस्टर फैलाकर, यह इस संभावना में सुधार कर सकता है कि एक क्षेत्र अनुपलब्ध होने पर भी आपका क्लस्टर कार्य करना जारी रखेगा। -विवरण के लिए [Running in multiple zones](/docs/setup/best-practices/multiple-zones/) देखें। -- *चल रही सुविधाओं का प्रबंधन*: यदि आप अपने क्लस्टर को समय के साथ रखने की योजना बनाते हैं, तो इसके स्वास्थ्य और सुरक्षा को बनाए रखने के लिए आपको कुछ कार्य करने होंगे। उदाहरण के लिए, यदि आपने kubeadm के साथ स्थापित किया है, तो आपको [सर्टिफिकेट प्रबंधन](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/) और [kubeadm क्लस्टर्स को अपग्रेड करने](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) में मदद करने के लिए निर्देश दिए गए हैं, कुबेरनेट्स प्रशासनिक कार्यों की लंबी सूची के लिए [क्लस्टर का एडमिनिस्टर](/docs/tasks/administer-cluster/) देखें। +विवरण के लिए [एक से अधिक ज़ोन मे चलाना](/docs/setup/best-practices/multiple-zones/) देखें। +- *चल रही सुविधाओं का प्रबंधन*: यदि आप अपने क्लस्टर को समय के साथ रखने की योजना बनाते हैं, तो इसके स्वास्थ्य और सुरक्षा को बनाए रखने के लिए आपको कुछ कार्य करने होंगे। उदाहरण के लिए, यदि आपने क्यूबएडीएम के साथ स्थापित किया है, तो आपको [सर्टिफिकेट प्रबंधन](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/) और [क्यूबएडीएम क्लस्टर्स को अपग्रेड करने](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) में मदद करने के लिए निर्देश दिए गए हैं, कुबेरनेट्स प्रशासनिक कार्यों की लंबी सूची के लिए [क्लस्टर का एडमिनिस्टर](/docs/tasks/administer-cluster/) देखें। जब आप कण्ट्रोल प्लेन सेवाएं चलाते हैं, तो उपलब्ध विकल्पों के बारे में जानने के लिए, [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/), [क्यूब-कंट्रोलर-मैनेजर](/docs/reference/command-line-tools-reference/kube-controller-manager/), देखें। और [क्यूब-शेड्यूलर](/docs/reference/command-line-tools-reference/kube-scheduler/) कॉम्पोनेन्ट पेज। अत्यधिक उपलब्ध कंट्रोल प्लेन 
उदाहरणों के लिए [अत्यधिक उपलब्ध टोपोलॉजी के लिए विकल्प](/docs/setup/production-environment/tools/kubeadm/ha-topology/), diff --git a/content/id/docs/concepts/cluster-administration/manage-deployment.md b/content/id/docs/concepts/cluster-administration/manage-deployment.md index d67da9c13eb1b..4bdc7f790e6dc 100644 --- a/content/id/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/id/docs/concepts/cluster-administration/manage-deployment.md @@ -319,7 +319,7 @@ Saat beban aplikasi naik maupun turun, mudah untuk mengubah kapasitas dengan `ku kubectl scale deployment/my-nginx --replicas=1 ``` ```shell -deployment.extensions/my-nginx scaled +deployment.apps/my-nginx scaled ``` Sekarang kamu hanya memiliki satu _pod_ yang dikelola oleh deployment. diff --git a/content/id/docs/concepts/configuration/secret.md b/content/id/docs/concepts/configuration/secret.md index c0a1e750bfcee..4a30e2b28ce04 100644 --- a/content/id/docs/concepts/configuration/secret.md +++ b/content/id/docs/concepts/configuration/secret.md @@ -559,7 +559,7 @@ apakah terdapat perubahan pada Secret yang telah di-_mount_. Meskipun demikian, proses pengecekan ini dilakukan dengan menggunakan _cache_ lokal untuk mendapatkan _value_ saat ini dari sebuah Secret. Tipe _cache_ yang ada dapat diatur dengan menggunakan (_field_ `ConfigMapAndSecretChangeDetectionStrategy` pada -[_struct_ KubeletConfiguration](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go)). +[KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/)). Mekanisme ini kemudian dapat diteruskan dengan mekanisme _watch_(_default_), ttl, atau melakukan pengalihan semua _request_ secara langsung pada kube-apiserver. Sebagai hasilnya, _delay_ total dari pertama kali Secret diubah hingga dilakukannya mekanisme diff --git a/content/id/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md b/content/id/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md index 62f7c8d41d737..0ed2382212e4d 100644 --- a/content/id/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md +++ b/content/id/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md @@ -221,7 +221,7 @@ Berikut beberapa contoh implementasi _plugin_ perangkat: * [Plugin perangkat RDMA](https://github.com/hustcat/k8s-rdma-device-plugin) * [Plugin perangkat Solarflare](https://github.com/vikaschoudhary16/sfc-device-plugin) * [Plugin perangkat SR-IOV Network](https://github.com/intel/sriov-network-device-plugin) -* [Plugin perangkat Xilinx FPGA](https://github.com/Xilinx/FPGA_as_a_Service/tree/master/k8s-fpga-device-plugin) untuk perangkat Xilinx FPGA +* [Plugin perangkat Xilinx FPGA](https://github.com/Xilinx/FPGA_as_a_Service/tree/master/k8s-device-plugin) untuk perangkat Xilinx FPGA ## {{% heading "whatsnext" %}} diff --git a/content/id/docs/concepts/security/overview.md b/content/id/docs/concepts/security/overview.md index 0346314f3afae..5fa480809c1ae 100644 --- a/content/id/docs/concepts/security/overview.md +++ b/content/id/docs/concepts/security/overview.md @@ -34,6 +34,7 @@ IaaS Provider | Link | Alibaba Cloud | https://www.alibabacloud.com/trust-center | Amazon Web Services | https://aws.amazon.com/security/ | Google Cloud Platform | https://cloud.google.com/security/ | +Huawei Cloud | https://www.huaweicloud.com/securecenter/overallsafety.html | IBM Cloud | https://www.ibm.com/cloud/security | Microsoft Azure | 
https://docs.microsoft.com/en-us/azure/security/azure-security | Oracle Cloud Infrastructure | https://www.oracle.com/security/ | diff --git a/content/id/docs/concepts/storage/persistent-volumes.md b/content/id/docs/concepts/storage/persistent-volumes.md index 51163d36a9ac9..f77f60ef18230 100644 --- a/content/id/docs/concepts/storage/persistent-volumes.md +++ b/content/id/docs/concepts/storage/persistent-volumes.md @@ -166,7 +166,7 @@ Namun, alamat yang dispesifikasikan pada templat _recycler pod_ kustom pada bagi ### Memperluas _Persistent Volumes Claim_ -{{< feature-state for_k8s_version="v1.11" state="beta" >}} +{{< feature-state for_k8s_version="v1.24" state="stable" >}} Dukungan untuk memperluas PersistentVolumeClaim (PVC) sekarang sudah diaktifkan sejak awal. Kamu dapat memperluas tipe-tipe volume berikut: diff --git a/content/id/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/id/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md index c4f68432ab921..c05757cbba507 100644 --- a/content/id/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md +++ b/content/id/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md @@ -614,7 +614,7 @@ data dan mungkin harus dibuat kembali dari awal. Solusi: -* Lakukan [back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html) secara reguler. Direktori data +* Lakukan [back up etcd](https://etcd.io/docs/v3.5/op-guide/recovery/) secara reguler. Direktori data etcd yang dikonfigurasi oleh kubeadm berada di `/var/lib/etcd` pada Node _control-plane_. * Gunakan banyak Node _control-plane_. Kamu dapat membaca diff --git a/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md b/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md index 5a850cb73936a..349a283d7a01a 100644 --- a/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md +++ b/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md @@ -208,7 +208,7 @@ Ketika `.spec.suspend` diubah dari `true` ke `false` pada CronJob yang memiliki ### Batas Riwayat Pekerjaan -_Field_ `.spec.successfulJobHistoryLimit` dan `.spec.failedJobHistoryLimit` juga opsional. +_Field_ `.spec.successfulJobsHistoryLimit` dan `.spec.failedJobsHistoryLimit` juga opsional. _Field_ tersebut menentukan berapa banyak Job yang sudah selesai dan gagal yang harus disimpan. Secara bawaan, masing-masing _field_ tersebut disetel 3 dan 1. Mensetel batas ke `0` untuk menjaga tidak ada Job yang sesuai setelah Job tersebut selesai. 
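For reference, a minimal CronJob sketch showing where the corrected `.spec.successfulJobsHistoryLimit` and `.spec.failedJobsHistoryLimit` fields sit; the manifest below is illustrative only (name, schedule, and image are assumptions), while the defaults of 3 and 1 come from the surrounding text.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: history-limit-example   # illustrative name
spec:
  schedule: "*/5 * * * *"       # every five minutes (example schedule)
  successfulJobsHistoryLimit: 3 # default: keep the 3 most recent successful Jobs
  failedJobsHistoryLimit: 1     # default: keep only the most recent failed Job
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            command: ["sh", "-c", "date"]
          restartPolicy: OnFailure
```

Setting either field to `0` keeps no finished Jobs of that kind, as noted above.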
diff --git a/content/ja/docs/concepts/architecture/garbage-collection.md b/content/ja/docs/concepts/architecture/garbage-collection.md index 11f4796fd1b75..2ac48608b911a 100644 --- a/content/ja/docs/concepts/architecture/garbage-collection.md +++ b/content/ja/docs/concepts/architecture/garbage-collection.md @@ -86,10 +86,10 @@ Kubernetesがオーナーオブジェクトを削除すると、残された依 ## 未使用のコンテナとイメージのガベージコレクション {#containers-images} -{{}}は未使用のイメージに対して5分ごとに、未使用のコンテナーに対して1分ごとにガベージコレクションを実行します。 +{{}}は未使用のイメージに対して5分ごとに、未使用のコンテナに対して1分ごとにガベージコレクションを実行します。 外部のガベージコレクションツールは、kubeletの動作を壊し、存在するはずのコンテナを削除する可能性があるため、使用しないでください。 -未使用のコンテナーとイメージのガベージコレクションのオプションを設定するには、[設定ファイル](/docs/tasks/administer-cluster/kubelet-config-file/)を使用してkubeletを調整し、[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)リソースタイプを使用してガベージコレクションに関連するパラメーターを変更します。 +未使用のコンテナとイメージのガベージコレクションのオプションを設定するには、[設定ファイル](/docs/tasks/administer-cluster/kubelet-config-file/)を使用してkubeletを調整し、[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)リソースタイプを使用してガベージコレクションに関連するパラメーターを変更します。 ### コンテナイメージのライフサイクル @@ -108,12 +108,12 @@ kubeletは、次の変数に基づいて未使用のコンテナをガベージ * `MinAge`: kubeletがガベージコレクションできるコンテナの最低期間。`0`を設定すると無効化されます。 * `MaxPerPodContainer`: 各Podのペアが持つことができるデッドコンテナの最大数。`0`未満に設定すると無効化されます。 - * `MaxContainers`: クラスターが持つことができるデッドコンテナーの最大数。`0`未満に設定すると無効化されます。 + * `MaxContainers`: クラスターが持つことができるデッドコンテナの最大数。`0`未満に設定すると無効化されます。 これらの変数に加えて、kubeletは、通常、最も古いものから順に、定義されていない削除されたコンテナをガベージコレクションします。 -`MaxPerPodContainer`と`MaxContainers`は、Podごとのコンテナーの最大数(`MaxPerPodContainer`)を保持すると、グローバルなデッドコンテナの許容合計(`MaxContainers`)を超える状況で、互いに競合する可能性があります。 -この状況では、kubeletは`MaxPerPodContainer`を調整して競合に対処します。最悪のシナリオは、`MaxPerPodContainer`を1にダウングレードし、最も古いコンテナーを削除することです。 +`MaxPerPodContainer`と`MaxContainers`は、Podごとのコンテナの最大数(`MaxPerPodContainer`)を保持すると、グローバルなデッドコンテナの許容合計(`MaxContainers`)を超える状況で、互いに競合する可能性があります。 +この状況では、kubeletは`MaxPerPodContainer`を調整して競合に対処します。最悪のシナリオは、`MaxPerPodContainer`を1にダウングレードし、最も古いコンテナを削除することです。 さらに、削除されたPodが所有するコンテナは、`MinAge`より古くなると削除されます。 {{}} diff --git a/content/ja/docs/concepts/architecture/nodes.md b/content/ja/docs/concepts/architecture/nodes.md index 6affe7ccb7096..101e4da14b5d5 100644 --- a/content/ja/docs/concepts/architecture/nodes.md +++ b/content/ja/docs/concepts/architecture/nodes.md @@ -92,7 +92,7 @@ kubectl cordon $ノード名 これは、再起動の準備中にアプリケーションからアプリケーションが削除されている場合でも、DaemonSetがマシンに属していることを前提としているためです。 {{< /note >}} -## ノードのステータス +## ノードのステータス {#node-status} ノードのステータスは以下の情報を含みます: @@ -176,7 +176,7 @@ CapacityとAllocatableについて深く知りたい場合は、ノード上で この情報はノードからkubeletを通じて取得され、Kubernetes APIに公開されます。 -## ハートビート +## ハートビート {#heartbeats} ハートビートは、Kubernetesノードから送信され、ノードが利用可能か判断するのに役立ちます。 以下の2つのハートビートがあります: * Nodeの`.status`の更新 @@ -191,7 +191,7 @@ kubeletが`NodeStatus`とLeaseオブジェクトの作成および更新を担 -## ノードコントローラー +## ノードコントローラー {#node-controller} ノード{{< glossary_tooltip text="コントローラー" term_id="controller" >}}は、ノードのさまざまな側面を管理するKubernetesのコントロールプレーンコンポーネントです。 @@ -206,7 +206,7 @@ kubeletが`NodeStatus`とLeaseオブジェクトの作成および更新を担 ノードコントローラーは、`--node-monitor-period`に設定された秒数ごとに各ノードの状態をチェックします。 -#### 信頼性 +#### 信頼性 {#rate-limits-on-eviction} ほとんどの場合、排除の速度は1秒あたり`--node-eviction-rate`に設定された数値(デフォルトは秒間0.1)です。つまり、10秒間に1つ以上のPodをノードから追い出すことはありません。 @@ -228,7 +228,7 @@ kubeletが`NodeStatus`とLeaseオブジェクトの作成および更新を担 サービスコントローラーの副次的な効果をもたらします。これにより、ロードバランサトラフィックの流入をcordonされたノードから効率的に除去する事ができます。 {{< /caution >}} -### ノードのキャパシティ +### ノードのキャパシティ {#node-capacity} 
Nodeオブジェクトはノードのリソースキャパシティ(CPUの数とメモリの量)を監視します。 [自己登録](#self-registration-of-nodes)したノードは、Nodeオブジェクトを作成するときにキャパシティを報告します。 @@ -241,7 +241,7 @@ Kubernetes{{< glossary_tooltip text="スケジューラー" term_id="kube-schedu Pod以外のプロセス用にリソースを明示的に予約したい場合は、[Systemデーモン用にリソースを予約](/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved)を参照してください。 {{< /note >}} -## ノードのトポロジー +## ノードのトポロジー {#node-topology} {{< feature-state state="alpha" for_k8s_version="v1.16" >}} `TopologyManager`の[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)を有効にすると、 diff --git a/content/ja/docs/concepts/cluster-administration/certificates.md b/content/ja/docs/concepts/cluster-administration/certificates.md index 2cc32294ffe5d..9a5212ad66f88 100644 --- a/content/ja/docs/concepts/cluster-administration/certificates.md +++ b/content/ja/docs/concepts/cluster-administration/certificates.md @@ -105,7 +105,7 @@ weight: 20 openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \ -CAcreateserial -out server.crt -days 10000 \ - -extensions v3_ext -extfile csr.conf + -extensions v3_ext -extfile csr.conf -sha256 1. 証明書を表示します。 openssl x509 -noout -text -in ./server.crt diff --git a/content/ja/docs/concepts/configuration/configmap.md b/content/ja/docs/concepts/configuration/configmap.md index f7d9ea7aa01d9..4956632a5d089 100644 --- a/content/ja/docs/concepts/configuration/configmap.md +++ b/content/ja/docs/concepts/configuration/configmap.md @@ -164,7 +164,7 @@ Pod内に複数のコンテナが存在する場合、各コンテナにそれ #### マウントしたConfigMapの自動的な更新 -ボリューム内で現在使用中のConfigMapが更新されると、射影されたキーも最終的に(eventually)更新されます。kubeletは定期的な同期のたびにマウントされたConfigMapが新しいかどうか確認します。しかし、kubeletが現在のConfigMapの値を取得するときにはローカルキャッシュを使用します。キャッシュの種類は、[KubeletConfiguration構造体](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go)の中の`ConfigMapAndSecretChangeDetectionStrategy`フィールドで設定可能です。ConfigMapは、監視(デフォルト)、ttlベース、またはすべてのリクエストを直接APIサーバーへ単純にリダイレクトする方法のいずれかによって伝搬されます。その結果、ConfigMapが更新された瞬間から、新しいキーがPodに射影されるまでの遅延の合計は、最長でkubeletの同期期間+キャッシュの伝搬遅延になります。ここで、キャッシュの伝搬遅延は選択したキャッシュの種類に依存します(監視の伝搬遅延、キャッシュのttl、または0に等しくなります)。 +ボリューム内で現在使用中のConfigMapが更新されると、射影されたキーも最終的に(eventually)更新されます。kubeletは定期的な同期のたびにマウントされたConfigMapが新しいかどうか確認します。しかし、kubeletが現在のConfigMapの値を取得するときにはローカルキャッシュを使用します。キャッシュの種類は、[KubeletConfiguration構造体](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubelet/config/v1beta1/types.go)の中の`ConfigMapAndSecretChangeDetectionStrategy`フィールドで設定可能です。ConfigMapは、監視(デフォルト)、ttlベース、またはすべてのリクエストを直接APIサーバーへ単純にリダイレクトする方法のいずれかによって伝搬されます。その結果、ConfigMapが更新された瞬間から、新しいキーがPodに射影されるまでの遅延の合計は、最長でkubeletの同期期間+キャッシュの伝搬遅延になります。ここで、キャッシュの伝搬遅延は選択したキャッシュの種類に依存します(監視の伝搬遅延、キャッシュのttl、または0に等しくなります)。 環境変数として使用されるConfigMapは自動的に更新されないため、ポッドを再起動する必要があります。 ## イミュータブルなConfigMap {#configmap-immutable} diff --git a/content/ja/docs/concepts/configuration/manage-resources-containers.md b/content/ja/docs/concepts/configuration/manage-resources-containers.md index 6d512e5d925e7..499e2a7214976 100644 --- a/content/ja/docs/concepts/configuration/manage-resources-containers.md +++ b/content/ja/docs/concepts/configuration/manage-resources-containers.md @@ -24,9 +24,9 @@ Podが動作しているNodeに利用可能なリソースが十分にある場 たとえば、コンテナに256MiBの`メモリー`要求を設定し、そのコンテナが8GiBのメモリーを持つNodeにスケジュールされたPod内に存在し、他のPodが存在しない場合、コンテナはより多くのRAMを使用しようとする可能性があります。 -そのコンテナに4GiBの`メモリー`制限を設定すると、kubelet(および{{< glossary_tooltip text="コンテナランタイム" term_id="container-runtime" >}}) 
が制限を適用します。ランタイムは、コンテナーが設定済みのリソース制限を超えて使用するのを防ぎます。例えば、コンテナ内のプロセスが、許容量を超えるメモリを消費しようとすると、システムカーネルは、メモリ不足(OOM)エラーで、割り当てを試みたプロセスを終了します。 +そのコンテナに4GiBの`メモリー`制限を設定すると、kubelet(および{{< glossary_tooltip text="コンテナランタイム" term_id="container-runtime" >}}) が制限を適用します。ランタイムは、コンテナが設定済みのリソース制限を超えて使用するのを防ぎます。例えば、コンテナ内のプロセスが、許容量を超えるメモリを消費しようとすると、システムカーネルは、メモリ不足(OOM)エラーで、割り当てを試みたプロセスを終了します。 -制限は、違反が検出されるとシステムが介入するように事後的に、またはコンテナーが制限を超えないようにシステムが防ぐように強制的に、実装できます。 +制限は、違反が検出されるとシステムが介入するように事後的に、またはコンテナが制限を超えないようにシステムが防ぐように強制的に、実装できます。 異なるランタイムは、同じ制限を実装するために異なる方法をとることができます。 {{< note >}} diff --git a/content/ja/docs/concepts/configuration/secret.md b/content/ja/docs/concepts/configuration/secret.md index 3a2dd6ce43141..f0e9b248d117c 100644 --- a/content/ja/docs/concepts/configuration/secret.md +++ b/content/ja/docs/concepts/configuration/secret.md @@ -582,7 +582,7 @@ cat /etc/foo/password ボリュームとして使用されているSecretが更新されると、やがて割り当てられたキーも同様に更新されます。 kubeletは定期的な同期のたびにマウントされたSecretが新しいかどうかを確認します。 しかしながら、kubeletはSecretの現在の値の取得にローカルキャッシュを使用します。 -このキャッシュは[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go)内の`ConfigMapAndSecretChangeDetectionStrategy`フィールドによって設定可能です。 +このキャッシュは[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubelet/config/v1beta1/types.go)内の`ConfigMapAndSecretChangeDetectionStrategy`フィールドによって設定可能です。 Secretはwatch(デフォルト)、TTLベース、単に全てのリクエストをAPIサーバーへリダイレクトすることのいずれかによって伝搬します。 結果として、Secretが更新された時点からPodに新しいキーが反映されるまでの遅延時間の合計は、kubeletの同期間隔 + キャッシュの伝搬遅延となります。 キャッシュの遅延は、キャッシュの種別により、それぞれwatchの伝搬遅延、キャッシュのTTL、0になります。 @@ -838,7 +838,7 @@ spec: /etc/secret-volume/ssh-privatekey ``` -コンテナーはSecretのデータをSSH接続を確立するために使用することができます。 +コンテナはSecretのデータをSSH接続を確立するために使用することができます。 ### ユースケース: 本番、テスト用の認証情報を持つPod diff --git a/content/ja/docs/concepts/containers/overview.md b/content/ja/docs/concepts/containers/overview.md deleted file mode 100644 index 6659bfb35c921..0000000000000 --- a/content/ja/docs/concepts/containers/overview.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: コンテナの概要 -content_type: concept -weight: 1 ---- - - - -コンテナは、アプリケーションの(コンパイルされた)コードと、実行時に必要な依存関係をパッケージ化するための技術です。実行する各コンテナは再現性があります。依存関係を含めることによる標準化は、どこで実行しても同じ動作が得られることを意味します。 - -コンテナは、基礎となるホストインフラストラクチャからアプリケーションを切り離します。これにより、さまざまなクラウド環境やOS環境でのデプロイが容易になります。 - - - -## コンテナイメージ {#container-images} -[コンテナイメージ](/docs/concepts/containers/images/)は、アプリケーションを実行するために必要なすべてのものを含んだ、すぐに実行可能なソフトウェアパッケージです。コードとそれが必要とする任意のランタイム、アプリケーションとシステムのライブラリ、および必須の設定のデフォルト値が含まれています。 - -設計上、コンテナは不変であるため、すでに実行中のコンテナのコードを変更することはできません。コンテナ化されたアプリケーションがあり、変更を加えたい場合は、変更を含む新しいコンテナをビルドし、コンテナを再作成して更新されたイメージから起動する必要があります。 - -## コンテナランタイム {#container-runtimes} - -{{< glossary_definition term_id="container-runtime" length="all" >}} - -## {{% heading "whatsnext" %}} -* [コンテナイメージ](/docs/concepts/containers/images/)についてお読みください。 -* [Pod](/ja/docs/concepts/workloads/pods/)についてお読みください。 diff --git a/content/ja/docs/concepts/extend-kubernetes/_index.md b/content/ja/docs/concepts/extend-kubernetes/_index.md index 132f1c100cfe4..67ccb50adf9a7 100644 --- a/content/ja/docs/concepts/extend-kubernetes/_index.md +++ b/content/ja/docs/concepts/extend-kubernetes/_index.md @@ -72,7 +72,7 @@ Webhookのモデルでは、Kubernetesは外部のサービスを呼び出しま 1. ユーザーは頻繁に`kubectl`を使って、Kubernetes APIとやり取りをします。[Kubectlプラグイン](/docs/tasks/extend-kubectl/kubectl-plugins/)は、kubectlのバイナリを拡張します。これは個別ユーザーのローカル環境のみに影響を及ぼすため、サイト全体にポリシーを強制することはできません。 2. 
APIサーバーは全てのリクエストを処理します。APIサーバーのいくつかの拡張ポイントは、リクエストを認可する、コンテキストに基づいてブロックする、リクエストを編集する、そして削除を処理することを可能にします。これらは[APIアクセス拡張](/docs/concepts/extend-kubernetes/#api-access-extensions)セクションに記載されています。 3. APIサーバーは様々な種類の *リソース* を扱います。`Pod`のような *ビルトインリソース* はKubernetesプロジェクトにより定義され、変更できません。ユーザーも、自身もしくは、他のプロジェクトで定義されたリソースを追加することができます。それは *カスタムリソース* と呼ばれ、[カスタムリソース](/docs/concepts/extend-kubernetes/#user-defined-types)セクションに記載されています。カスタムリソースは度々、APIアクセス拡張と一緒に使われます。 -4. KubernetesのスケジューラーはPodをどのノードに配置するかを決定します。スケジューリングを拡張するには、いくつかの方法があります。それらは[スケジューラー拡張](/docs/concepts/extend-kubernetes/#scheduler-extensions)セクションに記載されています。 +4. KubernetesのスケジューラーはPodをどのノードに配置するかを決定します。スケジューリングを拡張するには、いくつかの方法があります。それらは[スケジューラー拡張](#scheduling-extensions)セクションに記載されています。 5. Kubernetesにおける多くの振る舞いは、APIサーバーのクライアントであるコントローラーと呼ばれるプログラムに実装されています。コントローラーは度々、カスタムリソースと共に使われます。 6. kubeletはサーバー上で実行され、Podが仮想サーバーのようにクラスターネットワーク上にIPを持った状態で起動することをサポートします。[ネットワークプラグイン](/docs/concepts/extend-kubernetes/#network-plugins)がPodのネットワーキングにおける異なる実装を適用することを可能にします。 7. kubeletはまた、コンテナのためにボリュームをマウント、アンマウントします。新しい種類のストレージは[ストレージプラグイン](/docs/concepts/extend-kubernetes/#storage-plugins)を通じてサポートされます。 @@ -139,7 +139,7 @@ Kubernetesはいくつかのビルトイン認証方式と、それらが要件 他のネットワークファブリックが[ネットワークプラグイン](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)を通じてサポートされます。 -### スケジューラー拡張 +### スケジューラー拡張 {#scheduling-extensions} スケジューラーは特別な種類のコントローラーで、Podを監視し、Podをノードに割り当てます。デフォルトのコントローラーを完全に置き換えることもできますが、他のKubernetesのコンポーネントの利用を継続する、または[複数のスケジューラー](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)を同時に動かすこともできます。 diff --git a/content/ja/docs/concepts/overview/components.md b/content/ja/docs/concepts/overview/components.md index 3f696afd57f49..a1bdb4d5b14f3 100644 --- a/content/ja/docs/concepts/overview/components.md +++ b/content/ja/docs/concepts/overview/components.md @@ -2,8 +2,8 @@ title: Kubernetesのコンポーネント content_type: concept description: > - Kubernetesクラスターはコントロールプレーンやノードと呼ばれるマシン群といったコンポーネントからなります。 -weight: 20 + Kubernetesクラスターはコントロールプレーンのコンポーネントとノードと呼ばれるマシン群で構成されています。 +weight: 30 card: name: concepts weight: 20 @@ -15,11 +15,7 @@ Kubernetesをデプロイすると、クラスターが展開されます。 このドキュメントでは、Kubernetesクラスターが機能するために必要となるさまざまなコンポーネントの概要を説明します。 -すべてのコンポーネントが結び付けられたKubernetesクラスターの図を次に示します。 - -![Kubernetesのコンポーネント](/images/docs/components-of-kubernetes.svg) - - +{{< figure src="/images/docs/components-of-kubernetes.svg" alt="Kubernetesのコンポーネント" caption="Kubernetesクラスターを構成するコンポーネント" class="diagram-large" >}} @@ -28,7 +24,7 @@ Kubernetesをデプロイすると、クラスターが展開されます。 コントロールプレーンコンポーネントは、クラスターに関する全体的な決定(スケジューリングなど)を行います。また、クラスターイベントの検出および応答を行います(たとえば、deploymentの`replicas`フィールドが満たされていない場合に、新しい {{< glossary_tooltip text="Pod" term_id="pod">}} を起動する等)。 コントロールプレーンコンポーネントはクラスター内のどのマシンでも実行できますが、シンプルにするため、セットアップスクリプトは通常、すべてのコントロールプレーンコンポーネントを同じマシンで起動し、そのマシンではユーザーコンテナを実行しません。 -マルチマスター VMセットアップの例については、[高可用性クラスターの構築](/docs/admin/high-availability/) を参照してください。 +複数のマシンにまたがって実行されるコントロールプレーンのセットアップ例については、[kubeadmを使用した高可用性クラスターの構築](/ja/docs/setup/production-environment/tools/kubeadm/high-availability/) を参照してください。 ### kube-apiserver @@ -49,24 +45,25 @@ Kubernetesをデプロイすると、クラスターが展開されます。 コントローラーには以下が含まれます。 * ノードコントローラー:ノードがダウンした場合の通知と対応を担当します。 - * レプリケーションコントローラー:システム内の全レプリケーションコントローラーオブジェクトについて、Podの数を正しく保つ役割を持ちます。 - * エンドポイントコントローラー:エンドポイントオブジェクトを注入します(つまり、ServiceとPodを紐付けます)。 - * サービスアカウントとトークンコントローラー:新規の名前空間に対して、デフォルトアカウントとAPIアクセストークンを作成します。 + * Jobコントローラー:単発タスクを表すJobオブジェクトを監視し、そのタスクを実行して完了させるためのPodを作成します。 + * EndpointSliceコントローラー:EndpointSliceオブジェクトを作成します(つまり、ServiceとPodを紐付けます)。 + * 
ServiceAccountコントローラー:新規の名前空間に対して、デフォルトのServiceAccountを作成します。 ### cloud-controller-manager {{< glossary_definition term_id="cloud-controller-manager" length="short" >}} cloud-controller-managerは、クラウドプロバイダー固有のコントローラーのみを実行します。 -KubernetesをオンプレミスあるいはPC内での学習環境で動かす際には、クラスターにcloud container managerはありません。 +Kubernetesをオンプレミスあるいは個人のPC内での学習環境で動かす際には、クラスターにcloud container managerはありません。 + +kube-controller-managerと同様に、cloud-controller-managerは複数の論理的に独立したコントロールループをシングルバイナリにまとめ、一つのプロセスとして動作します。パフォーマンスを向上させるあるいは障害に耐えるために水平方向にスケールする(一つ以上のコピーを動かす)ことができます。 -kube-controller-managerを使用すると、cloud-controller-managerは複数の論理的に独立したコントロールループをシングルバイナリにまとめ、これが一つのプロセスとして動作します。パフォーマンスを向上させるあるいは障害に耐えるために水平方向にスケールする(一つ以上のコピーを動かす)ことができます。 +次のコントローラーは、クラウドプロバイダーへの依存関係を持つことがあります。 -次のコントローラーには、クラウドプロバイダーへの依存関係を持つ可能性があります。 + * Nodeコントローラー:ノードが応答を停止した後、クラウドで削除されたかどうかを判断するため、クラウドプロバイダーをチェックします。 + * Routeコントローラー:基盤であるクラウドインフラでルーティングを設定します。 + * Serviceコントローラー:クラウドプロバイダーのロードバランサーの作成、更新、削除を行います。 - * ノードコントローラー:ノードが応答を停止した後、クラウドで削除されたかどうかを判断するため、クラウドプロバイダーをチェックします。 - * ルーティングコントローラー:基盤であるクラウドインフラでルーティングを設定します。 - * サービスコントローラー:クラウドプロバイダーのロードバランサーの作成、更新、削除を行います。 ## ノードコンポーネント {#node-components} ノードコンポーネントはすべてのノードで実行され、稼働中のPodの管理やKubernetesの実行環境を提供します。 @@ -105,11 +102,11 @@ Kubernetesによって開始されたコンテナは、DNS検索にこのDNSサ ### コンテナリソース監視 -[コンテナリソース監視](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)は、コンテナに関する一般的な時系列メトリックを中央データベースに記録します。また、そのデータを閲覧するためのUIを提供します。 +[コンテナリソース監視](/ja/docs/tasks/debug/debug-cluster/resource-usage-monitoring/)は、コンテナに関する一般的な時系列メトリックを中央データベースに記録します。また、そのデータを閲覧するためのUIを提供します。 -### クラスターレベルログ +### クラスターレベルのロギング -[クラスターレベルログ](/docs/concepts/cluster-administration/logging/)メカニズムは、コンテナのログを、検索/参照インターフェイスを備えた中央ログストアに保存します。 +[クラスターレベルのロギング](/ja/docs/concepts/cluster-administration/logging/)メカニズムは、コンテナのログを、検索/参照インターフェイスを備えた中央ログストアに保存します。 ## {{% heading "whatsnext" %}} diff --git a/content/ja/docs/concepts/security/controlling-access.md b/content/ja/docs/concepts/security/controlling-access.md index 914733d32c0ed..b9ec55417f5e6 100644 --- a/content/ja/docs/concepts/security/controlling-access.md +++ b/content/ja/docs/concepts/security/controlling-access.md @@ -116,7 +116,7 @@ Kubernetesは、ABACモード、RBACモード、Webhookモードなど、複数 Kubernetesの監査は、クラスター内の一連のアクションを文書化した、セキュリティに関連する時系列の記録を提供します。 クラスターは、ユーザー、Kubernetes APIを使用するアプリケーション、およびコントロールプレーン自身によって生成されるアクティビティを監査します。 -詳しくは[監査](/ja/docs/tasks/debug-application-cluster/audit/)をご覧ください。 +詳しくは[監査](/ja/docs/tasks/debug/debug-cluster/audit/)をご覧ください。 ## APIサーバーのIPとポート {#api-server-ports-and-ips} @@ -167,4 +167,3 @@ APIサーバーは、実際には2つのポートでサービスを提供する 以下についても知ることができます。 - PodがAPIクレデンシャルを取得するために[Secrets](/ja/docs/concepts/configuration/secret/#service-accounts-automatically-create-and-attach-secrets-with-api-credentials)を使用する方法について。 - diff --git a/content/ja/docs/concepts/security/overview.md b/content/ja/docs/concepts/security/overview.md index f1bb1fed433cb..c9d656a423268 100644 --- a/content/ja/docs/concepts/security/overview.md +++ b/content/ja/docs/concepts/security/overview.md @@ -42,6 +42,7 @@ IaaSプロバイダー | リンク | Alibaba Cloud | https://www.alibabacloud.com/trust-center | Amazon Web Services | https://aws.amazon.com/security/ | Google Cloud Platform | https://cloud.google.com/security/ | +Huawei Cloud | https://www.huaweicloud.com/securecenter/overallsafety.html | IBM Cloud | https://www.ibm.com/cloud/security | Microsoft Azure | https://docs.microsoft.com/en-us/azure/security/azure-security | Oracle Cloud Infrastructure | https://www.oracle.com/security/ | diff --git 
a/content/ja/docs/concepts/security/pod-security-admission.md b/content/ja/docs/concepts/security/pod-security-admission.md index 807ac69f88069..2b5703687731f 100644 --- a/content/ja/docs/concepts/security/pod-security-admission.md +++ b/content/ja/docs/concepts/security/pod-security-admission.md @@ -65,7 +65,7 @@ Kubernetesは、名前空間に使用したい定義済みのPodセキュリテ モード | 説明 :---------|:------------ **enforce** | ポリシーに違反した場合、Podは拒否されます。 -**audit** | ポリシー違反は、[監査ログ](/ja/docs/tasks/debug-application-cluster/audit/)に記録されるイベントに監査アノテーションを追加するトリガーとなりますが、それ以外は許可されます。 +**audit** | ポリシー違反は、[監査ログ](/ja/docs/tasks/debug/debug-cluster/audit/)に記録されるイベントに監査アノテーションを追加するトリガーとなりますが、それ以外は許可されます。 **warn** | ポリシーに違反した場合は、ユーザーへの警告がトリガーされますが、それ以外は許可されます。 {{< /table >}} diff --git a/content/ja/docs/concepts/storage/ephemeral-volumes.md b/content/ja/docs/concepts/storage/ephemeral-volumes.md new file mode 100644 index 0000000000000..1b3ed0d5b8692 --- /dev/null +++ b/content/ja/docs/concepts/storage/ephemeral-volumes.md @@ -0,0 +1,178 @@ +--- +title: エフェメラルボリューム +content_type: concept +weight: 30 +--- + + + +このドキュメントでは、Kubernetesの*エフェメラルボリューム*について説明します。[ボリューム](/ja/docs/concepts/storage/volumes/)、特にPersistentVolumeClaimとPersistentVolumeに精通していることをお勧めします。 + + + +一部のアプリケーションでは追加のストレージが必要ですが、そのデータが再起動後も永続的に保存されるかどうかは気にしません。 +たとえば、キャッシュサービスは多くの場合メモリサイズによって制限されており、使用頻度の低いデータを、全体的なパフォーマンスにほとんど影響を与えずに、メモリよりも低速なストレージに移動できます。 + +他のアプリケーションは、構成データや秘密鍵など、読み取り専用の入力データがファイルに存在することを想定しています。 + +*エフェメラルボリューム*は、これらのユースケース向けに設計されています。 +ボリュームはPodの存続期間に従い、Podとともに作成および削除されるため、Podは、永続ボリュームが利用可能な場所に制限されることなく停止および再起動できます。 + +エフェメラルボリュームはPod仕様で*インライン*で指定されているため、アプリケーションの展開と管理が簡素化されます。 + +### エフェメラルボリュームのタイプ {#types-of-ephemeral-volumes} + +Kubernetesは、さまざまな目的のためにいくつかの異なる種類のエフェメラルボリュームをサポートしています。 +- [emptyDir](/ja/docs/concepts/storage/volumes/#emptydir):Podの起動時には空で、ストレージはkubeletベースディレクトリ(通常はルートディスク)またはRAMからローカルに取得されます。 +- [configMap](/ja/docs/concepts/storage/volumes/#configmap)、[downwardAPI](/ja/docs/concepts/storage/volumes/#downwardapi)、[secret](/ja/docs/concepts/storage/volumes/#secret):Podにさまざまな種類のKubernetesデータを挿入します。 +- [CSIエフェメラルボリューム](#csi-ephemeral-volumes):上のボリュームの種類に似ていますが、特に[この機能をサポートする](https://kubernetes-csi.github.io/docs/drivers.html)特別な[CSIドライバー](https://github.com/container-storage-interface/spec/blob/master/spec.md)によって提供されます。 +- [汎用エフェメラルボリューム](#generic-ephemeral-volumes):これは、永続ボリュームもサポートするすべてのストレージドライバーで提供できます。 + +`emptyDir`、`configMap`、`downwardAPI`、`secret`は[ローカルエフェメラルストレージ](/ja/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage)として提供されます。 +これらは、各ノードのkubeletによって管理されます。 + +CSIエフェメラルボリュームは、サードパーティーのCSIストレージドライバーによって提供される*必要があります*。 + +汎用エフェメラルボリュームは、サードパーティーのCSIストレージドライバーによって提供される*可能性があります*が、動的プロビジョニングをサポートする他のストレージドライバーによって提供されることもあります。一部のCSIドライバーは、CSIエフェメラルボリューム用に特別に作成されており、動的プロビジョニングをサポートしていません。これらは汎用エフェメラルボリュームには使用できません。 + +サードパーティー製ドライバーを使用する利点は、Kubernetes自体がサポートしていない機能を提供できることです。たとえば、kubeletによって管理されるディスクとは異なるパフォーマンス特性を持つストレージや、異なるデータの挿入などです。 + +### CSIエフェメラルボリューム {#csi-ephemeral-volumes} + +{{< feature-state for_k8s_version="v1.25" state="stable" >}} + +{{< note >}} +CSIエフェメラルボリュームは、CSIドライバーのサブセットによってのみサポートされます。 +Kubernetes CSI[ドライバーリスト](https://kubernetes-csi.github.io/docs/drivers.html)には、エフェメラルボリュームをサポートするドライバーが表示されます。 +{{< /note >}} + +概念的には、CSIエフェメラルボリュームは`configMap`、`downwardAPI`、および`secret`ボリュームタイプに似ています。 +ストレージは各ノードでローカルに管理され、Podがノードにスケジュールされた後に他のローカルリソースと一緒に作成されます。Kubernetesには、この段階でPodを再スケジュールするという概念はもうありません。 +ボリュームの作成は、失敗する可能性が低くなければなりません。さもないと、Podの起動が停止します。 
+特に、[ストレージ容量を考慮したPodスケジューリング](/ja/docs/concepts/storage/storage-capacity/)は、これらのボリュームではサポートされて*いません*。 +これらは現在、Podのストレージリソースの使用制限の対象外です。これは、kubeletが管理するストレージに対してのみ強制できるものであるためです。 + +CSIエフェメラルストレージを使用するPodのマニフェストの例を次に示します。 + +```yaml +kind: Pod +apiVersion: v1 +metadata: + name: my-csi-app +spec: + containers: + - name: my-frontend + image: busybox:1.28 + volumeMounts: + - mountPath: "/data" + name: my-csi-inline-vol + command: [ "sleep", "1000000" ] + volumes: + - name: my-csi-inline-vol + csi: + driver: inline.storage.kubernetes.io + volumeAttributes: + foo: bar +``` + +`volumeAttributes`は、ドライバーによって準備されるボリュームを決定します。これらの属性は各ドライバーに固有のものであり、標準化されていません。詳細な手順については、各CSIドライバーのドキュメントを参照してください。 + +### CSIドライバーの制限事項 {#csi-driver-restrictions} + +CSIエフェメラルボリュームを使用すると、ユーザーはPod仕様の一部として`volumeAttributes`をCSIドライバーに直接提供できます。 +通常は管理者に制限されている`volumeAttributes`を許可するCSIドライバーは、インラインエフェメラルボリュームでの使用には適していません。 +たとえば、通常StorageClassで定義されるパラメーターは、インラインエフェメラルボリュームを使用してユーザーに公開しないでください。 + +Pod仕様内でインラインボリュームとして使用できるCSIドライバーを制限する必要があるクラスタ管理者は、次の方法で行うことができます。 + +- CSIドライバー仕様の`volumeLifecycleModes`から`Ephemeral`を削除します。これにより、ドライバーをインラインエフェメラルボリュームとして使用できなくなります。 +- [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/)を使用して、このドライバーの使用方法を制限します。 + +### 汎用エフェメラルボリューム {#generic-ephemeral-volumes} + +{{< feature-state for_k8s_version="v1.23" state="stable" >}} + +汎用エフェメラルボリュームは、プロビジョニング後に通常は空であるスクラッチデータ用のPodごとのディレクトリを提供するという意味で、`emptyDir`ボリュームに似ています。ただし、追加の機能がある場合もあります。 + +- ストレージは、ローカルまたはネットワークに接続できます。 +- ボリュームは、Podが超えることができない固定サイズを持つことができます。 +- ボリュームには、ドライバーとパラメーターによっては、いくつかの初期データがある場合があります。 +- [スナップショット](/docs/concepts/storage/volume-snapshots/)、[クローン作成](/ja/docs/concepts/storage/volume-pvc-datasource/)、[サイズ変更](/ja/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims)、[ストレージ容量の追跡](/ja/docs/concepts/storage/storage-capacity/)などボリュームに対する一般的な操作は、ドライバーがそれらをサポートしていることを前提としてサポートされています。 + +例: + +```yaml +kind: Pod +apiVersion: v1 +metadata: + name: my-app +spec: + containers: + - name: my-frontend + image: busybox:1.28 + volumeMounts: + - mountPath: "/scratch" + name: scratch-volume + command: [ "sleep", "1000000" ] + volumes: + - name: scratch-volume + ephemeral: + volumeClaimTemplate: + metadata: + labels: + type: my-frontend-volume + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "scratch-storage-class" + resources: + requests: + storage: 1Gi +``` + +### LifecycleとPersistentVolumeClaim {#lifecycle-and-persistentvolumeclaim} + +設計上の重要なアイデアは、[ボリュームクレームのパラメーター](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ephemeralvolumesource-v1alpha1-core)がPodのボリュームソース内で許可されることです。 +PersistentVolumeClaimのラベル、アノテーション、および一連のフィールド全体がサポートされています。 +そのようなPodが作成されると、エフェメラルボリュームコントローラーは、Podと同じ名前空間に実際のPersistentVolumeClaimオブジェクトを作成し、Podが削除されたときにPersistentVolumeClaimが確実に削除されるようにします。 + +これにより、ボリュームバインディングおよび/またはプロビジョニングがトリガーされます。 +これは、{{< glossary_tooltip text="StorageClass" term_id="storage-class" >}}が即時ボリュームバインディングを使用する場合、またはPodが一時的にノードにスケジュールされている場合(`WaitForFirstConsumer`ボリュームバインディングモード)のいずれかです。 +後者は、スケジューラーがPodに適したノードを自由に選択できるため、一般的なエフェメラルボリュームに推奨されます。即時バインディングでは、ボリュームが利用可能になった時点で、ボリュームにアクセスできるノードをスケジューラーが選択する必要があります。 + 
+[リソースの所有権](/ja/docs/concepts/architecture/garbage-collection/#owners-dependents)に関して、一般的なエフェメラルストレージを持つPodは、そのエフェメラルストレージを提供するPersistentVolumeClaimの所有者です。Podが削除されると、KubernetesガベージコレクターがPVCを削除します。これにより、通常、ボリュームの削除がトリガーされます。これは、ストレージクラスのデフォルトの再利用ポリシーがボリュームを削除することであるためです。`retain`の再利用ポリシーを持つStorageClassを使用して、準エフェメラルなローカルストレージを作成できます。ストレージはPodよりも長く存続します。この場合、ボリュームのクリーンアップが個別に行われるようにする必要があります。 + +これらのPVCは存在しますが、他のPVCと同様に使用できます。特に、ボリュームのクローン作成またはスナップショットでデータソースとして参照できます。PVCオブジェクトは、ボリュームの現在のステータスも保持します。 + +### PersistentVolumeClaimの命名 {#persistentpolumeplaim-naming} + +自動的に作成されたPVCの命名は決定論的です。名前はPod名とボリューム名を組み合わせたもので、途中にハイフン(`-`)があります。上記の例では、PVC名は`my-app-scratch-volume`になります。この決定論的な命名により、Pod名とボリューム名が分かればPVCを検索する必要がないため、PVCとの対話が容易になります。 + +また、決定論的な命名では、異なるPod間、およびPodと手動で作成されたPVCの間で競合が発生する可能性があります(ボリュームが"scratch"のPod"pod-a"と、名前が"pod"でボリュームが"a-scratch"の別のPodは、どちらも同じPVC名"pod-a-scratch")。 + +次のような競合が検出されます。Pod用に作成された場合、PVCはエフェメラルボリュームにのみ使用されます。このチェックは、所有関係に基づいています。既存のPVCは上書きまたは変更されません。ただし、適切なPVCがないとPodを起動できないため、これでは競合が解決されません。 + +{{< caution >}} +これらの競合が発生しないように、同じ名前空間内でPodとボリュームに名前を付けるときは注意してください。 +{{< /caution >}} + +### セキュリティ {#security} + +GenericEphemeralVolume機能を有効にすると、ユーザーは、PVCを直接作成する権限がなくても、Podを作成できる場合、間接的にPVCを作成できます。クラスター管理者はこれを認識している必要があります。これがセキュリティモデルに適合しない場合は、一般的なエフェメラルボリュームを持つPodなどのオブジェクトを拒否する[admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/)を使用する必要があります。 + +通常の[PVCの名前空間割り当て](/ja/docs/concepts/policy/resource-quotas/#storage-resource-quota)は引き続き適用されるため、ユーザーがこの新しいメカニズムの使用を許可されたとしても、他のポリシーを回避するために使用することはできません。 + +## {{% heading "whatsnext" %}} + +### kubeletによって管理されるエフェメラルボリューム {#ephemeral-volumes-managed-by-kubelet} + +[ローカルエフェメラルボリューム](/ja/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage)を参照してください。 + +### CSIエフェメラルボリューム {#csi-ephemeral-volumes} + +- 設計の詳細については[エフェメラルインラインCSIボリュームKEP](https://github.com/kubernetes/enhancements/blob/ad6021b3d61a49040a3f835e12c8bb5424db2bbb/keps/sig-storage/20190122-csi-inline-volumes.md)を参照してください。 +- この機能のさらなる開発の詳細については、[KEPのトラッキングイシュー](https://github.com/kubernetes/enhancements/issues/596)を参照してください。 + +### 汎用エフェメラルボリューム {#generic-ephemeral-volumes} + +- 設計の詳細については、[汎用インラインエフェメラルボリュームKEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1698-generic-ephemeral-volumes/README.md)を参照してください。 + diff --git a/content/ja/docs/concepts/storage/persistent-volumes.md b/content/ja/docs/concepts/storage/persistent-volumes.md index d976b7ef29c22..292d21ea68148 100644 --- a/content/ja/docs/concepts/storage/persistent-volumes.md +++ b/content/ja/docs/concepts/storage/persistent-volumes.md @@ -207,7 +207,7 @@ spec: ### 永続ボリュームクレームの拡大 -{{< feature-state for_k8s_version="v1.11" state="beta" >}} +{{< feature-state for_k8s_version="v1.24" state="stable" >}} PersistentVolumeClaim(PVC)の拡大はデフォルトで有効です。次のボリュームの種類で拡大できます。 @@ -655,7 +655,7 @@ spec: ``` {{< note >}} -Podにrawブロックデバイスを追加する場合は、マウントパスの代わりにコンテナーでデバイスパスを指定します。 +Podにrawブロックデバイスを追加する場合は、マウントパスの代わりにコンテナでデバイスパスを指定します。 {{< /note >}} ### ブロックボリュームのバインド @@ -678,7 +678,7 @@ Podにrawブロックデバイスを追加する場合は、マウントパス アルファリリースでは、静的にプロビジョニングされたボリュームのみがサポートされます。管理者は、rawブロックデバイスを使用する場合、これらの値を考慮するように注意する必要があります。 {{< /note >}} -## ボリュームのスナップショットとスナップショットからのボリュームの復元のサポート +## ボリュームのスナップショットとスナップショットからのボリュームの復元のサポート {#volume-snapshot-and-restore-volume-from-snapshot-support} {{< feature-state for_k8s_version="v1.17" state="beta" >}} diff --git a/content/ja/docs/concepts/storage/storage-capacity.md b/content/ja/docs/concepts/storage/storage-capacity.md index 
7e2f6c34f79dd..cff887a125a81 100644 --- a/content/ja/docs/concepts/storage/storage-capacity.md +++ b/content/ja/docs/concepts/storage/storage-capacity.md @@ -37,7 +37,7 @@ weight: 45 volume binding modeが`Immediate`のボリュームの場合、ストレージドライバーはボリュームを使用するPodとは関係なく、ボリュームを作成する場所を決定します。次に、スケジューラーはボリュームが作成された後、Podをボリュームが利用できるノードにスケジューリングします。 -[CSI ephemeral volumes](/docs/concepts/storage/volumes/#csi)の場合、スケジューリングは常にストレージ容量を考慮せずに行われます。このような動作になっているのは、このボリュームタイプはノードローカルな特別なCSIドライバーでのみ使用され、そこでは特に大きなリソースが必要になることはない、という想定に基づいています。 +[CSI ephemeral volumes](/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volumes)の場合、スケジューリングは常にストレージ容量を考慮せずに行われます。このような動作になっているのは、このボリュームタイプはノードローカルな特別なCSIドライバーでのみ使用され、そこでは特に大きなリソースが必要になることはない、という想定に基づいています。 ## 再スケジューリング diff --git a/content/ja/docs/concepts/storage/storage-classes.md b/content/ja/docs/concepts/storage/storage-classes.md new file mode 100644 index 0000000000000..c27aec26ae3e0 --- /dev/null +++ b/content/ja/docs/concepts/storage/storage-classes.md @@ -0,0 +1,538 @@ +--- +title: ストレージクラス +content_type: concept +weight: 40 +--- + + + +このドキュメントでは、KubernetesにおけるStorageClassの概念について説明します。[ボリューム](/ja/docs/concepts/storage/volumes/)と[永続ボリューム](/ja/docs/concepts/storage/persistent-volumes)に精通していることをお勧めします。 + + + +## 概要 + +StorageClassは、管理者が提供するストレージの「クラス」を記述する方法を提供します。さまざまなクラスが、サービス品質レベル、バックアップポリシー、またはクラスター管理者によって決定された任意のポリシーにマップされる場合があります。Kubernetes自体は、クラスが何を表すかについて意見を持っていません。この概念は、他のストレージシステムでは「プロファイル」と呼ばれることがあります。 + +## StorageClassリソース + +各StorageClassには、クラスに属するPersistentVolumeを動的にプロビジョニングする必要がある場合に使用されるフィールド`provisioner`、`parameters`、および`reclaimPolicy`が含まれています。 + +StorageClassオブジェクトの名前は重要であり、ユーザーが特定のクラスを要求する方法です。管理者は、最初にStorageClassオブジェクトを作成するときにクラスの名前とその他のパラメーターを設定します。オブジェクトは、作成後に更新することはできません。 + +管理者は、バインドする特定のクラスを要求しないPVCに対してのみ、デフォルトのStorageClassを指定できます。詳細については、[PersistentVolumeClaimセクション](/ja/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)を参照してください。 + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: standard +provisioner: kubernetes.io/aws-ebs +parameters: + type: gp2 +reclaimPolicy: Retain +allowVolumeExpansion: true +mountOptions: + - debug +volumeBindingMode: Immediate +``` + +### プロビジョナー + +各StorageClassには、PVのプロビジョニングに使用するボリュームプラグインを決定するプロビジョナーがあります。このフィールドを指定する必要があります。 + +| Volume Plugin | Internal Provisioner| Config Example | +| :--- | :---: | :---: | +| AWSElasticBlockStore | ✓ | [AWS EBS](#aws-ebs) | +| AzureFile | ✓ | [Azure File](#azure-file) | +| AzureDisk | ✓ | [Azure Disk](#azure-disk) | +| CephFS | - | - | +| Cinder | ✓ | [OpenStack Cinder](#openstack-cinder)| +| FC | - | - | +| FlexVolume | - | - | +| GCEPersistentDisk | ✓ | [GCE PD](#gce-pd) | +| Glusterfs | ✓ | [Glusterfs](#glusterfs) | +| iSCSI | - | - | +| NFS | - | [NFS](#nfs) | +| RBD | ✓ | [Ceph RBD](#ceph-rbd) | +| VsphereVolume | ✓ | [vSphere](#vsphere) | +| PortworxVolume | ✓ | [Portworx Volume](#portworx-volume) | +| Local | - | [Local](#local) | + 
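As a complement to the provisioner table above and the external-provisioner discussion that follows, here is a minimal sketch of a StorageClass that delegates provisioning to an out-of-tree CSI driver rather than an in-tree `kubernetes.io/...` plugin. It assumes the AWS EBS CSI driver (`ebs.csi.aws.com`) is installed in the cluster; the `parameters` keys are driver-specific and only illustrative.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-gp3-example            # illustrative name
provisioner: ebs.csi.aws.com       # external CSI provisioner, not an in-tree plugin
parameters:
  type: gp3                        # driver-specific parameter
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```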
+ここにリストされている「内部」プロビジョナー(名前には「kubernetes.io」というプレフィックスが付いており、Kubernetesと共に出荷されます)を指定することに制限はありません。Kubernetesによって定義された[仕様](https://git.k8s.io/design-proposals-archive/storage/volume-provisioning.md)に従う独立したプログラムである外部プロビジョナーを実行して指定することもできます。外部プロビジョナーの作成者は、コードの保存場所、プロビジョナーの出荷方法、実行方法、使用するボリュームプラグイン(Flexを含む)などについて完全な裁量権を持っています。リポジトリ[kubernetes-sigs/sig-storage-lib-external-provisioner](https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner)には、仕様の大部分を実装する外部プロビジョナーを作成するためのライブラリが含まれています。一部の外部プロビジョナーは、リポジトリ[kubernetes-sigs/sig-storage-lib-external-provisioner](https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner)の下にリストされています。 + +たとえば、NFSは内部プロビジョナーを提供しませんが、外部プロビジョナーを使用できます。サードパーティのストレージベンダーが独自の外部プロビジョナーを提供する場合もあります。 + +### 再利用ポリシー + +StorageClassによって動的に作成されるPersistentVolumeには、クラスの`reclaimPolicy`フィールドで指定された再利用ポリシーがあり、`Delete`または`Retain`のいずれかになります。StorageClassオブジェクトの作成時に`reclaimPolicy`が指定されていない場合、デフォルトで`Delete`になります。 + +手動で作成され、StorageClassを介して管理されるPersistentVolumeには、作成時に割り当てられた再利用ポリシーが適用されます。 + +### ボリューム拡張の許可 + +{{< feature-state for_k8s_version="v1.11" state="beta" >}} + +PersistentVolumeは、拡張可能になるように構成できます。この機能を`true`に設定すると、ユーザーは対応するPVCオブジェクトを編集してボリュームのサイズを変更できます。 + +次のタイプのボリュームは、基になるStorageClassのフィールド`allowVolumeExpansion`がtrueに設定されている場合に、ボリュームの拡張をサポートします。 + +{{< table caption = "Table of Volume types and the version of Kubernetes they require" >}} + +Volume type | Required Kubernetes version +:---------- | :-------------------------- +gcePersistentDisk | 1.11 +awsElasticBlockStore | 1.11 +Cinder | 1.11 +glusterfs | 1.11 +rbd | 1.11 +Azure File | 1.11 +Azure Disk | 1.11 +Portworx | 1.11 +FlexVolume | 1.13 +CSI | 1.14 (alpha), 1.16 (beta) + +{{< /table >}} + + +{{< note >}} +ボリューム拡張機能を使用してボリュームを拡張することはできますが、縮小することはできません。 +{{< /note >}} + +### マウントオプション + +StorageClassによって動的に作成されるPersistentVolumeには、クラスの`mountOptions`フィールドで指定されたマウントオプションがあります。 + +ボリュームプラグインがマウントオプションをサポートしていないにもかかわらず、マウントオプションが指定されている場合、プロビジョニングは失敗します。マウントオプションは、クラスまたはPVのいずれでも検証されません。マウントオプションが無効な場合、PVマウントは失敗します。 + +### ボリュームバインディングモード + +`volumeBindingMode`フィールドは、[ボリュームバインディングと動的プロビジョニング](/ja/docs/concepts/storage/persistent-volumes/#provisioning)が発生するタイミングを制御します。設定を解除すると、デフォルトで"Immediate"モードが使用されます。 + +`Immediate`モードは、PersistentVolumeClaimが作成されると、ボリュームバインディングと動的プロビジョニングが発生することを示します。トポロジに制約があり、クラスター内のすべてのノードからグローバルにアクセスできないストレージバックエンドの場合、PersistentVolumeはPodのスケジューリング要件を知らなくてもバインドまたはプロビジョニングされます。これにより、Podがスケジュール不能になる可能性があります。 + +クラスター管理者は、PersistentVolumeClaimを使用するPodが作成されるまでPersistentVolumeのバインドとプロビジョニングを遅らせる`WaitForFirstConsumer`モードを指定することで、この問題に対処できます。 +PersistentVolumeは、Podのスケジュール制約によって指定されたトポロジに準拠して選択またはプロビジョニングされます。これらには、[リソース要件](/ja/docs/concepts/configuration/manage-resources-containers/)、[ノードセレクター](/ja/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)、[ポッドアフィニティとアンチアフィニティ](/ja/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)、および[taints and tolerations](/ja/docs/concepts/scheduling-eviction/taint-and-toleration)が含まれますが、これらに限定されません。 + +次のプラグインは、動的プロビジョニングで`WaitForFirstConsumer`をサポートしています。 + +* [AWSElasticBlockStore](#aws-ebs) +* [GCEPersistentDisk](#gce-pd) +* [AzureDisk](#azure-disk) + +次のプラグインは、事前に作成されたPersistentVolumeバインディングで`WaitForFirstConsumer`をサポートします。 + +* 上記のすべて +* [Local](#local) + +{{< feature-state state="stable" for_k8s_version="v1.17" >}} +[CSIボリューム](/ja/docs/concepts/storage/volumes/#csi)も動的プロビジョニングと事前作成されたPVでサポートされていますが、サポートされているトポロジーキーと例を確認するには、特定のCSIドライバーのドキュメントを参照する必要があります。 + +{{< note >}} + 
`WaitForFirstConsumer`の使用を選択した場合は、Pod仕様で`nodeName`を使用してノードアフィニティを指定しないでください。この場合にnodeNameを使用すると、スケジューラはバイパスされ、PVCは保留状態のままになります。 + + 代わりに、以下に示すように、この場合はホスト名にノードセレクターを使用できます。 +{{< /note >}} + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: task-pv-pod +spec: + nodeSelector: + kubernetes.io/hostname: kube-01 + volumes: + - name: task-pv-storage + persistentVolumeClaim: + claimName: task-pv-claim + containers: + - name: task-pv-container + image: nginx + ports: + - containerPort: 80 + name: "http-server" + volumeMounts: + - mountPath: "/usr/share/nginx/html" + name: task-pv-storage +``` + +### 許可されたトポロジー {#allowed-topologies} + +クラスタオペレーターが`WaitForFirstConsumer`ボリュームバインディングモードを指定すると、ほとんどの状況でプロビジョニングを特定のトポロジに制限する必要がなくなります。ただし、それでも必要な場合は、`allowedTopologies`を指定できます。 + +この例は、プロビジョニングされたボリュームのトポロジを特定のゾーンに制限する方法を示しており、サポートされているプラグインの`zone`および`zones`パラメーターの代わりとして使用する必要があります。 + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: standard +provisioner: kubernetes.io/gce-pd +parameters: + type: pd-standard +volumeBindingMode: WaitForFirstConsumer +allowedTopologies: +- matchLabelExpressions: + - key: failure-domain.beta.kubernetes.io/zone + values: + - us-central-1a + - us-central-1b +``` + +## パラメーター + +ストレージクラスには、ストレージクラスに属するボリュームを記述するパラメーターがあります。`プロビジョナー`に応じて、異なるパラメーターが受け入れられる場合があります。たとえば、パラメーター`type`の値`io1`とパラメーター`iopsPerGB`はEBSに固有です。パラメーターを省略すると、デフォルトが使用されます。 + +StorageClassに定義できるパラメーターは最大512個です。 +キーと値を含むパラメーターオブジェクトの合計の長さは、256KiBを超えることはできません。 + +### AWS EBS + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: slow +provisioner: kubernetes.io/aws-ebs +parameters: + type: io1 + iopsPerGB: "10" + fsType: ext4 +``` + +* `type`:`io1`、`gp2`、`sc1`、`st1`。詳細については、[AWSドキュメント](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)を参照してください。デフォルト:`gp2`。 +* `zone`(非推奨):AWS zone。`zone`も`zones`も指定されていない場合、ボリュームは通常、Kubernetesクラスターにノードがあるすべてのアクティブなゾーンにわたってラウンドロビン方式で処理されます。`zone`パラメーターと`zones`パラメーターを同時に使用することはできません。 +* `zones`(非推奨):AWS zoneのコンマ区切りリスト。`zone`も`zones`も指定されていない場合、ボリュームは通常、Kubernetesクラスターにノードがあるすべてのアクティブなゾーンにわたってラウンドロビン方式で処理されます。`zone`パラメーターと`zones`パラメーターを同時に使用することはできません。 +* `iopsPerGB`:`io1`ボリュームのみ。GiBごとの1秒あたりのI/O操作。AWSボリュームプラグインは、これを要求されたボリュームのサイズで乗算して、ボリュームのIOPSを計算し、上限を20,000IOPSに設定します(AWSでサポートされる最大値については、[AWSドキュメント](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)を参照してください)。ここでは文字列が必要です。つまり、`10`ではなく`"10"`です。 +* `fsType`:kubernetesでサポートされているfsType。デフォルト:`"ext4"`。 +* `encrypted`:EBSボリュームを暗号化するかどうかを示します。有効な値は`"true"`または`"false"`です。ここでは文字列が必要です。つまり、`true`ではなく`"true"`です。 +* `kmsKeyId`:オプション。ボリュームを暗号化するときに使用するキーの完全なAmazonリソースネーム。何も指定されていなくても`encrypted`がtrueの場合、AWSによってキーが生成されます。有効なARN値については、AWSドキュメントを参照してください。 + +{{< note >}} +`zone`および`zones`パラメーターは廃止され、[allowedTopologies](#allowed-topologies)に置き換えられました。 +{{< /note >}} + +### GCE PD + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: slow +provisioner: kubernetes.io/gce-pd +parameters: + type: pd-standard + fstype: ext4 + replication-type: none +``` + +* `type`:`pd-standard`または`pd-ssd`。デフォルト:`pd-standard` +* `zone`(非推奨):GCE zone。`zone`も`zones`も指定されていない場合、ボリュームは通常、Kubernetesクラスターにノードがあるすべてのアクティブなゾーンにわたってラウンドロビン方式で処理されます。`zone`パラメーターと`zones`パラメーターを同時に使用することはできません。 +* `zones`(非推奨):GCE zoneのコンマ区切りリスト。`zone`も`zones`も指定されていない場合、ボリュームは通常、Kubernetesクラスターにノードがあるすべてのアクティブなゾーンにわたってラウンドロビン方式で処理されます。`zone`パラメーターと`zones`パラメーターを同時に使用することはできません。 +* `fstype`:`ext4`または`xfs`。デフォルト:`ext4`。定義されたファイルシステムタイプは、ホストオペレーティングシステムでサポートされている必要があります。 +* 
`replication-type`:`none`または`regional-pd`。デフォルト:`none`。 + +`replication-type`が`none`に設定されている場合、通常の(ゾーン)PDがプロビジョニングされます。 + +`replication-type`が`regional-pd`に設定されている場合、[Regional Persistent Disk](https://cloud.google.com/compute/docs/disks/#repds)がプロビジョニングされます。`volumeBindingMode: WaitForFirstConsumer`を設定することを強くお勧めします。この場合、このStorageClassを使用するPersistentVolumeClaimを使用するPodを作成すると、Regional Persistent Diskが2つのゾーンでプロビジョニングされます。1つのゾーンは、Podがスケジュールされているゾーンと同じです。もう1つのゾーンは、クラスターで使用可能なゾーンからランダムに選択されます。ディスクゾーンは、`allowedTopologies`を使用してさらに制限できます。 + +{{< note >}} +`zone`および`zones`パラメーターは廃止され、[allowedTopologies](#allowed-topologies)に置き換えられました。 +{{< /note >}} + +### Glusterfs(非推奨) {#glusterfs} + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: slow +provisioner: kubernetes.io/glusterfs +parameters: + resturl: "http://127.0.0.1:8081" + clusterid: "630372ccdc720a92c681fb928f27b53f" + restauthenabled: "true" + restuser: "admin" + secretNamespace: "default" + secretName: "heketi-secret" + gidMin: "40000" + gidMax: "50000" + volumetype: "replicate:3" +``` + +* `resturl`:glusterボリュームをオンデマンドでプロビジョニングするGluster RESTサービス/HeketiサービスのURL。一般的な形式は`IPaddress:Port`である必要があり、これはGlusterFS動的プロビジョナーの必須パラメーターです。Heketiサービスがopenshift/kubernetesセットアップでルーティング可能なサービスとして公開されている場合、これは`http://heketi-storage-project.cloudapps.mystorage.com`のような形式になる可能性があります。ここで、fqdnは解決可能なHeketiサービスURLです。 +* `restauthenabled`:RESTサーバーへの認証を有効にするGluster RESTサービス認証ブール値。この値が`"true"`の場合、`restuser`と`restuserkey`または`secretNamespace`+`secretName`を入力する必要があります。このオプションは非推奨です。`restuser`、`restuserkey`、`secretName`、または`secretNamespace`のいずれかが指定されている場合、認証が有効になります。 +* `restuser`:Gluster Trusted Poolでボリュームを作成するためのアクセス権を持つGluster RESTサービス/Heketiユーザー。 +* `restuserkey`:RESTサーバーへの認証に使用されるGluster RESTサービス/Heketiユーザーのパスワード。このパラメーターは、`secretNamespace`+`secretName`を優先されて廃止されました。 +* `secretNamespace`、`secretName`:Gluster RESTサービスと通信するときに使用するユーザーパスワードを含むSecretインスタンスの識別。これらのパラメーターはオプションです。`secretNamespace`と`secretName`の両方が省略された場合、空のパスワードが使用されます。提供されたシークレットには、タイプ`kubernetes.io/glusterfs`が必要です。たとえば、次のように作成されます。 + ``` + kubectl create secret generic heketi-secret \ + --type="kubernetes.io/glusterfs" --from-literal=key='opensesame' \ + --namespace=default + ``` + + シークレットの例は[glusterfs-provisioning-secret.yaml](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/glusterfs/glusterfs-secret.yaml)にあります。 + +* `clusterid`:`630372ccdc720a92c681fb928f27b53f`は、ボリュームのプロビジョニング時にHeketiによって使用されるクラスターのIDです。また、クラスタIDのリストにすることもできます。これはオプションのパラメーターです。 +* `gidMin`、`gidMax`:StorageClassのGID範囲の最小値と最大値。この範囲内の一意の値(GID)(gidMin-gidMax)が、動的にプロビジョニングされたボリュームに使用されます。これらはオプションの値です。指定しない場合、ボリュームは、それぞれgidMinとgidMaxのデフォルトである2000から2147483647の間の値でプロビジョニングされます。 +* `volumetype`:ボリュームタイプとそのパラメーターは、このオプションの値で構成できます。ボリュームタイプが記載されていない場合、プロビジョニング担当者がボリュームタイプを決定します。 + 例えば、 + * レプリカボリューム:`volumetype: replica:3`ここで、'3'はレプリカ数です。 + * Disperse/ECボリューム:`volumetype: disperse:4:2`ここで、'4'はデータ、'2'は冗長数です。 + * ボリュームの分配:`volumetype: none` + + 利用可能なボリュームタイプと管理オプションについては、[管理ガイド](https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/)を参照してください。 + + 詳細な参考情報については、[Heketiの設定方法](https://github.com/heketi/heketi/wiki/Setting-up-the-topology)を参照してください。 + + 永続ボリュームが動的にプロビジョニングされると、Glusterプラグインはエンドポイントとヘッドレスサービスを`gluster-dynamic-`という名前で自動的に作成します。永続ボリューム要求が削除されると、動的エンドポイントとサービスは自動的に削除されます。 + +### NFS + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: example-nfs +provisioner: example.com/external-nfs +parameters: + server: 
nfs-server.example.com + path: /share + readOnly: "false" +``` + +* `server`:サーバーは、NFSサーバーのホスト名またはIPアドレスです。 +* `path`:NFSサーバーによってエクスポートされるパス。 +* `readOnly`:ストレージが読み取り専用としてマウントされるかどうかを示すフラグ(デフォルトはfalse)。 + +Kubernetesには、内部NFSプロビジョナーは含まれていません。NFS用のStorageClassを作成するには、外部プロビジョナーを使用する必要があります。 +ここではいくつかの例を示します。 +* [NFS Ganeshaサーバーと外部プロビジョナー](https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner) +* [NFSサブディレクトリ外部プロビジョナー](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner) + +### OpenStack Cinder + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: gold +provisioner: kubernetes.io/cinder +parameters: + availability: nova +``` + +* `availability`:アベイラビリティゾーン。指定されていない場合、ボリュームは通常、Kubernetesクラスターにノードがあるすべてのアクティブなゾーンにわたってラウンドロビン方式で処理されます。 + +{{< note >}} +{{< feature-state state="deprecated" for_k8s_version="v1.11" >}} +このOpenStackの内部プロビジョナーは非推奨です。[OpenStackの外部クラウドプロバイダー](https://github.com/kubernetes/cloud-provider-openstack)をご利用ください。 +{{< /note >}} + +### vSphere + +vSphereストレージクラスのプロビジョナーには2つのタイプがあります。 + +- [CSIプロビジョナー](#vsphere-provisioner-csi):`csi.vsphere.vmware.com` +- [vCPプロビジョナー](#vcp-provisioner):`kubernetes.io/vsphere-volume` + +インツリープロビジョナーは[非推奨です](/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/#why-are-we-migrating-in-tree-plugins-to-csi)。CSIプロビジョナーの詳細については、[Kubernetes vSphere CSIドライバー](https://vsphere-csi-driver.sigs.k8s.io/)および[vSphereVolume CSI移行](/ja/docs/concepts/storage/volumes/#vsphere-csi-migration)を参照してください。 + +#### CSIプロビジョナー {#vsphere-provisioner-csi} + +vSphere CSI StorageClassプロビジョナーは、Tanzu Kubernetesクラスターと連携します。例については、[vSphere CSIリポジトリ](https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/master/example/vanilla-k8s-RWM-filesystem-volumes/example-sc.yaml)を参照してください。 + +#### vCPプロビジョナー {#vcp-provisioner} + +次の例では、VMware Cloud Provider(vCP) StorageClassプロビジョナーを使用しています。 + +1. ユーザー指定のディスク形式でStorageClassを作成します。 + + ```yaml + apiVersion: storage.k8s.io/v1 + kind: StorageClass + metadata: + name: fast + provisioner: kubernetes.io/vsphere-volume + parameters: + diskformat: zeroedthick + ``` + + `diskformat`:`thin`、`zeroedthick`、`eagerzeroedthick`。デフォルト:`"thin"`. + +2. ユーザー指定のデータストアにディスクフォーマットのStorageClassを作成します。 + + ```yaml + apiVersion: storage.k8s.io/v1 + kind: StorageClass + metadata: + name: fast + provisioner: kubernetes.io/vsphere-volume + parameters: + diskformat: zeroedthick + datastore: VSANDatastore + ``` + + `datastore`:ユーザーはStorageClassでデータストアを指定することもできます。 + ボリュームは、StorageClassで指定されたデータストア(この場合は`VSANDatastore`)に作成されます。このフィールドはオプションです。データストアが指定されていない場合、vSphere Cloud Providerの初期化に使用されるvSphere構成ファイルで指定されたデータストアにボリュームが作成されます。 + +3. 
kubernetes内のストレージポリシー管理 + + * 既存のvCenter SPBMポリシーを使用 + + vSphere for Storage Managementの最も重要な機能の1つは、ポリシーベースの管理です。Storage Policy Based Management(SPBM)は、幅広いデータサービスとストレージソリューションにわたって単一の統合コントロールプレーンを提供するストレージポリシーフレームワークです。SPBMにより、vSphere管理者は、キャパシティプランニング、差別化されたサービスレベル、キャパシティヘッドルームの管理など、事前のストレージプロビジョニングの課題を克服できます。SPBMポリシーは、`storagePolicyName`パラメーターを使用してStorageClassで指定できます。 + + * Kubernetes内でのVirtual SANポリシーのサポート + + Vsphere Infrastructure(VI)管理者は、動的ボリュームプロビジョニング中にカスタムVirtual SANストレージ機能を指定できます。動的なボリュームプロビジョニング時に、パフォーマンスや可用性などのストレージ要件をストレージ機能の形で定義できるようになりました。ストレージ機能の要件はVirtual SANポリシーに変換され、永続ボリューム(仮想ディスク)の作成時にVirtual SANレイヤーにプッシュダウンされます。仮想ディスクは、要件を満たすためにVirtual SANデータストア全体に分散されます。 + + 永続的なボリューム管理にストレージポリシーを使用する方法の詳細については、[ボリュームの動的プロビジョニングのためのストレージポリシーベースの管理](https://github.com/vmware-archive/vsphere-storage-for-kubernetes/blob/fa4c8b8ad46a85b6555d715dd9d27ff69839df53/documentation/policy-based-mgmt.md)を参照してください。 + +[vSphereの例](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere) では、Kubernetes for vSphere内で永続的なボリューム管理を試すことができます。 + +### Ceph RBD + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: fast +provisioner: kubernetes.io/rbd +parameters: + monitors: 10.16.153.105:6789 + adminId: kube + adminSecretName: ceph-secret + adminSecretNamespace: kube-system + pool: kube + userId: kube + userSecretName: ceph-secret-user + userSecretNamespace: default + fsType: ext4 + imageFormat: "2" + imageFeatures: "layering" +``` + +* `monitors`:カンマ区切りのCephモニター。このパラメーターは必須です。 +* `adminId`:プールにイメージを作成できるCephクライアントID。デフォルトは"admin"です。 +* `adminSecretName`:`adminId`のシークレット名。このパラメーターは必須です。指定されたシークレットのタイプは"kubernetes.io/rbd"である必要があります。 +* `adminSecretNamespace`:`adminSecretName`の名前空間。デフォルトは"default"です。 +* `pool`:Ceph RBDプール。デフォルトは"rbd"です。 +* `userId`:RBDイメージのマッピングに使用されるCephクライアントID。デフォルトは`adminId`と同じです。 +* `userSecretName`:RBDイメージをマップするための`userId`のCephシークレットの名前。PVCと同じ名前空間に存在する必要があります。このパラメーターは必須です。提供されたシークレットのタイプは"kubernetes.io/rbd"である必要があります。たとえば、次のように作成されます。 + + ```shell + kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \ + --from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' \ + --namespace=kube-system + ``` +* `userSecretNamespace`:`userSecretName`の名前空間。 +* `fsType`:kubernetesでサポートされているfsType。デフォルト:`"ext4"`。 +* `imageFormat`:Ceph RBDイメージ形式、"1"または"2"。デフォルトは"2"です。 +* `imageFeatures`:このパラメーターはオプションであり、`imageFormat`を"2"に設定した場合にのみ使用する必要があります。現在サポートされている機能は`layering`のみです。デフォルトは""で、オンになっている機能はありません。 + +### Azure Disk + +#### Azure Unmanaged Disk storage class {#azure-unmanaged-disk-storage-class} + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: slow +provisioner: kubernetes.io/azure-disk +parameters: + skuName: Standard_LRS + location: eastus + storageAccount: azure_storage_account_name +``` + +* `skuName`:AzureストレージアカウントのSku層。デフォルトは空です。 +* `location`:Azureストレージアカウントの場所。デフォルトは空です。 +* `storageAccount`:Azureストレージアカウント名。ストレージアカウントを指定する場合、それはクラスターと同じリソースグループに存在する必要があり、`location`は無視されます。ストレージアカウントが指定されていない場合、クラスターと同じリソースグループに新しいストレージアカウントが作成されます。 + +#### Azure Disk storage class (starting from v1.7.2) {#azure-disk-storage-class} + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: slow +provisioner: kubernetes.io/azure-disk +parameters: + storageaccounttype: Standard_LRS + kind: managed +``` + +* `storageaccounttype`:AzureストレージアカウントのSku層。デフォルトは空です。 +* 
`kind`:可能な値は、`shared`、`dedicated`、および`managed`(デフォルト)です。`kind`が`shared`の場合、すべてのアンマネージドディスクは、クラスターと同じリソースグループ内のいくつかの共有ストレージアカウントに作成されます。`kind`が`dedicated`の場合、新しい専用ストレージアカウントが、クラスターと同じリソースグループ内の新しいアンマネージドディスク用に作成されます。`kind`が`managed`の場合、すべてのマネージドディスクはクラスターと同じリソースグループに作成されます。 +* `resourceGroup`:Azureディスクが作成されるリソースグループを指定します。これは、既存のリソースグループ名である必要があります。指定しない場合、ディスクは現在のKubernetesクラスターと同じリソースグループに配置されます。 + +- Premium VMはStandard_LRSディスクとPremium_LRSディスクの両方を接続できますが、Standard VMはStandard_LRSディスクのみを接続できます。 +- マネージドVMはマネージドディスクのみをアタッチでき、アンマネージドVMはアンマネージドディスクのみをアタッチできます。 + +### Azure File + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: azurefile +provisioner: kubernetes.io/azure-file +parameters: + skuName: Standard_LRS + location: eastus + storageAccount: azure_storage_account_name +``` + +* `skuName`:AzureストレージアカウントのSku層。デフォルトは空です。 +* `location`:Azureストレージアカウントの場所。デフォルトは空です。 +* `storageAccount`:Azureストレージアカウント名。デフォルトは空です。ストレージアカウントが指定されていない場合は、リソースグループに関連付けられているすべてのストレージアカウントが検索され、`skuName`と`location`に一致するものが見つかります。ストレージアカウントを指定する場合は、クラスターと同じリソースグループに存在する必要があり、`skuName`と`location`は無視されます。 +* `secretNamespace`:Azureストレージアカウント名とキーを含むシークレットの名前空間。デフォルトはPodと同じです。 +* `secretName`:Azureストレージアカウント名とキーを含むシークレットの名前。デフォルトは`azure-storage-account-<accountName>-secret`です。 +* `readOnly`:ストレージが読み取り専用としてマウントされるかどうかを示すフラグ。デフォルトはfalseで、読み取り/書き込みマウントを意味します。この設定は、VolumeMountsの`ReadOnly`設定にも影響します。 + +ストレージのプロビジョニング中に、`secretName`という名前のシークレットがマウント資格証明用に作成されます。クラスターで[RBAC](/ja/docs/reference/access-authn-authz/rbac/)と[Controller Roles](/ja/docs/reference/access-authn-authz/rbac/#controller-roles)の両方が有効になっている場合は、clusterrole `system:controller:persistent-volume-binder`に対してリソース`secret`の`create`パーミッションを追加します。 + +マルチテナンシーコンテキストでは、`secretNamespace`の値を明示的に設定することを強くお勧めします。そうしないと、ストレージアカウントの資格情報が他のユーザーに読み取られる可能性があります。 + + +### Portworx Volume + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: portworx-io-priority-high +provisioner: kubernetes.io/portworx-volume +parameters: + repl: "1" + snap_interval: "70" + priority_io: "high" + +``` + +* `fs`:配置するファイルシステム:`none/xfs/ext4`(デフォルト:`ext4`)。 +* `block_size`:キロバイト単位のブロックサイズ(デフォルト:`32`)。 +* `repl`:レプリケーション係数`1..3`の形式で提供される同期レプリカの数(デフォルト:`1`)。ここでは文字列が期待されます。つまり、`1`ではなく`"1"`です。 +* `priority_io`:ボリュームがパフォーマンスの高いストレージから作成されるか、優先度の低いストレージ`high/medium/low`(デフォルト:`low`)から作成されるかを決定します。 +* `snap_interval`:スナップショットをトリガーするクロック/時間間隔(分単位)。スナップショットは、前のスナップショットとの差分に基づいて増分されます。0はスナップを無効にします(デフォルト:`0`)。ここでは文字列が必要です。つまり、`70`ではなく`"70"`です。 +* `aggregation_level`:ボリュームが分散されるチャンクの数を指定します。0は非集約ボリュームを示します(デフォルト:`0`)。ここには文字列が必要です。つまり、`0`ではなく`"0"`です。 +* `ephemeral`:アンマウント後にボリュームをクリーンアップするか、永続化するかを指定します。`emptyDir`ユースケースではこの値をtrueに設定でき、Cassandraなどのデータベースのような`persistent volumes`ユースケースではfalse、`true/false`(デフォルトは`false`)に設定する必要があります。ここでは文字列が必要です。つまり、`true`ではなく`"true"`です。 + +### Local + +{{< feature-state for_k8s_version="v1.14" state="stable" >}} + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: local-storage +provisioner: kubernetes.io/no-provisioner +volumeBindingMode: WaitForFirstConsumer +``` + +ローカルボリュームは現在、動的プロビジョニングをサポートしていませんが、Podのスケジューリングまでボリュームバインドを遅らせるには、引き続きStorageClassを作成する必要があります。これは、`WaitForFirstConsumer`ボリュームバインディングモードによって指定されます。 + +ボリュームバインディングを遅延させると、PersistentVolumeClaimに適切なPersistentVolumeを選択するときに、スケジューラはPodのスケジューリング制約をすべて考慮することができます。 diff --git a/content/ja/docs/concepts/storage/volume-snapshots.md b/content/ja/docs/concepts/storage/volume-snapshots.md new file mode 100644 index 0000000000000..82c573c6512a3 --- /dev/null 
+++ b/content/ja/docs/concepts/storage/volume-snapshots.md @@ -0,0 +1,182 @@ +--- +title: ボリュームのスナップショット +content_type: concept +weight: 60 +--- + + + +Kubernetesでは、*VolumeSnapshot*はストレージシステム上のボリュームのスナップショットを表します。このドキュメントは、Kubernetes[永続ボリューム](/ja/docs/concepts/storage/persistent-volumes/)に既に精通していることを前提としています。 + + + +## 概要 {#introduction} + +APIリソース`PersistentVolume`と`PersistentVolumeClaim`を使用してユーザーと管理者にボリュームをプロビジョニングする方法と同様に、`VolumeSnapshotContent`と`VolumeSnapshot`APIリソースは、ユーザーと管理者のボリュームスナップショットを作成するために提供されます。 + +`VolumeSnapshotContent`は、管理者によってプロビジョニングされたクラスター内のボリュームから取得されたスナップショットです。PersistentVolumeがクラスターリソースであるように、これはクラスターのリソースです。 + +`VolumeSnapshot`は、ユーザーによるボリュームのスナップショットの要求です。PersistentVolumeClaimに似ています。 + +`VolumeSnapshotClass`を使用すると、`VolumeSnapshot`に属するさまざまな属性を指定できます。これらの属性は、ストレージシステム上の同じボリュームから取得されたスナップショット間で異なる場合があるため、`PersistentVolumeClaim`の同じ`StorageClass`を使用して表現することはできません。 + +ボリュームスナップショットは、完全に新しいボリュームを作成することなく、特定の時点でボリュームの内容をコピーするための標準化された方法をKubernetesユーザーに提供します。この機能により、たとえばデータベース管理者は、編集または削除の変更を実行する前にデータベースをバックアップできます。 + +この機能を使用する場合、ユーザーは次のことに注意する必要があります。 + +- APIオブジェクト`VolumeSnapshot`、`VolumeSnapshotContent`、および`VolumeSnapshotClass`は{{< glossary_tooltip term_id="CustomResourceDefinition" text="CRD" >}}であり、コアAPIの一部ではありません。 +- `VolumeSnapshot`のサポートは、CSIドライバーでのみ利用できます。 +- `VolumeSnapshot`の展開プロセスの一環として、Kubernetesチームは、コントロールプレーンに展開されるスナップショットコントローラーと、CSIドライバーと共に展開されるcsi-snapshotterと呼ばれるサイドカーヘルパーコンテナを提供します。スナップショットコントローラーは、`VolumeSnapshot`および`VolumeSnapshotContent`オブジェクトを管理し、`VolumeSnapshotContent`オブジェクトの作成と削除を担当します。サイドカーcsi-snapshotterは、`VolumeSnapshotContent`オブジェクトを監視し、CSIエンドポイントに対して`CreateSnapshot`および`DeleteSnapshot`操作をトリガーします。 +- スナップショットオブジェクトの厳密な検証を提供するvalidation Webhookサーバーもあります。これは、CSIドライバーではなく、スナップショットコントローラーおよびCRDと共にKubernetesディストリビューションによってインストールする必要があります。スナップショット機能が有効になっているすべてのKubernetesクラスターにインストールする必要があります。 +- CSIドライバーは、ボリュームスナップショット機能を実装している場合と実装していない場合があります。ボリュームスナップショットのサポートを提供するCSIドライバーは、csi-snapshotterを使用する可能性があります。詳細については、[CSIドライバーのドキュメント](https://kubernetes-csi.github.io/docs/)を参照してください。 +- CRDとスナップショットコントローラーのインストールは、Kubernetesディストリビューションの責任です。 + +## ボリュームスナップショットとボリュームスナップショットのコンテンツのライフサイクル + +`VolumeSnapshotContents`はクラスター内のリソースです。`VolumeSnapshots`は、これらのリソースに対するリクエストです。`VolumeSnapshotContents`と`VolumeSnapshots`の間の相互作用は、次のライフサイクルに従います。 + +### プロビジョニングボリュームのスナップショット + +スナップショットをプロビジョニングするには、事前プロビジョニングと動的プロビジョニングの2つの方法があります。 + +#### 事前プロビジョニング{#static} + +クラスター管理者は、多数の`VolumeSnapshotContents`を作成します。それらは、クラスターユーザーが使用できるストレージシステム上の実際のボリュームスナップショットの詳細を保持します。それらはKubernetesAPIに存在し、消費することができます。 + +#### 動的プロビジョニング + +既存のスナップショットを使用する代わりに、スナップショットをPersistentVolumeClaimから動的に取得するように要求できます。[VolumeSnapshotClass](/ja/docs/concepts/storage/volume-snapshot-classes/)は、スナップショットを作成するときに使用するストレージプロバイダー固有のパラメーターを指定します。 + +### バインディング + +スナップショットコントローラーは、事前プロビジョニングされたシナリオと動的にプロビジョニングされたシナリオの両方で、適切な`VolumeSnapshotContent`オブジェクトを使用した`VolumeSnapshot`オブジェクトのバインディングを処理します。バインディングは1対1のマッピングです。 + +事前プロビジョニングされたバインディングの場合、要求されたVolumeSnapshotContentオブジェクトが作成されるまで、VolumeSnapshotはバインドされないままになります。 + +### スナップショットソース保護としてのPersistentVolumeClaim + +この保護の目的は、スナップショットがシステムから取得されている間、使用中の{{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}}APIオブジェクトがシステムから削除されないようにすることです(これにより、データが失われる可能性があります)。 + +PersistentVolumeClaimのスナップショットが作成されている間、そのPersistentVolumeClaimは使用中です。スナップショットソースとしてアクティブに使用されているPersistentVolumeClaim APIオブジェクトを削除しても、PersistentVolumeClaimオブジェクトはすぐには削除されません。代わりに、PersistentVolumeClaimオブジェクトの削除は、スナップショットがReadyToUseになるか中止されるまで延期されます。 + +### 削除 + 
+削除は`VolumeSnapshot`オブジェクトの削除によってトリガーされ、`DeletionPolicy`に従います。`DeletionPolicy`が`Delete`の場合、基になるストレージスナップショットは`VolumeSnapshotContent`オブジェクトとともに削除されます。`DeletionPolicy`が`Retain`の場合、基になるスナップショットと`VolumeSnapshotContent`の両方が残ります。 + +## ボリュームスナップショット + +各VolumeSnapshotには、仕様とステータスが含まれています。 + +```yaml +apiVersion: snapshot.storage.k8s.io/v1 +kind: VolumeSnapshot +metadata: + name: new-snapshot-test +spec: + volumeSnapshotClassName: csi-hostpath-snapclass + source: + persistentVolumeClaimName: pvc-test +``` + +`persistentVolumeClaimName`は、スナップショットのPersistentVolumeClaimデータソースの名前です。このフィールドは、スナップショットを動的にプロビジョニングするために必要です。 + +ボリュームスナップショットは、属性`volumeSnapshotClassName`を使用して[VolumeSnapshotClass](/ja/docs/concepts/storage/volume-snapshot-classes/)の名前を指定することにより、特定のクラスを要求できます。何も設定されていない場合、利用可能な場合はデフォルトのクラスが使用されます。 + +事前プロビジョニングされたスナップショットの場合、次の例に示すように、スナップショットのソースとして`volumeSnapshotContentName`を指定する必要があります。事前プロビジョニングされたスナップショットには、`volumeSnapshotContentName`ソースフィールドが必要です。 + +```yaml +apiVersion: snapshot.storage.k8s.io/v1 +kind: VolumeSnapshot +metadata: + name: test-snapshot +spec: + source: + volumeSnapshotContentName: test-content +``` + +## ボリュームスナップショットコンテンツ + +各VolumeSnapshotContentには、仕様とステータスが含まれています。動的プロビジョニングでは、スナップショット共通コントローラーが`VolumeSnapshotContent`オブジェクトを作成します。以下に例を示します。 + +```yaml +apiVersion: snapshot.storage.k8s.io/v1 +kind: VolumeSnapshotContent +metadata: + name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455 +spec: + deletionPolicy: Delete + driver: hostpath.csi.k8s.io + source: + volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002 + sourceVolumeMode: Filesystem + volumeSnapshotClassName: csi-hostpath-snapclass + volumeSnapshotRef: + name: new-snapshot-test + namespace: default + uid: 72d9a349-aacd-42d2-a240-d775650d2455 +``` + +`volumeHandle`は、ストレージバックエンドで作成され、ボリュームの作成中にCSIドライバーによって返されるボリュームの一意の識別子です。このフィールドは、スナップショットを動的にプロビジョニングするために必要です。これは、スナップショットのボリュームソースを指定します。 +事前プロビジョニングされたスナップショットの場合、(クラスター管理者として)次のように`VolumeSnapshotContent`オブジェクトを作成する必要があります。 + +```yaml +apiVersion: snapshot.storage.k8s.io/v1 +kind: VolumeSnapshotContent +metadata: + name: new-snapshot-content-test +spec: + deletionPolicy: Delete + driver: hostpath.csi.k8s.io + source: + snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002 + sourceVolumeMode: Filesystem + volumeSnapshotRef: + name: new-snapshot-test + namespace: default +``` + +`snapshotHandle`は、ストレージバックエンドで作成されたボリュームスナップショットの一意の識別子です。このフィールドは、事前プロビジョニングされたスナップショットに必要です。この`VolumeSnapshotContent`が表すストレージシステムのCSIスナップショットIDを指定します。 + +`sourceVolumeMode`は、スナップショットが作成されるボリュームのモードです。`sourceVolumeMode`フィールドの値は、`Filesystem`または`Block`のいずれかです。ソースボリュームモードが指定されていない場合、Kubernetesはスナップショットをソースボリュームのモードが不明であるかのように扱います。 + +`volumeSnapshotRef`は、対応する`VolumeSnapshot`の参照です。`VolumeSnapshotContent`が事前プロビジョニングされたスナップショットとして作成されている場合、`volumeSnapshotRef`で参照される`VolumeSnapshot`がまだ存在しない可能性があることに注意してください。 + +## スナップショットのボリュームモードの変換 {#convert-volume-mode} + +クラスターにインストールされている`VolumeSnapshots`APIが`sourceVolumeMode`フィールドをサポートしている場合、APIには、権限のないユーザーがボリュームのモードを変換するのを防ぐ機能があります。 + +クラスターにこの機能の機能があるかどうかを確認するには、次のコマンドを実行します。 + +```yaml +$ kubectl get crd volumesnapshotcontent -o yaml +``` + +ユーザーが既存の`VolumeSnapshot`から`PersistentVolumeClaim`を作成できるようにしたいが、ソースとは異なるボリュームモードを使用する場合は、`VolumeSnapshot`に対応する`VolumeSnapshotContent`にアノテーション`snapshot.storage.kubernetes.io/allowVolumeModeChange: "true"`を追加する必要があります。 + +事前プロビジョニングされたスナップショットの場合、クラスター管理者が`spec.sourceVolumeMode`を入力する必要があります。 + +この機能を有効にした`VolumeSnapshotContent`リソースの例は次のようになります。 + +```yaml +apiVersion: snapshot.storage.k8s.io/v1 +kind: 
VolumeSnapshotContent +metadata: + name: new-snapshot-content-test + annotations: + - snapshot.storage.kubernetes.io/allowVolumeModeChange: "true" +spec: + deletionPolicy: Delete + driver: hostpath.csi.k8s.io + source: + snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002 + sourceVolumeMode: Filesystem + volumeSnapshotRef: + name: new-snapshot-test + namespace: default +``` + +## スナップショットからのボリュームのプロビジョニング + +`PersistentVolumeClaim`オブジェクトの*dataSource*フィールドを使用して、スナップショットからのデータが事前に取り込まれた新しいボリュームをプロビジョニングできます。 + +詳細については、[ボリュームのスナップショットとスナップショットからのボリュームの復元](/ja/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support)を参照してください。 diff --git a/content/ja/docs/concepts/workloads/controllers/deployment.md b/content/ja/docs/concepts/workloads/controllers/deployment.md index b62151fc54583..548416d73e55b 100644 --- a/content/ja/docs/concepts/workloads/controllers/deployment.md +++ b/content/ja/docs/concepts/workloads/controllers/deployment.md @@ -145,7 +145,7 @@ Deploymentに対して適切なセレクターとPodテンプレートのラベ ## Deploymentの更新 {#updating-a-deployment} {{< note >}} -Deploymentのロールアウトは、DeploymentのPodテンプレート(この場合`.spec.template`)が変更された場合にのみトリガーされます。例えばテンプレートのラベルもしくはコンテナーイメージが更新された場合です。Deploymentのスケールのような更新では、ロールアウトはトリガーされません。 +Deploymentのロールアウトは、DeploymentのPodテンプレート(この場合`.spec.template`)が変更された場合にのみトリガーされます。例えばテンプレートのラベルもしくはコンテナイメージが更新された場合です。Deploymentのスケールのような更新では、ロールアウトはトリガーされません。 {{< /note >}} Deploymentを更新するには以下のステップに従ってください。 @@ -938,7 +938,7 @@ Deploymentを使って一部のユーザーやサーバーに対してリリー ## Deployment Specの記述 他の全てのKubernetesの設定と同様に、Deploymentは`.apiVersion`、`.kind`や`.metadata`フィールドを必要とします。 -設定ファイルの利用に関する情報は[アプリケーションのデプロイ](/ja/docs/tasks/run-application/run-stateless-application-deployment/)を参照してください。コンテナーの設定に関しては[リソースを管理するためのkubectlの使用](/ja/docs/concepts/overview/working-with-objects/object-management/)を参照してください。 +設定ファイルの利用に関する情報は[アプリケーションのデプロイ](/ja/docs/tasks/run-application/run-stateless-application-deployment/)を参照してください。コンテナの設定に関しては[リソースを管理するためのkubectlの使用](/ja/docs/concepts/overview/working-with-objects/object-management/)を参照してください。 Deploymentオブジェクトの名前は、有効な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)でなければなりません。 Deploymentは[`.spec`セクション](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)も必要とします。 @@ -1008,7 +1008,7 @@ Deploymentのセレクターに一致するラベルを持つPodを直接作成 ### Min Ready Seconds {#min-ready-seconds} -`.spec.minReadySeconds`はオプションのフィールドで、新しく作成されたPodが利用可能となるために、最低どれくらいの秒数コンテナーがクラッシュすることなく稼働し続ければよいかを指定するものです。デフォルトでは0です(Podは作成されるとすぐに利用可能と判断されます)。Podが利用可能と判断された場合についてさらに学ぶために[Container Probes](/ja/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)を参照してください。 +`.spec.minReadySeconds`はオプションのフィールドで、新しく作成されたPodが利用可能となるために、最低どれくらいの秒数コンテナがクラッシュすることなく稼働し続ければよいかを指定するものです。デフォルトでは0です(Podは作成されるとすぐに利用可能と判断されます)。Podが利用可能と判断された場合についてさらに学ぶために[Container Probes](/ja/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)を参照してください。 ### リビジョン履歴の保持上限 diff --git a/content/ja/docs/concepts/workloads/pods/init-containers.md b/content/ja/docs/concepts/workloads/pods/init-containers.md index 3f41a67118c93..eba6d063d416e 100644 --- a/content/ja/docs/concepts/workloads/pods/init-containers.md +++ b/content/ja/docs/concepts/workloads/pods/init-containers.md @@ -261,5 +261,4 @@ Kubernetes v1.20以降では、initコンテナのイメージが変更された ## {{% heading "whatsnext" %}} {#what-s-next} * [Initコンテナを含むPodの作成](/docs/tasks/configure-pod-container/configure-pod-initialization/#creating-a-pod-that-has-an-init-container)方法について学ぶ。 -* 
[Initコンテナのデバッグ](/ja/docs/tasks/debug-application-cluster/debug-init-containers/)を行う方法について学ぶ。 - +* [Initコンテナのデバッグ](/ja/docs/tasks/debug/debug-application/debug-init-containers/)を行う方法について学ぶ。 diff --git a/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md b/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md index e42fdc8f423dc..b633aa6d4322d 100644 --- a/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md @@ -22,13 +22,13 @@ Podはその生存期間に1回だけ[スケジューリング](/docs/concepts/s 個々のアプリケーションコンテナと同様に、Podは(永続的ではなく)比較的短期間の存在と捉えられます。Podが作成されると、一意のID([UID](/ja/docs/concepts/overview/working-with-objects/names/#uids))が割り当てられ、(再起動ポリシーに従って)終了または削除されるまでNodeで実行されるようにスケジュールされます。 {{< glossary_tooltip term_id="node" >}}が停止した場合、そのNodeにスケジュールされたPodは、タイムアウト時間の経過後に[削除](#pod-garbage-collection)されます。 -Pod自体は、自己修復しません。Podが{{< glossary_tooltip text="node" term_id="node" >}}にスケジュールされ、その後に失敗、またはスケジュール操作自体が失敗した場合、Podは削除されます。同様に、リソースの不足またはNodeのメンテナンスによりPodはNodeから立ち退きます。Kubernetesは、比較的使い捨てのPodインスタンスの管理作業を処理する、{{< glossary_tooltip term_id="controller" text="controller" >}}と呼ばれる上位レベルの抽象化を使用します。 +Pod自体は、自己修復しません。Podが{{< glossary_tooltip text="node" term_id="node" >}}にスケジュールされ、その後に失敗した場合、Podは削除されます。同様に、リソースの不足またはNodeのメンテナンスによりPodはNodeから立ち退きます。Kubernetesは、比較的使い捨てのPodインスタンスの管理作業を処理する、{{< glossary_tooltip term_id="controller" text="controller" >}}と呼ばれる上位レベルの抽象化を使用します。 特定のPod(UIDで定義)は新しいNodeに"再スケジュール"されません。代わりに、必要に応じて同じ名前で、新しいUIDを持つ同一のPodに置き換えることができます。 {{< glossary_tooltip term_id="volume" text="volume" >}}など、Podと同じ存続期間を持つものがあると言われる場合、それは(そのUIDを持つ)Podが存在する限り存在することを意味します。そのPodが何らかの理由で削除された場合、たとえ同じ代替物が作成されたとしても、関連するもの(例えばボリューム)も同様に破壊されて再作成されます。 -{{< figure src="/images/docs/pod.svg" title="Podの図" width="50%" >}} +{{< figure src="/images/docs/pod.svg" title="Podの図" class="diagram-medium" >}} *file puller(ファイル取得コンテナ)とWebサーバーを含むマルチコンテナのPod。コンテナ間の共有ストレージとして永続ボリュームを使用しています。* @@ -51,6 +51,10 @@ Podの各フェーズの値と意味は厳重に守られています。ここ `Failed` | Pod内のすべてのコンテナが終了し、少なくとも1つのコンテナが異常終了しました。つまり、コンテナはゼロ以外のステータスで終了したか、システムによって終了されました。 `Unknown` | 何らかの理由によりPodの状態を取得できませんでした。このフェーズは通常はPodのホストとの通信エラーにより発生します。 +{{< note >}} +Podの削除中に、kubectlコマンドには`Terminating`が出力されることがあります。この`Terminating`ステータスは、Podのフェーズではありません。Podには、正常に終了するための期間を与えられており、デフォルトは30秒です。`--force`フラグを使用して、[Podを強制的に削除する](/ja/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination-forced)ことができます。 +{{< /note >}} + Nodeが停止するか、クラスタの残りの部分から切断された場合、Kubernetesは失われたNode上のすべてのPodの`Phase`をFailedに設定するためのポリシーを適用します。 ## コンテナのステータス {#container-states} @@ -75,7 +79,7 @@ Podのコンテナの状態を確認するには`kubectl describe pod [POD_NAME] `Terminated`状態のコンテナは実行されて、完了したときまたは何らかの理由で失敗したことを示します。`Terminated`状態のコンテナを持つPodに対して`kubectl`コマンドを使用すると、いずれにせよ理由と終了コード、コンテナの開始時刻と終了時刻が表示されます。 -コンテナがTerminatedに入る前に`preStop`フックがあれば実行されます。 +コンテナが`Terminated`に入る前に`preStop`フックがあれば実行されます。 ## コンテナの再起動ポリシー {#restart-policy} @@ -85,17 +89,18 @@ Podの`spec`には、Always、OnFailure、またはNeverのいずれかの値を ## PodのCondition {#pod-conditions} -PodにはPodStatusがあります。それはPodが成功したかどうかの情報を持つ[PodConditions](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podcondition-v1-core)の配列です。 +PodにはPodStatusがあります。それにはPodが成功したかどうかの情報を持つ[PodCondition](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podcondition-v1-core)の配列が含まれています。kubeletは、下記のPodConditionを管理します: * `PodScheduled`: PodがNodeにスケジュールされました。 +* `PodHasNetwork`: (アルファ版機能; [明示的に有効](#pod-has-network)にしなければならない) Podサンドボックスが正常に成功され、ネットワークの設定が完了しました。 * `ContainersReady`: Pod内のすべてのコンテナが準備できた状態です。 -* `Initialized`: 
すべての[Initコンテナ](/ja/docs/concepts/workloads/pods/init-containers)が正常に実行されました。 +* `Initialized`: すべての[Initコンテナ](/ja/docs/concepts/workloads/pods/init-containers)が正常に終了しました。 * `Ready`: Podはリクエストを処理でき、一致するすべてのサービスの負荷分散プールに追加されます。 フィールド名 | 内容 :--------------------|:----------- `type` | このPodの状態の名前です。 -`status` | その状態が適用可能かどうか示します。可能な値は"`True`"と"`False`"、"`Unknown`"のうちのいずれかです。 +`status` | その状態が適用可能かどうか示します。可能な値は"`True`"、"`False`"、"`Unknown`"のうちのいずれかです。 `lastProbeTime` | Pod Conditionが最後に確認されたときのタイムスタンプが表示されます。 `lastTransitionTime` | 最後にPodのステータスの遷移があった際のタイムスタンプが表示されます。 `reason` | 最後の状態遷移の理由を示す、機械可読のアッパーキャメルケースのテキストです。 @@ -105,7 +110,7 @@ PodにはPodStatusがあります。それはPodが成功したかどうかの {{< feature-state for_k8s_version="v1.14" state="stable" >}} -追加のフィードバックやシグナルをPodStatus:_Pod readiness_に注入できるようにします。これを使用するには、Podの`spec`で`readinessGates`を設定して、kubeletがPodのReadinessを評価する追加の状態のリストを指定します。 +追加のフィードバックやシグナルをPodStatus:*Pod readiness*に注入できるようにします。これを使用するには、Podの`spec`で`readinessGates`を設定して、kubeletがPodのReadinessを評価する追加の状態のリストを指定します。 ReadinessゲートはPodの`status.conditions`フィールドの現在の状態によって決まります。Kubernetesが`Podのstatus.conditions`フィールドでそのような状態を発見できない場合、ステータスはデフォルトで`False`になります。 @@ -146,73 +151,118 @@ PodのConditionは、Kubernetesの[label key format](/ja/docs/concepts/overview/ Podのコンテナは準備完了ですが、少なくとも1つのカスタムのConditionが欠落しているか「False」の場合、kubeletはPodの[Condition](#pod-condition)を`ContainersReady`に設定します。 +### PodのネットワークのReadiness {#pod-has-network} + +{{< feature-state for_k8s_version="v1.25" state="alpha" >}} + +Podがノードにスケジュールされた後、kubeletによって承認され、任意のボリュームがマウントされる必要があります。これらのフェーズが完了すると、kubeletはコンテナランタイム({{< glossary_tooltip term_id="cri" >}}を使用)と連携して、ランタイムサンドボックスのセットアップとPodのネットワークを構成します。もし`PodHasNetworkCondition`[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)が有効になっている場合、kubeletは、Podがこの初期化の節目に到達したかどうかをPodの`status.conditions`フィールドにある`PodHasNetwork`状態を使用して報告します。 + +ネットワークが設定されたランタイムサンドボックスがPodにないことを検出すると、`PodHasNetwork`状態は、kubelet によって`False`に設定されます。これは、以下のシナリオで発生します: +* Podのライフサイクルの初期で、kubeletがコンテナランタイムを使用してPodのサンドボックスのセットアップをまだ開始していないとき +* Podのライフサイクルの後期で、Podのサンドボックスが以下のどちらかの原因で破壊された場合: + * Podを退去させず、ノードが再起動する + * コンテナランタイムの隔離に仮想マシンを使用している場合、Podサンドボックスの仮想マシンが再起動し、新しいサンドボックスと新しいコンテナネットワーク設定を作成する必要があります + +ランタイムプラグインによるサンドボックスの作成とPodのネットワーク設定が正常に完了すると、kubeletによって`PodHasNetwork`状態が`True`に設定されます。`PodHasNetwork`状態が`True`に設定された後、kubeletはコンテナイメージの取得とコンテナの作成を開始することができます。 + +initコンテナを持つPodの場合、initコンテナが正常に完了すると(ランタイムプラグインによるサンドボックスの作成とネットワーク設定が正常に行われた後に発生)、kubeletは`Initialized`状態を`True`に設定します。initコンテナがないPodの場合、サンドボックスの作成およびネットワーク設定が開始する前にkubeletは`Initialized`状態を`True`に設定します。 + ## コンテナのProbe {#container-probes} -[Probe](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core) は [kubelet](/docs/reference/command-line-tools-reference/kubelet/) により定期的に実行されるコンテナの診断です。診断を行うために、kubeletはコンテナに実装された [Handler](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#handler-v1-core)を呼びます。Handlerには次の3つの種類があります: +*Probe*は[kubelet](/docs/reference/command-line-tools-reference/kubelet/) により定期的に実行されるコンテナの診断です。診断を行うために、kubeletはコンテナ内でコードを実行するか、ネットワークリクエストします。 -* [ExecAction](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#execaction-v1-core): - コンテナ内で特定のコマンドを実行します。コマンドがステータス0で終了した場合に診断を成功と見まします。 +### チェックのメカニズム {#probe-check-methods} -* [TCPSocketAction](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#tcpsocketaction-v1-core): - PodのIPの特定のポートにTCPチェックを行います。 - そのポートが空いていれば診断を成功とみなします。 +probeを使ってコンテナをチェックする4つの異なる方法があります。 +各probeは、この4つの仕組みのうち1つを正確に定義する必要があります: + +`exec` +: 
コンテナ内で特定のコマンドを実行します。コマンドがステータス0で終了した場合に診断を成功と見なします。 + +`grpc` +: [gRPC](https://grpc.io/)を使ってリモートプロシージャコールを実行します。 + ターゲットは、[gRPC health checks](https://grpc.io/grpc/core/md_doc_health-checking.html)を実装する必要があります。 + レスポンスの`status`が`SERVING`の場合に診断を成功と見なします。 + gRPCはアルファ版の機能のため、`GRPCContainerProbe`[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)が + 有効の場合のみ利用可能です。 -* [HTTPGetAction](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#httpgetaction-v1-core): - PodのIPの特定のポートとパスに対して、HTTP GETのリクエストを送信します。 +`httpGet` +: PodのIPアドレスに対して、指定されたポートとパスでHTTP `GET`のリクエストを送信します。 レスポンスのステータスコードが200以上400未満の際に診断を成功とみなします。 +`tcpSocket` +: PodのIPアドレスに対して、指定されたポートでTCPチェックを行います。 + そのポートが空いていれば診断を成功とみなします。 + オープンしてすぐにリモートシステム(コンテナ)が接続を切断した場合、健全な状態としてカウントします。 + +### Probeの結果 {#probe-outcome} + 各Probe 次の3つのうちの一つの結果を持ちます: -* `Success`: コンテナの診断が成功しました。 -* `Failure`: コンテナの診断が失敗しました。 -* `Unknown`: コンテナの診断が失敗し、取れるアクションがありません。 +`Success` +: コンテナの診断が成功しました。 + +`Failure` +: コンテナの診断が失敗しました。 + +`Unknown` +: コンテナの診断が失敗しました(何も実行する必要はなく、kubeletはさらにチェックを行います)。 + +### Probeの種類 {#types-of-probe} -Kubeletは3種類のProbeを実行中のコンテナで行い、また反応することができます: +kubeletは3種類のProbeを実行中のコンテナで行い、また反応することができます: -* `livenessProbe`: コンテナが動いているかを示します。 - livenessProbe に失敗すると、kubeletはコンテナを殺します、そしてコンテナは[restart policy](#restart-policy)に従います。 - コンテナにlivenessProbeが設定されていない場合、デフォルトの状態は`Success`です。 +`livenessProbe` +: コンテナが動いているかを示します。 + livenessProbeに失敗すると、kubeletはコンテナを殺します、そしてコンテナは[restart policy](#restart-policy)に従います。 + コンテナにlivenessProbeが設定されていない場合、デフォルトの状態は`Success`です。 -* `readinessProbe`: コンテナがリクエスト応答する準備ができているかを示します。 - readinessProbeに失敗すると、エンドポイントコントローラーにより、ServiceからそのPodのIPアドレスが削除されます。 - initial delay前のデフォルトのreadinessProbeの初期値は`Failure`です。 - コンテナにreadinessProbeが設定されていない場合、デフォルトの状態は`Success`です。 +`readinessProbe` +: コンテナがリクエスト応答する準備ができているかを示します。 + readinessProbeに失敗すると、エンドポイントコントローラーにより、ServiceからそのPodのIPアドレスが削除されます。 + initial delay前のデフォルトのreadinessProbeの初期値は`Failure`です。 + コンテナにreadinessProbeが設定されていない場合、デフォルトの状態は`Success`です。 -* `startupProbe`: コンテナ内のアプリケーションが起動したかどうかを示します。 - startupProbeが設定された場合、完了するまでその他のすべてのProbeは無効になります。 - startupProbeに失敗すると、kubeletはコンテナを殺します、そしてコンテナは[restart policy](#restart-policy)に従います。 - コンテナにstartupProbeが設定されていない場合、デフォルトの状態は`Success`です。 +`startupProbe` +: コンテナ内のアプリケーションが起動したかどうかを示します。 + startupProbeが設定された場合、完了するまでその他のすべてのProbeは無効になります。 + startupProbeに失敗すると、kubeletはコンテナを殺します、そしてコンテナは[restart policy](#restart-policy)に従います。 + コンテナにstartupProbeが設定されていない場合、デフォルトの状態は`Success`です。 livenessProbe、readinessProbeまたはstartupProbeを設定する方法の詳細については、[Liveness Probe、Readiness ProbeおよびStartup Probeを使用する](/ja/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)を参照してください。 -### livenessProbeをいつ使うべきか? {#when-should-you-use-a-liveness-probe} +#### livenessProbeをいつ使うべきか? {#when-should-you-use-a-liveness-probe} {{< feature-state for_k8s_version="v1.0" state="stable" >}} -コンテナ自体に問題が発生した場合や状態が悪くなった際にクラッシュすることができればlivenessProbeは不要です. +コンテナ自体に問題が発生した場合や状態が悪くなった際にクラッシュすることができればlivenessProbeは不要です。 この場合kubeletが自動でPodの`restartPolicy`に基づいたアクションを実行します。 Probeに失敗したときにコンテナを殺したり再起動させたりするには、livenessProbeを設定し`restartPolicy`をAlwaysまたはOnFailureにします。 -### readinessProbeをいつ使うべきか? {#when-should-you-use-a-readiness-probe} +#### readinessProbeをいつ使うべきか? 
{#when-should-you-use-a-readiness-probe} {{< feature-state for_k8s_version="v1.0" state="stable" >}} Probeが成功したときにのみPodにトラフィックを送信したい場合は、readinessProbeを指定します。 -この場合readinessProbeはlivenessProbeと同じになる可能性がありますが、readinessProbeが存在するということは、Podがトラフィックを受けずに開始され、Probe成功が開始した後でトラフィックを受け始めることになります。コンテナが起動時に大きなデータ、構成ファイル、またはマイグレーションを読み込む必要がある場合は、readinessProbeを指定します。 +この場合readinessProbeはlivenessProbeと同じになる可能性がありますが、readinessProbeが存在するということは、Podがトラフィックを受けずに開始され、Probe成功が開始した後でトラフィックを受け始めることになります。 コンテナがメンテナンスのために停止できるようにするには、livenessProbeとは異なる、特定のエンドポイントを確認するreadinessProbeを指定することができます。 +アプリがバックエンドサービスと厳密な依存関係にある場合、livenessProbeとreadinessProbeの両方を実装することができます。アプリ自体が健全であればlivenessProbeはパスしますが、readinessProbeはさらに、必要なバックエンドサービスが利用可能であるかどうかをチェックします。これにより、エラーメッセージでしか応答できないPodへのトラフィックの転送を避けることができます。 + +コンテナの起動中に大きなデータ、構成ファイル、またはマイグレーションを読み込む必要がある場合は、[startupProbe](#when-should-you-use-a-startup-probe)を使用できます。ただし、失敗したアプリと起動データを処理中のアプリの違いを検出したい場合は、readinessProbeを使用した方が良いかもしれません。 + {{< note >}} Podが削除されたときにリクエストを来ないようにするためには必ずしもreadinessProbeが必要というわけではありません。Podの削除時にはreadinessProbeが存在するかどうかに関係なくPodは自動的に自身をunreadyにします。Pod内のコンテナが停止するのを待つ間Podはunreadyのままです。 {{< /note >}} -### startupProbeをいつ使うべきか? {#when-should-you-use-a-startup-probe} +#### startupProbeをいつ使うべきか? {#when-should-you-use-a-startup-probe} -{{< feature-state for_k8s_version="v1.18" state="beta" >}} +{{< feature-state for_k8s_version="v1.20" state="stable" >}} startupProbeは、サービスの開始に時間がかかるコンテナを持つPodに役立ちます。livenessProbeの間隔を長く設定するのではなく、コンテナの起動時に別のProbeを構成して、livenessProbeの間隔よりも長い時間を許可できます。 -コンテナの起動時間が、`initialDelaySeconds + failureThreshold x periodSeconds`よりも長い場合は、livenessProbeと同じエンドポイントをチェックするためにstartupProbeを指定します。`periodSeconds`のデフォルトは30秒です。次に、`failureThreshold`をlivenessProbeのデフォルト値を変更せずにコンテナが起動できるように、十分に高い値を設定します。これによりデッドロックを防ぐことができます。 +コンテナの起動時間が、`initialDelaySeconds + failureThreshold x periodSeconds`よりも長い場合は、livenessProbeと同じエンドポイントをチェックするためにstartupProbeを指定します。`periodSeconds`のデフォルトは10秒です。次に、`failureThreshold`をlivenessProbeのデフォルト値を変更せずにコンテナが起動できるように、十分に高い値を設定します。これによりデッドロックを防ぐことができます。 ## Podの終了 {#pod-termination} @@ -228,7 +278,7 @@ Podは、クラスター内のNodeで実行中のプロセスを表すため、 1. API server内のPodは、猶予期間を越えるとPodが「死んでいる」と見なされるように更新される。 削除中のPodに対して`kubectl describe`コマンドを使用すると、Podは「終了中」と表示される。 Podが実行されているNode上で、Podが終了しているとマークされている(正常な終了期間が設定されている)とkubeletが認識するとすぐに、kubeletはローカルでPodの終了プロセスを開始します。 - 1. Pod内のコンテナの1つが`preStop`[フック](/ja/docs/concepts/containers/container-lifecycle-hooks/#hook-details)を定義している場合は、コンテナの内側で呼び出される。猶予期間が終了した後も `preStop`フックがまだ実行されている場合は、一度だけ猶予期間を延長される(2秒)。 + 1. Pod内のコンテナの1つが`preStop`[フック](/ja/docs/concepts/containers/container-lifecycle-hooks)を定義している場合は、コンテナの内側で呼び出される。猶予期間が終了した後も`preStop`フックがまだ実行されている場合は、一度だけ猶予期間を延長される(2秒)。 {{< note >}} `preStop`フックが完了するまでにより長い時間が必要な場合は、`terminationGracePeriodSeconds`を変更する必要があります。 {{< /note >}} @@ -236,7 +286,7 @@ Podは、クラスター内のNodeで実行中のプロセスを表すため、 {{< note >}} Pod内のすべてのコンテナが同時にTERMシグナルを受信するわけではなく、シャットダウンの順序が問題になる場合はそれぞれに`preStop`フックを使用して同期することを検討する。 {{< /note >}} -1. kubeletが正常な終了を開始すると同時に、コントロールプレーンは、終了中のPodをEndpoints(および有効な場合はEndpointSlice)オブジェクトから削除します。これらのオブジェクトは、{{< glossary_tooltip text="selector" term_id="selector" >}}が設定された{{< glossary_tooltip term_id="service" text="Service" >}}を表します。{{< glossary_tooltip text="ReplicaSets" term_id="replica-set" >}}とその他のワークロードリソースは、終了中のPodを有効なサービス中のReplicaSetとして扱いません。ゆっくりと終了するPodは、(サービスプロキシーのような)ロードバランサーが終了猶予期間が_始まる_とエンドポイントからそれらのPodを削除するので、トラフィックを継続して処理できません。 +1. 
kubeletが正常な終了を開始すると同時に、コントロールプレーンは、終了中のPodをEndpointSlice(およびEndpoints)オブジェクトから削除します。これらのオブジェクトは、{{< glossary_tooltip text="selector" term_id="selector" >}}が設定された{{< glossary_tooltip term_id="service" text="Service" >}}を表します。{{< glossary_tooltip text="ReplicaSets" term_id="replica-set" >}}とその他のワークロードリソースは、終了中のPodを有効なサービス中のReplicaSetとして扱いません。ゆっくりと終了するPodは、(サービスプロキシーのような)ロードバランサーが終了猶予期間が*始まる*とエンドポイントからそれらのPodを削除するので、トラフィックを継続して処理できません。 1. 猶予期間が終了すると、kubeletは強制削除を開始する。コンテナランタイムは、Pod内でまだ実行中のプロセスに`SIGKILL`を送信する。kubeletは、コンテナランタイムが非表示の`pause`コンテナを使用している場合、そのコンテナをクリーンアップします。 1. kubeletは猶予期間を0(即時削除)に設定することでAPI server上のPodの削除を終了する。 1. API serverはPodのAPIオブジェクトを削除し、クライアントからは見えなくなります。 @@ -258,6 +308,11 @@ Podは、クラスター内のNodeで実行中のプロセスを表すため、 強制削除が実行されると、API serverは、Podが実行されていたNode上でPodが停止されたというkubeletからの確認を待ちません。API内のPodは直ちに削除されるため、新しいPodを同じ名前で作成できるようになります。Node上では、すぐに終了するように設定されるPodは、強制終了される前にわずかな猶予期間が与えられます。 +{{< caution >}} +即時削除では、実行中のリソースの終了を待ちません。 +リソースはクラスタ上で無期限に実行し続ける可能性があります。 +{{< /caution >}} + StatefulSetのPodについては、[StatefulSetからPodを削除するためのタスクのドキュメント](/ja/docs/tasks/run-application/force-delete-stateful-set-pod/)を参照してください。 @@ -271,10 +326,10 @@ StatefulSetのPodについては、[StatefulSetからPodを削除するための ## {{% heading "whatsnext" %}} -* [attaching handlers to Container lifecycle events](/ja/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/)のハンズオンをやってみる +* [コンテナライフサイクルイベントへのハンドラー紐付け](/ja/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/)のハンズオンをやってみる -* [Configure Liveness, Readiness and Startup Probes](/ja/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)のハンズオンをやってみる +* [Liveness Probe、Readiness ProbeおよびStartup Probeを使用する](/ja/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)のハンズオンをやってみる -* [Container lifecycle hooks](/ja/docs/concepts/containers/container-lifecycle-hooks/)についてもっと学ぶ +* [コンテナライフサイクルフック](/ja/docs/concepts/containers/container-lifecycle-hooks/)についてもっと学ぶ -* APIのPod/コンテナステータスの詳細情報は[PodStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podstatus-v1-core)および[ContainerStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#containerstatus-v1-core)を参照してください +* APIにおけるPodとコンテナのステータスに関する詳細情報は、Podの[`.status`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodStatus)に書かれているAPIリファレンスドキュメントを参照してください。 diff --git a/content/ja/docs/contribute/style/content-organization.md b/content/ja/docs/contribute/style/content-organization.md index 1ecc86307f701..c69b66404ef45 100644 --- a/content/ja/docs/contribute/style/content-organization.md +++ b/content/ja/docs/contribute/style/content-organization.md @@ -1,7 +1,7 @@ --- title: コンテンツの構造化 content_type: concept -weight: 40 +weight: 90 --- @@ -28,7 +28,7 @@ weight: 10 ``` {{% note %}} -ページのweightについては、1、2、3…などの値を使用せず、10、20、30…のように一定の間隔を空けた方が賢明です。こうすることで、後で別のページを間に挿入できるようになります。 +ページのweightについては、1、2、3…などの値を使用せず、10、20、30…のように一定の間隔を空けた方が賢明です。こうすることで、後で別のページを間に挿入できるようになります。さらに、同じディレクトリ(セクション)内の各ページのweightは、重複しないようにする必要があります。これにより、特にローカライズされたコンテンツでは、コンテンツが常に正しく整列されるようになります。 {{% /note %}} ### ドキュメントのメインメニュー diff --git a/content/ja/docs/contribute/style/hugo-shortcodes/example1.md b/content/ja/docs/contribute/style/hugo-shortcodes/example1.md new file mode 100644 index 0000000000000..ba0c87fac19f3 --- /dev/null +++ b/content/ja/docs/contribute/style/hugo-shortcodes/example1.md @@ -0,0 +1,9 @@ +--- +title: 例 #1 +--- + +これは**挿入**leaf bundle内のコンテンツファイルの**例**です。 + +{{< note >}} +挿入されたコンテンツファイル内でもショートコードを使用することができます。 +{{< /note >}} diff 
--git a/content/ja/docs/contribute/style/hugo-shortcodes/example2.md b/content/ja/docs/contribute/style/hugo-shortcodes/example2.md new file mode 100644 index 0000000000000..630efbd919a1d --- /dev/null +++ b/content/ja/docs/contribute/style/hugo-shortcodes/example2.md @@ -0,0 +1,7 @@ +--- +title: 例 #1 +--- + +これは**挿入**leaf bundle内のコンテンツファイルのもう一つの**例**です + + diff --git a/content/ja/docs/contribute/style/hugo-shortcodes/index.md b/content/ja/docs/contribute/style/hugo-shortcodes/index.md new file mode 100644 index 0000000000000..a5596e027bb17 --- /dev/null +++ b/content/ja/docs/contribute/style/hugo-shortcodes/index.md @@ -0,0 +1,345 @@ +--- +title: カスタムHugoショートコード +content_type: concept +--- + + +このページではKubernetesのマークダウンドキュメント内で使用できるHugoショートコードについて説明します。 + +ショートコードについての詳細は[Hugoのドキュメント](https://gohugo.io/content-management/shortcodes)を読んでください。 + + + +## 機能の状態 + +このサイトのマークダウンページ(`.md`ファイル)内では、説明されている機能のバージョンや状態を表示するためにショートコードを使用することができます。 + +### 機能の状態のデモ + +最新のKubernetesバージョンで機能をstableとして表示するためのデモスニペットを次に示します。 + +``` +{{}} +``` + +これは次の様に表示されます: + +{{< feature-state state="stable" >}} + +`state`の値として妥当な値は次のいずれかです: + +* alpha +* beta +* deprecated +* stable + +### 機能の状態コード + +表示されるKubernetesのバージョンのデフォルトはそのページのデフォルトまたはサイトのデフォルトです。 +`for_k8s_version`パラメータを渡すことにより、機能の状態バージョンを変更することができます。 +例えば: + +``` +{{}} +``` + +これは次の様に表示されます: + +{{< feature-state for_k8s_version="v1.10" state="beta" >}} + +## 用語集 + +用語集に関連するショートコードとして、`glossary_tooltip`と`glossary_definition`の二つがあります。 + +コンテンツを自動的に更新し、[用語集](/ja/docs/reference/glossary/)へのリンクを付与する挿入を使用して、用語を参照することができます。 +用語がマウスオーバーされると、用語集の内容がツールチップとして表示されます。 +また、用語はリンクとして表示されます。 + +ツールチップの挿入と同様に、用語集の定義も再利用することができます。 + + +用語集の用語データは[glossaryディレクトリ](https://github.com/kubernetes/website/tree/main/content/en/docs/reference/glossary)に、それぞれの用語のファイルとして保存されています。 + +### 用語集のデモ + +例えば、マークダウン内でツールチップ付きの{{< glossary_tooltip text="cluster" term_id="cluster" >}}を表示するには、次の挿入を使用します: + +``` +{{}} +``` + +用語集の定義はこのようにします: + +``` +{{}} +``` + +これは次の様に表示されます: +{{< glossary_definition prepend="A cluster is" term_id="cluster" length="short" >}} + +完全な用語定義を挿入することもできます: + +``` +{{}} +``` + +これは次の様に表示されます: +{{< glossary_definition term_id="cluster" length="all" >}} + +## APIリファレンスへのリンク + +`api-reference`ショートコードを使用することで、Kubernetes APIリファレンスへのリンクを作成することができます。 +例えば、{{< api-reference page="workload-resources/pod-v1" >}}への参照方法は次の通りです: + +``` +{{}} +``` + +`page`パラメーターの値はAPIリファレンスページのURLの末尾です。 + +`anchor`パラメーターを指定することでページ内の特定の場所へリンクすることもできます。 +例えば、{{< api-reference page="workload-resources/pod-v1" anchor="PodSpec" >}}や{{< api-reference page="workload-resources/pod-v1" anchor="environment-variables" >}}へのリンクは次の様に書きます: + +``` +{{}} +{{}} +``` + +`text`パラメーターを指定することでリンクテキストを変更することもできます。 +例えば、{{< api-reference page="workload-resources/pod-v1" anchor="environment-variables" text="Environment Variables">}}へのリンクは次の様に書きます: + +``` +{{}} +``` + +## テーブルキャプション + +テーブルキャプションを追加することで、表をスクリーンリーダーにとってよりアクセスしやすいものにする事ができます。 +表へ[キャプション](https://www.w3schools.com/tags/tag_caption.asp)を追加するには、表を`table`ショートコードで囲い、`caption`パラメーターにキャプションを指定します。 + +{{< note >}} +テーブルキャプションはスクリーンリーダーからは読むことができますが、標準的なHTMLでは読むことができません。 +{{< /note >}} + +例えば、次の様に書きます: + +```go-html-template +{{}} +Parameter | Description | Default +:---------|:------------|:------- +`timeout` | The timeout for requests | `30s` +`logLevel` | The log level for log output | `INFO` +{{< /table */>}} +``` + +これは次の様に表示されます: + +{{< table caption="Configuration parameters" >}} +Parameter | Description | Default +:---------|:------------|:------- +`timeout` 
| The timeout for requests | `30s` +`logLevel` | The log level for log output | `INFO` +{{< /table >}} + +この表に対するHTMLを検査すると、次の要素が``要素のすぐ次にあるのを見ることができるでしょう: + +```html + +``` + +## タブ + +このサイトのマークダウンページ(`.md`ファイル)内では、あるソリューションに対する複数のフレーバーを表示するためのタブセットを追加することができます。 + +`tabs`ショートコードはこれらのパラメーターを受けとります: + +* `name`: タブに表示される名前 +* `codelang`: 内側の`tab`ショートコードにこれを指定した場合、Hugoはハイライトに使用するコード言語を知ることができます。 +* `include`: タブ内で挿入するファイル。Hugo [leaf bundle](https://gohugo.io/content-management/page-bundles/#leaf-bundles)内にタブがある場合そのファイル(HugoがサポートしているどのMIMEタイプでも良い)はそのbundle自身によって探されます。 + もしそうでない場合、そのコンテントページは現在のページから相対的に探されます。 + `include`を使う場合、ショートコードの内部コンテンツはなく、自己終了構文を使用する必要があることに注意してください。 + 例えば、`{{}}`の様にします。 + `codelang`を指定するか、ファイル名から言語が特定される必要があります。 + 非コンテンツファイルはデフォルトでコードが強調表示されます。 +* もし内部コンテンツがマークダウンの場合、タブの周りに`%`デリミターを使用する必要があります。 + 例えば、`{{%/* tab name="Tab 1" %}}This is **markdown**{{% /tab */%}}`の様にします。 +* タブセット内で、上記で説明したバリエーションを組み合わせることができます。 + +タブショートコードの例を次に示します。 + +{{< note >}} +`tabs`定義内の**name**はコンテンツページ内でユニークである必要があります。 +{{< /note >}} + +### タブのデモ: コードハイライト + +```go-text-template +{{}} +{{{< tab name="Tab 1" codelang="bash" >}} +echo "これはタブ1です。" +{{< /tab >}} +{{< tab name="Tab 2" codelang="go" >}} +println "これはタブ2です。" +{{< /tab >}}} +{{< /tabs */>}} +``` + +これは次の様に表示されます: + +{{< tabs name="tab_with_code" >}} +{{< tab name="Tab 1" codelang="bash" >}} +echo "これはタブ1です。" +{{< /tab >}} +{{< tab name="Tab 2" codelang="go" >}} +println "これはタブ2です。" +{{< /tab >}} +{{< /tabs >}} + +### タブのデモ: インラインマークダウンとHTML + +```go-html-template +{{}} +{{% tab name="Markdown" %}} +これは**なにがしかのマークダウン**です。 +{{< note >}} +ショートコードを含むこともできます。 +{{< /note >}} +{{% /tab %}} +{{< tab name="HTML" >}} +
+<div>
+    <h3>プレーンHTML</h3>
+    <p>これはなにがしかのプレーンHTMLです。</p>
+</div>
    +{{< /tab >}} +{{< /tabs */>}} +``` + +これは次の様に表示されます。 + +{{< tabs name="tab_with_md" >}} +{{% tab name="Markdown" %}} +これは**なにがしかのマークダウン**です。 +{{< note >}} +ショートコードを含むこともできます。 +{{< /note >}} + +{{% /tab %}} +{{< tab name="HTML" >}} +
+<div>
+    <h3>プレーンHTML</h3>
+    <p>これはなにがしかのプレーンHTMLです。</p>
+</div>
    +{{< /tab >}} +{{< /tabs >}} + +### タブのデモ: ファイルの読み込み + +```go-text-template +{{}} +{{< tab name="Content File #1" include="example1" />}} +{{< tab name="Content File #2" include="example2" />}} +{{< tab name="JSON File" include="podtemplate" />}} +{{< /tabs */>}} +``` + +これは次の様に表示されます: + +{{< tabs name="tab_with_file_include" >}} +{{< tab name="Content File #1" include="example1" />}} +{{< tab name="Content File #2" include="example2" />}} +{{< tab name="JSON File" include="podtemplate.json" />}} +{{< /tabs >}} + +## サードパーティーコンテンツマーカー + +Kubernetesの実行にはサードパーティーのソフトウェアが必要です。 +例えば、名前解決を行うためにはクラスターに[DNSサーバー](/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction)を追加する必要があります。 + +私たちがサードパーティーソフトウェアにリンクするときや言及するときは、[コンテンツガイド](/ja/docs/contribute/style/content-guide/)に従い、サードパーティーのものに印をつけます。 + +これらのショートコードを使用すると、それらを使用しているドキュメントページに免責事項が追加されます。 + +### リスト {#third-party-content-list} + +サードパーティーのリストには、 +``` +{{%/* thirdparty-content */%}} +``` + +をすべてのアイテムを含むセクションのヘッダーのすぐ下に追加します。 + +### アイテム {#third-party-content-item} + +ほとんどのアイテムがプロジェクト内ソフトウェア(例えばKubernetes自体や[Descheduler](https://github.com/kubernetes-sigs/descheduler)コンポーネント)を参照している場合、違う形を使用することができます。 + + +次のショートコードをアイテムの前か、特定のアイテムのヘッダーのすぐ下に追加します: +``` +{{%/* thirdparty-content single="true" */%}} +``` + + +## バージョン文字列 + +ドキュメント内でバージョン文字列を生成して挿入するために、いくつかのバージョンショートコードから選んで使用することができます。 +それぞれのバージョンショートコードはサイトの設定ファイル(`config.toml`)から取得したバージョンパラメーターの値を使用してバージョン文字列を表示します。 +最もよく使われる二つのバージョンパラメーターは`latest`と`version`です。 + +### `{{}}` + +`{{}}`ショートコードはサイトの`version`パラメーターに設定されたKubernetesドキュメントの現在のバージョンを生成します。 +`param`ショートコードはサイトパラメーターの名前の一つを受けとり、この場合は`version`を渡しています。 + +{{< note >}} +以前にリリースされたドキュメントでは`latest`と`version`の値は同じではありません。 +新しいバージョンがリリースされると、`latest`はインクリメントされ、`version`は変更されません。 +例えば、以前にリリースされたドキュメントは`version`を`v1.19`として表示し、`latest`を`v1.20`として表示します。 +{{< /note >}} + +これは次の様に表示されます: + +{{< param "version" >}} + +### `{{}}` + +`{{}}`ショートコードはサイトの`latest`パラメーターの値を返します。 +サイトの`latest`パラメーターは新しいドキュメントのバージョンがリリースされた時に更新されます。 +このパラメーターは必ずしも`version`の値と一致しません。 + +これは次の様に表示されます: + +{{< latest-version >}} + +### `{{}}` + +`{{}}`ショートコードは`latest`から"v"接頭辞を取り除いた値を生成します。 + +これは次の様に表示されます。 + +{{< latest-semver >}} + +### `{{}}` + +`{{}}`ショートコードはページに`min-kubernetes-server-version`パラメーターがあるかどうか確認し、`version`と比較するために使用します。 + +これは次の様に表示されます: + +{{< version-check >}} + +### `{{}}` + +`{{}}`ショートコードは`latest`からバージョン文字列を生成し、"v"接頭辞を取り除きます。 +このショートコードはバージョン文字列に対応したリリースノートCHANGELOGページのURLを表示します。 + +これは次の様に表示されます: + +{{< latest-release-notes >}} + +## {{% heading "whatsnext" %}} + +* [Hugo](https://gohugo.io/)について学ぶ。 +* [新しいトピックの書き方](/docs/contribute/style/write-new-topic/)について学ぶ。 +* [ページコンテンツタイプ](/docs/contribute/style/page-content-types/)について学ぶ。 +* [Pull Requestの作り方](/docs/contribute/new-content/open-a-pr/)について学ぶ。 +* [発展的コントリビュート](/docs/contribute/advanced/)について学ぶ。 + diff --git a/content/ja/docs/contribute/style/hugo-shortcodes/podtemplate.json b/content/ja/docs/contribute/style/hugo-shortcodes/podtemplate.json new file mode 100644 index 0000000000000..bd4327414a10a --- /dev/null +++ b/content/ja/docs/contribute/style/hugo-shortcodes/podtemplate.json @@ -0,0 +1,22 @@ + { + "apiVersion": "v1", + "kind": "PodTemplate", + "metadata": { + "name": "nginx" + }, + "template": { + "metadata": { + "labels": { + "name": "nginx" + }, + "generateName": "nginx-" + }, + "spec": { + "containers": [{ + "name": "nginx", + "image": "dockerfile/nginx", + "ports": [{"containerPort": 80}] + }] + } + } + } diff --git a/content/ja/docs/reference/_index.md 
b/content/ja/docs/reference/_index.md index aca4c278b5a36..cafca8fe448a6 100644 --- a/content/ja/docs/reference/_index.md +++ b/content/ja/docs/reference/_index.md @@ -32,11 +32,11 @@ content_type: concept * [kubectl](/ja/docs/reference/kubectl/overview/) - コマンドの実行やKubernetesクラスターの管理に使う主要なCLIツールです。 * [JSONPath](/ja/docs/reference/kubectl/jsonpath/) - kubectlで[JSONPath記法](https://goessner.net/articles/JsonPath/)を使うための構文ガイドです。 -* [kubeadm](ja/docs/reference/setup-tools/kubeadm/) - セキュアなKubernetesクラスターを簡単にプロビジョニングするためのCLIツールです。 +* [kubeadm](/ja/docs/reference/setup-tools/kubeadm/) - セキュアなKubernetesクラスターを簡単にプロビジョニングするためのCLIツールです。 ## コンポーネントリファレンス -* [kubelet](/docs/reference/command-line-tools-reference/kubelet/) - 各ノード上で動作する最も重要なノードエージェントです。kubeletは一通りのPodSpecを受け取り、コンテナーが実行中で正常であることを確認します。 +* [kubelet](/docs/reference/command-line-tools-reference/kubelet/) - 各ノード上で動作する最も重要なノードエージェントです。kubeletは一通りのPodSpecを受け取り、コンテナが実行中で正常であることを確認します。 * [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) - Pod、Service、Replication Controller等、APIオブジェクトのデータを検証・設定するREST APIサーバーです。 * [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - Kubernetesに同梱された、コアのコントロールループを埋め込むデーモンです。 * [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - 単純なTCP/UDPストリームのフォワーディングや、一連のバックエンド間でTCP/UDPのラウンドロビンでのフォワーディングを実行できます。 diff --git a/content/ja/docs/reference/command-line-tools-reference/feature-gates.md b/content/ja/docs/reference/command-line-tools-reference/feature-gates.md index 9150d77b49914..76ee414f7bbb8 100644 --- a/content/ja/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/ja/docs/reference/command-line-tools-reference/feature-gates.md @@ -41,46 +41,41 @@ content_type: concept | `APIPriorityAndFairness` | `true` | Beta | 1.20 | | | `APIResponseCompression` | `false` | Alpha | 1.7 | 1.15 | | `APIResponseCompression` | `true` | Beta | 1.16 | | -| `APIServerIdentity` | `false` | Alpha | 1.20 | | +| `APISelfSubjectReview` | `false` | Alpha | 1.26 | | +| `APIServerIdentity` | `false` | Alpha | 1.20 | 1.25 | +| `APIServerIdentity` | `true` | Beta | 1.26 | | | `APIServerTracing` | `false` | Alpha | 1.22 | | -| `AllowInsecureBackendProxy` | `true` | Beta | 1.17 | | +| `AggregatedDiscoveryEndpoint` | `false` | Alpha | 1.26 | | | `AnyVolumeDataSource` | `false` | Alpha | 1.18 | 1.23 | | `AnyVolumeDataSource` | `true` | Beta | 1.24 | | | `AppArmor` | `true` | Beta | 1.4 | | -| `CPUManager` | `false` | Alpha | 1.8 | 1.9 | -| `CPUManager` | `true` | Beta | 1.10 | | | `CPUManagerPolicyAlphaOptions` | `false` | Alpha | 1.23 | | | `CPUManagerPolicyBetaOptions` | `true` | Beta | 1.23 | | | `CPUManagerPolicyOptions` | `false` | Alpha | 1.22 | 1.22 | | `CPUManagerPolicyOptions` | `true` | Beta | 1.23 | | -| `CSIMigrationAzureFile` | `false` | Alpha | 1.15 | 1.20 | -| `CSIMigrationAzureFile` | `false` | Beta | 1.21 | 1.23 | -| `CSIMigrationAzureFile` | `true` | Beta | 1.24 | | | `CSIMigrationPortworx` | `false` | Alpha | 1.23 | 1.24 | | `CSIMigrationPortworx` | `false` | Beta | 1.25 | | | `CSIMigrationRBD` | `false` | Alpha | 1.23 | | -| `CSIMigrationvSphere` | `false` | Alpha | 1.18 | 1.18 | -| `CSIMigrationvSphere` | `false` | Beta | 1.19 | 1.24 | -| `CSIMigrationvSphere` | `true` | Beta | 1.25 | | | `CSINodeExpandSecret` | `false` | Alpha | 1.25 | | | `CSIVolumeHealth` | `false` | Alpha | 1.21 | | +| `ComponentSLIs` | `false` | Alpha | 1.26 | | | `ContainerCheckpoint` | `false` | Alpha | 1.25 | | | `ContextualLogging` | 
`false` | Alpha | 1.24 | | +| `CronJobTimeZone` | `false` | Alpha | 1.24 | 1.24 | +| `CronJobTimeZone` | `true` | Beta | 1.25 | | +| `CrossNamespaceVolumeDataSource` | `false` | Alpha| 1.26 | | | `CustomCPUCFSQuotaPeriod` | `false` | Alpha | 1.12 | | | `CustomResourceValidationExpressions` | `false` | Alpha | 1.23 | 1.24 | | `CustomResourceValidationExpressions` | `true` | Beta | 1.25 | | -| `DelegateFSGroupToCSIDriver` | `false` | Alpha | 1.22 | 1.22 | -| `DelegateFSGroupToCSIDriver` | `true` | Beta | 1.23 | | -| `DevicePlugins` | `false` | Alpha | 1.8 | 1.9 | -| `DevicePlugins` | `true` | Beta | 1.10 | | | `DisableCloudProviders` | `false` | Alpha | 1.22 | | | `DisableKubeletCloudCredentialProviders` | `false` | Alpha | 1.23 | | | `DownwardAPIHugePages` | `false` | Alpha | 1.20 | 1.20 | | `DownwardAPIHugePages` | `false` | Beta | 1.21 | 1.21 | | `DownwardAPIHugePages` | `true` | Beta | 1.22 | | -| `EndpointSliceTerminatingCondition` | `false` | Alpha | 1.20 | 1.21 | -| `EndpointSliceTerminatingCondition` | `true` | Beta | 1.22 | | -| `ExpandedDNSConfig` | `false` | Alpha | 1.22 | | +| `DynamicResourceAllocation` | `false` | Alpha | 1.26 | | +| `EventedPLEG` | `false` | Alpha | 1.26 | - | +| `ExpandedDNSConfig` | `false` | Alpha | 1.22 | 1.25 | +| `ExpandedDNSConfig` | `true` | Beta | 1.26 | | | `ExperimentalHostUserNamespaceDefaulting` | `false` | Beta | 1.5 | | | `GRPCContainerProbe` | `false` | Alpha | 1.23 | 1.23 | | `GRPCContainerProbe` | `true` | Beta | 1.24 | | @@ -91,6 +86,7 @@ content_type: concept | `HPAContainerMetrics` | `false` | Alpha | 1.20 | | | `HPAScaleToZero` | `false` | Alpha | 1.16 | | | `HonorPVReclaimPolicy` | `false` | Alpha | 1.23 | | +| `IPTablesOwnershipCleanup` | `false` | Alpha | 1.25 | | | `InTreePluginAWSUnregister` | `false` | Alpha | 1.21 | | | `InTreePluginAzureDiskUnregister` | `false` | Alpha | 1.21 | | | `InTreePluginAzureFileUnregister` | `false` | Alpha | 1.21 | | @@ -99,27 +95,24 @@ content_type: concept | `InTreePluginPortworxUnregister` | `false` | Alpha | 1.23 | | | `InTreePluginRBDUnregister` | `false` | Alpha | 1.23 | | | `InTreePluginvSphereUnregister` | `false` | Alpha | 1.21 | | -| `IPTablesOwnershipCleanup` | `false` | Alpha | 1.25 | | | `JobMutableNodeSchedulingDirectives` | `true` | Beta | 1.23 | | -| `JobPodFailurePolicy` | `false` | Alpha | 1.25 | - | +| `JobPodFailurePolicy` | `false` | Alpha | 1.25 | 1.25 | +| `JobPodFailurePolicy` | `true` | Beta | 1.26 | | | `JobReadyPods` | `false` | Alpha | 1.23 | 1.23 | | `JobReadyPods` | `true` | Beta | 1.24 | | -| `JobTrackingWithFinalizers` | `false` | Alpha | 1.22 | 1.22 | -| `JobTrackingWithFinalizers` | `false` | Beta | 1.23 | 1.24 | -| `JobTrackingWithFinalizers` | `true` | Beta | 1.25 | | -| `KubeletCredentialProviders` | `false` | Alpha | 1.20 | 1.23 | -| `KubeletCredentialProviders` | `true` | Beta | 1.24 | | +| `KMSv2` | `false` | Alpha | 1.25 | | | `KubeletInUserNamespace` | `false` | Alpha | 1.22 | | | `KubeletPodResources` | `false` | Alpha | 1.13 | 1.14 | | `KubeletPodResources` | `true` | Beta | 1.15 | | | `KubeletPodResourcesGetAllocatable` | `false` | Alpha | 1.21 | 1.22 | | `KubeletPodResourcesGetAllocatable` | `true` | Beta | 1.23 | | | `KubeletTracing` | `false` | Alpha | 1.25 | | -| `LegacyServiceAccountTokenNoAutoGeneration` | `true` | Beta | 1.24 | | -| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha | 1.15 | 1.24 | -| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `true` | Beta | 1.25 | | +| `LegacyServiceAccountTokenTracking` | `false` | Alpha | 
1.25 | | +| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha | 1.15 | - | | `LogarithmicScaleDown` | `false` | Alpha | 1.21 | 1.21 | | `LogarithmicScaleDown` | `true` | Beta | 1.22 | | +| `LoggingAlphaOptions` | `false` | Alpha | 1.24 | - | +| `LoggingBetaOptions` | `true` | Beta | 1.24 | - | | `MatchLabelKeysInPodTopologySpread` | `false` | Alpha | 1.25 | | | `MaxUnavailableStatefulSet` | `false` | Alpha | 1.24 | | | `MemoryManager` | `false` | Alpha | 1.21 | 1.21 | @@ -127,33 +120,39 @@ content_type: concept | `MemoryQoS` | `false` | Alpha | 1.22 | | | `MinDomainsInPodTopologySpread` | `false` | Alpha | 1.24 | 1.24 | | `MinDomainsInPodTopologySpread` | `false` | Beta | 1.25 | | -| `MixedProtocolLBService` | `false` | Alpha | 1.20 | 1.23 | -| `MixedProtocolLBService` | `true` | Beta | 1.24 | | +| `MinimizeIPTablesRestore` | `false` | Alpha | 1.26 | - | | `MultiCIDRRangeAllocator` | `false` | Alpha | 1.25 | | | `NetworkPolicyStatus` | `false` | Alpha | 1.24 | | -| `NodeInclusionPolicyInPodTopologySpread` | `false` | Alpha | 1.25 | | -| `NodeOutOfServiceVolumeDetach` | `false` | Alpha | 1.24 | | +| `NodeInclusionPolicyInPodTopologySpread` | `false` | Alpha | 1.25 | 1.25 | +| `NodeInclusionPolicyInPodTopologySpread` | `true` | Beta | 1.26 | | +| `NodeOutOfServiceVolumeDetach` | `false` | Alpha | 1.24 | 1.25 | +| `NodeOutOfServiceVolumeDetach` | `true` | Beta | 1.26 | | | `NodeSwap` | `false` | Alpha | 1.22 | | | `OpenAPIEnums` | `false` | Alpha | 1.23 | 1.23 | | `OpenAPIEnums` | `true` | Beta | 1.24 | | | `OpenAPIV3` | `false` | Alpha | 1.23 | 1.23 | | `OpenAPIV3` | `true` | Beta | 1.24 | | +| `PDBUnhealthyPodEvictionPolicy` | `false` | Alpha | 1.26 | | | `PodAndContainerStatsFromCRI` | `false` | Alpha | 1.23 | | | `PodDeletionCost` | `false` | Alpha | 1.21 | 1.21 | | `PodDeletionCost` | `true` | Beta | 1.22 | | -| `PodDisruptionConditions` | `false` | Alpha | 1.25 | - | +| `PodDisruptionConditions` | `false` | Alpha | 1.25 | 1.25 | +| `PodDisruptionConditions` | `true` | Beta | 1.26 | | | `PodHasNetworkCondition` | `false` | Alpha | 1.25 | | +| `PodSchedulingReadiness` | `false` | Alpha | 1.26 | | | `ProbeTerminationGracePeriod` | `false` | Alpha | 1.21 | 1.21 | | `ProbeTerminationGracePeriod` | `false` | Beta | 1.22 | 1.24 | | `ProbeTerminationGracePeriod` | `true` | Beta | 1.25 | | | `ProcMountType` | `false` | Alpha | 1.12 | | -| `ProxyTerminatingEndpoints` | `false` | Alpha | 1.22 | | +| `ProxyTerminatingEndpoints` | `false` | Alpha | 1.22 | 1.25 | +| `ProxyTerminatingEndpoints` | `true` | Beta | 1.26 | | | `QOSReserved` | `false` | Alpha | 1.11 | | | `ReadWriteOncePod` | `false` | Alpha | 1.22 | | | `RecoverVolumeExpansionFailure` | `false` | Alpha | 1.23 | | | `RemainingItemCount` | `false` | Alpha | 1.15 | 1.15 | | `RemainingItemCount` | `true` | Beta | 1.16 | | -| `RetroactiveDefaultStorageClass` | `false` | Alpha | 1.25 | | +| `RetroactiveDefaultStorageClass` | `false` | Alpha | 1.25 | 1.25 | +| `RetroactiveDefaultStorageClass` | `true` | Beta | 1.26 | | | `RotateKubeletServerCertificate` | `false` | Alpha | 1.7 | 1.11 | | `RotateKubeletServerCertificate` | `true` | Beta | 1.12 | | | `SELinuxMountReadWriteOncePod` | `false` | Alpha | 1.25 | | @@ -161,13 +160,10 @@ content_type: concept | `SeccompDefault` | `true` | Beta | 1.25 | | | `ServerSideFieldValidation` | `false` | Alpha | 1.23 | 1.24 | | `ServerSideFieldValidation` | `true` | Beta | 1.25 | | -| `ServiceIPStaticSubrange` | `false` | Alpha | 1.24 | 1.24 | -| `ServiceIPStaticSubrange` | `true` | Beta | 1.25 | 
| -| `ServiceInternalTrafficPolicy` | `false` | Alpha | 1.21 | 1.21 | -| `ServiceInternalTrafficPolicy` | `true` | Beta | 1.22 | | | `SizeMemoryBackedVolumes` | `false` | Alpha | 1.20 | 1.21 | | `SizeMemoryBackedVolumes` | `true` | Beta | 1.22 | | | `StatefulSetAutoDeletePVC` | `false` | Alpha | 1.22 | | +| `StatefulSetStartOrdinal` | `false` | Alpha | 1.26 | | | `StorageVersionAPI` | `false` | Alpha | 1.20 | | | `StorageVersionHash` | `false` | Alpha | 1.14 | 1.14 | | `StorageVersionHash` | `true` | Beta | 1.15 | | @@ -176,13 +172,16 @@ content_type: concept | `TopologyAwareHints` | `true` | Beta | 1.24 | | | `TopologyManager` | `false` | Alpha | 1.16 | 1.17 | | `TopologyManager` | `true` | Beta | 1.18 | | +| `TopologyManagerPolicyAlphaOptions` | `false` | Alpha | 1.26 | | +| `TopologyManagerPolicyBetaOptions` | `false` | Beta | 1.26 | | +| `TopologyManagerPolicyOptions` | `false` | Alpha | 1.26 | | | `UserNamespacesStatelessPodsSupport` | `false` | Alpha | 1.25 | | +| `ValidatingAdmissionPolicy` | `false` | Alpha | 1.26 | | | `VolumeCapacityPriority` | `false` | Alpha | 1.21 | - | | `WinDSR` | `false` | Alpha | 1.14 | | | `WinOverlay` | `false` | Alpha | 1.14 | 1.19 | | `WinOverlay` | `true` | Beta | 1.20 | | -| `WindowsHostProcessContainers` | `false` | Alpha | 1.22 | 1.22 | -| `WindowsHostProcessContainers` | `true` | Beta | 1.23 | | +| `WindowsHostNetwork` | `false` | Alpha | 1.26| | {{< /table >}} ### GraduatedまたはDeprecatedのフィーチャーゲート {#feature-gates-for-graduated-or-deprecated-features} @@ -191,133 +190,124 @@ content_type: concept | 機能名 | デフォルト値 | ステージ | 導入開始バージョン | 最終利用可能バージョン | |---------|---------|-------|-------|-------| -| `Accelerators` | `false` | Alpha | 1.6 | 1.10 | -| `Accelerators` | - | Deprecated | 1.11 | - | | `AdvancedAuditing` | `false` | Alpha | 1.7 | 1.7 | | `AdvancedAuditing` | `true` | Beta | 1.8 | 1.11 | | `AdvancedAuditing` | `true` | GA | 1.12 | - | -| `AffinityInAnnotations` | `false` | Alpha | 1.6 | 1.7 | -| `AffinityInAnnotations` | - | Deprecated | 1.8 | - | -| `AllowExtTrafficLocalEndpoints` | `false` | Beta | 1.4 | 1.6 | -| `AllowExtTrafficLocalEndpoints` | `true` | GA | 1.7 | - | -| `BlockVolume` | `false` | Alpha | 1.9 | 1.12 | -| `BlockVolume` | `true` | Beta | 1.13 | 1.17 | -| `BlockVolume` | `true` | GA | 1.18 | - | -| `CSIBlockVolume` | `false` | Alpha | 1.11 | 1.13 | -| `CSIBlockVolume` | `true` | Beta | 1.14 | 1.17 | -| `CSIBlockVolume` | `true` | GA | 1.18 | - | -| `CSIDriverRegistry` | `false` | Alpha | 1.12 | 1.13 | -| `CSIDriverRegistry` | `true` | Beta | 1.14 | 1.17 | -| `CSIDriverRegistry` | `true` | GA | 1.18 | | -| `CSINodeInfo` | `false` | Alpha | 1.12 | 1.13 | -| `CSINodeInfo` | `true` | Beta | 1.14 | 1.16 | -| `CSINodeInfo` | `true` | GA | 1.17 | | -| `AttachVolumeLimit` | `false` | Alpha | 1.11 | 1.11 | -| `AttachVolumeLimit` | `true` | Beta | 1.12 | 1.16 | -| `AttachVolumeLimit` | `true` | GA | 1.17 | - | -| `CSIPersistentVolume` | `false` | Alpha | 1.9 | 1.9 | -| `CSIPersistentVolume` | `true` | Beta | 1.10 | 1.12 | -| `CSIPersistentVolume` | `true` | GA | 1.13 | - | -| `CustomPodDNS` | `false` | Alpha | 1.9 | 1.9 | -| `CustomPodDNS` | `true` | Beta| 1.10 | 1.13 | -| `CustomPodDNS` | `true` | GA | 1.14 | - | -| `CustomResourcePublishOpenAPI` | `false` | Alpha| 1.14 | 1.14 | -| `CustomResourcePublishOpenAPI` | `true` | Beta| 1.15 | 1.15 | -| `CustomResourcePublishOpenAPI` | `true` | GA | 1.16 | - | -| `CustomResourceSubresources` | `false` | Alpha | 1.10 | 1.10 | -| `CustomResourceSubresources` | `true` | Beta | 1.11 | 1.15 | -| 
`CustomResourceSubresources` | `true` | GA | 1.16 | - | -| `CustomResourceValidation` | `false` | Alpha | 1.8 | 1.8 | -| `CustomResourceValidation` | `true` | Beta | 1.9 | 1.15 | -| `CustomResourceValidation` | `true` | GA | 1.16 | - | -| `CustomResourceWebhookConversion` | `false` | Alpha | 1.13 | 1.14 | -| `CustomResourceWebhookConversion` | `true` | Beta | 1.15 | 1.15 | -| `CustomResourceWebhookConversion` | `true` | GA | 1.16 | - | -| `DynamicProvisioningScheduling` | `false` | Alpha | 1.11 | 1.11 | -| `DynamicProvisioningScheduling` | - | Deprecated| 1.12 | - | -| `DynamicVolumeProvisioning` | `true` | Alpha | 1.3 | 1.7 | -| `DynamicVolumeProvisioning` | `true` | GA | 1.8 | - | -| `EnableAggregatedDiscoveryTimeout` | `true` | Deprecated | 1.16 | - | -| `EnableEquivalenceClassCache` | `false` | Alpha | 1.8 | 1.14 | -| `EnableEquivalenceClassCache` | - | Deprecated | 1.15 | - | -| `ExperimentalCriticalPodAnnotation` | `false` | Alpha | 1.5 | 1.12 | -| `ExperimentalCriticalPodAnnotation` | `false` | Deprecated | 1.13 | - | -| `GCERegionalPersistentDisk` | `true` | Beta | 1.10 | 1.12 | -| `GCERegionalPersistentDisk` | `true` | GA | 1.13 | - | -| `HugePages` | `false` | Alpha | 1.8 | 1.9 | -| `HugePages` | `true` | Beta| 1.10 | 1.13 | -| `HugePages` | `true` | GA | 1.14 | - | -| `Initializers` | `false` | Alpha | 1.7 | 1.13 | -| `Initializers` | - | Deprecated | 1.14 | - | -| `KubeletConfigFile` | `false` | Alpha | 1.8 | 1.9 | -| `KubeletConfigFile` | - | Deprecated | 1.10 | - | -| `KubeletPluginsWatcher` | `false` | Alpha | 1.11 | 1.11 | -| `KubeletPluginsWatcher` | `true` | Beta | 1.12 | 1.12 | -| `KubeletPluginsWatcher` | `true` | GA | 1.13 | - | -| `MountPropagation` | `false` | Alpha | 1.8 | 1.9 | -| `MountPropagation` | `true` | Beta | 1.10 | 1.11 | -| `MountPropagation` | `true` | GA | 1.12 | - | -| `NodeLease` | `false` | Alpha | 1.12 | 1.13 | -| `NodeLease` | `true` | Beta | 1.14 | 1.16 | -| `NodeLease` | `true` | GA | 1.17 | - | -| `PersistentLocalVolumes` | `false` | Alpha | 1.7 | 1.9 | -| `PersistentLocalVolumes` | `true` | Beta | 1.10 | 1.13 | -| `PersistentLocalVolumes` | `true` | GA | 1.14 | - | -| `PodPriority` | `false` | Alpha | 1.8 | 1.10 | -| `PodPriority` | `true` | Beta | 1.11 | 1.13 | -| `PodPriority` | `true` | GA | 1.14 | - | -| `PodReadinessGates` | `false` | Alpha | 1.11 | 1.11 | -| `PodReadinessGates` | `true` | Beta | 1.12 | 1.13 | -| `PodReadinessGates` | `true` | GA | 1.14 | - | -| `PodShareProcessNamespace` | `false` | Alpha | 1.10 | 1.11 | -| `PodShareProcessNamespace` | `true` | Beta | 1.12 | 1.16 | -| `PodShareProcessNamespace` | `true` | GA | 1.17 | - | -| `PVCProtection` | `false` | Alpha | 1.9 | 1.9 | -| `PVCProtection` | - | Deprecated | 1.10 | - | -| `RequestManagement` | `false` | Alpha | 1.15 | 1.16 | -| `ResourceQuotaScopeSelectors` | `false` | Alpha | 1.11 | 1.11 | -| `ResourceQuotaScopeSelectors` | `true` | Beta | 1.12 | 1.16 | -| `ResourceQuotaScopeSelectors` | `true` | GA | 1.17 | - | -| `ScheduleDaemonSetPods` | `false` | Alpha | 1.11 | 1.11 | -| `ScheduleDaemonSetPods` | `true` | Beta | 1.12 | 1.16 | -| `ScheduleDaemonSetPods` | `true` | GA | 1.17 | - | -| `ServiceLoadBalancerFinalizer` | `false` | Alpha | 1.15 | 1.15 | -| `ServiceLoadBalancerFinalizer` | `true` | Beta | 1.16 | 1.16 | -| `ServiceLoadBalancerFinalizer` | `true` | GA | 1.17 | - | -| `StorageObjectInUseProtection` | `true` | Beta | 1.10 | 1.10 | -| `StorageObjectInUseProtection` | `true` | GA | 1.11 | - | -| `SupportIPVSProxyMode` | `false` | Alpha | 1.8 | 1.8 | -| 
`SupportIPVSProxyMode` | `false` | Beta | 1.9 | 1.9 | -| `SupportIPVSProxyMode` | `true` | Beta | 1.10 | 1.10 | -| `SupportIPVSProxyMode` | `true` | GA | 1.11 | - | -| `TaintBasedEvictions` | `false` | Alpha | 1.6 | 1.12 | -| `TaintBasedEvictions` | `true` | Beta | 1.13 | 1.17 | -| `TaintBasedEvictions` | `true` | GA | 1.18 | - | -| `TaintNodesByCondition` | `false` | Alpha | 1.8 | 1.11 | -| `TaintNodesByCondition` | `true` | Beta | 1.12 | 1.16 | -| `TaintNodesByCondition` | `true` | GA | 1.17 | - | -| `VolumePVCDataSource` | `false` | Alpha | 1.15 | 1.15 | -| `VolumePVCDataSource` | `true` | Beta | 1.16 | 1.17 | -| `VolumePVCDataSource` | `true` | GA | 1.18 | - | -| `VolumeScheduling` | `false` | Alpha | 1.9 | 1.9 | -| `VolumeScheduling` | `true` | Beta | 1.10 | 1.12 | -| `VolumeScheduling` | `true` | GA | 1.13 | - | -| `VolumeScheduling` | `false` | Alpha | 1.9 | 1.9 | -| `VolumeScheduling` | `true` | Beta | 1.10 | 1.12 | -| `VolumeScheduling` | `true` | GA | 1.13 | - | -| `VolumeSubpath` | `true` | GA | 1.13 | - | -| `VolumeSubpathEnvExpansion` | `false` | Alpha | 1.14 | 1.14 | -| `VolumeSubpathEnvExpansion` | `true` | Beta | 1.15 | 1.16 | -| `VolumeSubpathEnvExpansion` | `true` | GA | 1.17 | - | +| `CPUManager` | `false` | Alpha | 1.8 | 1.9 | +| `CPUManager` | `true` | Beta | 1.10 | 1.25 | +| `CPUManager` | `true` | GA | 1.26 | - | +| `CSIInlineVolume` | `false` | Alpha | 1.15 | 1.15 | +| `CSIInlineVolume` | `true` | Beta | 1.16 | 1.24 | +| `CSIInlineVolume` | `true` | GA | 1.25 | - | +| `CSIMigration` | `false` | Alpha | 1.14 | 1.16 | +| `CSIMigration` | `true` | Beta | 1.17 | 1.24 | +| `CSIMigration` | `true` | GA | 1.25 | - | +| `CSIMigrationAWS` | `false` | Alpha | 1.14 | 1.16 | +| `CSIMigrationAWS` | `false` | Beta | 1.17 | 1.22 | +| `CSIMigrationAWS` | `true` | Beta | 1.23 | 1.24 | +| `CSIMigrationAWS` | `true` | GA | 1.25 | - | +| `CSIMigrationAzureDisk` | `false` | Alpha | 1.15 | 1.18 | +| `CSIMigrationAzureDisk` | `false` | Beta | 1.19 | 1.22 | +| `CSIMigrationAzureDisk` | `true` | Beta | 1.23 | 1.23 | +| `CSIMigrationAzureDisk` | `true` | GA | 1.24 | | +| `CSIMigrationAzureFile` | `false` | Alpha | 1.15 | 1.20 | +| `CSIMigrationAzureFile` | `false` | Beta | 1.21 | 1.23 | +| `CSIMigrationAzureFile` | `true` | Beta | 1.24 | 1.25 | +| `CSIMigrationAzureFile` | `true` | GA | 1.26 | | +| `CSIMigrationGCE` | `false` | Alpha | 1.14 | 1.16 | +| `CSIMigrationGCE` | `false` | Beta | 1.17 | 1.22 | +| `CSIMigrationGCE` | `true` | Beta | 1.23 | 1.24 | +| `CSIMigrationGCE` | `true` | GA | 1.25 | - | +| `CSIMigrationvSphere` | `false` | Alpha | 1.18 | 1.18 | +| `CSIMigrationvSphere` | `false` | Beta | 1.19 | 1.24 | +| `CSIMigrationvSphere` | `true` | Beta | 1.25 | 1.25 | +| `CSIMigrationvSphere` | `true` | GA | 1.26 | - | +| `CSIStorageCapacity` | `false` | Alpha | 1.19 | 1.20 | +| `CSIStorageCapacity` | `true` | Beta | 1.21 | 1.23 | +| `CSIStorageCapacity` | `true` | GA | 1.24 | - | +| `ConsistentHTTPGetHandlers` | `true` | GA | 1.25 | - | +| `ControllerManagerLeaderMigration` | `false` | Alpha | 1.21 | 1.21 | +| `ControllerManagerLeaderMigration` | `true` | Beta | 1.22 | 1.23 | +| `ControllerManagerLeaderMigration` | `true` | GA | 1.24 | - | +| `DaemonSetUpdateSurge` | `false` | Alpha | 1.21 | 1.21 | +| `DaemonSetUpdateSurge` | `true` | Beta | 1.22 | 1.24 | +| `DaemonSetUpdateSurge` | `true` | GA | 1.25 | - | +| `DelegateFSGroupToCSIDriver` | `false` | Alpha | 1.22 | 1.22 | +| `DelegateFSGroupToCSIDriver` | `true` | Beta | 1.23 | 1.25 | +| `DelegateFSGroupToCSIDriver` | `true` | GA | 1.26 
|-| +| `DevicePlugins` | `false` | Alpha | 1.8 | 1.9 | +| `DevicePlugins` | `true` | Beta | 1.10 | 1.25 | +| `DevicePlugins` | `true` | GA | 1.26 | - | +| `DisableAcceleratorUsageMetrics` | `false` | Alpha | 1.19 | 1.19 | +| `DisableAcceleratorUsageMetrics` | `true` | Beta | 1.20 | 1.24 | +| `DisableAcceleratorUsageMetrics` | `true` | GA | 1.25 |- | +| `DryRun` | `false` | Alpha | 1.12 | 1.12 | +| `DryRun` | `true` | Beta | 1.13 | 1.18 | +| `DryRun` | `true` | GA | 1.19 | - | +| `EfficientWatchResumption` | `false` | Alpha | 1.20 | 1.20 | +| `EfficientWatchResumption` | `true` | Beta | 1.21 | 1.23 | +| `EfficientWatchResumption` | `true` | GA | 1.24 | - | +| `EndpointSliceTerminatingCondition` | `false` | Alpha | 1.20 | 1.21 | +| `EndpointSliceTerminatingCondition` | `true` | Beta | 1.22 | 1.25 | +| `EndpointSliceTerminatingCondition` | `true` | GA | 1.26 | | +| `EphemeralContainers` | `false` | Alpha | 1.16 | 1.22 | +| `EphemeralContainers` | `true` | Beta | 1.23 | 1.24 | +| `EphemeralContainers` | `true` | GA | 1.25 | - | +| `ExecProbeTimeout` | `true` | GA | 1.20 | - | +| `ExpandCSIVolumes` | `false` | Alpha | 1.14 | 1.15 | +| `ExpandCSIVolumes` | `true` | Beta | 1.16 | 1.23 | +| `ExpandCSIVolumes` | `true` | GA | 1.24 | - | +| `ExpandInUsePersistentVolumes` | `false` | Alpha | 1.11 | 1.14 | +| `ExpandInUsePersistentVolumes` | `true` | Beta | 1.15 | 1.23 | +| `ExpandInUsePersistentVolumes` | `true` | GA | 1.24 | - | +| `ExpandPersistentVolumes` | `false` | Alpha | 1.8 | 1.10 | +| `ExpandPersistentVolumes` | `true` | Beta | 1.11 | 1.23 | +| `ExpandPersistentVolumes` | `true` | GA | 1.24 |- | +| `IdentifyPodOS` | `false` | Alpha | 1.23 | 1.23 | +| `IdentifyPodOS` | `true` | Beta | 1.24 | 1.24 | +| `IdentifyPodOS` | `true` | GA | 1.25 | - | +| `JobTrackingWithFinalizers` | `false` | Alpha | 1.22 | 1.22 | +| `JobTrackingWithFinalizers` | `false` | Beta | 1.23 | 1.24 | +| `JobTrackingWithFinalizers` | `true` | Beta | 1.25 | 1.25 | +| `JobTrackingWithFinalizers` | `true` | GA | 1.26 | - | +| `KubeletCredentialProviders` | `false` | Alpha | 1.20 | 1.23 | +| `KubeletCredentialProviders` | `true` | Beta | 1.24 | 1.25 | +| `KubeletCredentialProviders` | `true` | GA | 1.26 | - | +| `LegacyServiceAccountTokenNoAutoGeneration` | `true` | Beta | 1.24 | 1.25 | +| `LegacyServiceAccountTokenNoAutoGeneration` | `true` | GA | 1.26 | - | +| `LocalStorageCapacityIsolation` | `false` | Alpha | 1.7 | 1.9 | +| `LocalStorageCapacityIsolation` | `true` | Beta | 1.10 | 1.24 | +| `LocalStorageCapacityIsolation` | `true` | GA | 1.25 | - | +| `MixedProtocolLBService` | `false` | Alpha | 1.20 | 1.23 | +| `MixedProtocolLBService` | `true` | Beta | 1.24 | 1.25 | +| `MixedProtocolLBService` | `true` | GA | 1.26 | - | +| `NetworkPolicyEndPort` | `false` | Alpha | 1.21 | 1.21 | +| `NetworkPolicyEndPort` | `true` | Beta | 1.22 | 1.24 | +| `NetworkPolicyEndPort` | `true` | GA | 1.25 | - | +| `PodSecurity` | `false` | Alpha | 1.22 | 1.22 | +| `PodSecurity` | `true` | Beta | 1.23 | 1.24 | +| `PodSecurity` | `true` | GA | 1.25 | | +| `RemoveSelfLink` | `false` | Alpha | 1.16 | 1.19 | +| `RemoveSelfLink` | `true` | Beta | 1.20 | 1.23 | +| `RemoveSelfLink` | `true` | GA | 1.24 | - | +| `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 | +| `ServerSideApply` | `true` | Beta | 1.16 | 1.21 | +| `ServerSideApply` | `true` | GA | 1.22 | - | +| `ServiceIPStaticSubrange` | `false` | Alpha | 1.24 | 1.24 | +| `ServiceIPStaticSubrange` | `true` | Beta | 1.25 | 1.25 | +| `ServiceIPStaticSubrange` | `true` | GA | 1.26 | - | +| 
`ServiceInternalTrafficPolicy` | `false` | Alpha | 1.21 | 1.21 | +| `ServiceInternalTrafficPolicy` | `true` | Beta | 1.22 | 1.25 | +| `ServiceInternalTrafficPolicy` | `true` | GA | 1.26 | - | +| `StatefulSetMinReadySeconds` | `false` | Alpha | 1.22 | 1.22 | +| `StatefulSetMinReadySeconds` | `true` | Beta | 1.23 | 1.24 | +| `StatefulSetMinReadySeconds` | `true` | GA | 1.25 | - | | `WatchBookmark` | `false` | Alpha | 1.15 | 1.15 | | `WatchBookmark` | `true` | Beta | 1.16 | 1.16 | | `WatchBookmark` | `true` | GA | 1.17 | - | -| `WindowsGMSA` | `false` | Alpha | 1.14 | 1.15 | -| `WindowsGMSA` | `true` | Beta | 1.16 | 1.17 | -| `WindowsGMSA` | `true` | GA | 1.18 | - | -| `WindowsRunAsUserName` | `false` | Alpha | 1.16 | 1.16 | -| `WindowsRunAsUserName` | `true` | Beta | 1.17 | 1.17 | -| `WindowsRunAsUserName` | `true` | GA | 1.18 | - | +| `WindowsHostProcessContainers` | `false` | Alpha | 1.22 | 1.22 | +| `WindowsHostProcessContainers` | `true` | Beta | 1.23 | 1.25 | +| `WindowsHostProcessContainers` | `true` | GA | 1.26 | - | {{< /table >}} ## 機能を使用する diff --git a/content/ja/docs/reference/glossary/istio.md b/content/ja/docs/reference/glossary/istio.md new file mode 100644 index 0000000000000..ba3f1a1c4b19a --- /dev/null +++ b/content/ja/docs/reference/glossary/istio.md @@ -0,0 +1,19 @@ +--- +title: Istio +id: istio +date: 2018-04-12 +full_link: https://istio.io/latest/about/service-mesh/#what-is-istio +short_description: > + Istioは、マイクロサービスの統合やトラフィックフローの管理、ポリシーの適用、そしてテレメトリーデータの集約を行うための一様な方法を提供するオープンソースのプラットフォームです(Kubernetesに特化したものではありません)。 + +aka: +tags: +- networking +- architecture +- extension +--- + Istioは、マイクロサービスの統合やトラフィックフローの管理、ポリシーの適用、そしてテレメトリーデータの集約を行うための一様な方法を提供するオープンソースのプラットフォームです(Kubernetesに特化したものではありません)。 + + + +Istioの追加にはアプリケーションコードの変更は必要ありません。Istioは、サービスとネットワークの間のインフラストラクチャーレイヤーになります。Istioのコントロールプレーンは、KubernetesやMesosphereなどのクラスター管理プラットフォームを抽象化します。 diff --git a/content/ja/docs/setup/best-practices/node-conformance.md b/content/ja/docs/setup/best-practices/node-conformance.md index 355dfdf3665ca..919e64ff66784 100644 --- a/content/ja/docs/setup/best-practices/node-conformance.md +++ b/content/ja/docs/setup/best-practices/node-conformance.md @@ -8,12 +8,6 @@ weight: 30 *ノード適合テスト* は、システムの検証とノードに対する機能テストを提供するコンテナ型のテストフレームワークです。このテストは、ノードがKubernetesの最小要件を満たしているかどうかを検証するもので、テストに合格したノードはKubernetesクラスタに参加する資格があることになります。 -## 制約 - -Kubernetesのバージョン1.5ではノード適合テストには以下の制約があります: - -* ノード適合テストはコンテナのランタイムとしてDockerのみをサポートします。 - ## ノードの前提条件 適合テストを実行するにはノードは通常のKubernetesノードと同じ前提条件を満たしている必要があります。 最低でもノードに以下のデーモンがインストールされている必要があります: @@ -25,10 +19,11 @@ Kubernetesのバージョン1.5ではノード適合テストには以下の制 ノード適合テストを実行するには、以下の手順に従います: -1. Kubeletをlocalhostに指定します(`--api-servers="http://localhost:8080"`)、 -このテストフレームワークはKubeletのテストにローカルマスターを起動するため、Kubeletをローカルホストに設定します(`--api-servers="http://localhost:8080"`)。他にも配慮するべきKubeletフラグがいくつかあります: - * `--pod-cidr`: `kubenet`を利用している場合は、Kubeletに任意のCIDR(例: `--pod-cidr=10.180.0.0/24`)を指定する必要があります。 - * `--cloud-provider`: `--cloud-provider=gce`を指定している場合は、テストを実行する前にこのフラグを取り除いてください。 +1. kubeletの`--kubeconfig`オプションの値を調べます。例:`--kubeconfig=/var/lib/kubelet/config.yaml`。 + このテストフレームワークはKubeletのテスト用にローカルコントロールプレーンを起動するため、APIサーバーのURLとして`http://localhost:8080`を使用します。 + 他にも使用できるkubeletコマンドラインパラメーターがいくつかあります: + + * `--cloud-provider`: `--cloud-provider=gce`を指定している場合は、テストを実行する前にこのフラグを取り除いてください。 2. 
以下のコマンドでノード適合テストを実行します: @@ -37,7 +32,7 @@ Kubernetesのバージョン1.5ではノード適合テストには以下の制 # $LOG_DIRはテスト出力のパスです。 sudo docker run -it --rm --privileged --net=host \ -v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \ - k8s.gcr.io/node-test:0.2 + registry.k8s.io/node-test:0.2 ``` ## 他アーキテクチャ向けのノード適合テストの実行 @@ -58,7 +53,7 @@ Kubernetesは他のアーキテクチャ用のノード適合テストのdocker sudo docker run -it --rm --privileged --net=host \ -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \ -e FOCUS=MirrorPod \ # MirrorPodテストのみを実行します - k8s.gcr.io/node-test:0.2 + registry.k8s.io/node-test:0.2 ``` 特定のテストをスキップするには、環境変数`SKIP`をスキップしたいテストの正規表現で上書きします。 @@ -67,7 +62,7 @@ sudo docker run -it --rm --privileged --net=host \ sudo docker run -it --rm --privileged --net=host \ -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \ -e SKIP=MirrorPod \ # MirrorPodテスト以外のすべてのノード適合テストを実行します - k8s.gcr.io/node-test:0.2 + registry.k8s.io/node-test:0.2 ``` ノード適合テストは、[node e2e test](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/e2e-node-tests.md)のコンテナ化されたバージョンです。 diff --git a/content/ja/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/ja/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md index 56e0df68ea170..d1ab5ef49b6bf 100644 --- a/content/ja/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md +++ b/content/ja/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md @@ -430,7 +430,7 @@ kubeletとコントロールプレーンの間や、他のKubernetesコンポー 対処方法: -* 定期的に[etcdをバックアップ](https://coreos.com/etcd/docs/latest/admin_guide.html)する。kubeadmが設定するetcdのデータディレクトリは、コントロールプレーンノードの`/var/lib/etcd`にあります。 +* 定期的に[etcdをバックアップ](https://etcd.io/docs/v3.5/op-guide/recovery/)する。kubeadmが設定するetcdのデータディレクトリは、コントロールプレーンノードの`/var/lib/etcd`にあります。 * 複数のコントロールプレーンノードを使用する。[高可用性トポロジーのオプション](/ja/docs/setup/production-environment/tools/kubeadm/ha-topology/)では、[より高い可用性](/ja/docs/setup/production-environment/tools/kubeadm/high-availability/)を提供するクラスターのトポロジーの選択について説明してます。 diff --git a/content/ja/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/ja/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md index 07cbdfa4a6443..a9f5a5571f9c4 100644 --- a/content/ja/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md +++ b/content/ja/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md @@ -64,32 +64,6 @@ sysctl --system 詳細は[ネットワークプラグインの要件](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#network-plugin-requirements)を参照してください。 -## iptablesがnftablesバックエンドを使用しないようにする - -Linuxでは、カーネルのiptablesサブシステムの最新の代替品としてnftablesが利用できます。`iptables`ツールは互換性レイヤーとして機能し、iptablesのように動作しますが、実際にはnftablesを設定します。このnftablesバックエンドは現在のkubeadmパッケージと互換性がありません。(ファイアウォールルールが重複し、`kube-proxy`を破壊するためです。) - -もしあなたのシステムの`iptables`ツールがnftablesバックエンドを使用している場合、これらの問題を避けるために`iptables`ツールをレガシーモードに切り替える必要があります。これは、少なくともDebian 10(Buster)、Ubuntu 19.04、Fedora 29、およびこれらのディストリビューションの新しいリリースでのデフォルトです。RHEL 8はレガシーモードへの切り替えをサポートしていないため、現在のkubeadmパッケージと互換性がありません。 - -{{< tabs name="iptables_legacy" >}} -{{% tab name="DebianまたはUbuntu" %}} -```bash -# レガシーバイナリがインストールされていることを確認してください -sudo apt-get install -y iptables arptables ebtables - -# レガシーバージョンに切り替えてください。 -sudo update-alternatives --set iptables /usr/sbin/iptables-legacy -sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy -sudo update-alternatives --set arptables /usr/sbin/arptables-legacy -sudo update-alternatives --set 
ebtables /usr/sbin/ebtables-legacy -``` -{{% /tab %}} -{{% tab name="Fedora" %}} -```bash -update-alternatives --set iptables /usr/sbin/iptables-legacy -``` -{{% /tab %}} -{{< /tabs >}} - ## 必須ポートの確認 ### コントロールプレーンノード diff --git a/content/ja/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md b/content/ja/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md index c7877449f640f..b183e75dbffe2 100644 --- a/content/ja/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md +++ b/content/ja/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md @@ -147,7 +147,7 @@ Calico、Canal、FlannelのCNIプロバイダは、HostPortをサポートして ## サービスIP経由でPodにアクセスすることができない -- 多くのネットワークアドオンは、PodがサービスIPを介して自分自身にアクセスできるようにする[ヘアピンモード](/ja/docs/tasks/debug-application-cluster/debug-service/#a-pod-cannot-reach-itself-via-service-ip)を有効にしていません。これは[CNI](https://github.com/containernetworking/cni/issues/476)に関連する問題です。ヘアピンモードのサポート状況については、ネットワークアドオンプロバイダにお問い合わせください。 +- 多くのネットワークアドオンは、PodがサービスIPを介して自分自身にアクセスできるようにする[ヘアピンモード](/ja/docs/tasks/debug/debug-application/debug-service/#a-pod-cannot-reach-itself-via-service-ip)を有効にしていません。これは[CNI](https://github.com/containernetworking/cni/issues/476)に関連する問題です。ヘアピンモードのサポート状況については、ネットワークアドオンプロバイダにお問い合わせください。 - VirtualBoxを使用している場合(直接またはVagrant経由)は、`hostname -i`がルーティング可能なIPアドレスを返すことを確認する必要があります。デフォルトでは、最初のインターフェースはルーティング可能でないホスト専用のネットワークに接続されています。これを回避するには`/etc/hosts`を修正する必要があります。例としてはこの[Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11)を参照してください。 diff --git a/content/ja/docs/setup/production-environment/tools/kubespray.md b/content/ja/docs/setup/production-environment/tools/kubespray.md index 9d6497c0547af..6307652e1f9ee 100644 --- a/content/ja/docs/setup/production-environment/tools/kubespray.md +++ b/content/ja/docs/setup/production-environment/tools/kubespray.md @@ -1,124 +1,132 @@ --- -title: kubesprayを使ったオンプレミス/クラウドプロバイダへのKubernetesのインストール +title: kubesprayを使ったKubernetesのインストール content_type: concept weight: 30 --- -This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Equinix Metal (formerly Packet), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray). +このクイックスタートは、[Kubespray](https://github.com/kubernetes-sigs/kubespray)を使用して、GCE、Azure、OpenStack、AWS、vSphere、Equinix Metal(以前のPacket)、Oracle Cloud Infrastructure(実験的)またはベアメタルにホストされたKubernetesクラスターをインストールするためのものです。 -Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. 
Kubespray provides: +Kubesprayは、汎用的なOSやKubernetesクラスターの構成管理タスクのための[Ansible](https://docs.ansible.com/)プレイブック、[インベントリー](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md#inventory)、プロビジョニングツール、ドメインナレッジをまとめたものです。 -* a highly available cluster -* composable attributes -* support for most popular Linux distributions - * Ubuntu 16.04, 18.04, 20.04, 22.04 - * CentOS/RHEL/Oracle Linux 7, 8 - * Debian Buster, Jessie, Stretch, Wheezy - * Fedora 34, 35 - * Fedora CoreOS - * openSUSE Leap 15 - * Flatcar Container Linux by Kinvolk -* continuous integration tests +Kubesprayは次を提供します: -To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to -[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) and [kops](/ja/docs/setup/production-environment/tools/kops/). +* 高可用性クラスター。 +* 構成可能(例えばネットワークプラグインの選択)。 +* 最もポピュラーなLinuxディストリビューションのサポート: + - Flatcar Container Linux by Kinvolk + - Debian Bullseye, Buster, Jessie, Stretch + - Ubuntu 16.04, 18.04, 20.04, 22.04 + - CentOS/RHEL 7, 8, 9 + - Fedora 35, 36 + - Fedora CoreOS + - openSUSE Leap 15.x/Tumbleweed + - Oracle Linux 7, 8, 9 + - Alma Linux 8, 9 + - Rocky Linux 8, 9 + - Kylin Linux Advanced Server V10 + - Amazon Linux 2 +* 継続的インテグレーションテスト。 + +あなたのユースケースに最適なツールの選択には、[kubeadm](/docs/reference/setup-tools/kubeadm/)や[kops](/docs/setup/production-environment/tools/kops/)と[比較したドキュメント](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md)を参照してください。 -## クラスタの作成 +## クラスターの作成 ### (1/5) 下地の要件の確認 -Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements): +次の[要件](https://github.com/kubernetes-sigs/kubespray#requirements)に従ってサーバーをプロビジョニングします: -* **Ansible v2.11 and python-netaddr are installed on the machine that will run Ansible commands** -* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks** -* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md)) -* The target servers are configured to allow **IPv4 forwarding** -* **Your ssh key must be copied** to all the servers in your inventory -* **Firewalls are not managed by kubespray**. You'll need to implement appropriate rules as needed. 
You should disable your firewall in order to avoid any issues during deployment -* If kubespray is run from a non-root user account, correct privilege escalation method should be configured in the target servers and the `ansible_become` flag or command parameters `--become` or `-b` should be specified +* **Kubernetesの最低必要バージョンはv1.22** +* **Ansibleのコマンドを実行するマシン上にAnsible v2.11+、Jinja 2.11+とpython-netaddrがインストールされていること** +* ターゲットサーバーはdockerイメージをpullするために**インターネットにアクセスできる**必要があります。そうでは無い場合は追加の構成が必要です([オフライン環境](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md)を参照) +* ターゲットのサーバーは**IPv4フォワーディング**ができるように構成されていること。 +* PodとServiceにIPv6を使用している場合は、ターゲットサーバーは**IPv6フォワーディング**ができるように構成されていること。 +* **ファイアウォールは管理されないため**、従来のように独自のルールを実装しなければなりません。デプロイ中の問題を避けるためには、ファイアウォールを無効にすべきです +* root以外のユーザーアカウントでkubesprayを実行する場合は、ターゲットサーバー上で特権昇格の方法を正しく構成されている必要があります。そして、`ansible_become`フラグ、またはコマンドパラメーター`--become`、`-b`を指定する必要があります -Kubespray provides the following utilities to help provision your environment: +Kubesprayは環境のプロビジョニングを支援するために次のユーティリティを提供します: -* [Terraform](https://www.terraform.io/) scripts for the following cloud providers: +* 下記のクラウドプロバイダー用の[Terraform](https://www.terraform.io/)スクリプト: * [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws) * [OpenStack](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack) - * [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet) + * [Equinix Metal](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/metal) + -### (2/5) インベントリファイルの用意 +### (2/5) インベントリーファイルの用意 -After you provision your servers, create an [inventory file for Ansible](https://docs.ansible.com/ansible/latest/network/getting_started/first_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)". +サーバーをプロビジョニングした後、[Ansibleのインベントリーファイル](https://docs.ansible.com/ansible/latest/network/getting_started/first_inventory.html)を作成します。これは手動またはダイナミックインベントリースクリプトによって行うことができます。詳細については、"[独自のインベントリーを構築する](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)"を参照してください。 -### (3/5) クラスタ作成の計画 +### (3/5) クラスター作成の計画 -Kubespray provides the ability to customize many aspects of the deployment: +Kubesprayは多くの点でデプロイメントをカスタマイズする機能を提供します: -* Choice deployment mode: kubeadm or non-kubeadm -* CNI (networking) plugins -* DNS configuration -* Choice of control plane: native/binary or containerized -* Component versions -* Calico route reflectors -* Component runtime options +* デプロイメントモードの選択: kubeadmまたはnon-kubeadm +* CNI(ネットワーク)プラグイン +* DNS設定 +* コントロールプレーンの選択: ネイティブ/バイナリまたはコンテナ化 +* コンポーネントバージョン +* Calicoルートリフレクター +* コンポーネントランタイムオプション * {{< glossary_tooltip term_id="docker" >}} * {{< glossary_tooltip term_id="containerd" >}} * {{< glossary_tooltip term_id="cri-o" >}} -* Certificate generation methods +* 証明書の生成方法 -Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes. 
+Kubesprayは[variableファイル](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html)によってカスタマイズできます。Kubesprayを使い始めたばかりであれば、Kubesprayのデフォルト設定を使用してクラスターをデプロイし、Kubernetesを探索することを検討してください。 -### (4/5) クラスタのデプロイ +### (4/5) クラスターのデプロイ -Next, deploy your cluster: +次にクラスターをデプロイします: -Cluster deployment using [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment). +クラスターのデプロイメントには[ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment)を使用します。 ```shell ansible-playbook -i your/inventory/inventory.ini cluster.yml -b -v \ --private-key=~/.ssh/private_key ``` -Large deployments (100+ nodes) may require [specific adjustments](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md) for best results. +大規模なデプロイメント(100以上のノード)では、最適な結果を得るために[個別の調整](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md)が必要な場合があります。 ### (5/5) デプロイの確認 -Kubespray provides a way to verify inter-pod connectivity and DNS resolve with [Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md). Netchecker ensures the netchecker-agents pods can resolve DNS requests and ping each over within the default namespace. Those pods mimic similar behavior of the rest of the workloads and serve as cluster health indicators. +Kubesprayは、[Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md)によるPod間の接続とDNSの解決の検証を行う機能を提供します。Netcheckerは、netchecker-agents Podがdefault名前空間内でDNSリクエストを解決し、互いにpingを送信できることを確かめます。これらのPodは他のワークロードと同様の動作を再現し、クラスターの健全性を示す指標として機能します。 -## クラスタの操作 +## クラスターの操作 -Kubespray provides additional playbooks to manage your cluster: _scale_ and _upgrade_. +Kubesprayはクラスターを管理する追加のプレイブックを提供します: _scale_ と _upgrade_。 -### クラスタのスケール +### クラスターのスケール -You can add worker nodes from your cluster by running the scale playbook. For more information, see "[Adding nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#adding-nodes)". -You can remove worker nodes from your cluster by running the remove-node playbook. For more information, see "[Remove nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#remove-nodes)". +scaleプレイブックを実行することで、クラスターにワーカーノードを追加することができます。詳細については、"[ノードの追加](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#adding-nodes)"を参照してください。 +remove-nodeプレイブックを実行することで、クラスターからワーカーノードを削除することができます。詳細については、"[ノードの削除](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#remove-nodes)"を参照してください。 -### クラスタのアップグレード +### クラスターのアップグレード -You can upgrade your cluster by running the upgrade-cluster playbook. For more information, see "[Upgrades](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/upgrades.md)". +upgrade-clusterプレイブックを実行することで、クラスターのアップグレードができます。詳細については、"[アップグレード](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/upgrades.md)"を参照してください。 ## クリーンアップ -You can reset your nodes and wipe out all components installed with Kubespray via the [reset playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/reset.yml). + +[resetプレイブック](https://github.com/kubernetes-sigs/kubespray/blob/master/reset.yml)を使用して、ノードをリセットし、Kubesprayでインストールした全てのコンポーネントを消すことができます。 {{< caution >}} -When running the reset playbook, be sure not to accidentally target your production cluster! +resetプレイブックを実行する際は、誤ってプロダクションのクラスターを対象にしないように気をつけること! 
{{< /caution >}} ## フィードバック -* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) (You can get your invite [here](https://slack.k8s.io/)) -* [GitHub Issues](https://github.com/kubernetes-sigs/kubespray/issues) +* Slackチャンネル: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) ([ここ](https://slack.k8s.io/)から招待をもらうことができます)。 +* [GitHub Issues](https://github.com/kubernetes-sigs/kubespray/issues)。 ## {{% heading "whatsnext" %}} -Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/roadmap.md). - +* Kubesprayの[ロードマップ](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/roadmap.md)にある作業計画を確認してください。 +* [Kubespray](https://github.com/kubernetes-sigs/kubespray)についてさらに学ぶ。 diff --git a/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md index 577d38a5a891d..2fa71ad2fc77b 100644 --- a/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md +++ b/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md @@ -123,7 +123,7 @@ Kubernetes v1.18におけるWindows上でのContainerDは以下の既知の欠 Kubernetes[ボリューム](/docs/concepts/storage/volumes/)を使用すると、データの永続性とPodボリュームの共有要件を備えた複雑なアプリケーションをKubernetesにデプロイできます。特定のストレージバックエンドまたはプロトコルに関連付けられた永続ボリュームの管理には、ボリュームのプロビジョニング/プロビジョニング解除/サイズ変更、Kubernetesノードへのボリュームのアタッチ/デタッチ、およびデータを永続化する必要があるPod内の個別のコンテナへのボリュームのマウント/マウント解除などのアクションが含まれます。特定のストレージバックエンドまたはプロトコルに対してこれらのボリューム管理アクションを実装するコードは、Kubernetesボリューム[プラグイン](/docs/concepts/storage/volumes/#types-of-volumes)の形式で出荷されます。次の幅広いクラスのKubernetesボリュームプラグインがWindowsでサポートされています。: ##### In-treeボリュームプラグイン -In-treeボリュームプラグインに関連付けられたコードは、コアKubernetesコードベースの一部として提供されます。In-treeボリュームプラグインのデプロイでは、追加のスクリプトをインストールしたり、個別のコンテナ化されたプラグインコンポーネントをデプロイしたりする必要はありません。これらのプラグインは、ストレージバックエンドでのボリュームのプロビジョニング/プロビジョニング解除とサイズ変更、Kubernetesノードへのボリュームのアタッチ/アタッチ解除、Pod内の個々のコンテナーへのボリュームのマウント/マウント解除を処理できます。次のIn-treeプラグインは、Windowsノードをサポートしています。: +In-treeボリュームプラグインに関連付けられたコードは、コアKubernetesコードベースの一部として提供されます。In-treeボリュームプラグインのデプロイでは、追加のスクリプトをインストールしたり、個別のコンテナ化されたプラグインコンポーネントをデプロイしたりする必要はありません。これらのプラグインは、ストレージバックエンドでのボリュームのプロビジョニング/プロビジョニング解除とサイズ変更、Kubernetesノードへのボリュームのアタッチ/アタッチ解除、Pod内の個々のコンテナへのボリュームのマウント/マウント解除を処理できます。次のIn-treeプラグインは、Windowsノードをサポートしています。: * [awsElasticBlockStore](/docs/concepts/storage/volumes/#awselasticblockstore) * [azureDisk](/docs/concepts/storage/volumes/#azuredisk) @@ -167,7 +167,7 @@ Windowsは、L2bridge、L2tunnel、Overlay、Transparent、NATの5つの異な | -------------- | ----------- | ------------------------------ | --------------- | ------------------------------ | | L2bridge | コンテナは外部のvSwitchに接続されます。コンテナはアンダーレイネットワークに接続されますが、物理ネットワークはコンテナのMACを上り/下りで書き換えるため、MACを学習する必要はありません。コンテナ間トラフィックは、コンテナホスト内でブリッジされます。 | MACはホストのMACに書き換えられ、IPは変わりません。| [win-bridge](https://github.com/containernetworking/plugins/tree/master/plugins/main/windows/win-bridge)、[Azure-CNI](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md)、Flannelホストゲートウェイは、win-bridgeを使用します。 | win-bridgeはL2bridgeネットワークモードを使用して、コンテナをホストのアンダーレイに接続して、最高のパフォーマンスを提供します。ノード間接続にはユーザー定義ルート(UDR)が必要です。 | | L2Tunnel | これはl2bridgeの特殊なケースですが、Azureでのみ使用されます。すべてのパケットは、SDNポリシーが適用されている仮想化ホストに送信されます。| MACが書き換えられ、IPがアンダーレイネットワークで表示されます。 | [Azure-CNI](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md) | Azure-CNIを使用すると、コンテナをAzure vNETと統合し、[Azure Virtual 
Networkが提供](https://azure.microsoft.com/en-us/services/virtual-network/)する一連の機能を活用できます。たとえば、Azureサービスに安全に接続するか、Azure NSGを使用します。[azure-cniのいくつかの例](https://docs.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking)を参照してください。| -| オーバーレイ(KubernetesのWindows用のオーバーレイネットワークは *アルファ* 段階です) | コンテナには、外部のvSwitchに接続されたvNICが付与されます。各オーバーレイネットワークは、カスタムIPプレフィックスで定義された独自のIPサブネットを取得します。オーバーレイネットワークドライバーは、VXLANを使用してカプセル化します。 | 外部ヘッダーでカプセル化されます。 | [Win-overlay](https://github.com/containernetworking/plugins/tree/master/plugins/main/windows/win-overlay)、Flannel VXLAN (win-overlayを使用) | win-overlayは、仮想コンテナーネットワークをホストのアンダーレイから分離する必要がある場合に使用する必要があります(セキュリティ上の理由など)。データセンター内のIPが制限されている場合に、(異なるVNIDタグを持つ)異なるオーバーレイネットワークでIPを再利用できるようにします。このオプションには、Windows Server 2019で[KB4489899](https://support.microsoft.com/help/4489899)が必要です。| +| オーバーレイ(KubernetesのWindows用のオーバーレイネットワークは *アルファ* 段階です) | コンテナには、外部のvSwitchに接続されたvNICが付与されます。各オーバーレイネットワークは、カスタムIPプレフィックスで定義された独自のIPサブネットを取得します。オーバーレイネットワークドライバーは、VXLANを使用してカプセル化します。 | 外部ヘッダーでカプセル化されます。 | [Win-overlay](https://github.com/containernetworking/plugins/tree/master/plugins/main/windows/win-overlay)、Flannel VXLAN (win-overlayを使用) | win-overlayは、仮想コンテナネットワークをホストのアンダーレイから分離する必要がある場合に使用する必要があります(セキュリティ上の理由など)。データセンター内のIPが制限されている場合に、(異なるVNIDタグを持つ)異なるオーバーレイネットワークでIPを再利用できるようにします。このオプションには、Windows Server 2019で[KB4489899](https://support.microsoft.com/help/4489899)が必要です。| | 透過的([ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes)の特別な使用例) | 外部のvSwitchが必要です。コンテナは外部のvSwitchに接続され、論理ネットワーク(論理スイッチおよびルーター)を介したPod内通信を可能にします。 | パケットは、[GENEVE](https://datatracker.ietf.org/doc/draft-gross-geneve/)または[STT](https://datatracker.ietf.org/doc/draft-davie-stt/)トンネリングを介してカプセル化され、同じホスト上にないポッドに到達します。パケットは、ovnネットワークコントローラーによって提供されるトンネルメタデータ情報を介して転送またはドロップされます。NATは南北通信のために行われます。 | [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes) | [ansible経由でデプロイ](https://github.com/openvswitch/ovn-kubernetes/tree/master/contrib)します。分散ACLは、Kubernetesポリシーを介して適用できます。 IPAMをサポートします。負荷分散は、kube-proxyなしで実現できます。 NATは、ip​​tables/netshを使用せずに行われます。 | | NAT(*Kubernetesでは使用されません*) | コンテナには、内部のvSwitchに接続されたvNICが付与されます。DNS/DHCPは、[WinNAT](https://blogs.technet.microsoft.com/virtualization/2016/05/25/windows-nat-winnat-capabilities-and-limitations/)と呼ばれる内部コンポーネントを使用して提供されます。 | MACおよびIPはホストMAC/IPに書き換えられます。 | [nat](https://github.com/Microsoft/windows-container-networking/tree/master/plugins/nat) | 完全を期すためにここに含まれています。 | diff --git a/content/ja/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md b/content/ja/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md index 8266904358b1f..772a33ba228c7 100644 --- a/content/ja/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md +++ b/content/ja/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md @@ -178,7 +178,7 @@ kubectl delete namespace default-mem-example * [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/) -* [Namespaceに対する最小および最大メモリー制約の構成](ja/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) +* [Namespaceに対する最小および最大メモリー制約の構成](/ja/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) * [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) @@ -190,8 +190,8 @@ kubectl delete namespace default-mem-example ### アプリケーション開発者向け -* 
[コンテナおよびPodへのメモリーリソースの割り当て](ja/docs/tasks/configure-pod-container/assign-memory-resource/) +* [コンテナおよびPodへのメモリーリソースの割り当て](/ja/docs/tasks/configure-pod-container/assign-memory-resource/) -* [コンテナおよびPodへのCPUリソースの割り当て](ja/docs/tasks/configure-pod-container/assign-cpu-resource/) +* [コンテナおよびPodへのCPUリソースの割り当て](/ja/docs/tasks/configure-pod-container/assign-cpu-resource/) -* [PodにQuality of Serviceを設定する](ja/docs/tasks/configure-pod-container/quality-service-pod/) +* [PodにQuality of Serviceを設定する](/ja/docs/tasks/configure-pod-container/quality-service-pod/) diff --git a/content/ja/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/ja/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index 2ac539bf06c34..0fe0ffe410a28 100644 --- a/content/ja/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/ja/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -295,7 +295,7 @@ Liveness ProbeおよびReadiness Probeのチェック動作をより正確に制 * `periodSeconds`: Probeが実行される頻度(秒数)。デフォルトは10秒。最小値は1。 * `timeoutSeconds`: Probeがタイムアウトになるまでの秒数。デフォルトは1秒。最小値は1。 * `successThreshold`: 一度Probeが失敗した後、次のProbeが成功したとみなされるための最小連続成功数。 -デフォルトは1。Liveness Probeには1を設定する必要があります。最小値は1。 +デフォルトは1。Liveness ProbeおよびStartup Probeには1を設定する必要があります。最小値は1。 * `failureThreshold`: Probeが失敗した場合、Kubernetesは`failureThreshold`に設定した回数までProbeを試行します。 Liveness Probeにおいて、試行回数に到達することはコンテナを再起動することを意味します。 Readiness Probeの場合は、Podが準備できていない状態として通知されます。デフォルトは3。最小値は1。 diff --git a/content/ja/docs/tasks/configure-pod-container/security-context.md b/content/ja/docs/tasks/configure-pod-container/security-context.md new file mode 100644 index 0000000000000..c4e1523af4ff5 --- /dev/null +++ b/content/ja/docs/tasks/configure-pod-container/security-context.md @@ -0,0 +1,444 @@ +--- +title: Podとコンテナにセキュリティコンテキストを設定する +content_type: task +weight: 80 +--- + + + +セキュリティコンテキストはPod・コンテナの特権やアクセスコントロールの設定を定義します。 +セキュリティコンテキストの設定には以下のものが含まれますが、これらに限定はされません。 + +* 任意アクセス制御: [user ID (UID) と group ID (GID)](https://wiki.archlinux.org/index.php/users_and_groups)に基づいて、ファイルなどのオブジェクトに対する許可を行います。 + +* [Security Enhanced Linux (SELinux)](https://ja.wikipedia.org/wiki/Security-Enhanced_Linux): + オブジェクトにセキュリティラベルを付与します。 + +* 特権または非特権として実行します。 + +* [Linux Capabilities](https://linux-audit.com/linux-capabilities-hardening-linux-binaries-by-removing-setuid/): + rootユーザーのすべての特権ではなく、一部の特権をプロセスに与えます。 + +* [AppArmor](/docs/tutorials/security/apparmor/): + プロファイルを用いて、個々のプログラムのcapabilityを制限します。 + +* [Seccomp](/docs/tutorials/security/seccomp/): プロセスのシステムコールを限定します。 + +* `allowPrivilegeEscalation`: あるプロセスが親プロセスよりも多くの特権を得ることができるかを制御します。 この真偽値は、コンテナプロセスに + [`no_new_privs`](https://www.kernel.org/doc/Documentation/prctl/no_new_privs.txt) + フラグが設定されるかどうかを直接制御します。 + コンテナが以下の場合、`allowPrivilegeEscalation`は常にtrueになります。 + - コンテナが特権で動いている + - `CAP_SYS_ADMIN`を持っている + +* `readOnlyRootFilesystem`: コンテナのルートファイルシステムが読み取り専用でマウントされます。 + +上記の項目は全てのセキュリティコンテキスト設定を記載しているわけではありません。 +より広範囲なリストは[SecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#securitycontext-v1-core)を確認してください。 + +## {{% heading "prerequisites" %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + + + +## Podにセキュリティコンテキストを設定する + +Podにセキュリティ設定を行うには、Podの設定に`securityContext`フィールドを追加してください。 +`securityContext`フィールドは[PodSecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritycontext-v1-core)オブジェクトが入ります。 
+Podに設定したセキュリティ設定はPod内の全てのコンテナに適用されます。こちらは`securityContext`と`emptyDir`ボリュームを持ったPodの設定ファイルです。 + +{{< codenew file="pods/security/security-context.yaml" >}} + +設定ファイルの中の`runAsUser`フィールドは、Pod内のコンテナに対して全てのプロセスをユーザーID 1000で実行するように指定します。 +`runAsGroup`フィールドはPod内のコンテナに対して全てのプロセスをプライマリーグループID 3000で実行するように指定します。このフィールドが省略されたときは、コンテナのプライマリーグループIDはroot(0)になります。`runAsGroup`が指定されている場合、作成されたファイルもユーザー1000とグループ3000の所有物になります。 +また`fsGroup`も指定されているため、全てのコンテナ内のプロセスは補助グループID 2000にも含まれます。`/data/demo`ボリュームとこのボリュームに作成されたファイルはグループID 2000になります。 + +Podを作成してみましょう。 + +```shell +kubectl apply -f https://k8s.io/examples/pods/security/security-context.yaml +``` + +Podのコンテナが実行されていることを確認します。 + +```shell +kubectl get pod security-context-demo +``` + +実行中のコンテナでshellを取ります。 + +```shell +kubectl exec -it security-context-demo -- sh +``` + +shellで、実行中のプロセスの一覧を確認します。 + +```shell +ps +``` + +`runAsUser`で指定した値である、ユーザー1000でプロセスが実行されていることが確認できます。 + +```none +PID USER TIME COMMAND + 1 1000 0:00 sleep 1h + 6 1000 0:00 sh +... +``` + +shellで`/data`に入り、ディレクトリの一覧を確認します。 + +```shell +cd /data +ls -l +``` + +`fsGroup`で指定した値であるグループID 2000で`/data/demo`ディレクトリが作成されていることが確認できます。 + +```none +drwxrwsrwx 2 root 2000 4096 Jun 6 20:08 demo +``` + +shellで`/data/demo`に入り、ファイルを作成します。 + +```shell +cd demo +echo hello > testfile +``` + +`/data/demo`ディレクトリでファイルの一覧を確認します。 + +```shell +ls -l +``` + +`fsGroup`で指定した値であるグループID 2000で`testfile`が作成されていることが確認できます。 + +```none +-rw-r--r-- 1 1000 2000 6 Jun 6 20:08 testfile +``` + +以下のコマンドを実行してください。 + +```shell +id +``` + +出力はこのようになります。 + +```none +uid=1000 gid=3000 groups=2000 +``` + +出力から`runAsGroup`フィールドと同じく`gid`が3000になっていることが確認できるでしょう。`runAsGroup`が省略された場合、`gid`は0(root)になり、そのプロセスはグループroot(0)とグループroot(0)に必要なグループパーミッションを持つグループが所有しているファイルを操作することができるようになります。 + +shellから抜けましょう。 + +```shell +exit +``` + +## Podのボリュームパーミッションと所有権変更ポリシーを設定する + +{{< feature-state for_k8s_version="v1.23" state="stable" >}} + +デフォルトでは、Kubernetesはボリュームがマウントされたときに、Podの`securityContext`で指定された`fsGroup`に合わせて再帰的に各ボリュームの中の所有権とパーミッションを変更します。 +大きなボリュームでは所有権の確認と変更に時間がかかり、Podの起動が遅くなります。 +`securityContext`の中の`fsGroupChangePolicy`フィールドを設定することで、Kubernetesがボリュームの所有権・パーミッションの確認と変更をどう行うかを管理することができます。 + +**fsGroupChangePolicy** - `fsGroupChangePolicy`は、ボリュームがPod内部で公開される前に所有権とパーミッションを変更するための動作を定義します。 + このフィールドは`fsGroup`で所有権とパーミッションを制御することができるボリュームタイプにのみ適用されます。このフィールドは以下の2つの値を取ります。 + +* _OnRootMismatch_: ルートディレクトリのパーミッションと所有権がボリュームに設定したパーミッションと一致しない場合のみ、パーミッションと所有権を変更します。ボリュームの所有権とパーミッションを変更するのにかかる時間が短くなる可能性があります。 +* _Always_: ボリュームがマウントされたときに必ずパーミッションと所有権を変更します。 + +例: + +```yaml +securityContext: + runAsUser: 1000 + runAsGroup: 3000 + fsGroup: 2000 + fsGroupChangePolicy: "OnRootMismatch" +``` + +{{< note >}} +このフィールドは +[`secret`](/docs/concepts/storage/volumes/#secret)、 +[`configMap`](/docs/concepts/storage/volumes/#configmap)、 +[`emptydir`](/docs/concepts/storage/volumes/#emptydir) +のようなエフェメラルボリュームタイプに対しては効果がありません。 +{{< /note >}} + +## CSIドライバーにボリュームパーミッションと所有権を移譲する + +{{< feature-state for_k8s_version="v1.26" state="stable" >}} + +`VOLUME_MOUNT_GROUP` `NodeServiceCapability`をサポートしている[Container Storage Interface (CSI)](https://github.com/container-storage-interface/spec/blob/master/spec.md)ドライバーをデプロイした場合、`securityContext`の`fsGroup`で指定された値に基づいてKubernetesの代わりにCSIドライバーがファイルの所有権とパーミッションの設定処理を行います。 +この場合Kubernetesは所有権とパーミッションの設定を行わないため`fsGroupChangePolicy`は無効となり、CSIで指定されている通りドライバーは`fsGroup`に従ってボリュームをマウントすると考えられるため、ボリュームは`fsGroup`に従って読み取り・書き込み可能になります。 + +## コンテナにセキュリティコンテキストを設定する + 
+コンテナに対してセキュリティ設定を行うには、コンテナマニフェストに`securityContext`フィールドを含めてください。`securityContext`フィールドには[SecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#securitycontext-v1-core)オブジェクトが入ります。 +コンテナに指定したセキュリティ設定は個々のコンテナに対してのみ反映され、Podレベルの設定を上書きします。コンテナの設定はPodのボリュームに対しては影響しません。 + +こちらは一つのコンテナを持つPodの設定ファイルです。Podもコンテナも`securityContext`フィールドを含んでいます。 + +{{< codenew file="pods/security/security-context-2.yaml" >}} + +Podを作成します。 + +```shell +kubectl apply -f https://k8s.io/examples/pods/security/security-context-2.yaml +``` + +Podのコンテナが実行されていることを確認します。 + +```shell +kubectl get pod security-context-demo-2 +``` + +実行中のコンテナでshellを取ります。 + +```shell +kubectl exec -it security-context-demo-2 -- sh +``` + +shellの中で、実行中のプロセスの一覧を表示します。 + +```shell +ps aux +``` + +ユーザー2000として実行されているプロセスが表示されました。これはコンテナの`runAsUser`で指定された値です。Podで指定された値である1000を上書きしています。 + +``` +USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND +2000 1 0.0 0.0 4336 764 ? Ss 20:36 0:00 /bin/sh -c node server.js +2000 8 0.1 0.5 772124 22604 ? Sl 20:36 0:00 node server.js +... +``` + +shellから抜けます。 + +```shell +exit +``` + +## コンテナにケーパビリティを設定する + +[Linuxケーパビリティ](https://man7.org/linux/man-pages/man7/capabilities.7.html)を用いると、プロセスに対してrootユーザーの全権を渡すことなく特定の権限を与えることができます。 +コンテナに対してLinuxケーパビリティを追加したり削除したりするには、コンテナマニフェストの`securityContext`セクションの`capabilities`フィールドに追加してください。 + +まず、`capabilities`フィールドを含まない場合どうなるかを見てみましょう。 +こちらはコンテナに対してケーパビリティを渡していない設定ファイルです。 + +{{< codenew file="pods/security/security-context-3.yaml" >}} + +Podを作成します。 + +```shell +kubectl apply -f https://k8s.io/examples/pods/security/security-context-3.yaml +``` + +Podが実行されていることを確認します。 + +```shell +kubectl get pod security-context-demo-3 +``` + +実行中のコンテナでshellを取ります。 + +```shell +kubectl exec -it security-context-demo-3 -- sh +``` + +shellの中で、実行中のプロセスの一覧を表示します。 + +```shell +ps aux +``` + +コンテナのプロセスID(PID)が出力されます。 + +``` +USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND +root 1 0.0 0.0 4336 796 ? Ss 18:17 0:00 /bin/sh -c node server.js +root 5 0.1 0.5 772124 22700 ? Sl 18:17 0:00 node server.js +``` + +shellの中で、プロセス1のステータスを確認します。 + +```shell +cd /proc/1 +cat status +``` + +プロセスのケーパビリティビットマップが表示されます。 + +``` +... +CapPrm: 00000000a80425fb +CapEff: 00000000a80425fb +... +``` + +ケーパビリティビットマップのメモを取った後、shellから抜けます。 + +```shell +exit +``` + +次に、追加のケーパビリティを除いて上と同じ設定のコンテナを実行します。 + +こちらは1つのコンテナを実行するPodの設定ファイルです。 +`CAP_NET_ADMIN`と`CAP_SYS_TIME`ケーパビリティを設定に追加しました。 + +{{< codenew file="pods/security/security-context-4.yaml" >}} + +Podを作成します。 + +```shell +kubectl apply -f https://k8s.io/examples/pods/security/security-context-4.yaml +``` + +実行中のコンテナでshellを取ります。 + +```shell +kubectl exec -it security-context-demo-4 -- sh +``` + +shellの中で、プロセス1のケーパビリティを確認します。 + +```shell +cd /proc/1 +cat status +``` + +プロセスのケーパビリティビットマップが表示されます。 + +``` +... +CapPrm: 00000000aa0435fb +CapEff: 00000000aa0435fb +... 
+``` + +2つのコンテナのケーパビリティを比較します。 + +``` +00000000a80425fb +00000000aa0435fb +``` + +1つ目のコンテナのケーパビリティビットマップでは、12, 25ビット目がクリアされています。2つ目のコンテナでは12, 25ビット目がセットされています。12ビット目は`CAP_NET_ADMIN`、25ビット目は`CAP_SYS_TIME`です。 +ケーパビリティの定数の定義は[capability.h](https://github.com/torvalds/linux/blob/master/include/uapi/linux/capability.h)を確認してください。 + +{{< note >}} +Linuxケーパビリティの定数は`CAP_XXX`形式です。 +ただしコンテナのマニフェストでケーパビリティを記述する際は、定数の`CAP_`の部分を省いてください。 +例えば、`CAP_SYS_TIME`を追加したい場合はケーパビリティに`SYS_TIME`を追加してください。 +{{< /note >}} + +## コンテナにSeccompプロフィールを設定する + +コンテナにSeccompプロフィールを設定するには、Pod・コンテナマニフェストの`securityContext`に`seccompProfile`フィールドを追加してください。 +`seccompProfile`フィールドは[SeccompProfile](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#seccompprofile-v1-core)オブジェクトで、`type`と`localhostProfile`で構成されています。 +`type`では`RuntimeDefault`、`Unconfined`、`Localhost`が有効です。 +`localhostProfile`は`type: Localhost`のときのみ指定可能です。こちらはノード上で事前に設定されたプロファイルのパスを示していて、kubeletのSeccompプロファイルの場所(`--root-dir`フラグで設定したもの)からの相対パスです。 + +こちらはノードのコンテナランタイムのデフォルトプロフィールをSeccompプロフィールとして設定した例です。 + +```yaml +... +securityContext: + seccompProfile: + type: RuntimeDefault +``` + +こちらは`/seccomp/my-profiles/profile-allow.json`で事前に設定したファイルをSeccompプロフィールに設定した例です。 + +```yaml +... +securityContext: + seccompProfile: + type: Localhost + localhostProfile: my-profiles/profile-allow.json +``` + +## コンテナにSELinuxラベルをつける + +コンテナにSELinuxラベルをつけるには、Pod・コンテナマニフェストの`securityContext`セクションに`seLinuxOptions`フィールドを追加してください。 +`seLinuxOptions`フィールドは[SELinuxOptions](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#selinuxoptions-v1-core)オブジェクトが入ります。 +こちらはSELinuxレベルを適用する例です。 + +```yaml +... +securityContext: + seLinuxOptions: + level: "s0:c123,c456" +``` + +{{< note >}} +SELinuxラベルを適用するには、ホストOSにSELinuxセキュリティモジュールが含まれている必要があります。 +{{< /note >}} + +### 効率的なSELinuxのボリューム再ラベル付け + +{{< feature-state for_k8s_version="v1.25" state="alpha" >}} + +デフォルトでは、コンテナランタイムは全てのPodのボリュームの全てのファイルに再帰的にSELinuxラベルを付与します。処理速度を上げるために、Kubernetesはマウントオプションで`-o context=
    Configuration parameters
    --apiserver-advertise-address string

+Se o nó hospedar uma nova instância da camada de gerenciamento, este é o endereço IP que o servidor da API irá anunciar que +está aguardando conexões. Quando não especificado, a interface de rede padrão é utilizada. +

    --apiserver-bind-port int32     Default: 6443

+Se o nó hospedar uma nova instância da camada de gerenciamento, a porta à qual o servidor da API deve se vincular. +

    --certificate-key string

+Chave utilizada para descriptografar as credenciais do certificado enviadas pelo comando init. +

    --config string

    +Caminho para um arquivo de configuração do kubeadm. +

    --control-plane

    +Cria uma nova instância da camada de gerenciamento neste nó. +

    --cri-socket string

+Caminho para o soquete CRI ao qual se conectar. Se vazio, o kubeadm tentará autodetectar este valor; utilize esta opção somente se você possui mais de um CRI instalado ou se você possui um soquete CRI fora do padrão.

    --discovery-file string

    +Para descoberta baseada em arquivo, um caminho de arquivo ou uma URL de onde a informação do cluster deve ser carregada. +

    --discovery-token string

    +Para descoberta baseada em token, o token utilizado para validar a informação do cluster obtida do servidor da API. +

    --discovery-token-ca-cert-hash strings

+Para descoberta baseada em token, verifica se a chave pública da CA raiz corresponde a este hash +(formato: "<tipo>:<valor>"). +

    --discovery-token-unsafe-skip-ca-verification

    +Para descoberta baseada em token, permite associar-se ao cluster sem fixação da +autoridade de certificação (opção --discovery-token-ca-cert-hash). +

    --dry-run

    +Não aplica as modificações; apenas imprime as alterações que seriam efetuadas. +

    -h, --help

    ajuda para join

    --ignore-preflight-errors strings

    Uma lista de verificações para as quais erros serão exibidos como avisos. Exemplos: 'IsPrivilegedUser,Swap'. O valor 'all' ignora erros de todas as verificações.

    --node-name string

    Especifica o nome do nó.

    --patches string

    Caminho para um diretório contendo arquivos nomeados no padrão "target[suffix][+patchtype].extension". Por exemplo, "kube-apiserver0+merge.yaml" ou somente "etcd.json". "target" pode ser um dos seguintes valores: "kube-apiserver", "kube-controller-manager", "kube-scheduler", "etcd". "patchtype" pode ser "strategic", "merge" ou "json" e corresponde aos formatos de patch suportados pelo kubectl. O valor padrão para "patchtype" é "strategic". "extension" deve ser "json" ou "yaml". "suffix" é uma string opcional utilizada para determinar quais patches são aplicados primeiro em ordem alfanumérica.

    --skip-phases strings

    Lista de fases a serem ignoradas.

    --tls-bootstrap-token string

    +Especifica o token a ser utilizado para autenticar temporariamente com a camada de gerenciamento do Kubernetes durante +o processo de associação do nó ao cluster. +

    --token string

    +Utiliza este token em ambas as opções discovery-token e tls-bootstrap-token quando tais valores não são informados. +

+### Opções herdadas dos comandos superiores
    --rootfs string

    [EXPERIMENTAL] O caminho para o sistema de arquivos raiz 'real' do host. +
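+A título de ilustração, as opções de descoberta e de registro do nó documentadas acima também podem ser expressas em um arquivo `JoinConfiguration`, passado ao kubeadm através da opção `--config`. O esboço abaixo é apenas um exemplo mínimo: o endereço do servidor da API, o token, o hash e o nome do nó são valores fictícios.

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "192.168.0.100:6443"  # endereço fictício do servidor da API
    token: "abcdef.1234567890abcdef"         # equivalente a --discovery-token
    caCertHashes:
      - "sha256:1234..cdef"                  # equivalente a --discovery-token-ca-cert-hash
nodeRegistration:
  name: "worker-01"                          # equivalente a --node-name
  criSocket: "unix:///var/run/containerd/containerd.sock"  # equivalente a --cri-socket
```

Para a lista completa de campos, consulte a [referência da API de configuração do kubeadm](/docs/reference/config-api/kubeadm-config.v1beta3/).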

    diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-join.md b/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-join.md new file mode 100644 index 0000000000000..7ecbda061d5b3 --- /dev/null +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-join.md @@ -0,0 +1,349 @@ +--- +title: kubeadm join +content_type: concept +weight: 30 +--- + +Este comando inicializa um nó de processamento do Kubernetes e o associa ao +cluster. + + +{{< include "generated/kubeadm_join.md" >}} + +### Fluxo do comando `join` {#join-workflow} + +O comando `kubeadm join` inicializa um nó de processamento ou um nó da camada +de gerenciamento e o adiciona ao cluster. Esta ação consiste nos seguintes passos +para nós de processamento: + +1. O kubeadm baixa as informações necessárias do cluster através servidor da API. + Por padrão, o token de autoinicialização e o _hash_ da chave da autoridade de + certificação (CA) são utilizados para verificar a autenticidade dos dados + baixados. O certificado raiz também pode ser descoberto diretamente através + de um arquivo ou URL. + +1. Uma vez que as informações do cluster são conhecidas, o kubelet pode começar + o processo de inicialização TLS. + + A inicialização TLS utiliza o token compartilhado para autenticar + temporariamente com o servidor da API do Kubernetes a fim de submeter uma + requisição de assinatura de certificado (_certificate signing request_, ou + CSR); por padrão, a camada de gerenciamento assina essa requisição CSR + automaticamente. + +1. Por fim, o kubeadm configura o kubelet local para conectar no servidor da API + com a identidade definitiva atribuída ao nó. + +Para nós da camada de gerenciamento, passos adicionais são executados: + +1. O download de certificados compartilhados por todos os nós da camada de + gerenciamento (quando explicitamente solicitado pelo usuário). + +1. Geração de manifestos, certificados e arquivo kubeconfig para os componentes + da camada de gerenciamento. + +1. Adição de um novo membro local do etcd. + +### Utilizando fases de associação com o kubeadm {#join-phases} + +O kubeadm permite que você associe um nó a um cluster em fases utilizando +`kubeadm join phase`. + +Para visualizar a lista ordenada de fases e subfases disponíveis, você pode +executar o comando `kubeadm join --help`. A lista estará localizada no topo da +tela da ajuda e cada fase terá uma descrição ao lado. Note que ao chamar +`kubeadm join` todas as fases e subfases serão executadas nesta ordem exata. + +Algumas fases possuem opções únicas, portanto, se você desejar ver uma lista das +opções disponíveis, adicione a _flag_ `--help`. Por exemplo: + +```shell +kubeadm join phase kubelet-start --help +``` + +De forma semelhante ao comando +[`kubeadm init phase`](/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-phases), +`kubeadm join phase` permite que você ignore uma lista de fases utilizando a +opção `--skip-phases`. + +Por exemplo: + +```shell +sudo kubeadm join --skip-phases=preflight --config=config.yaml +``` + +{{< feature-state for_k8s_version="v1.22" state="beta" >}} + +Alternativamente, você pode utilizar o campo `skipPhases` no manifesto +`JoinConfiguration`. + +### Descobrindo em qual autoridade de certificação (CA) do cluster confiar + +A descoberta do kubeadm tem diversas opções, cada uma com suas próprias +contrapartidas de segurança. 
O método correto para o seu ambiente depende de +como você aprovisiona seus nós e as expectativas de segurança que você tem a +respeito da rede e ciclo de vida dos seus nós. + +#### Descoberta baseada em token com fixação da autoridade de certificação (CA) + +Este é o modo padrão do kubeadm. Neste modo, o kubeadm baixa a configuração do +cluster (incluindo a CA raiz) e a valida, utilizando o token, além de verificar +que a chave pública da CA raiz corresponda ao _hash_ fornecido e que o +certificado do servidor da API seja válido sob a CA raiz. + +O _hash_ da chave pública da CA tem o formato `sha256:`. +Por padrão, o valor do _hash_ é retornado no comando `kubeadm join` impresso ao +final da execução de `kubeadm init` ou na saída do comando +`kubeadm token create --print-join-command`. Este _hash_ é gerado em um formato +padronizado (veja a [RFC7469](https://tools.ietf.org/html/rfc7469#section-2.4)) +e pode também ser calculado com ferramentas de terceiros ou sistemas de +provisionamento. Por exemplo, caso deseje utilizar a ferramenta de linha de +comando do OpenSSL: + +```shell +openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //' +``` + +**Exemplos de comandos `kubeadm join`:** + +Para nós de processamento: + +```shell +kubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef 1.2.3.4:6443 +``` + +Para nós da camada de gerenciamento: + +```shell +kubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef --control-plane 1.2.3.4:6443 +``` + +Você também pode executar o comando `join` para um nó da camada de gerenciamento +com a opção `--certificate-key` para copiar certificados para este nó, caso o +comando `kubeadm init` tenha sido executado com a opção `--upload-certs`. + +**Vantagens:** + +- Permite à inicialização dos nós descobrir uma raiz de confiança para a camada + de gerenciamento mesmo que outros nós de processamento ou a rede estejam + comprometidos. + +- É conveniente para ser executado manualmente pois toda a informação requerida + cabe num único comando `kubeadm join`. + +**Desvantagens:** + +- O _hash_ da autoridade de certificação normalmente não está disponível até que + a camada de gerenciamento seja aprovisionada, o que pode tornar mais difícil + a criação de ferramentas de aprovisionamento automatizadas que utilizem o + kubeadm. Uma alternativa para evitar esta limitação é gerar sua autoridade de + certificação de antemão. + +#### Descoberta baseada em token sem fixação da autoridade de certificação (CA) + +Este modo depende apenas do token simétrico para assinar (HMAC-SHA256) a +informação de descoberta que estabelece a raiz de confiança para a camada de +gerenciamento. Para utilizar este modo, os nós que estão se associando ao cluster +devem ignorar a validação do _hash_ da chave pública da autoridade de +certificação, utilizando a opção `--discovery-token-unsafe-skip-ca-verification`. +Você deve considerar o uso de um dos outros modos quando possível. + +**Exemplo de comando `kubeadm join`:** + +```shell +kubeadm join --token abcdef.1234567890abcdef --discovery-token-unsafe-skip-ca-verification 1.2.3.4:6443 +``` + +**Vantagens:** + +- Ainda protege de muitos ataques a nível de rede. + +- O token pode ser gerado de antemão e compartilhado com os nós da camada de + gerenciamento e de processamento, que por sua vez podem inicializar-se em + paralelo, sem coordenação. 
Isto permite que este modo seja utilizado em muitos + cenários de aprovisionamento. + +**Desvantagens:** + +- Se um mau ator conseguir roubar um token de inicialização através de algum tipo + de vulnerabilidade, este mau ator conseguirá utilizar o token (juntamente com + accesso a nível de rede) para personificar um nó da camada de gerenciamento + perante os outros nós de processamento. Esta contrapartida pode ou não ser + aceitável no seu ambiente. + +#### Descoberta baseada em arquivos ou HTTPS + +Este modo fornece uma maneira alternativa de estabelecer uma raiz de confiança +entre os nós da camada de gerenciamento e os nós de processamento. Considere +utilizar este modo se você estiver construindo uma infraestrutura de +aprovisionamento automático utilizando o kubeadm. O formato do arquivo de +descoberta é um arquivo [kubeconfig](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) +comum do Kubernetes. + +Caso o arquivo de descoberta não contenha credenciais, o token de descoberta TLS +será utilizado. + +**Exemplos de comandos `kubeadm join`:** + +- `kubeadm join --discovery-file caminho/para/arquivo.conf` (arquivo local) + +- `kubeadm join --discovery-file https://endereco/arquivo.conf` (URL HTTPS remota) + +**Vantagens:** + +- Permite à inicialização dos nós descobrir uma raiz de confiança de forma segura + para que a camada de gerenciamento utilize mesmo que a rede ou outros nós de + processamento estejam comprometidos. + +**Desvantagens:** + +- Requer que você tenha uma forma de carregar a informação do nó da camada de + gerenciamento para outros nós em inicialização. Se o arquivo de descoberta + contém credenciais, você precisa mantê-lo secreto e transferi-lo através de + um canal de comunicação seguro. Isto pode ser possível através do seu provedor + de nuvem ou ferramenta de aprovisionamento. + +### Tornando sua instalação ainda mais segura {#securing-more} + +Os valores padrão de instalação do kubeadm podem não funcionar para todos os +casos de uso. Esta seção documenta como tornar uma instalação mais segura, ao +custo de usabilidade. + +#### Desligando a auto-aprovação de certificados de cliente para nós + +Por padrão, um auto-aprovador de requisições CSR está habilitado. Este +auto-aprovador irá aprovar quaisquer requisições de certificado de cliente para +um kubelet quando um token de autoinicialização for utilizado para autenticação. +Se você não deseja que o cluster aprove automaticamente certificados de cliente +para os kubelets, você pode desligar a auto-aprovação com o seguinte comando: + +```shell +kubectl delete clusterrolebinding kubeadm:node-autoapprove-bootstrap +``` + +Após o desligamento da auto-aprovação, o comando `kubeadm join` irá aguardar até +que o administrador do cluster aprove a requisição CSR: + +1. Utilizando o comando `kubeadm get csr`, você verá que o CSR original está em + estado pendente. + + ```shell + kubectl get csr + ``` + + A saída é semelhante a: + + ``` + NAME AGE REQUESTOR CONDITION + node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ 18s system:bootstrap:878f07 Pending + ``` + +1. O comando `kubectl certificate approve` permite ao administrador aprovar o + CSR. Esta ação informa ao controlador de assinatura de certificados que este + deve emitir um certificado para o requerente com os atributos requeridos no + CSR. 
+ + ```shell + kubectl certificate approve node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ + ``` + + A saída é semelhante a: + + ``` + certificatesigningrequest "node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ" approved + ``` + +1. Este comando muda o estado do objeto CSR para o estado ativo. + + ```shell + kubectl get csr + ``` + + A saída é semelhante a: + + ``` + NAME AGE REQUESTOR CONDITION + node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ 1m system:bootstrap:878f07 Approved,Issued + ``` + +Esta mudança força com que o fluxo do comando `kubeadm join` seja bem-sucedido +somente quando o comando `kubectl certificate approve` for executado. + +#### Desligando o acesso público ao ConfigMap `cluster-info` + +Para que o fluxo de associação de um nó ao cluster seja possível utilizando +somente um token como a única informação necessária para validação, um ConfigMap +com alguns dados necessários para validação da identidade do nó da camada de +gerenciamento é exposto publicamente por padrão. Embora nenhum dado deste +ConfigMap seja privado, alguns usuários ainda podem preferir bloquear este +acesso. Mudar este acesso bloqueia a habilidade de utilizar a opção +`--discovery-token` do fluxo do comando `kubeadm join`. Para desabilitar este +acesso: + +* Obtenha o arquivo `cluster-info` do servidor da API: + +```shell +kubectl -n kube-public get cm cluster-info -o yaml | grep "kubeconfig:" -A11 | grep "apiVersion" -A10 | sed "s/ //" | tee cluster-info.yaml +``` + +A saída é semelhante a: + +```yaml +apiVersion: v1 +kind: Config +clusters: +- cluster: + certificate-authority-data: + server: https://: + name: "" +contexts: [] +current-context: "" +preferences: {} +users: [] +``` + +* Utilize o arquivo `cluster-info.yaml` como um argumento para o comando +`kubeadm join --discovery-file`. + +* Desligue o acesso público ao ConfigMap `cluster-info`: + +```shell +kubectl -n kube-public delete rolebinding kubeadm:bootstrap-signer-clusterinfo +``` + +Estes comandos devem ser executados após `kubeadm init`, mas antes de +`kubeadm join`. + +### Utilizando `kubeadm join` com um arquivo de configuração {#config-file} + +{{< caution >}} +O arquivo de configuração ainda é considerado beta e pode mudar em versões +futuras. +{{< /caution >}} + +É possível configurar o comando `kubeadm join` apenas com um arquivo de +configuração, em vez de utilizar opções de linha de comando, e algumas +funcionalidades avançadas podem estar disponíveis somente como opções no arquivo +de configuração. Este arquivo é passado através da opção `--config` e deve conter +uma estrutura `JoinConfiguration`. A utilização da opção `--config` com outras +opções da linha de comando pode não ser permitida em alguns casos. + +A configuração padrão pode ser emitida utilizando o comando +[kubeadm config print](/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-config/#cmd-config-print). + +Caso sua configuração não esteja utilizando a versão mais recente, é +**recomendado** que você migre utilizando o comando +[kubeadm config migrate](/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-config/#cmd-config-migrate). + +Para mais informações sobre os campos e utilização da configuração você pode +consultar a [referência da API](/docs/reference/config-api/kubeadm-config.v1beta3/). + +## {{% heading "whatsnext" %}} + +* [kubeadm init](/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-init/) para + inicializar um nó da camada de gerenciamento do Kubernetes. 
+* [kubeadm token](/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-token/) para + gerenciar tokens utilizados no comando `kubeadm join`. +* [kubeadm reset](/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-reset/) para + reverter quaisquer mudanças feitas nesta máquina pelos comandos `kubeadm init` + ou `kubeadm join`. diff --git a/content/pt-br/docs/tasks/access-application-cluster/_index.md b/content/pt-br/docs/tasks/access-application-cluster/_index.md new file mode 100644 index 0000000000000..60282a982a088 --- /dev/null +++ b/content/pt-br/docs/tasks/access-application-cluster/_index.md @@ -0,0 +1,6 @@ +--- +title: "Acessando Aplicações em um Cluster" +description: Configurar balanceamento de carga, redirecionamento de porta, ou configuração de firewall ou DNS para acessar aplicativos em um cluster. +weight: 60 +--- + diff --git a/content/pt-br/docs/tasks/access-application-cluster/configure-dns-cluster.md b/content/pt-br/docs/tasks/access-application-cluster/configure-dns-cluster.md new file mode 100644 index 0000000000000..5707855689591 --- /dev/null +++ b/content/pt-br/docs/tasks/access-application-cluster/configure-dns-cluster.md @@ -0,0 +1,13 @@ +--- +title: Configurar DNS em um cluster +weight: 120 +content_type: concept +--- + + +O Kubernetes oferece um complemento de DNS para os clusters, que a maioria dos ambientes suportados habilitam por padrão. Na versão do Kubernetes 1.11 e posterior, o CoreDNS é recomendado e instalado por padrão com o kubeadm. + + +Para mais informações sobre como configurar o CoreDNS para um cluster Kubernetes, veja [Personalização do Serviço de DNS](/docs/tasks/administer-cluster/dns-custom-nameservers/). Para ver um exemplo que demonstra como usar o DNS do Kubernetes com o kube-dns, consulte [Plugin de exemplo para DNS](https://github.com/kubernetes/examples/tree/master/staging/cluster-dns). + + diff --git a/content/pt-br/docs/tutorials/hello-minikube.md b/content/pt-br/docs/tutorials/hello-minikube.md index 0db5d20ddcea5..55d8fc6c5c283 100644 --- a/content/pt-br/docs/tutorials/hello-minikube.md +++ b/content/pt-br/docs/tutorials/hello-minikube.md @@ -40,7 +40,7 @@ Este tutorial disponibiliza uma imagem de contêiner que utiliza o NGINX para re {{< kat-button >}} {{< note >}} -Se você instalou o Minikube localmente, execute: `minikube start`. +Se você instalou o Minikube localmente, execute: `minikube start`. Antes de executar `minikube dashboard`, abra um novo terminal, execute `minikube dashboard` nele, e retorne para o terminal anterior. {{< /note >}} 2. Abra o painel do Kubernetes em um navegador: @@ -49,7 +49,32 @@ Se você instalou o Minikube localmente, execute: `minikube start`. minikube dashboard ``` -3. Apenas no ambiente do Katacoda: Na parte superior do terminal, clique em **Preview Port 30000**. +3. Apenas no ambiente do Katacoda: Na parte superior to painel do terminal, clique no sinal de mais (+), e selecione **Select port to view on Host 1**. + +4. Apenas no ambiente do Katacoda: Digite `30000`, e clique em **Display + Port**. + +{{< note >}} +O comando `dashboard` habilita o complemento (_addon_) de dashboard e abre o proxy no navegador padrão. +Voce pode criar recursos no Kubernetes, como Deployment e Service, pela dashboard. + +Se você está executando em um ambiente como administrador (_root_), veja [Acessando a Dashboard via URL](#acessando-a-dashboard-via-url). + +Por padrão, a dashboard só é accesível internamente pela rede virtual do Kubernetes. 
+O comando `dashboard` cria um proxy temporário que permite que a dashboard seja acessada externamente à rede virtual do Kubernetes. + +Para parar o proxy, execute `Ctrl+C` para terminar o processo. +A dashboard permanece sendo executada no cluster Kubernetes depois do comando ter sido terminado. +Você pode executar o comando `dashboard` novamente para criar outro proxy para accessar a dashboard +{{< /note >}} + +## Acessando a Dashboard via URL + +Caso não queira abrir o navegador, execute o comando `dashboard` com a flag `--url` para ver a URL: + +```shell +minikube dashboard --url +``` ## Criando um Deployment @@ -144,7 +169,7 @@ Por padrão, um Pod só é acessível utilizando o seu endereço IP interno no c 5. (**Apenas no ambiente do Katacoda**) Observe o número da porta com 5 dígitos exibido ao lado de `8080` na saída do serviço. Este número de porta é gerado aleatoriamente e pode ser diferente para você. Digite seu número na caixa de texto do número da porta e clique em **Display Port**. Usando o exemplo anterior, você digitaria `30369`. -Isso abre uma janela do navegador, acessa o seu aplicativo e mostra o retorno da requisição. + Isso abre uma janela do navegador, acessa o seu aplicativo e mostra o retorno da requisição. ## Habilitando Complementos (addons) @@ -255,4 +280,3 @@ minikube delete * Aprender mais sobre [Deployment objects](/docs/concepts/workloads/controllers/deployment/). * Aprender mais sobre [Deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/). * Aprender mais sobre [Service objects](/docs/concepts/services-networking/service/). - diff --git a/content/pt-br/examples/controllers/frontend.yaml b/content/pt-br/examples/controllers/frontend.yaml new file mode 100644 index 0000000000000..53be03c176312 --- /dev/null +++ b/content/pt-br/examples/controllers/frontend.yaml @@ -0,0 +1,21 @@ +apiVersion: apps/v1 +kind: ReplicaSet +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # modifique o número de replicas de acordo com o seu caso + replicas: 3 + selector: + matchLabels: + tier: frontend + template: + metadata: + labels: + tier: frontend + spec: + containers: + - name: php-redis + image: gcr.io/google_samples/gb-frontend:v3 diff --git a/content/pt-br/examples/controllers/hpa-rs.yaml b/content/pt-br/examples/controllers/hpa-rs.yaml new file mode 100644 index 0000000000000..a8388530dcba1 --- /dev/null +++ b/content/pt-br/examples/controllers/hpa-rs.yaml @@ -0,0 +1,11 @@ +apiVersion: autoscaling/v1 +kind: HorizontalPodAutoscaler +metadata: + name: frontend-scaler +spec: + scaleTargetRef: + kind: ReplicaSet + name: frontend + minReplicas: 3 + maxReplicas: 10 + targetCPUUtilizationPercentage: 50 diff --git a/content/pt-br/examples/pods/pod-rs.yaml b/content/pt-br/examples/pods/pod-rs.yaml new file mode 100644 index 0000000000000..df7b390597c49 --- /dev/null +++ b/content/pt-br/examples/pods/pod-rs.yaml @@ -0,0 +1,23 @@ +apiVersion: v1 +kind: Pod +metadata: + name: pod1 + labels: + tier: frontend +spec: + containers: + - name: hello1 + image: gcr.io/google-samples/hello-app:2.0 + +--- + +apiVersion: v1 +kind: Pod +metadata: + name: pod2 + labels: + tier: frontend +spec: + containers: + - name: hello2 + image: gcr.io/google-samples/hello-app:1.0 diff --git a/content/pt-br/includes/task-tutorial-prereqs.md b/content/pt-br/includes/task-tutorial-prereqs.md index 66b20b849f2f1..be00c38c8b0d0 100644 --- a/content/pt-br/includes/task-tutorial-prereqs.md +++ b/content/pt-br/includes/task-tutorial-prereqs.md @@ -1,6 +1,4 
@@ -Você precisa de um cluster Kubernetes e a ferramenta de linha de comando kubectl -precisa estar configurada para acessar o seu cluster. Se você ainda não tem um -cluster, pode criar um usando o [minikube](/docs/tasks/tools/#minikube) -ou você pode usar um dos seguintes ambientes: +Você precisa ter um cluster do Kubernetes e a ferramenta de linha de comando kubectl deve estar configurada para se comunicar com seu cluster. É recomendado executar esse tutorial em um cluster com pelo menos dois nós que não estejam atuando como hosts de camada de gerenciamento. Se você ainda não possui um cluster, pode criar um usando o [minikube](/docs/tasks/tools/#minikube) ou pode usar um dos seguintes ambientes: + * [Killercoda](https://killercoda.com/playgrounds/scenario/kubernetes) * [Play with Kubernetes](http://labs.play-with-k8s.com/) diff --git a/content/ru/_index.html b/content/ru/_index.html index 8763c548ecaaa..305f652aea368 100644 --- a/content/ru/_index.html +++ b/content/ru/_index.html @@ -43,12 +43,12 @@

    О сложности миграции 150+ микросервисов в Ku

    - Посетите KubeCon в Северной Америке, 24-28 октября 2022 года + Посетите KubeCon + CloudNativeCon в Европе, 18-21 апреля 2023 года



    - Посетите KubeCon в Европе, 17-21 апреля 2023 года + Посетите KubeCon + CloudNativeCon в Северной Америке, 6-9 ноября 2023 года

    diff --git a/content/ru/docs/concepts/architecture/control-plane-node-communication.md b/content/ru/docs/concepts/architecture/control-plane-node-communication.md index fa0db524af926..7b2f43991f854 100644 --- a/content/ru/docs/concepts/architecture/control-plane-node-communication.md +++ b/content/ru/docs/concepts/architecture/control-plane-node-communication.md @@ -13,16 +13,15 @@ aliases: Этот документ описывает связь между плоскостью управления (apiserver) и кластером Kubernetes. Цель состоит в том, чтобы позволить пользователям настраивать свою установку для усиления сетевой конфигурации, чтобы кластер мог работать в ненадежной сети (или на полностью общедоступных IP-адресах облачного провайдера). - - ## Связь между плоскостью управления и узлом + В Kubernetes имеется API шаблон «ступица и спица» (hub-and-spoke). Все используемые API из узлов (или которые запускают pod-ы) завершает apiserver. Ни один из других компонентов плоскости управления не предназначен для предоставления удаленных сервисов. Apiserver настроен на прослушивание удаленных подключений через безопасный порт HTTPS (обычно 443) с одной или несколькими включенными формами [аутентификации](/docs/reference/access-authn-authz/authentication/) клиента. Должна быть включена одна или несколько форм [авторизации](/docs/reference/access-authn-authz/authorization/), особенно, если разрешены [анонимные запросы](/docs/reference/access-authn-authz/authentication/#anonymous-requests) или [ServiceAccount токены](/docs/reference/access-authn-authz/authentication/#service-account-tokens). -Узлы должны быть снабжены публичным корневым сертификатом для кластера, чтобы они могли безопасно подключаться к apiserver-у вместе с действительными учетными данными клиента. Хороший подход заключается в том, чтобы учетные данные клиента, предоставляемые kubelet-у, имели форму клиентского сертификата. См. Информацию о загрузке kubelet TLS [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) для автоматической подготовки клиентских сертификатов kubelet. +Узлы должны быть снабжены публичным корневым сертификатом для кластера, чтобы они могли безопасно подключаться к apiserver-у вместе с действительными учетными данными клиента. Хороший подход заключается в том, чтобы учетные данные клиента, предоставляемые kubelet-у, имели форму клиентского сертификата. См. Информацию о загрузке [kubelet TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) для автоматической подготовки клиентских сертификатов kubelet. Pod-ы, которые хотят подключиться к apiserver, могут сделать это безопасно, используя ServiceAccount, чтобы Kubernetes автоматически вводил общедоступный корневой сертификат и действительный токен-носитель в pod при его создании. Служба `kubernetes` (в пространстве имен `default`) настроена с виртуальным IP-адресом, который перенаправляет (через kube-proxy) на HTTPS эндпоинт apiserver-а. @@ -49,7 +48,7 @@ Pod-ы, которые хотят подключиться к apiserver, мог Если это не возможно, используйте [SSH-тунелирование](#ssh-tunnels) между apiserver-ом и kubelet, если это необходимо, чтобы избежать подключения по ненадежной или общедоступной сети. -Наконец, должны быть включены [аутентификация или авторизация kubelet](/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/) для защиты kubelet API. +Наконец, должны быть включены [аутентификация или авторизация kubelet](/docs/reference/access-authn-authz/kubelet-authn-authz/) для защиты kubelet API. 
### apiserver для узлов, pod-ов, и служб diff --git a/content/ru/docs/concepts/cluster-administration/proxies.md b/content/ru/docs/concepts/cluster-administration/proxies.md new file mode 100644 index 0000000000000..a2aa66a4aae2a --- /dev/null +++ b/content/ru/docs/concepts/cluster-administration/proxies.md @@ -0,0 +1,62 @@ +--- +title: Типы прокси-серверов в Kubernetes +content_type: concept +weight: 90 +--- + + +На этой странице рассказывается о различных типах прокси-серверов, которые используются в Kubernetes. + + + + +## Прокси-серверы + +При работе с Kubernetes можно столкнуться со следующими типами прокси-серверов: + +1. [kubectl](/docs/tasks/access-application-cluster/access-cluster/#directly-accessing-the-rest-api): + + - работает на локальной машине или в Pod'е; + - поднимает канал связи от локальной машины к интерфейсу API-сервера Kubernetes; + - данные от клиента к прокси-серверу передаются по HTTP; + - данные от прокси к серверу API передаются по HTTPS; + - отвечает за обнаружение сервера API; + - добавляет заголовки аутентификации. + +1. [Прокси-сервер API](/docs/tasks/access-application-cluster/access-cluster-services/#discovering-builtin-services): + + - бастион, встроенный в API-сервер; + - подключает пользователя за пределами кластера к IP-адресам кластера, которые в ином случае могут оказаться недоступными; + - входит в процессы сервера API; + - данные от клиента к прокси-серверу передаются по HTTPS (или по HTTP, если сервер API настроен соответствующим образом); + - данные от прокси-сервера к цели передаются по HTTP или HTTPS в зависимости от настроек прокси; + - используется для доступа к узлам, Pod'ам или сервисам; + - при подключении к сервису выступает балансировщиком нагрузки. + +1. [kube proxy](/docs/concepts/services-networking/service/#ips-and-vips): + + - работает на каждом узле; + - обрабатывает трафик UDP, TCP и SCTP; + - "не понимает" HTTP; + - выполняет функции балансировщика нагрузки; + - используется только для доступа к сервисам. + +1. Прокси-сервер/балансировщик нагрузки перед API-сервером(-ами): + + - наличие и тип (например, nginx) определяется конфигурацией кластера; + - располагается между клиентами и одним или несколькими серверами API; + - балансирует запросы при наличии нескольких серверов API. + +1. Облачные балансировщики нагрузки на внешних сервисах: + + - предоставляются некоторыми облачными провайдерами (например, AWS ELB, Google Cloud Load Balancer); + - создаются автоматически для сервисов Kubernetes с типом `LoadBalancer`; + - как правило, поддерживают только UDP/TCP; + - наличие поддержки SCTP зависит от реализации балансировщика нагрузки облачного провайдера; + - реализация варьируется в зависимости от поставщика облачных услуг. + +Пользователи Kubernetes, как правило, в своей работе сталкиваются только с прокси-серверами первых двух типов. За настройку остальных типов обычно отвечает администратор кластера. + +## Запросы на перенаправления + +На смену функциям перенаправления (редиректам) пришли прокси-серверы. Перенаправления устарели. 
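+Для иллюстрации пятого пункта из списка выше приведён упрощённый набросок Service с типом `LoadBalancer` (имя, метки и порты здесь условные). При создании такого объекта облачный провайдер, поддерживающий эту возможность, автоматически создаёт внешний балансировщик нагрузки:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend              # условное имя сервиса
spec:
  type: LoadBalancer          # облачный провайдер создаст внешний балансировщик нагрузки
  selector:
    app: frontend             # условная метка Pod'ов, на которые направляется трафик
  ports:
    - protocol: TCP
      port: 80                # порт, открытый на балансировщике
      targetPort: 8080        # порт контейнера
```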
diff --git a/content/uk/docs/concepts/_index.md b/content/uk/docs/concepts/_index.md index 873e8ab2aefbf..ab9d757e99679 100644 --- a/content/uk/docs/concepts/_index.md +++ b/content/uk/docs/concepts/_index.md @@ -24,9 +24,9 @@ weight: 40 --> Для роботи з Kubernetes ви використовуєте *об'єкти API Kubernetes* для того, щоб описати *бажаний стан* вашого кластера: які застосунки або інші робочі навантаження ви плануєте запускати, які образи контейнерів вони використовують, кількість реплік, скільки ресурсів мережі та диску ви хочете виділити тощо. Ви задаєте бажаний стан, створюючи об'єкти в Kubernetes API, зазвичай через інтерфейс командного рядка `kubectl`. Ви також можете взаємодіяти із кластером, задавати або змінювати його бажаний стан безпосередньо через Kubernetes API. - -Після того, як ви задали бажаний стан, *площина управління Kubernetes* приводить поточний стан кластера до бажаного за допомогою Pod Lifecycle Event Generator ([PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md)). Для цього Kubernetes автоматично виконує ряд задач: запускає або перезапускає контейнери, масштабує кількість реплік у певному застосунку тощо. Площина управління Kubernetes складається із набору процесів, що виконуються у вашому кластері: +Після того, як ви задали бажаний стан, *площина управління Kubernetes* приводить поточний стан кластера до бажаного за допомогою Pod Lifecycle Event Generator ([PLEG](https://github.com/kubernetes/design-proposals-archive/blob/main/node/pod-lifecycle-event-generator.md)). Для цього Kubernetes автоматично виконує ряд задач: запускає або перезапускає контейнери, масштабує кількість реплік у певному застосунку тощо. Площина управління Kubernetes складається із набору процесів, що виконуються у вашому кластері: - 参加 2022 年 10 月 24-28 日的北美 KubeCon + + 参加 2023 年 4 月 18-21 日的欧洲 KubeCon + CloudNativeCon



    - - 参加 2023 年 4 月 17-21 日的欧洲 KubeCon + + 参加 2023 年 11 月 6-9 日的北美 KubeCon + CloudNativeCon
    diff --git a/content/zh-cn/blog/_posts/2022-10-04-introducing-kueue.md b/content/zh-cn/blog/_posts/2022-10-04-introducing-kueue.md new file mode 100644 index 0000000000000..12a993eddc288 --- /dev/null +++ b/content/zh-cn/blog/_posts/2022-10-04-introducing-kueue.md @@ -0,0 +1,399 @@ +--- +layout: blog +title: "Kueue 介绍" +date: 2022-10-04 +slug: introducing-kueue +--- + + + +**作者:** Abdullah Gharaibeh(谷歌),Aldo Culquicondor(谷歌) + + +无论是在本地还是在云端,集群都面临着资源使用、配额和成本管理方面的实际限制。 +无论自动扩缩容能力如何,集群的容量都是有限的。 +因此,用户需要一种简单的方法来公平有效地共享资源。 + + +在本文中,我们介绍了 [Kueue](https://github.com/kubernetes-sigs/kueue/tree/main/docs#readme), +这是一个开源作业队列控制器,旨在将批处理作业作为一个单元进行管理。 +Kueue 将 Pod 级编排留给 Kubernetes 现有的稳定组件。 +Kueue 原生支持 Kubernetes [Job](/zh-cn/docs/concepts/workloads/controllers/job/) API, +并提供用于集成其他定制 API 以进行批处理作业的钩子。 + + +## 为什么是 Kueue? + +作业队列是在本地和云环境中大规模运行批处理工作负载的关键功能。 +作业队列的主要目标是管理对多个租户共享的有限资源池的访问。 +作业排队决定了哪些作业应该等待,哪些可以立即启动,以及它们可以使用哪些资源。 + + +一些最需要的作业队列要求包括: + +- 用配额和预算来控制谁可以使用什么以及达到什么限制。 + 这不仅在具有静态资源(如本地)的集群中需要,而且在云环境中也需要控制稀缺资源的支出或用量。 + +- 租户之间公平共享资源。 + 为了最大限度地利用可用资源,应允许活动租户公平共享那些分配给非活动租户的未使用配额。 + +- 根据可用性,在不同资源类型之间灵活放置作业。 + 这在具有异构资源的云环境中很重要,例如不同的架构(GPU 或 CPU 模型)和不同的供应模式(即用与按需)。 + +- 支持可按需配置资源的自动扩缩容环境。 + + +普通的 Kubernetes 不能满足上述要求。 +在正常情况下,一旦创建了 Job,Job 控制器会立即创建 Pod,并且 kube-scheduler 会不断尝试将 Pod 分配给节点。 +大规模使用时,这种情况可能会使控制平面死机。 +目前也没有好的办法在 Job 层面控制哪些 Job 应该先获得哪些资源,也没有办法标明顺序或公平共享。 +当前的 ResourceQuota 模型不太适合这些需求,因为配额是在资源创建时强制执行的,并且没有请求排队。 +ResourceQuotas 的目的是提供一种内置的可靠性机制,其中包含管理员所需的策略,以防止集群发生故障转移。 + + +在 Kubernetes 生态系统中,Job 调度有多种解决方案。但是,我们发现这些替代方案存在以下一个或多个问题: + +- 它们取代了 Kubernetes 的现有稳定组件,例如 kube-scheduler 或 Job 控制器。 + 这不仅从操作的角度看是有问题的,而且重复的 Job API 也会导致生态系统的碎片化并降低可移植性。 + +- 它们没有集成自动扩缩容,或者 + +- 它们缺乏对资源灵活性的支持。 + + +## Kueue 的工作原理 {#overview} + +借助 Kueue,我们决定采用不同的方法在 Kubernetes 上进行 Job 排队,该方法基于以下方面: + +- 不复制已建立的 Kubernetes 组件提供的用于 Pod 调度、自动扩缩容和 Job 生命周期管理的现有功能。 + +- 将缺少的关键特性添加到现有组件中。例如,我们投资了 Job API 以涵盖更多用例,像 [IndexedJob](/blog/2021/04/19/introducing-indexed-jobs), + 并[修复了与 Pod 跟踪相关的长期存在的问题](/zh-cn/docs/concepts/workloads/controllers/job/#job-tracking-with-finalizers)。 + 虽然离特性落地还有很长一段路,但我们相信这是可持续的长期解决方案。 + +- 确保与具有弹性和异构性的计算资源云环境兼容。 + + +为了使这种方法可行,Kueue 需要旋钮来影响那些已建立组件的行为,以便它可以有效地管理何时何地启动一个 Job。 +我们以两个特性的方式将这些旋钮添加到 Job API: + +- [Suspend 字段](/zh-cn/docs/concepts/workloads/controllers/job/#suspending-a-job), + 它允许在 Job 启动或停止时,Kueue 向 Job 控制器发出信号。 + +- [可变调度指令](/zh-cn/docs/concepts/workloads/controllers/job/#mutable-scheduling-directives), + 允许在启动 Job 之前更新 Job 的 `.spec.template.spec.nodeSelector`。 + 这样,Kueue 可以控制 Pod 放置,同时仍将 Pod 到节点的实际调度委托给 kube-scheduler。 + + +请注意,任何自定义的 Job API 都可以由 Kueue 管理,只要该 API 提供上述两种能力。 + + +### 资源模型 + +Kueue 定义了新的 API 来解决本文开头提到的需求。三个主要的 API 是: + +- ResourceFlavor:一个集群范围的 API,用于定义可供消费的资源模板,如 GPU 模型。 + ResourceFlavor 的核心是一组标签,这些标签反映了提供这些资源的节点上的标签。 + +- ClusterQueue: 一种集群范围的 API,通过为一个或多个 ResourceFlavor 设置配额来定义资源池。 + +- LocalQueue: 用于分组和管理单租户 Jobs 的命名空间 API。 + 在最简单的形式中,LocalQueue 是指向集群队列的指针,租户(建模为命名空间)可以使用它来启动他们的 Jobs。 + + +有关更多详细信息,请查看 [API 概念文档](https://sigs.k8s.io/kueue/docs/concepts)。 +虽然这三个 API 看起来无法抗拒,但 Kueue 的大部分操作都以 ClusterQueue 为中心; +ResourceFlavor 和 LocalQueue API 主要是组织包装器。 + + +### 用例样例 + +想象一下在云上的 Kubernetes 集群上运行批处理工作负载的以下设置: + +- 你在集群中安装了 [cluster-autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) 以自动调整集群的大小。 + +- 有两种类型的自动缩放节点组,它们的供应策略不同:即用和按需。 + 分别对应标签:`instance-type=spot` 或者 `instance-type=ondemand`。 + 此外,并非所有作业都可以容忍在即用节点上运行,节点可以用 `spot=true:NoSchedule` 污染。 + +- 为了在成本和资源可用性之间取得平衡,假设你希望 Jobs 使用最多 1000 个核心按需节点,最多 2000 个核心即用节点。 + + +作为批处理系统的管理员,你定义了两个 
ResourceFlavor,它们代表两种类型的节点: + +```yaml +--- +apiVersion: kueue.x-k8s.io/v1alpha2 +kind: ResourceFlavor +metadata: + name: ondemand + labels: + instance-type: ondemand +--- +apiVersion: kueue.x-k8s.io/v1alpha2 +kind: ResourceFlavor +metadata: + name: spot + labels: + instance-type: spot +taints: +- effect: NoSchedule + key: spot + value: "true" +``` + +然后通过创建 ClusterQueue 来定义配额,如下所示: +```yaml +apiVersion: kueue.x-k8s.io/v1alpha2 +kind: ClusterQueue +metadata: + name: research-pool +spec: + namespaceSelector: {} + resources: + - name: "cpu" + flavors: + - name: ondemand + quota: + min: 1000 + - name: spot + quota: + min: 2000 +``` + + +注意 ClusterQueue 资源中的模板顺序很重要:Kueue 将尝试根据该顺序为 Job 分配可用配额,除非这些 Job 与特定模板有明确的关联。 + + +对于每个命名空间,定义一个指向上述 ClusterQueue 的 LocalQueue: + +```yaml +apiVersion: kueue.x-k8s.io/v1alpha2 +kind: LocalQueue +metadata: + name: training + namespace: team-ml +spec: + clusterQueue: research-pool +``` + + +管理员创建一次上述配置。批处理用户可以通过在他们的命名空间中列出 LocalQueues 来找到他们被允许提交的队列。 +该命令类似于:`kubectl get -n my-namespace localqueues` + + +要提交作业,需要创建一个 Job 并设置 `kueue.x-k8s.io/queue-name` 注解,如下所示: + +```yaml +apiVersion: batch/v1 +kind: Job +metadata: + generateName: sample-job- + annotations: + kueue.x-k8s.io/queue-name: training +spec: + parallelism: 3 + completions: 3 + template: + spec: + tolerations: + - key: spot + operator: "Exists" + effect: "NoSchedule" + containers: + - name: example-batch-workload + image: registry.example/batch/calculate-pi:3.14 + args: ["30s"] + resources: + requests: + cpu: 1 + restartPolicy: Never +``` + + +Kueue 在创建 Job 后立即进行干预以暂停 Job。 +一旦 Job 位于 ClusterQueue 的头部,Kueue 就会通过检查 Job 请求的资源是否符合可用配额来评估它是否可以启动。 + + +在上面的例子中,Job 容忍了 Spot 资源。如果之前承认的 Job 消耗了所有现有的按需配额, +但不是所有 Spot 配额,则 Kueue 承认使用 Spot 配额的 Job。Kueue 通过向 Job 对象发出单个更新来做到这一点: + +- 更改 `.spec.suspend` 标志位为 false +- 将 `instance-type: spot` 添加到 Job 的 `.spec.template.spec.nodeSelector` 中, +以便在 Job 控制器创建 Pod 时,这些 Pod 只能调度到 Spot 节点上。 + + +最后,如果有可用的空节点与节点选择器条件匹配,那么 kube-scheduler 将直接调度 Pod。 +如果不是,那么 kube-scheduler 将 pod 初始化标记为不可调度,这将触发 cluster-autoscaler 配置新节点。 + + +## 未来工作以及参与方式 + +上面的示例提供了 Kueue 的一些功能简介,包括支持配额、资源灵活性以及与集群自动缩放器的集成。 +Kueue 还支持公平共享、Job 优先级和不同的排队策略。 +查看 [Kueue 文档](https://github.com/kubernetes-sigs/kueue/tree/main/docs)以了解这些特性以及如何使用 Kueue 的更多信息。 + + +我们计划将许多特性添加到 Kueue 中,例如分层配额、预算和对动态大小 Job 的支持。 +在不久的将来,我们将专注于增加对 Job 抢占的支持。 + + +最新的 [Kueue 版本](https://github.com/kubernetes-sigs/kueue/releases)在 Github 上可用; +如果你在 Kubernetes 上运行批处理工作负载(需要 v1.22 或更高版本),可以尝试一下。 +这个项目还处于早期阶段,我们正在搜集大大小小各个方面的反馈,请不要犹豫,快来联系我们吧! +无论是修复或报告错误,还是帮助添加新特性或编写文档,我们欢迎一切形式的贡献者。 +你可以通过我们的[仓库](http://sigs.k8s.io/kueue)、[邮件列表](https://groups.google.com/a/kubernetes.io/g/wg-batch)或者 +[Slack](https://kubernetes.slack.com/messages/wg-batch) 与我们联系。 + + +最后是很重要的一点,感谢所有促使这个项目成为可能的[贡献者们](https://github.com/kubernetes-sigs/kueue/graphs/contributors)! 
diff --git a/content/zh-cn/blog/_posts/2022-10-18-kubernetes-1.26-deprecations-and-removals.md b/content/zh-cn/blog/_posts/2022-10-18-kubernetes-1.26-deprecations-and-removals.md new file mode 100644 index 0000000000000..9128b56736c28 --- /dev/null +++ b/content/zh-cn/blog/_posts/2022-10-18-kubernetes-1.26-deprecations-and-removals.md @@ -0,0 +1,307 @@ +--- +layout: blog +title: "Kubernetes 1.26 中的移除、弃用和主要变更" +date: 2022-11-18 +slug: upcoming-changes-in-kubernetes-1-26 +--- + + + +**作者** :Frederico Muñoz (SAS) + + +变化是 Kubernetes 生命周期不可分割的一部分:随着 Kubernetes 成长和日趋成熟, +为了此项目的健康发展,某些功能特性可能会被弃用、移除或替换为优化过的功能特性。 +Kubernetes v1.26 也做了若干规划:根据 v1.26 发布流程中期获得的信息, +本文将列举并描述其中一些变更,这些变更目前仍在进行中,可能会引入更多变更。 + + +## Kubernetes API 移除和弃用流程 {#k8s-api-deprecation-process} + +Kubernetes 项目对功能特性有一个[文档完备的弃用策略](/zh-cn/docs/reference/using-api/deprecation-policy/)。 +该策略规定,只有当较新的、稳定的相同 API 可用时,原有的稳定 API 才可以被弃用, +每个稳定级别的 API 都有一个最短的生命周期。弃用的 API 指的是已标记为将在后续发行某个 +Kubernetes 版本时移除的 API;移除之前该 API 将继续发挥作用(从弃用起至少一年时间), +但使用时会显示一条警告。被移除的 API 将在当前版本中不再可用,此时你必须迁移以使用替换的 API。 + + +* 正式发布(GA)或稳定的 API 版本可能被标记为已弃用,但只有在 Kubernetes 大版本更新时才会被移除。 +* 测试版(Beta)或预发布 API 版本在弃用后必须在后续 3 个版本中继续支持。 +* Alpha 或实验性 API 版本可以在任何版本中被移除,不另行通知。 + + +无论一个 API 是因为某功能特性从 Beta 进入稳定阶段而被移除,还是因为该 API 根本没有成功, +所有移除均遵从上述弃用策略。无论何时移除一个 API,文档中都会列出迁移选项。 + + +## 有关移除 CRI `v1alpha2` API 和 containerd 1.5 支持的说明 {#cri-api-removal} + +在 v1.24 中采用[容器运行时接口](/zh-cn/docs/concepts/architecture/cri/) (CRI) +并[移除 dockershim] 之后,CRI 是 Kubernetes 与不同容器运行时交互所支持和记录的方式。 +每个 kubelet 会协商使用哪个版本的 CRI 来配合该节点上的容器运行时。 + + +Kubernetes 项目推荐使用 CRI `v1` 版本;在 Kubernetes v1.25 中,kubelet 也可以协商使用 +CRI `v1alpha2`(在添加对稳定的 `v1` 接口的支持同时此项被弃用)。 + +Kubernetes v1.26 将不支持 CRI `v1alpha2`。如果容器运行时不支持 CRI `v1`, +则本次[移除](https://github.com/kubernetes/kubernetes/pull/110618)将导致 kubelet 不注册节点。 +这意味着 Kubernetes 1.26 将不支持 containerd 1.5 小版本及更早的版本;如果你使用 containerd, +则需要升级到 containerd v1.6.0 或更高版本,然后才能将该节点升级到 Kubernetes v1.26。其他仅支持 +`v1alpha2` 的容器运行时同样受到影响。如果此项移除影响到你, +你应该联系容器运行时供应商寻求建议或查阅他们的网站以获取有关如何继续使用的更多说明。 + + +如果你既想从 v1.26 特性中获益又想保持使用较旧的容器运行时,你可以运行较旧的 kubelet。 +kubelet [支持的版本偏差](/zh-cn/releases/version-skew-policy/#kubelet)允许你运行 +v1.25 的 kubelet,即使你将控制平面升级到了 Kubernetes 1.26 的某个次要版本,kubelet +仍然能兼容 `v1alpha2` CRI。 + +除了容器运行时本身,还有像 [stargz-snapshotter](https://github.com/containerd/stargz-snapshotter) +这样的工具充当 kubelet 和容器运行时之间的代理,这些工具也可能会受到影响。 + + +## Kubernetes v1.26 中的弃用和移除 {#deprecations-removals} + +除了上述移除外,Kubernetes v1.26 还准备包含更多移除和弃用。 + + +### 移除 `v1beta1` 流量控制 API 组 {#removal-of-v1beta1-flow-control-api-group} + +FlowSchema 和 PriorityLevelConfiguration 的 `flowcontrol.apiserver.k8s.io/v1beta1` API +版本[将不再在 v1.26 中提供](/zh-cn/docs/reference/using-api/deprecation-guide/#flowcontrol-resources-v126)。 +用户应迁移清单和 API 客户端才能使用自 v1.23 起可用的 `flowcontrol.apiserver.k8s.io/v1beta2` API 版本。 + + +### 移除 `v2beta2` HorizontalPodAutoscaler API {#removal-of-v2beta2-hpa-api} + +HorizontalPodAutoscaler 的 `autoscaling/v2beta2` API +版本[将不再在 v1.26 中提供](/zh-cn/docs/reference/using-api/deprecation-guide/#horizontalpodautoscaler-v126)。 +用户应迁移清单和 API 客户端以使用自 v1.23 起可用的 `autoscaling/v2` API 版本。 + + +### 移除树内凭证管理代码 {#removal-of-in-tree-credential-management-code} + +在即将发布的版本中,原来作为 Kubernetes 一部分的、特定于供应商的身份验证代码将从 `client-go` 和 `kubectl` +中[移除](https://github.com/kubernetes/kubernetes/pull/112341)。 +现有机制为两个特定云供应商提供身份验证支持:Azure 和 Google Cloud。 +作为替代方案,Kubernetes 在发布 v1.26 +之前已提供了供应商中立的[身份验证插件机制](/zh-cn/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins), +你现在就可以切换身份验证机制。如果你受到影响,你可以查阅有关如何继续使用 
+[Azure](https://github.com/Azure/kubelogin#readme) 和 +[Google Cloud](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke) +的更多指导信息。 + + +### 移除 `kube-proxy` userspace 模式 {#removal-of-kube-proxy-userspace-modes} + +已弃用一年多的 `userspace` 代理模式[不再受 Linux 或 Windows 支持](https://github.com/kubernetes/kubernetes/pull/112133), +并将在 v1.26 中被移除。Linux 用户应使用 `iptables` 或 `ipvs`,Windows 用户应使用 `kernelspace`: +现在使用 `--mode userspace` 会失败。 + + +### 移除树内 OpenStack 云驱动 {#removal-of-in-treee-openstack-cloud-provider} + +针对存储集成,Kubernetes 正在从使用树内代码转向使用容器存储接口 (CSI)。 +作为这个转变的一部分,Kubernetes v1.26 将移除已弃用的 OpenStack 树内存储集成(`cinder` 卷类型)。 +你应该迁移到外部云驱动或者位于 https://github.com/kubernetes/cloud-provider-openstack 的 CSI 驱动。 +有关详细信息,请访问 +[Cinder in-tree to CSI driver migration](https://github.com/kubernetes/enhancements/issues/1489)。 + + +### 移除 GlusterFS 树内驱动 {#removal-of-glusterfs-in-tree-driver} + +树内 GlusterFS 驱动在 [v1.25 中被弃用](/zh-cn/blog/2022/08/23/kubernetes-v1-25-release/#deprecations-and-removals), +且从 Kubernetes v1.26 起将被移除。 + + +### 弃用非包容性的 `kubectl` 标志 {#deprecation-of-non-inclusive-kubectl-flag} + +作为[包容性命名倡议(Inclusive Naming Initiative)](https://www.cncf.io/announcements/2021/10/13/inclusive-naming-initiative-announces-new-community-resources-for-a-more-inclusive-future/)的实现工作的一部分, +`--prune-whitelist` 标志将被[弃用](https://github.com/kubernetes/kubernetes/pull/113116),并替换为 `--prune-allowlist`。 +强烈建议使用此标志的用户在未来某个版本中最终移除该标志之前进行必要的变更。 + + +### 移除动态 kubelet 配置 {#removal-of-dynamic-kubelet-config} + +**动态 kubelet 配置** +允许[通过 Kubernetes API 推出新的 kubelet 配置](https://github.com/kubernetes/enhancements/tree/2cd758cc6ab617a93f578b40e97728261ab886ed/keps/sig-node/281-dynamic-kubelet-configuration), +甚至能在运作中集群上完成此操作。集群操作员可以通过指定包含 kubelet 应使用的配置数据的 ConfigMap +来重新配置节点上的 kubelet。动态 kubelet 配置已在 v1.24 中从 kubelet 中移除,并将在 v1.26 +版本中[从 API 服务器中移除](https://github.com/kubernetes/kubernetes/pull/112643)。 + + +### 弃用 `kube-apiserver` 命令行参数 {#deprecations-for-kube-apiserver-command-line-arg} + +`--master-service-namespace` 命令行参数对 kube-apiserver 没有任何效果, +并且已经被非正式地[被弃用](https://github.com/kubernetes/kubernetes/pull/38186)。 +该命令行参数将在 v1.26 中正式标记为弃用,准备在未来某个版本中移除。 +Kubernetes 项目预期不会因此项弃用和移除受到任何影响。 + + +### 弃用 `kubectl run` 命令行参数 {#deprecations-for-kubectl-run-command-line-arg} + +针对 `kubectl run` +子命令若干未使用的选项参数将[被标记为弃用](https://github.com/kubernetes/kubernetes/pull/112261),这包括: + +* `--cascade` +* `--filename` +* `--force` +* `--grace-period` +* `--kustomize` +* `--recursive` +* `--timeout` +* `--wait` + + +这些参数已被忽略,因此预计不会产生任何影响:显式的弃用会设置一条警告消息并准备在未来的某个版本中移除这些参数。 + + +### 移除与日志相关的原有命令行参数 {#removal-of-legacy-command-line-arg-relating-to-logging} + +Kubernetes v1.26 将[移除](https://github.com/kubernetes/kubernetes/pull/112120)一些与日志相关的命令行参数。 +这些命令行参数之前已被弃用。有关详细信息, +请参阅[弃用 Kubernetes 组件中的 klog 特定标志](https://github.com/kubernetes/enhancements/tree/3cb66bd0a1ef973ebcc974f935f0ac5cba9db4b2/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)。 + + +## 展望未来 {#looking-ahead} + +Kubernetes 1.27 计划[移除的 API 官方列表](/zh-cn/docs/reference/using-api/deprecation-guide/#v1-27)包括: + +* 所有 Beta 版的 CSIStorageCapacity API;特别是 `storage.k8s.io/v1beta1` + + +### 了解更多 {#want-to-know-more} + +Kubernetes 发行说明中宣告了弃用信息。你可以在以下版本的发行说明中看到待弃用的公告: + +* [Kubernetes 1.21](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#deprecation) +* [Kubernetes 1.22](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#deprecation) +* [Kubernetes 
1.23](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#deprecation) +* [Kubernetes 1.24](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#deprecation) +* [Kubernetes 1.25](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#deprecation) + + +我们将在 +[Kubernetes 1.26](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#deprecation) +的 CHANGELOG 中正式宣布该版本的弃用信息。 diff --git a/content/zh-cn/blog/_posts/2022-12-15-dynamic-resource-allocation-alpha/index.md b/content/zh-cn/blog/_posts/2022-12-15-dynamic-resource-allocation-alpha/index.md new file mode 100644 index 0000000000000..55b5485d416d8 --- /dev/null +++ b/content/zh-cn/blog/_posts/2022-12-15-dynamic-resource-allocation-alpha/index.md @@ -0,0 +1,552 @@ +--- +layout: blog +title: "Kubernetes 1.26: 动态资源分配 Alpha API" +date: 2022-12-15 +slug: dynamic-resource-allocation +--- + + + +**作者:** Patrick Ohly (Intel)、Kevin Klues (NVIDIA) + +**译者:** 空桐 + + +动态资源分配是一个用于请求资源的新 API。 +它是对为通用资源所提供的持久卷 API 的泛化。它可以: + +- 在不同的 pod 和容器中访问相同的资源实例, +- 将任意约束附加到资源请求以获取你正在寻找的确切资源, +- 通过用户提供的参数初始化资源。 + + +第三方资源驱动程序负责解释这些参数,并在资源请求到来时跟踪和分配资源。 + + +动态资源分配是一个 **alpha 特性**,只有在启用 `DynamicResourceAllocation` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) +和 `resource.k8s.io/v1alpha1` {{< glossary_tooltip text="API 组" +term_id="api-group" >}} 时才启用。 +有关详细信息,参阅 `--feature-gates` 和 `--runtime-config` +[kube-apiserver 参数](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/)。 +kube-scheduler、kube-controller-manager 和 kubelet 也需要设置该特性门控。 + + +kube-scheduler 的默认配置仅在启用特性门控时才启用 `DynamicResources` 插件。 +自定义配置可能需要被修改才能启用它。 + + +一旦启用动态资源分配,就可以安装资源驱动程序来管理某些类型的硬件。 +Kubernetes 有一个用于端到端测试的测试驱动程序,但也可以手动运行。 +逐步说明参见[下文](#running-the-test-driver)。 + + +## API + + +新的 `resource.k8s.io/v1alpha1` +{{< glossary_tooltip text="API 组" term_id="api-group" >}}提供了四种新类型: + + +ResourceClass +: 定义由哪个资源驱动程序处理哪种资源,并为其提供通用参数。 + 在安装资源驱动程序时,由集群管理员创建 ResourceClass。 + +ResourceClaim +: 定义工作负载所需的特定资源实例。 + 由用户创建(手动管理生命周期,可以在不同的 Pod 之间共享), + 或者由控制平面基于 ResourceClaimTemplate 为特定 Pod 创建 + (自动管理生命周期,通常仅由一个 Pod 使用)。 + + +ResourceClaimTemplate +: 定义用于创建 ResourceClaim 的 spec 和一些元数据。 + 部署工作负载时由用户创建。 + +PodScheduling +: 供控制平面和资源驱动程序内部使用, + 在需要为 Pod 分配 ResourceClaim 时协调 Pod 调度。 + + +ResourceClass 和 ResourceClaim 的参数存储在单独的对象中, +通常使用安装资源驱动程序时创建的 {{< glossary_tooltip +term_id="CustomResourceDefinition" text="CRD" >}} 所定义的类型。 + + +启用此 Alpha 特性后,Pod 的 `spec` 定义 Pod 运行所需的 ResourceClaim: +此信息放入新的 `resourceClaims` 字段。 +该列表中的条目引用 ResourceClaim 或 ResourceClaimTemplate。 +当引用 ResourceClaim 时,使用此 `.spec` 的所有 Pod +(例如 Deployment 或 StatefulSet 中的 Pod)共享相同的 ResourceClaim 实例。 +引用 ResourceClaimTemplate 时,每个 Pod 都有自己的实例。 + + +对于 Pod 中定义的容器,`resources.claims` 列表定义该容器可以访问的资源实例, +从而可以在同一 Pod 中的一个或多个容器之间共享资源。 +例如,init 容器可以在应用程序使用资源之前设置资源。 + + +下面是一个虚构的资源驱动程序的示例。 +此 Pod 将创建两个 ResourceClaim 对象,每个容器都可以访问其中一个。 + +假设已安装名为 `resource-driver.example.com` 的资源驱动程序和以下资源类: +``` +apiVersion: resource.k8s.io/v1alpha1 +kind: ResourceClass +name: resource.example.com +driverName: resource-driver.example.com +``` + + +这样,终端用户可以按如下方式分配两个类型为 +`resource.example.com` 的特定资源: +```yaml +--- +apiVersion: cats.resource.example.com/v1 +kind: ClaimParameters +name: large-black-cats +spec: + color: black + size: large +--- +apiVersion: resource.k8s.io/v1alpha1 +kind: ResourceClaimTemplate +metadata: + name: large-black-cats +spec: + spec: + resourceClassName: resource.example.com + parametersRef: + apiGroup: 
cats.resource.example.com + kind: ClaimParameters + name: large-black-cats +–-- +apiVersion: v1 +kind: Pod +metadata: + name: pod-with-cats +spec: + containers: # 两个示例容器;每个容器申领一个 cat 资源 + - name: first-example + image: ubuntu:22.04 + command: ["sleep", "9999"] + resources: + claims: + - name: cat-0 + - name: second-example + image: ubuntu:22.04 + command: ["sleep", "9999"] + resources: + claims: + - name: cat-1 + resourceClaims: + - name: cat-0 + source: + resourceClaimTemplateName: large-black-cats + - name: cat-1 + source: + resourceClaimTemplateName: large-black-cats +``` + + +## 调度 {#scheduling} + + +与原生资源(CPU、RAM)和[扩展资源](/zh-cn/docs/concepts/configuration/manage-resources-containers/#extended-resources) +(由设备插件管理,并由 kubelet 公布)不同,调度器不知道集群中有哪些动态资源, +也不知道如何将它们拆分以满足特定 ResourceClaim 的要求。 +资源驱动程序负责这些任务。 +资源驱动程序在为 ResourceClaim 保留资源后将其标记为**已分配(Allocated)**。 +然后告诉调度器集群中可用的 ResourceClaim 的位置。 + + +ResourceClaim 可以在创建时就进行分配(**立即分配**),不用考虑哪些 Pod 将使用该资源。 +默认情况下采用延迟分配(**等待第一个消费者**), +直到依赖于 ResourceClaim 的 Pod 有资格调度时再进行分配。 +这种两种分配选项的设计与 Kubernetes 处理 PersistentVolume 和 +PersistentVolumeClaim 供应的存储类似。 + + +在等待第一个消费者模式下,调度器检查 Pod 所需的所有 ResourceClaim。 +如果 Pod 有 ResourceClaim,则调度器会创建一个 PodScheduling +对象(一种特殊对象,代表 Pod 请求调度详细信息)。 +PodScheduling 的名称和命名空间与 Pod 相同,Pod 是它的所有者。 +调度器使用 PodScheduling 通知负责这些 ResourceClaim +的资源驱动程序,告知它们调度器认为适合该 Pod 的节点。 +资源驱动程序通过排除没有足够剩余资源的节点来响应调度器。 + + +一旦调度器有了资源信息,它就会选择一个节点,并将该选择存储在 PodScheduling 对象中。 +然后,资源驱动程序分配其 ResourceClaim,以便资源可用于选中的节点。 +一旦完成资源分配,调度器尝试将 Pod 调度到合适的节点。这时候调度仍然可能失败; +例如,不同的 Pod 可能同时被调度到同一个节点。如果发生这种情况,已分配的 +ResourceClaim 可能会被取消分配,从而让 Pod 可以被调度到不同的节点。 + + +作为此过程的一部分,ResourceClaim 会为 Pod 保留。 +目前,ResourceClaim 可以由单个 Pod 独占使用或不限数量的多个 Pod 使用。 + + +除非 Pod 的所有资源都已分配和保留,否则 Pod 不会被调度到节点,这是一个重要特性。 +这避免了 Pod 被调度到一个节点但无法在那里运行的情况, +这种情况很糟糕,因为被挂起 Pod 也会阻塞为其保留的其他资源,如 RAM 或 CPU。 + + +## 限制 {#limitations} + + +调度器插件必须参与调度那些使用 ResourceClaim 的 Pod。 +通过设置 `nodeName` 字段绕过调度器会导致 kubelet 拒绝启动 Pod, +因为 ResourceClaim 没有被保留或甚至根本没有被分配。 +未来可能去除此[限制](https://github.com/kubernetes/kubernetes/issues/114005)。 + + +## 编写资源驱动程序 {#writing-a-resource-driver} + + +动态资源分配驱动程序通常由两个独立但相互协调的组件组成: +一个集中控制器和一个节点本地 kubelet 插件的 DaemonSet。 +集中控制器与调度器协调所需的大部分工作都可以由样板代码处理。 +只有针对插件所拥有的 ResourceClass 实际分配 ResourceClaim 时所需的业务逻辑才需要自定义。 +因此,Kubernetes 提供了以下软件包,其中包括用于调用此样板代码的 API, +以及可以实现自定义业务逻辑的 `Driver` 接口: +- [k8s.io/dynamic-resource-allocation/controller](https://github.com/kubernetes/dynamic-resource-allocation/tree/release-1.26/controller) + + +同样,样板代码可用于向 kubelet 注册节点本地插件, +也可以启动 gRPC 服务器来实现 kubelet 插件 API。 +对于用 Go 编写的驱动程序,推荐使用以下软件包: +- [k8s.io/dynamic-resource-allocation/kubeletplugin](https://github.com/kubernetes/dynamic-resource-allocation/tree/release-1.26/kubeletplugin) + + +驱动程序开发人员决定这两个组件如何通信。 +[KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/3063-dynamic-resource-allocation/README.md) +详细介绍了[使用 CRD 的方法](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/3063-dynamic-resource-allocation#implementing-a-plugin-for-node-resources)。 + + +在 SIG Node 中,我们还计划提供一个完整的[示例驱动程序](https://github.com/kubernetes-sigs/dra-example-driver), +它可以当作其他驱动程序的模板。 + + +## 运行测试驱动程序 {#running-the-test-driver} + + +下面的步骤直接使用 Kubernetes 源代码启一个本地单节点集群。 +前提是,你的集群必须具有支持[容器设备接口](https://github.com/container-orchestrated-devices/container-device-interface) +(CDI)的容器运行时。 +例如,你可以运行 CRI-O [v1.23.2](https://github.com/cri-o/cri-o/releases/tag/v1.23.2) +或更高版本。containerd v1.7.0 发布后,我们期望你可以运行该版本或更高版本。 +在下面的示例中,我们使用 CRI-O。 + + +首先,克隆 Kubernetes 源代码。在其目录中,运行: +```console +$ hack/install-etcd.sh 
+... + +$ RUNTIME_CONFIG=resource.k8s.io/v1alpha1 \ + FEATURE_GATES=DynamicResourceAllocation=true \ + DNS_ADDON="coredns" \ + CGROUP_DRIVER=systemd \ + CONTAINER_RUNTIME_ENDPOINT=unix:///var/run/crio/crio.sock \ + LOG_LEVEL=6 \ + ENABLE_CSI_SNAPSHOTTER=false \ + API_SECURE_PORT=6444 \ + ALLOW_PRIVILEGED=1 \ + PATH=$(pwd)/third_party/etcd:$PATH \ + ./hack/local-up-cluster.sh -O +... +要使用集群,你可以打开另一个终端/选项卡并运行: + export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig +... +``` + + +集群启动后,在另一个终端运行测试驱动程序控制器。 +必须为以下所有命令设置 `KUBECONFIG`。 +```console +$ go run ./test/e2e/dra/test-driver --feature-gates ContextualLogging=true -v=5 controller +``` + +在另一个终端中,运行 kubelet 插件: +```console +$ sudo mkdir -p /var/run/cdi && \ + sudo chmod a+rwx /var/run/cdi /var/lib/kubelet/plugins_registry /var/lib/kubelet/plugins/ +$ go run ./test/e2e/dra/test-driver --feature-gates ContextualLogging=true -v=6 kubelet-plugin +``` + + +更改目录的权限,这样可以以普通用户身份运行和(使用 delve)调试 kubelet 插件, +这很方便,因为它使用已填充的 Go 缓存。 +完成后,记得使用 `sudo chmod go-w` 还原权限。 +或者,你也可以构建二进制文件并以 root 身份运行该二进制文件。 + +现在集群已准备好创建对象: +```console +$ kubectl create -f test/e2e/dra/test-driver/deploy/example/resourceclass.yaml +resourceclass.resource.k8s.io/example created + +$ kubectl create -f test/e2e/dra/test-driver/deploy/example/pod-inline.yaml +configmap/test-inline-claim-parameters created +resourceclaimtemplate.resource.k8s.io/test-inline-claim-template created +pod/test-inline-claim created + +$ kubectl get resourceclaims +NAME RESOURCECLASSNAME ALLOCATIONMODE STATE AGE +test-inline-claim-resource example WaitForFirstConsumer allocated,reserved 8s + +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +test-inline-claim 0/2 Completed 0 21s +``` + + +这个测试驱动程序没有做什么事情, +它只是将 ConfigMap 中定义的变量设为环境变量。 +测试 pod 会转储环境变量,所以可以检查日志以验证是否正常: +```console +$ kubectl logs test-inline-claim with-resource | grep user_a +user_a='b' +``` + +## 下一步 {#next-steps} + + +- 了解更多该设计的信息, + 参阅[动态资源分配 KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/3063-dynamic-resource-allocation/README.md)。 +- 阅读 Kubernetes 官方文档的[动态资源分配](/zh-cn/docs/concepts/scheduling-eviction/dynamic-resource-allocation/)。 +- 你可以参与 [SIG Node](https://github.com/kubernetes/community/blob/master/sig-node/README.md) + 和 [CNCF 容器编排设备工作组](https://github.com/cncf/tag-runtime/blob/master/wg/COD.md)。 +- 你可以查看或评论动态资源分配的[项目看板](https://github.com/orgs/kubernetes/projects/95/views/1)。 +- 为了将该功能向 beta 版本推进,我们需要来自硬件供应商的反馈, + 因此,有一个行动号召:尝试这个功能, + 考虑它如何有助于解决你的用户遇到的问题,并编写资源驱动程序… \ No newline at end of file diff --git a/content/zh-cn/blog/_posts/2022-12-27-cpumanager-goes-GA.md b/content/zh-cn/blog/_posts/2022-12-27-cpumanager-goes-GA.md new file mode 100644 index 0000000000000..9e7b876d103b5 --- /dev/null +++ b/content/zh-cn/blog/_posts/2022-12-27-cpumanager-goes-GA.md @@ -0,0 +1,154 @@ +--- +layout: blog +title: 'Kubernetes v1.26:CPUManager 正式发布' +date: 2022-12-27 +slug: cpumanager-ga +--- + + + +**作者:** Francesco Romani (Red Hat) + +**译者:** Michael Yao (DaoCloud) + + +CPU 管理器是 kubelet 的一部分;kubelet 是 Kubernetes 的节点代理,能够让用户给容器分配独占 CPU。 +CPU 管理器自从 Kubernetes v1.10 [进阶至 Beta](/blog/2018/07/24/feature-highlight-cpu-manager/), +已证明了它本身的可靠性,能够充分胜任将独占 CPU 分配给容器,因此采用率稳步增长, +使其成为性能关键型和低延迟场景的基本组件。随着时间的推移,大多数变更均与错误修复或内部重构有关, +以下列出了几个值得关注、用户可见的变更: + + +- [支持显式保留 CPU](https://github.com/Kubernetes/Kubernetes/pull/83592): + 之前已经可以请求为系统资源(包括 kubelet 本身)保留给定数量的 CPU,这些 CPU 将不会被用于独占 CPU 分配。 + 现在还可以显式选择保留哪些 CPU,而不是让 kubelet 自动拣选 CPU。 +- 使用 kubelet 本地 + [PodResources 
API](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources) + [向容器报告独占分配的 CPU](https://github.com/Kubernetes/Kubernetes/pull/97415),就像已为设备所做的一样。 +- [优化系统资源的使用](https://github.com/Kubernetes/Kubernetes/pull/101771),消除不必要的 sysfs 变更。 + + +CPU 管理器达到了“能胜任”的水平,因此在 Kubernetes v1.26 中,它进阶至正式发布(GA)状态。 + + +## CPU 管理器的自定义选项 {#cpu-managed-customization} + +CPU 管理器支持两种操作模式,使用其**策略**进行配置。 +使用 `none` 策略,CPU 管理器将 CPU 分配给容器,除了 Pod 规约中设置的(可选)配额外,没有任何特定限制。 +使用 `static` 策略,假设 Pod 属于 Guaranteed QoS 类,并且该 Pod 中的每个容器都请求一个整数核数的 vCPU, +则 CPU 管理器将独占分配 CPU。独占分配意味着(无论是来自同一个 Pod 还是来自不同的 Pod)其他容器都不会被调度到该 CPU 上。 + + +这种简单的操作模型很好地服务了用户群体,但随着 CPU 管理器越来越成熟, +用户开始关注更复杂的使用场景以及如何更好地支持这些使用场景。 + +社区没有添加更多策略,而是意识到几乎所有新颖的用例都是 `static` CPU 管理器策略所赋予的一些行为变化。 +因此,决定添加[调整静态策略行为的选项](https://github.com/Kubernetes/enhancements/tree/master/keps/sig-node/2625-cpumanager-policies-thread-placement #proposed-change)。 +这些选项都达到了不同程度的成熟度,类似于其他的所有 Kubernetes 特性, +为了能够被接受,每个新选项在禁用时都能提供向后兼容的行为,并能在需要进行交互时记录彼此如何交互。 + + +这使得 Kubernetes 项目能够将 CPU 管理器核心组件和核心 CPU 分配算法进阶至 GA,同时也开启了该领域新的实验时代。 +在 Kubernetes v1.26 中,CPU +管理器支持[三个不同的策略选项](/zh-cn/docs/tasks/administer-cluster/cpu-management-policies.md#static-policy-options): + + +`full-pcpus-only` +: 将 CPU 管理器核心分配算法限制为仅支持完整的物理核心,从而减少允许共享核心的硬件技术带来的嘈杂邻居问题。 + +`distribute-cpus-across-numa` +: 驱动 CPU 管理器跨 NUMA 节点均匀分配 CPU,以应对需要多个 NUMA 节点来满足分配的情况。 + +`align-by-socket` +: 更改 CPU 管理器将 CPU 分配给容器的方式:考虑 CPU 按插槽而不是 NUMA 节点边界对齐。 + + +## 后续发展 {#further-development} + +在主要 CPU 管理器特性进阶后,每个现有的策略选项将遵循其进阶过程,独立于 CPU 管理器和其他选项。 +添加新选项的空间虽然存在,但随着对更高灵活性的需求不断增长,CPU 管理器及其策略选项当前所提供的灵活性也有不足。 + +社区中正在讨论如何将 CPU 管理器和当前属于 kubelet 可执行文件的其他资源管理器拆分为可插拔的独立 kubelet 插件。 +如果你对这项努力感兴趣,请加入 SIG Node 交流频道(Slack、邮件列表、每周会议)进行讨论。 + + +## 进一步阅读 {#further-reading} + +请查阅[控制节点上的 CPU 管理策略](/zh-cn/docs/tasks/administer-cluster/cpu-management-policies/)任务页面以了解有关 +CPU 管理器的更多信息及其如何适配其他节点级别资源管理器。 + + +## 参与其中 {#getting-involved} + +此特性由 [SIG Node](https://github.com/Kubernetes/community/blob/master/sig-node/README.md) 社区驱动。 +请加入我们与社区建立联系,就上述特性和更多内容分享你的想法和反馈。我们期待你的回音! diff --git a/content/zh-cn/blog/_posts/2023-01-06-unhealthy-pod-eviction-policy-for-pdb.md b/content/zh-cn/blog/_posts/2023-01-06-unhealthy-pod-eviction-policy-for-pdb.md new file mode 100644 index 0000000000000..d17115d2ad9ab --- /dev/null +++ b/content/zh-cn/blog/_posts/2023-01-06-unhealthy-pod-eviction-policy-for-pdb.md @@ -0,0 +1,198 @@ +--- +layout: blog +title: "Kubernetes 1.26:PodDisruptionBudget 守护的不健康 Pod 所用的驱逐策略" +date: 2023-01-06 +slug: "unhealthy-pod-eviction-policy-for-pdbs" +--- + + + +**作者:** Filip Křepinský (Red Hat), Morten Torkildsen (Google), Ravi Gudimetla (Apple) + +**译者:** Michael Yao (DaoCloud) + + +确保对应用的干扰不影响其可用性不是一个简单的任务。 +上个月发布的 Kubernetes v1.26 允许针对 +[PodDisruptionBudget](/zh-cn/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets) (PDB) +指定**不健康 Pod 驱逐策略**,这有助于在节点执行管理操作期间保持可用性。 + + +## 这解决什么问题? 
{#what-problem-does-this-solve} + +API 发起的 Pod 驱逐尊重 PodDisruptionBudget (PDB) 约束。这意味着因驱逐 Pod +而请求的[自愿干扰](/zh-cn/docs/concepts/scheduling-eviction/#pod-disruption)不应干扰守护的应用且 +PDB 的 `.status.currentHealthy` 不应低于 `.status.desiredHealthy`。 +如果正在运行的 Pod 状态为 [Unhealthy](/zh-cn/docs/tasks/run-application/configure-pdb/#healthiness-of-a-pod), +则该 Pod 不计入 PDB 状态,只有在应用不受干扰时才可以驱逐这些 Pod。 +这有助于尽可能确保受干扰或还未启动的应用的可用性,不会因驱逐造成额外的停机时间。 + + +不幸的是,对于想要腾空节点但又不进行任何手动干预的集群管理员而言,这种机制是有问题的。 +若一些应用因 Pod 处于 `CrashLoopBackOff` 状态(由于漏洞或配置错误)或 Pod 无法进入就绪状态而行为异常, +会使这项任务变得更加困难。当某应用的所有 Pod 均不健康时,所有驱逐请求都会因违反 PDB 而失败。 +在这种情况下,腾空节点不会有任何作用。 + + +另一方面,有些用户依赖于现有行为,以便: + +- 防止因删除守护基础资源或存储的 Pod 而造成数据丢失 +- 让应用达到最佳可用性 + + +Kubernetes 1.26 为 PodDisruptionBudget API 引入了新的实验性字段: +`.spec.unhealthyPodEvictionPolicy`。启用此字段后,将允许你支持上述两种需求。 + + +## 工作原理 {#how-does-it-work} + +API 发起的驱逐是触发 Pod 优雅终止的一个进程。 +这个进程可以通过直接调用 API 发起,也能使用 `kubectl drain` 或集群中的其他主体来发起。 +在这个过程中,移除每个 Pod 时将与对应的 PDB 协商,确保始终有足够数量的 Pod 在集群中运行。 + + +以下策略允许 PDB 作者进一步控制此进程如何处理不健康的 Pod。 + +有两个策略可供选择:`IfHealthyBudget` 和 `AlwaysAllow`。 + +前者,`IfHealthyBudget` 采用现有行为以达到你默认可获得的最佳的可用性。 +不健康的 Pod 只有在其应用中可用的 Pod 个数达到 `.status.desiredHealthy` 即最小可用个数时才会被干扰。 + + +通过将 PDB 的 `spec.unhealthyPodEvictionPolicy` 字段设置为 `AlwaysAllow`, +可以表示尽可能为应用选择最佳的可用性。采用此策略时,始终能够驱逐不健康的 Pod。 +这可以简化集群的维护和升级。 + +我们认为 `AlwaysAllow` 通常是一个更好的选择,但是对于某些关键工作负载, +你可能仍然倾向于防止不健康的 Pod 被从节点上腾空或其他形式的 API 发起的驱逐。 + + +## 如何使用? {#how-do-i-use-it} + +这是一个 Alpha 特性,意味着你必须使用命令行参数 `--feature-gates=PDBUnhealthyPodEvictionPolicy=true` +为 kube-apiserver 启用 `PDBUnhealthyPodEvictionPolicy` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 + + +以下是一个例子。假设你已在集群中启用了此特性门控且你已定义了运行普通 Web 服务器的 Deployment。 +你已为 Deployment 的 Pod 打了标签 `app: nginx`。 +你想要限制可避免的干扰,你知道对于此应用而言尽力而为的可用性也是足够的。 +你决定即使这些 Web 服务器 Pod 不健康也允许驱逐。 +你创建 PDB 守护此应用,使用 `AlwaysAllow` 策略驱逐不健康的 Pod: + +```yaml +apiVersion: policy/v1 +kind: PodDisruptionBudget +metadata: + name: nginx-pdb +spec: + selector: + matchLabels: + app: nginx + maxUnavailable: 1 + unhealthyPodEvictionPolicy: AlwaysAllow +``` + + +## 查阅更多资料 {#how-can-i-learn-more} + +- 阅读 KEP:[Unhealthy Pod Eviction Policy for PDBs](https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/3017-pod-healthy-policy-for-pdb) +- 阅读针对 PodDisruptionBudget + 的[不健康 Pod 驱逐策略](/zh-cn/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy)文档 +- 参阅 [PodDisruptionBudget](/zh-cn/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets)、 + [腾空节点](/zh-cn/docs/tasks/administer-cluster/safely-drain-node/)和[驱逐](/zh-cn/docs/concepts/scheduling-eviction/api-eviction/)等 Kubernetes 文档 + + +## 我如何参与? {#how-do-i-get-involved} + +如果你有任何反馈,请通过 Slack [#sig-apps](https://kubernetes.slack.com/archives/C18NZM5K9) 频道 +(如有需要,请访问 https://slack.k8s.io/ 获取邀请)或通过 SIG Apps 邮件列表 +[kubernetes-sig-apps@googlegroups.com](https://groups.google.com/g/kubernetes-sig-apps) 联系我们。 diff --git a/content/zh-cn/community/code-of-conduct.md b/content/zh-cn/community/code-of-conduct.md index 8c35ef9594ba3..3002e93fa9880 100644 --- a/content/zh-cn/community/code-of-conduct.md +++ b/content/zh-cn/community/code-of-conduct.md @@ -1,40 +1,45 @@ --- -title: Kubernetes 社区行为规范 +title: Kubernetes 社区行为准则 layout: basic cid: community community_styles_migrated: true --- - - +-->

    - +file an issue. +--> Kubernetes 遵循 -CNCF 行为规范。 -CNCF 社区规范文本如下链接 -commit 0ce4694。 -如果你发现这个 CNCF 社区规范文本已经过时,请 -提交 issue。 +CNCF 行为准则。 +有关 CNCF 行为准则的文本,请参阅 +commit fff715fb0。 +如果你发现此 CNCF 行为准则的文本已经过时, +请提交 Issue

    - - -如果你在活动、会议、Slack 或是其它场合发现有任何违反行为规范的行为,请联系[Kubernetes 行为规范委员会](https://github.com/kubernetes/community/tree/master/committee-code-of-conduct)。 +the Kubernetes Code of Conduct Committee. +You can reach us by email at conduct@kubernetes.io. +Your anonymity will be protected. +--> +如果你在活动、会议、Slack 或是其它场合发现有任何违反行为准则的行为, +请联系 [Kubernetes 行为准则委员会](https://git.k8s.io/community/committee-code-of-conduct)。 +你可以发送电子邮件到 conduct@kubernetes.io。 我们会确保你的匿名性。

    diff --git a/content/zh-cn/community/static/cncf-code-of-conduct.md b/content/zh-cn/community/static/cncf-code-of-conduct.md index dde18750ea16f..8509b38644aca 100644 --- a/content/zh-cn/community/static/cncf-code-of-conduct.md +++ b/content/zh-cn/community/static/cncf-code-of-conduct.md @@ -1,39 +1,71 @@ -## 云原生计算基金会(CNCF)社区行为准则 1.0 版本 +## 云原生计算基金会(CNCF)社区行为准则 1.2 版本 ### 贡献者行为准则 -作为这个项目的贡献者和维护者,为了建立一个开放和受欢迎的社区, -我们保证尊重所有通过报告问题、发布功能请求、更新文档、提交拉取请求或补丁以及其他活动做出贡献的人员。 +作为 CNCF 社区的贡献者和维护者,我们努力建设一个开放和受欢迎的社区,我们承诺尊重所有上报 +Issue、发布功能需求、更新文档、提交 PR 或补丁的贡献者以及其他相关活动的所有参与者。 -我们致力于让参与此项目的每个人都不受骚扰, -无论其经验水平、性别、性别认同和表达、性取向、残疾、个人外貌、体型、人种、种族、年龄、宗教或国籍等。 +我们致力于让参与此项目的每个人都不受骚扰,无论其经验水平、性别、性别认同和表达、性取向、残疾、个人外貌、体型、人种、种族、年龄、宗教或国籍等。 + +## 范围 + +当某个人代表项目或其社区时,本行为准则适用于项目空间和公共空间。 + +### CNCF 活动行为准则 + +云原生计算基金会(CNCF)活动受 Linux +基金会[活动行为准则](https://events.linuxfoundation.org/code-of-conduct/)管辖,该行为准则可在活动页面获得。 +其旨在与 CNCF 行为准则兼容。 + +## 我们的标准 + +对社区贡献具有积极正向作用的行为包括: + +* 表现出对他人的共情和善意 +* 尊重不同的意见、观点和经验 +* 提出和优雅地接受建设性的反馈 +* 有担当,如因自己的错误影响到别人能适时道歉,吸取经验教训 +* 关注点不只放在如何让我们自己受益,还能着眼于整个社区 不可接受的参与者行为包括: -- 使用性语言或图像 -- 人身攻击 -- 挑衅、侮辱或贬低性评论 -- 公开或私下骚扰 -- 未经明确许可,发布他人的私人信息,比如地址或电子邮箱 -- 其他不道德或不专业的行为 +* 使用性语言或图像以及任何类型的性关注和性倾向 +* 挑衅、侮辱或贬低性评论以及人身或政治攻击 +* 公开或私下骚扰 +* 未经明确许可,发布他人的私密信息,比如地址或电子邮箱 +* 其他可能视为不道德或不专业的行为 -项目维护者有权利和责任删除、编辑或拒绝评论、提交、代码、维基编辑、问题和其他不符合本行为准则的贡献。 +项目维护者有权利和责任删除、编辑或拒绝不符合本行为准则的评论、提交、编写代码、编辑维基、提问和其它贡献。 通过采用本行为准则,项目维护者承诺将这些原则公平且一致地应用到这个项目管理的各个方面。 不遵守或不执行行为准则的项目维护者可能被永久地从项目团队中移除。 -当个人代表项目或其社区时,本行为准则适用于项目空间和公共空间。 +## 上报 -如需举报侮辱、骚扰或其他不可接受的行为, -你可发送邮件至 联系 -[Kubernetes行为守则委员会](https://github.com/kubernetes/community/tree/master/committee-code-of-conduct)。 -其他事务请联系CNCF项目维护专员,或发送邮件至 联系我们的调解员Mishi Choudhary。 +对于 Kubernetes 社区中出现的事件,请发送邮件至 联系 +[Kubernetes 行为准则委员会](https://git.k8s.io/community/committee-code-of-conduct)。 +预计你可以在三个工作日内收到答复。 -本行为准则改编自《贡献者契约》( https://contributor-covenant.org )1.2.0 版本, -可在 https://contributor-covenant.org/version/1/2/0/ 查看。 +对于其他项目、或与项目无关或影响到多个 CNCF 项目的事件,请通过 conduct@cncf.io +联系 [CNCF 行为准则委员会](https://www.cncf.io/conduct/committee/)。 +你也可以联系 [CNCF 行为准则委员会](https://www.cncf.io/conduct/committee/)的任何成员提交你的报告。 +包括匿名提交报告在内有关如何提交报告的更多详细指示, +请参阅我们的[事件解决程序](https://www.cncf.io/conduct/procedures/)。 +预计你可以在三个工作日内收到答复。 -### CNCF 活动行为准则 +## 执行 + +Kubernetes 项目的[行为准则委员会](https://github.com/kubernetes/community/tree/master/committee-code-of-conduct)负责解决 +Kubernetes 项目有关的行为准则问题。 + +对于未设立行为准则委员会或没有行为准则事件应变人员的其他所有项目,以及与项目无关或影响到多个项目的事件, +[CNCF 行为准则委员会](https://www.cncf.io/conduct/committee/)负责解决行为准则问题。 +有关更多信息,请参阅我们的[管辖及逐级上报政策](https://www.cncf.io/conduct/jurisdiction/)。 + +这两个委员会将尝试以零惩罚平息事件,但可能自行决定将相关人员从项目或 CNCF 社区中移除。 + +## 致谢 -云原生计算基金会(CNCF)活动受 Linux 基金会《[行为准则](https://events.linuxfoundation.org/code-of-conduct/)》管辖, -该行为准则可在活动页面获得。其旨在与上述政策兼容,且包括更多关于事件回应的细节。 \ No newline at end of file +本行为准则改编自[贡献者契约(Contributor Covenant)](http://contributor-covenant.org) v2.0, +具体可查阅 http://contributor-covenant.org/version/2/0/code_of_conduct/ diff --git a/content/zh-cn/docs/concepts/architecture/cri.md b/content/zh-cn/docs/concepts/architecture/cri.md index c80a2af5c6fe2..30128ef1802ca 100644 --- a/content/zh-cn/docs/concepts/architecture/cri.md +++ b/content/zh-cn/docs/concepts/architecture/cri.md @@ -58,8 +58,9 @@ If the kubelet cannot negotiate a supported CRI version, the kubelet gives up and doesn't register as a node. 
--> 对 Kubernetes v{{< skew currentVersion >}},kubelet 偏向于使用 CRI `v1` 版本。 -如果容器运行时不支持 CRI 的 `v1` 版本,那么 kubelet 会尝试协商任何旧的其他支持版本。 -如果 kubelet 无法协商支持的 CRI 版本,则 kubelet 放弃并且不会注册为节点。 +如果容器运行时不支持 CRI 的 `v1` 版本,那么 kubelet 会尝试协商较老的、仍被支持的所有版本。 +v{{< skew currentVersion >}} 版本的 kubelet 也可协商 CRI `v1alpha2` 版本,但该版本被视为已弃用。 +如果 kubelet 无法协商出可支持的 CRI 版本,则 kubelet 放弃并且不会注册为节点。 + + + + +分布式系统通常需要“租约”,它提供了一种机制来锁定共享资源并协调节点之间的活动。 +在 Kubernetes 中,“租约”概念表示为 `coordination.k8s.io` API 组中的 `Lease` 对象, +常用于类似节点心跳和组件级领导者选举等系统核心能力。 + + + + +## 节点心跳 {#node-heart-beats} + +Kubernetes 使用 Lease API 将 kubelet 节点心跳传递到 Kubernetes API 服务器。 +对于每个 `Node`,在 `kube-node-lease` 名字空间中都有一个具有匹配名称的 `Lease` 对象。 +在此基础上,每个 kubelet 心跳都是对该 `Lease` 对象的 UPDATE 请求,更新该 Lease 的 `spec.renewTime` 字段。 +Kubernetes 控制平面使用此字段的时间戳来确定此 `Node` 的可用性。 + +更多细节请参阅 [Node Lease 对象](/zh-cn/docs/concepts/architecture/nodes/#heartbeats)。 + + +## 领导者选举 {#leader-election} + +租约在 Kubernetes 中还用于确保在任何给定时间某个组件只有一个实例在运行。 +这在高可用配置中由 `kube-controller-manager` 和 `kube-scheduler` 等控制平面组件进行使用, +这些组件只应有一个实例激活运行,而其他实例待机。 + + +## API 服务器身份 {#api-server-identity} + +{{< feature-state for_k8s_version="v1.26" state="beta" >}} + + +从 Kubernetes v1.26 开始,每个 `kube-apiserver` 都使用 Lease API 将其身份发布到系统中的其他位置。 +虽然它本身并不是特别有用,但为客户端提供了一种机制来发现有多少个 `kube-apiserver` 实例正在操作 +Kubernetes 控制平面。kube-apiserver 租约的存在使得未来可以在各个 kube-apiserver 之间协调新的能力。 + +你可以检查 `kube-system` 名字空间中名为 `kube-apiserver-` 的 Lease 对象来查看每个 +kube-apiserver 拥有的租约。你还可以使用标签选择算符 `k8s.io/component=kube-apiserver`: + +```shell +$ kubectl -n kube-system get lease -l k8s.io/component=kube-apiserver +NAME HOLDER AGE +kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a_9cbf54e5-1136-44bd-8f9a-1dcd15c346b4 5m33s +kube-apiserver-dz2dqprdpsgnm756t5rnov7yka kube-apiserver-dz2dqprdpsgnm756t5rnov7yka_84f2a85d-37c1-4b14-b6b9-603e62e4896f 4m23s +kube-apiserver-fyloo45sdenffw2ugwaz3likua kube-apiserver-fyloo45sdenffw2ugwaz3likua_c5ffa286-8a9a-45d4-91e7-61118ed58d2e 4m43s +``` + + +租约名称中使用的 SHA256 哈希基于 kube-apiserver 所看到的操作系统主机名生成。 +每个 kube-apiserver 都应该被配置为使用集群中唯一的主机名。 +使用相同主机名的 kube-apiserver 新实例将使用新的持有者身份接管现有租约,而不是实例化新的 Lease 对象。 +你可以通过检查 `kubernetes.io/hostname` 标签的值来查看 kube-apisever 所使用的主机名: + +```shell +kubectl -n kube-system get lease kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a -o yaml +``` + +```yaml +apiVersion: coordination.k8s.io/v1 +kind: Lease +metadata: + creationTimestamp: "2022-11-30T15:37:15Z" + labels: + k8s.io/component: kube-apiserver + kubernetes.io/hostname: kind-control-plane + name: kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a + namespace: kube-system + resourceVersion: "18171" + uid: d6c68901-4ec5-4385-b1ef-2d783738da6c +spec: + holderIdentity: kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a_9cbf54e5-1136-44bd-8f9a-1dcd15c346b4 + leaseDurationSeconds: 3600 + renewTime: "2022-11-30T18:04:27.912073Z" +``` + + +kube-apiserver 中不再存续的已到期租约将在到期 1 小时后被新的 kube-apiservers 作为垃圾收集。 diff --git a/content/zh-cn/docs/concepts/architecture/nodes.md b/content/zh-cn/docs/concepts/architecture/nodes.md index 8e4670e3c4307..3c85499a27d43 100644 --- a/content/zh-cn/docs/concepts/architecture/nodes.md +++ b/content/zh-cn/docs/concepts/architecture/nodes.md @@ -15,7 +15,7 @@ weight: 10 -Kubernetes 通过将容器放入在节点(Node)上运行的 Pod 中来执行你的工作负载。 +Kubernetes 通过将容器放入在节点(Node)上运行的 Pod +中来执行你的{{< glossary_tooltip text="工作负载" term_id="workload" >}}。 节点可以是一个虚拟机或者物理机器,取决于所在的集群配置。 每个节点包含运行 {{< glossary_tooltip text="Pod" term_id="pod" >}} 所需的服务; -这些节点由 {{< glossary_tooltip text="控制面" term_id="control-plane" >}} 负责管理。 +这些节点由{{< glossary_tooltip 
text="控制面" term_id="control-plane" >}}负责管理。 通常集群中会有若干个节点;而在一个学习所用或者资源受限的环境中,你的集群中也可能只有一个节点。 @@ -100,6 +101,7 @@ You, or a {{< glossary_tooltip term_id="controller" text="controller">}}, must e delete the Node object to stop that health checking. --> Kubernetes 会一直保存着非法节点对应的对象,并持续检查该节点是否已经变得健康。 + 你,或者某个{{< glossary_tooltip term_id="controller" text="控制器">}}必须显式地删除该 Node 对象以停止健康检查操作。 {{< /note >}} @@ -371,7 +373,7 @@ Condition,被保护起来的节点在其规约中被标记为不可调度(Un In the Kubernetes API, a node's condition is represented as part of the `.status` of the Node resource. For example, the following JSON structure describes a healthy node: --> -在 Kubernetes API 中,节点的状况表示节点资源中`.status` 的一部分。 +在 Kubernetes API 中,节点的状况表示节点资源中 `.status` 的一部分。 例如,以下 JSON 结构描述了一个健康节点: ```json @@ -424,7 +426,7 @@ names. --> 节点控制器在确认 Pod 在集群中已经停止运行前,不会强制删除它们。 你可以看到可能在这些无法访问的节点上运行的 Pod 处于 `Terminating` 或者 `Unknown` 状态。 -如果 kubernetes 不能基于下层基础设施推断出某节点是否已经永久离开了集群, +如果 Kubernetes 不能基于下层基础设施推断出某节点是否已经永久离开了集群, 集群管理员可能需要手动删除该节点对象。 从 Kubernetes 删除节点对象将导致 API 服务器删除节点上所有运行的 Pod 对象并释放它们的名字。 @@ -483,7 +485,6 @@ operating system the node uses. The kubelet gathers this information from the node and publishes it into the Kubernetes API. --> - ### 信息(Info) {#info} Info 指的是节点的一般信息,如内核版本、Kubernetes 版本(`kubelet` 和 `kube-proxy` 版本)、 @@ -847,91 +848,6 @@ Message: Pod was terminated in response to imminent node shutdown. ``` {{< /note >}} - -## 节点非体面关闭 {#non-graceful-node-shutdown} - -{{< feature-state state="alpha" for_k8s_version="v1.24" >}} - - -节点关闭的操作可能无法被 kubelet 的节点关闭管理器检测到, -是因为该命令不会触发 kubelet 所使用的抑制锁定机制,或者是因为用户错误的原因, -即 ShutdownGracePeriod 和 ShutdownGracePeriodCriticalPod 配置不正确。 -请参考以上[节点体面关闭](#graceful-node-shutdown)部分了解更多详细信息。 - - -当某节点关闭但 kubelet 的节点关闭管理器未检测到这一事件时, -在那个已关闭节点上、属于 StatefulSet 的 Pod 将停滞于终止状态,并且不能移动到新的运行节点上。 -这是因为已关闭节点上的 kubelet 已不存在,亦无法删除 Pod, -因此 StatefulSet 无法创建同名的新 Pod。 -如果 Pod 使用了卷,则 VolumeAttachments 不会从原来的已关闭节点上删除, -因此这些 Pod 所使用的卷也无法挂接到新的运行节点上。 -所以,那些以 StatefulSet 形式运行的应用无法正常工作。 -如果原来的已关闭节点被恢复,kubelet 将删除 Pod,新的 Pod 将被在不同的运行节点上创建。 -如果原来的已关闭节点没有被恢复,那些在已关闭节点上的 Pod 将永远滞留在终止状态。 - - -为了缓解上述情况,用户可以手动将具有 `NoExecute` 或 `NoSchedule` 效果的 -`node.kubernetes.io/out-of-service` 污点添加到节点上,标记其无法提供服务。 -如果在 `kube-controller-manager` 上启用了 `NodeOutOfServiceVolumeDetach` -[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), -并且节点被通过污点标记为无法提供服务,如果节点 Pod 上没有设置对应的容忍度, -那么这样的 Pod 将被强制删除,并且该在节点上被终止的 Pod 将立即进行卷分离操作。 -这样就允许那些在无法提供服务节点上的 Pod 能在其他节点上快速恢复。 - - -在非体面关闭期间,Pod 分两个阶段终止: -1. 强制删除没有匹配的 `out-of-service` 容忍度的 Pod。 -2. 立即对此类 Pod 执行分离卷操作。 - - -{{< note >}} -- 在添加 `node.kubernetes.io/out-of-service` 污点之前,应该验证节点已经处于关闭或断电状态(而不是在重新启动中)。 -- 将 Pod 移动到新节点后,用户需要手动移除停止服务的污点,并且用户要检查关闭节点是否已恢复,因为该用户是最初添加污点的用户。 -{{< /note >}} - - @@ -1076,14 +992,12 @@ their respective shutdown periods. 中的 `shutdownGracePeriodByPodPriority` 设置为期望的配置, 其中包含 Pod 的优先级类数值以及对应的关闭期限。 - -{{< note >}} 在节点体面关闭期间考虑 Pod 优先级的能力是作为 Kubernetes v1.23 中的 Alpha 功能引入的。 在 Kubernetes {{< skew currentVersion >}} 中该功能是 Beta 版,默认启用。 {{< /note >}} @@ -1095,6 +1009,93 @@ are emitted under the kubelet subsystem to monitor node shutdowns. 
kubelet 子系统中会生成 `graceful_shutdown_start_time_seconds` 和 `graceful_shutdown_end_time_seconds` 度量指标以便监视节点关闭行为。 + +## 节点非体面关闭 {#non-graceful-node-shutdown} + +{{< feature-state state="beta" for_k8s_version="v1.26" >}} + + +节点关闭的操作可能无法被 kubelet 的节点关闭管理器检测到, +是因为该命令不会触发 kubelet 所使用的抑制锁定机制,或者是因为用户错误的原因, +即 ShutdownGracePeriod 和 ShutdownGracePeriodCriticalPod 配置不正确。 +请参考以上[节点体面关闭](#graceful-node-shutdown)部分了解更多详细信息。 + + +当某节点关闭但 kubelet 的节点关闭管理器未检测到这一事件时, +在那个已关闭节点上、属于 {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} +的 Pod 将停滞于终止状态,并且不能移动到新的运行节点上。 +这是因为已关闭节点上的 kubelet 已不存在,亦无法删除 Pod, +因此 StatefulSet 无法创建同名的新 Pod。 +如果 Pod 使用了卷,则 VolumeAttachments 不会从原来的已关闭节点上删除, +因此这些 Pod 所使用的卷也无法挂接到新的运行节点上。 +所以,那些以 StatefulSet 形式运行的应用无法正常工作。 +如果原来的已关闭节点被恢复,kubelet 将删除 Pod,新的 Pod 将被在不同的运行节点上创建。 +如果原来的已关闭节点没有被恢复,那些在已关闭节点上的 Pod 将永远滞留在终止状态。 + + +为了缓解上述情况,用户可以手动将具有 `NoExecute` 或 `NoSchedule` 效果的 +`node.kubernetes.io/out-of-service` 污点添加到节点上,标记其无法提供服务。 +如果在 {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} +上启用了 `NodeOutOfServiceVolumeDetach` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), +并且节点被通过污点标记为无法提供服务,如果节点 Pod 上没有设置对应的容忍度, +那么这样的 Pod 将被强制删除,并且该在节点上被终止的 Pod 将立即进行卷分离操作。 +这样就允许那些在无法提供服务节点上的 Pod 能在其他节点上快速恢复。 + + +在非体面关闭期间,Pod 分两个阶段终止: + +1. 强制删除没有匹配的 `out-of-service` 容忍度的 Pod。 +2. 立即对此类 Pod 执行分离卷操作。 + +{{< note >}} + +- 在添加 `node.kubernetes.io/out-of-service` 污点之前, + 应该验证节点已经处于关闭或断电状态(而不是在重新启动中)。 +- 将 Pod 移动到新节点后,用户需要手动移除停止服务的污点, + 并且用户要检查关闭节点是否已恢复,因为该用户是最初添加污点的用户。 +{{< /note >}} + @@ -1188,15 +1189,21 @@ see [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) and its ## {{% heading "whatsnext" %}} -* 进一步了解节点[组件](/zh-cn/docs/concepts/overview/components/#node-components)。 -* 阅读 [Node 的 API 定义](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core)。 -* 阅读架构设计文档中有关 +进一步了解以下资料: + +* 构成节点的[组件](/zh-cn/docs/concepts/overview/components/#node-components)。 +* [Node 的 API 定义](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core)。 +* 架构设计文档中有关 [Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node) 的章节。 -* 了解[污点和容忍度](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/)。 +* [污点和容忍度](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/)。 +* [节点资源管理器](/zh-cn/docs/concepts/policy/node-resource-managers/)。 +* [Windows 节点的资源管理](/zh-cn/docs/concepts/configuration/windows-resource-management/)。 diff --git a/content/zh-cn/docs/concepts/cluster-administration/manage-deployment.md b/content/zh-cn/docs/concepts/cluster-administration/manage-deployment.md index 43b7ee8460ccc..6d0681ace14cb 100644 --- a/content/zh-cn/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/zh-cn/docs/concepts/cluster-administration/manage-deployment.md @@ -3,12 +3,23 @@ title: 管理资源 content_type: concept weight: 40 --- + +You've deployed your application and exposed it via a service. Now what? Kubernetes provides a +number of tools to help you manage your application deployment, including scaling and updating. +Among the features that we will discuss in more depth are +[configuration files](/docs/concepts/configuration/overview/) and +[labels](/docs/concepts/overview/working-with-objects/labels/). +--> 你已经部署了应用并通过服务暴露它。然后呢? 
Kubernetes 提供了一些工具来帮助管理你的应用部署,包括扩缩容和更新。 我们将更深入讨论的特性包括 @@ -20,9 +31,11 @@ Kubernetes 提供了一些工具来帮助管理你的应用部署,包括扩缩 -## 组织资源配置 +Many applications require multiple resources to be created, such as a Deployment and a Service. +Management of multiple resources can be simplified by grouping them together in the same file +(separated by `---` in YAML). For example: +--> +## 组织资源配置 {#organizing-resource-config} 许多应用需要创建多个资源,例如 Deployment 和 Service。 可以通过将多个资源组合在同一个文件中(在 YAML 中以 `---` 分隔) @@ -32,72 +45,68 @@ Many applications require multiple resources to be created, such as a Deployment +--> 可以用创建单个资源相同的方式来创建多个资源: ```shell kubectl apply -f https://k8s.io/examples/application/nginx-app.yaml ``` -``` +```none service/my-nginx-svc created deployment.apps/my-nginx created ``` +The resources will be created in the order they appear in the file. Therefore, it's best to +specify the service first, since that will ensure the scheduler can spread the pods associated +with the service as they are created by the controller(s), such as Deployment. +--> 资源将按照它们在文件中的顺序创建。 因此,最好先指定服务,这样在控制器(例如 Deployment)创建 Pod 时能够 确保调度器可以将与服务关联的多个 Pod 分散到不同节点。 -`kubectl create` 也接受多个 `-f` 参数: - -```shell -kubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml -``` - - -还可以指定目录路径,而不用添加多个单独的文件: +--> +`kubectl apply` 也接受多个 `-f` 参数: ```shell -kubectl apply -f https://k8s.io/examples/application/nginx/ +kubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml \ + -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml ``` -`kubectl` 将读取任何后缀为 `.yaml`、`.yml` 或者 `.json` 的文件。 +It is a recommended practice to put resources related to the same microservice or application tier +into the same file, and to group all of the files associated with your application in the same +directory. If the tiers of your application bind to each other using DNS, you can deploy all of +the components of your stack together. +A URL can also be specified as a configuration source, which is handy for deploying directly from +configuration files checked into GitHub: +--> 建议的做法是,将同一个微服务或同一应用层相关的资源放到同一个文件中, 将同一个应用相关的所有文件按组存放到同一个目录中。 -如果应用的各层使用 DNS 相互绑定,那么你可以将堆栈的所有组件一起部署。 +如果应用的各层使用 DNS 相互绑定,你可以将堆栈的所有组件一起部署。 -还可以使用 URL 作为配置源,便于直接使用已经提交到 Github 上的配置文件进行部署: +还可以使用 URL 作为配置源,便于直接使用已经提交到 GitHub 上的配置文件进行部署: ```shell -kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/zh-cn/examples/application/nginx/nginx-deployment.yaml +kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml ``` -``` +```none deployment.apps/my-nginx created ``` -## kubectl 中的批量操作 +Resource creation isn't the only operation that `kubectl` can perform in bulk. It can also extract +resource names from configuration files in order to perform other operations, in particular to +delete the same resources you created: +--> +## kubectl 中的批量操作 {#bulk-operations-in-kubectl} 资源创建并不是 `kubectl` 可以批量执行的唯一操作。 `kubectl` 还可以从配置文件中提取资源名,以便执行其他操作, @@ -107,15 +116,16 @@ Resource creation isn't the only operation that `kubectl` can perform in bulk. 
I kubectl delete -f https://k8s.io/examples/application/nginx-app.yaml ``` -``` +```none deployment.apps "my-nginx" deleted service "my-nginx-svc" deleted ``` -在仅有两种资源的情况下,可以使用"资源类型/资源名"的语法在命令行中 +In the case of two resources, you can specify both resources on the command line using the +resource/name syntax: +--> +在仅有两种资源的情况下,你可以使用"资源类型/资源名"的语法在命令行中 同时指定这两个资源: ```shell @@ -123,7 +133,8 @@ kubectl delete deployments/my-nginx services/my-nginx-svc ``` 对于资源数目较大的情况,你会发现使用 `-l` 或 `--selector` 指定筛选器(标签查询)能很容易根据标签筛选资源: @@ -132,13 +143,14 @@ For larger numbers of resources, you'll find it easier to specify the selector ( kubectl delete deployment,services -l app=nginx ``` -``` +```none deployment.apps "my-nginx" deleted service "my-nginx-svc" deleted ``` 由于 `kubectl` 用来输出资源名称的语法与其所接受的资源名称语法相同, 你可以使用 `$()` 或 `xargs` 进行链式操作: @@ -148,32 +160,37 @@ kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o n kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service | xargs -i kubectl get {} ``` -``` +```none NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-nginx-svc LoadBalancer 10.0.0.208 80/TCP 0s ``` +With the above commands, we first create resources under `examples/application/nginx/` and print +the resources created with `-o name` output format (print each resource as resource/name). +Then we `grep` only the "service", and then print it with `kubectl get`. +--> 上面的命令中,我们首先使用 `examples/application/nginx/` 下的配置文件创建资源, 并使用 `-o name` 的输出格式(以"资源/名称"的形式打印每个资源)打印所创建的资源。 然后,我们通过 `grep` 来过滤 "service",最后再打印 `kubectl get` 的内容。 +If you happen to organize your resources across several subdirectories within a particular +directory, you can recursively perform the operations on the subdirectories also, by specifying +`--recursive` or `-R` alongside the `--filename,-f` flag. +--> 如果你碰巧在某个路径下的多个子路径中组织资源,那么也可以递归地在所有子路径上 执行操作,方法是在 `--filename,-f` 后面指定 `--recursive` 或者 `-R`。 +For instance, assume there is a directory `project/k8s/development` that holds all of the +{{< glossary_tooltip text="manifests" term_id="manifest" >}} needed for the development environment, +organized by resource type: +--> 例如,假设有一个目录路径为 `project/k8s/development`,它保存开发环境所需的 -所有清单,并按资源类型组织: +所有{{< glossary_tooltip text="清单" term_id="manifest" >}},并按资源类型组织: -``` +```none project/k8s/development ├── configmap │   └── my-configmap.yaml @@ -184,8 +201,10 @@ project/k8s/development ``` +By default, performing a bulk operation on `project/k8s/development` will stop at the first level +of the directory, not processing any subdirectories. 
If we had tried to create the resources in +this directory using the following command, we would have encountered an error: +--> 默认情况下,对 `project/k8s/development` 执行的批量操作将停止在目录的第一级, 而不是处理所有子目录。 如果我们试图使用以下命令在此目录中创建资源,则会遇到一个错误: @@ -194,30 +213,31 @@ By default, performing a bulk operation on `project/k8s/development` will stop a kubectl apply -f project/k8s/development ``` -``` +```none error: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin) ``` +--> 正确的做法是,在 `--filename,-f` 后面标明 `--recursive` 或者 `-R` 之后: ```shell kubectl apply -f project/k8s/development --recursive ``` -``` +```none configmap/my-config created deployment.apps/my-deployment created persistentvolumeclaim/my-pvc created ``` +--> `--recursive` 可以用于接受 `--filename,-f` 参数的任何操作,例如: `kubectl {create,get,delete,describe,rollout}` 等。 @@ -227,7 +247,7 @@ The `--recursive` flag also works when multiple `-f` arguments are provided: kubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive ``` -``` +```none namespace/development created namespace/staging created configmap/my-config created @@ -236,23 +256,26 @@ persistentvolumeclaim/my-pvc created ``` -如果你有兴趣进一步学习关于 `kubectl` 的内容,请阅读 -[命令行工具(kubectl)](/zh-cn/docs/reference/kubectl/)。 +如果你有兴趣进一步学习关于 `kubectl` 的内容,请阅读[命令行工具(kubectl)](/zh-cn/docs/reference/kubectl/)。 -## 有效地使用标签 +## 有效地使用标签 {#using-labels-effectively} 到目前为止我们使用的示例中的资源最多使用了一个标签。 在许多情况下,应使用多个标签来区分集合。 例如,不同的应用可能会为 `app` 标签设置不同的值。 但是,类似 [guestbook 示例](https://github.com/kubernetes/examples/tree/master/guestbook/) @@ -265,8 +288,9 @@ For instance, different applications would use different values for the `app` la ``` +while the Redis master and slave would have different `tier` labels, and perhaps even an +additional `role` label: +--> Redis 的主节点和从节点会有不同的 `tier` 标签,甚至还有一个额外的 `role` 标签: ```yaml @@ -276,7 +300,9 @@ Redis 的主节点和从节点会有不同的 `tier` 标签,甚至还有一个 role: master ``` - + 以及 ```yaml @@ -288,7 +314,7 @@ Redis 的主节点和从节点会有不同的 `tier` 标签,甚至还有一个 +--> 标签允许我们按照标签指定的任何维度对我们的资源进行切片和切块: ```shell @@ -296,7 +322,7 @@ kubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml kubectl get pods -Lapp -Ltier -Lrole ``` -``` +```none NAME READY STATUS RESTARTS AGE APP TIER ROLE guestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend guestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend @@ -312,7 +338,7 @@ my-nginx-o0ef1 1/1 Running 0 29m nginx kubectl get pods -lapp=guestbook,role=slave ``` -``` +```none NAME READY STATUS RESTARTS AGE guestbook-redis-slave-2q2yf 1/1 Running 0 3m guestbook-redis-slave-qgazl 1/1 Running 0 3m @@ -321,8 +347,12 @@ guestbook-redis-slave-qgazl 1/1 Running 0 3m +Another scenario where multiple labels are needed is to distinguish deployments of different +releases or configurations of the same component. It is common practice to deploy a *canary* of a +new application release (specified via image tag in the pod template) side by side with the +previous release so that the new release can receive live production traffic before fully rolling +it out. +--> ## 金丝雀部署(Canary Deployments) {#canary-deployments} 另一个需要多标签的场景是用来区分同一组件的不同版本或者不同配置的多个部署。 @@ -334,74 +364,81 @@ Another scenario where multiple labels are needed is to distinguish deployments For instance, you can use a `track` label to differentiate different releases. The primary, stable release would have a `track` label with value as `stable`: - --> +--> 例如,你可以使用 `track` 标签来区分不同的版本。 主要稳定的发行版将有一个 `track` 标签,其值为 `stable`: -```yaml - name: frontend - replicas: 3 - ... 
- labels: - app: guestbook - tier: frontend - track: stable - ... - image: gb-frontend:v3 +```none +name: frontend +replicas: 3 +... +labels: + app: guestbook + tier: frontend + track: stable +... +image: gb-frontend:v3 ``` +and then you can create a new release of the guestbook frontend that carries the `track` label +with different value (i.e. `canary`), so that two sets of pods would not overlap: +--> 然后,你可以创建 guestbook 前端的新版本,让这些版本的 `track` 标签带有不同的值 (即 `canary`),以便两组 Pod 不会重叠: -```yaml - name: frontend-canary - replicas: 1 - ... - labels: - app: guestbook - tier: frontend - track: canary - ... - image: gb-frontend:v4 +```none +name: frontend-canary +replicas: 1 +... +labels: + app: guestbook + tier: frontend + track: canary +... +image: gb-frontend:v4 ``` +The frontend service would span both sets of replicas by selecting the common subset of their +labels (i.e. omitting the `track` label), so that the traffic will be redirected to both +applications: +--> 前端服务通过选择标签的公共子集(即忽略 `track` 标签)来覆盖两组副本, 以便流量可以转发到两个应用: ```yaml - selector: - app: guestbook - tier: frontend +selector: + app: guestbook + tier: frontend ``` +You can tweak the number of replicas of the stable and canary releases to determine the ratio of +each release that will receive live production traffic (in this case, 3:1). +Once you're confident, you can update the stable track to the new application release and remove +the canary one. +--> 你可以调整 `stable` 和 `canary` 版本的副本数量,以确定每个版本将接收 实时生产流量的比例(在本例中为 3:1)。 一旦有信心,你就可以将新版本应用的 `track` 标签的值从 `canary` 替换为 `stable`,并且将老版本应用删除。 +For a more concrete example, check the +[tutorial of deploying Ghost](https://github.com/kelseyhightower/talks/tree/master/kubecon-eu-2016/demo#deploy-a-canary). +--> 想要了解更具体的示例,请查看 [Ghost 部署教程](https://github.com/kelseyhightower/talks/tree/master/kubecon-eu-2016/demo#deploy-a-canary)。 +--> ## 更新标签 {#updating-labels} 有时,现有的 pod 和其它资源需要在创建新资源之前重新标记。 @@ -412,7 +449,7 @@ For example, if you want to label all your nginx pods as frontend tier, run: kubectl label pods -l app=nginx tier=fe ``` -``` +```none pod/my-nginx-2035384211-j5fhi labeled pod/my-nginx-2035384211-u2c7e labeled pod/my-nginx-2035384211-u3t6x labeled @@ -421,7 +458,7 @@ pod/my-nginx-2035384211-u3t6x labeled +--> 首先用标签 "app=nginx" 过滤所有的 Pod,然后用 "tier=fe" 标记它们。 想要查看你刚才标记的 Pod,请运行: @@ -429,7 +466,7 @@ To see the pods you labeled, run: kubectl get pods -l app=nginx -L tier ``` -``` +```none NAME READY STATUS RESTARTS AGE TIER my-nginx-2035384211-j5fhi 1/1 Running 0 23m fe my-nginx-2035384211-u2c7e 1/1 Running 0 23m fe @@ -437,23 +474,26 @@ my-nginx-2035384211-u3t6x 1/1 Running 0 23m fe ``` +For more information, please see [labels](/docs/concepts/overview/working-with-objects/labels/) +and [kubectl label](/docs/reference/generated/kubectl/kubectl-commands/#label). +--> 这将输出所有 "app=nginx" 的 Pod,并有一个额外的描述 Pod 的 tier 的标签列 (用参数 `-L` 或者 `--label-columns` 标明)。 -想要了解更多信息,请参考 -[标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/) 和 +想要了解更多信息,请参考[标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/)和 [`kubectl label`](/docs/reference/generated/kubectl/kubectl-commands/#label) 命令文档。 +Sometimes you would want to attach annotations to resources. Annotations are arbitrary +non-identifying metadata for retrieval by API clients such as tools, libraries, etc. +This can be done with `kubectl annotate`. For example: +--> ## 更新注解 {#updating-annotations} 有时,你可能希望将注解附加到资源中。注解是 API 客户端(如工具、库等) @@ -463,6 +503,7 @@ Sometimes you would want to attach annotations to resources. 
Annotations are arb kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx' kubectl get pods my-nginx-v4-9gw19 -o yaml ``` + ```shell apiVersion: v1 kind: pod @@ -473,19 +514,20 @@ metadata: ``` -想要了解更多信息,请参考 -[注解](/zh-cn/docs/concepts/overview/working-with-objects/annotations/)和 +For more information, see [annotations](/docs/concepts/overview/working-with-objects/annotations/) +and [kubectl annotate](/docs/reference/generated/kubectl/kubectl-commands/#annotate) document. +--> +想要了解更多信息,请参考[注解](/zh-cn/docs/concepts/overview/working-with-objects/annotations/)和 [`kubectl annotate`](/docs/reference/generated/kubectl/kubectl-commands/#annotate) 命令文档。 -## 扩缩你的应用 +When load on your application grows or shrinks, use `kubectl` to scale your application. +For instance, to decrease the number of nginx replicas from 3 to 1, do: +--> +## 扩缩你的应用 {#scaling-your-app} 当应用上的负载增长或收缩时,使用 `kubectl` 能够实现应用规模的扩缩。 例如,要将 nginx 副本的数量从 3 减少到 1,请执行以下操作: @@ -494,54 +536,57 @@ When load on your application grows or shrinks, use `kubectl` to scale you appli kubectl scale deployment/my-nginx --replicas=1 ``` -``` +```none deployment.apps/my-nginx scaled ``` +--> 现在,你的 Deployment 管理的 Pod 只有一个了。 ```shell kubectl get pods -l app=nginx ``` -``` +```none NAME READY STATUS RESTARTS AGE my-nginx-2035384211-j5fhi 1/1 Running 0 30m ``` +To have the system automatically choose the number of nginx replicas as needed, +ranging from 1 to 3, do: +--> 想要让系统自动选择需要 nginx 副本的数量,范围从 1 到 3,请执行以下操作: ```shell kubectl autoscale deployment/my-nginx --min=1 --max=3 ``` -``` +```none horizontalpodautoscaler.autoscaling/my-nginx autoscaled ``` +For more information, please see [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands/#scale), +[kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale) and +[horizontal pod autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/) document. +--> 现在,你的 nginx 副本将根据需要自动地增加或者减少。 想要了解更多信息,请参考 [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands/#scale)命令文档、 -[kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale) 命令文档和 -[水平 Pod 自动伸缩](/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale/) 文档。 +[kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale) +命令文档和[水平 Pod 自动伸缩](/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale/)文档。 +--> ## 就地更新资源 {#in-place-updates-of-resources} 有时,有必要对你所创建的资源进行小范围、无干扰地更新。 @@ -549,10 +594,12 @@ Sometimes it's necessary to make narrow, non-disruptive updates to resources you ### kubectl apply +Then, you can use [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) +to push your configuration changes to the cluster. +--> 建议在源代码管理中维护一组配置文件 (参见[配置即代码](https://martinfowler.com/bliki/InfrastructureAsCode.html)), 这样,它们就可以和应用代码一样进行维护和版本管理。 @@ -560,50 +607,56 @@ Then, you can use [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-co 将配置变更应用到集群中。 +This command will compare the version of the configuration that you're pushing with the previous +version and apply the changes you've made, without overwriting any automated changes to properties +you haven't specified. 
+--> 这个命令将会把推送的版本与以前的版本进行比较,并应用你所做的更改, 但是不会自动覆盖任何你没有指定更改的属性。 ```shell kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml +``` + +```none deployment.apps/my-nginx configured ``` +Note that `kubectl apply` attaches an annotation to the resource in order to determine the changes +to the configuration since the previous invocation. When it's invoked, `kubectl apply` does a +three-way diff between the previous configuration, the provided input and the current +configuration of the resource, in order to determine how to modify the resource. +--> 注意,`kubectl apply` 将为资源增加一个额外的注解,以确定自上次调用以来对配置的更改。 执行时,`kubectl apply` 会在以前的配置、提供的输入和资源的当前配置之间 找出三方差异,以确定如何修改资源。 +Currently, resources are created without this annotation, so the first invocation of `kubectl +apply` will fall back to a two-way diff between the provided input and the current configuration +of the resource. During this first invocation, it cannot detect the deletion of properties set +when the resource was created. For this reason, it will not remove them. +--> 目前,新创建的资源是没有这个注解的,所以,第一次调用 `kubectl apply` 时 将使用提供的输入和资源的当前配置双方之间差异进行比较。 在第一次调用期间,它无法检测资源创建时属性集的删除情况。 因此,kubectl 不会删除它们。 +All subsequent calls to `kubectl apply`, and other commands that modify the configuration, such as +`kubectl replace` and `kubectl edit`, will update the annotation, allowing subsequent calls to +`kubectl apply` to detect and perform deletions using a three-way diff. +--> 所有后续的 `kubectl apply` 操作以及其他修改配置的命令,如 `kubectl replace` 和 `kubectl edit`,都将更新注解,并允许随后调用的 `kubectl apply` 使用三方差异进行检查和执行删除。 - -{{< note >}} -想要使用 apply,请始终使用 `kubectl apply` 或 `kubectl create --save-config` 创建资源。 -{{< /note >}} - ### kubectl edit +--> 或者,你也可以使用 `kubectl edit` 更新资源: ```shell @@ -611,14 +664,15 @@ kubectl edit deployment/my-nginx ``` +This is equivalent to first `get` the resource, edit it in text editor, and then `apply` the +resource with the updated version: +--> 这相当于首先 `get` 资源,在文本编辑器中编辑它,然后用更新的版本 `apply` 资源: ```shell kubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml vi /tmp/nginx.yaml -# do some edit, and then save the file +# 做一些编辑,然后保存文件 kubectl apply -f /tmp/nginx.yaml deployment.apps/my-nginx configured @@ -627,10 +681,11 @@ rm /tmp/nginx.yaml ``` +--> 这使你可以更加容易地进行更重大的更改。 请注意,可以使用 `EDITOR` 或 `KUBE_EDITOR` 环境变量来指定编辑器。 @@ -645,18 +700,20 @@ JSON merge patch, and strategic merge patch. See [Update API Objects in Place Using kubectl patch](/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/) and [kubectl patch](/docs/reference/generated/kubectl/kubectl-commands/#patch). - --> +--> 你可以使用 `kubectl patch` 来更新 API 对象。此命令支持 JSON patch、 -JSON merge patch、以及 strategic merge patch。 请参考 -[使用 kubectl patch 更新 API 对象](/zh-cn/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/) -和 -[kubectl patch](/docs/reference/generated/kubectl/kubectl-commands/#patch). +JSON merge patch、以及 strategic merge patch。 +请参考[使用 kubectl patch 更新 API 对象](/zh-cn/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/)和 +[kubectl patch](/docs/reference/generated/kubectl/kubectl-commands/#patch)。 +In some cases, you may need to update resource fields that cannot be updated once initialized, or +you may want to make a recursive change immediately, such as to fix broken pods created by a +Deployment. To change such fields, use `replace --force`, which deletes and re-creates the +resource. 
In this case, you can modify your original configuration file: +--> ## 破坏性的更新 {#disruptive-updates} 在某些情况下,你可能需要更新某些初始化后无法更新的资源字段,或者你可能只想立即进行递归更改, @@ -667,43 +724,60 @@ In some cases, you may need to update resource fields that cannot be updated onc kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force ``` -``` +```none deployment.apps/my-nginx deleted deployment.apps/my-nginx replaced ``` -## 在不中断服务的情况下更新应用 +--> +## 在不中断服务的情况下更新应用 {#updating-your-app-without-a-service-outage} +At some point, you'll eventually need to update your deployed application, typically by specifying +a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several +update operations, each of which is applicable to different scenarios. +--> 在某些时候,你最终需要更新已部署的应用,通常都是通过指定新的镜像或镜像标签, 如上面的金丝雀发布的场景中所示。`kubectl` 支持几种更新操作, 每种更新操作都适用于不同的场景。 +--> 我们将指导你通过 Deployment 如何创建和更新应用。 +--> 假设你正运行的是 1.14.2 版本的 nginx: ```shell kubectl create deployment my-nginx --image=nginx:1.14.2 ``` -``` + +```none deployment.apps/my-nginx created ``` +with 3 replicas (so the old and new revisions can coexist): +--> +运行 3 个副本(这样新旧版本可以同时存在) + +```shell +kubectl scale deployment my-nginx --current-replicas=1 --replicas=3 +``` + +```none +deployment.apps/my-nginx scaled +``` + + 要更新到 1.16.1 版本,只需使用我们前面学到的 kubectl 命令将 `.spec.template.spec.containers[0].image` 从 `nginx:1.14.2` 修改为 `nginx:1.16.1`。 @@ -712,8 +786,11 @@ kubectl edit deployment/my-nginx ``` +That's it! The Deployment will declaratively update the deployed nginx application progressively +behind the scene. It ensures that only a certain number of old replicas may be down while they are +being updated, and only a certain number of new replicas may be created above the desired number +of pods. To learn more details about it, visit [Deployment page](/docs/concepts/workloads/controllers/deployment/). +--> 没错,就是这样!Deployment 将在后台逐步更新已经部署的 nginx 应用。 它确保在更新过程中,只有一定数量的旧副本被开闭,并且只有一定基于所需 Pod 数量的新副本被创建。 想要了解更多细节,请参考 [Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/)。 @@ -721,8 +798,8 @@ That's it! The Deployment will declaratively update the deployed nginx applicati ## {{% heading "whatsnext" %}} +- Learn about [how to use `kubectl` for application introspection and debugging](/docs/tasks/debug/debug-application/debug-running-pod/). +- See [Configuration Best Practices and Tips](/docs/concepts/configuration/overview/). +--> - 学习[如何使用 `kubectl` 观察和调试应用](/zh-cn/docs/tasks/debug/debug-application/debug-running-pod/) - 阅读[配置最佳实践和技巧](/zh-cn/docs/concepts/configuration/overview/) diff --git a/content/zh-cn/docs/concepts/cluster-administration/system-traces.md b/content/zh-cn/docs/concepts/cluster-administration/system-traces.md index 8c5b3f5ff49cf..d313716a21004 100644 --- a/content/zh-cn/docs/concepts/cluster-administration/system-traces.md +++ b/content/zh-cn/docs/concepts/cluster-administration/system-traces.md @@ -143,7 +143,7 @@ The kubelet CRI interface and authenticated http servers are instrumented to gen trace spans. As with the apiserver, the endpoint and sampling rate are configurable. Trace context propagation is also configured. A parent span's sampling decision is always respected. A provided tracing configuration sampling rate will apply to spans without a parent. -Enabled without a configured endpoint, the default OpenTelemetry Collector reciever address of "localhost:4317" is set. +Enabled without a configured endpoint, the default OpenTelemetry Collector receiver address of "localhost:4317" is set. 
--> kubelet CRI 接口和实施身份验证的 HTTP 服务器被插桩以生成追踪 span。 与 API 服务器一样,端点和采样率是可配置的。 diff --git a/content/zh-cn/docs/concepts/configuration/overview.md b/content/zh-cn/docs/concepts/configuration/overview.md index f75bddd013b72..e7e263e49ca6f 100644 --- a/content/zh-cn/docs/concepts/configuration/overview.md +++ b/content/zh-cn/docs/concepts/configuration/overview.md @@ -4,6 +4,8 @@ content_type: concept weight: 10 --- 本文档重点介绍并整合了整个用户指南、入门文档和示例中介绍的配置最佳实践。 这是一份不断改进的文件。 如果你认为某些内容缺失但可能对其他人有用,请不要犹豫,提交 Issue 或提交 PR。 @@ -33,26 +37,33 @@ This is a living document. If you think of something that is not on this list bu - 定义配置时,请指定最新的稳定 API 版本。 - 在推送到集群之前,配置文件应存储在版本控制中。 这允许你在必要时快速回滚配置更改。 - 它还有助于集群重新创建和恢复。 + 它还有助于集群重新创建和恢复。 - 使用 YAML 而不是 JSON 编写配置文件。虽然这些格式几乎可以在所有场景中互换使用,但 YAML 往往更加用户友好。 - 只要有意义,就将相关对象分组到一个文件中。一个文件通常比几个文件更容易管理。 请参阅 [guestbook-all-in-one.yaml](https://github.com/kubernetes/examples/tree/master/guestbook/all-in-one/guestbook-all-in-one.yaml) 文件作为此语法的示例。 - 另请注意,可以在目录上调用许多 `kubectl` 命令。 例如,你可以在配置文件的目录中调用 `kubectl apply`。 @@ -67,16 +78,22 @@ This is a living document. If you think of something that is not on this list bu --> - 将对象描述放在注释中,以便更好地进行内省。 - -## “独立的“ Pod 与 ReplicaSet 、Deployment 和 Job {#naked-pods-vs-replicasets-deployments-and-jobs} +## “独立的“ Pod 与 ReplicaSet、Deployment 和 Job {#naked-pods-vs-replicasets-deployments-and-jobs} - 如果可能,不要使用独立的 Pod(即,未绑定到 [ReplicaSet](/zh-cn/docs/concepts/workloads/controllers/replicaset/) 或 @@ -95,9 +112,11 @@ This is a living document. If you think of something that is not on this list bu ## 服务 {#services} - 在创建相应的后端工作负载(Deployment 或 ReplicaSet),以及在需要访问它的任何工作负载之前创建 [服务](/zh-cn/docs/concepts/services-networking/service/)。 @@ -109,22 +128,38 @@ This is a living document. If you think of something that is not on this list bu FOO_SERVICE_PORT= ``` + **这确实意味着在顺序上的要求** - 必须在 `Pod` 本身被创建之前创建 `Pod` 想要访问的任何 `Service`, 否则将环境变量不会生效。DNS 没有此限制。 - 一个可选(尽管强烈推荐)的[集群插件](/zh-cn/docs/concepts/cluster-administration/addons/) 是 DNS 服务器。DNS 服务器为新的 `Services` 监视 Kubernetes API,并为每个创建一组 DNS 记录。 如果在整个集群中启用了 DNS,则所有 `Pod` 应该能够自动对 `Services` 进行名称解析。 - 不要为 Pod 指定 `hostPort`,除非非常有必要这样做。 当你为 Pod 绑定了 `hostPort`,那么能够运行该 Pod 的节点就有限了,因为每个 `` 组合必须是唯一的。 @@ -145,8 +180,9 @@ DNS server watches the Kubernetes API for new `Services` and creates a set of DN - 避免使用 `hostNetwork`,原因与 `hostPort` 相同。 - 当你不需要 `kube-proxy` 负载均衡时, 使用[无头服务](/zh-cn/docs/concepts/services-networking/service/#headless-services) @@ -158,9 +194,21 @@ services) (which have a `ClusterIP` of `None`) for service discovery when you do ## 使用标签 {#using-labels} - 定义并使用[标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/)来识别应用程序 或 Deployment 的 **语义属性**,例如 `{ app.kubernetes.io/name: MyApp, tier: frontend, phase: test, deployment: v3 }`。 @@ -175,16 +223,23 @@ A desired state of an object is described by a Deployment, and if changes to tha 控制器以受控速率将实际状态改变为期望状态。 - - 对于常见场景,应使用 [Kubernetes 通用标签](/zh-cn/docs/concepts/overview/working-with-objects/common-labels/)。 这些标准化的标签丰富了对象的元数据,使得包括 `kubectl` 和 [仪表板(Dashboard)](/zh-cn/docs/tasks/access-application-cluster/web-ui-dashboard) 这些工具能够以可互操作的方式工作。 - 你可以操纵标签进行调试。 由于 Kubernetes 控制器(例如 ReplicaSet)和服务使用选择器标签来匹配 Pod, @@ -199,20 +254,26 @@ A desired state of an object is described by a Deployment, and if changes to tha ## 使用 kubectl {#using-kubectl} -- 使用 `kubectl apply -f `。 - 它在 `` 中的所有` .yaml`、`.yml` 和 `.json` 文件中查找 Kubernetes 配置,并将其传递给 `apply`。 +- 使用 `kubectl apply -f <目录>`。 + 它在 `<目录>` 中的所有 `.yaml`、`.yml` 和 `.json` 文件中查找 Kubernetes 配置,并将其传递给 `apply`。 - 使用标签选择器进行 `get` 和 `delete` 
操作,而不是特定的对象名称。 - 请参阅[标签选择器](/zh-cn/docs/concepts/overview/working-with-objects/labels/#label-selectors)和 [有效使用标签](/zh-cn/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)部分。 - 使用 `kubectl create deployment` 和 `kubectl expose` 来快速创建单容器 Deployment 和 Service。 有关示例,请参阅[使用服务访问集群中的应用程序](/zh-cn/docs/tasks/access-application-cluster/service-access-application-cluster/)。 diff --git a/content/zh-cn/docs/concepts/configuration/secret.md b/content/zh-cn/docs/concepts/configuration/secret.md index b2a77a02d2b43..ce2ddf9cf7995 100644 --- a/content/zh-cn/docs/concepts/configuration/secret.md +++ b/content/zh-cn/docs/concepts/configuration/secret.md @@ -843,20 +843,6 @@ level. 能够完成与镜像库的身份认证。你可以配置 **镜像拉取 Secret** 来实现这点。 Secret 是在 Pod 层面来配置的。 - -Pod 的 `imagePullSecrets` 字段是一个对 Pod 所在的名字空间中的 Secret -的引用列表。你可以使用 `imagePullSecrets` 来将镜像仓库访问凭据传递给 kubelet。 -kubelet 使用这个信息来替你的 Pod 拉取私有镜像。 -参阅 [Pod API 参考](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec) -中的 `PodSpec` 进一步了解 `imagePullSecrets` 字段。 - @@ -67,7 +66,7 @@ Tags let you identify different versions of the same series of images. 如果你不指定仓库的主机名,Kubernetes 认为你在使用 Docker 公共仓库。 -在镜像名称之后,你可以添加一个标签(Tag)(与使用 `docker` 或 `podman` 等命令时的方式相同)。 +在镜像名称之后,你可以添加一个**标签(Tag)**(与使用 `docker` 或 `podman` 等命令时的方式相同)。 使用标签能让你辨识同一镜像序列中的不同版本。 #### 必要的镜像拉取 {#required-image-pull} @@ -262,7 +266,6 @@ If you would like to always force a pull, you can do one of the following: 当你提交 Pod 时,Kubernetes 会将策略设置为 `Always`。 - 启用准入控制器 [AlwaysPullImages](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)。 - ## 带镜像索引的多架构镜像 {#multi-architecture-images-with-image-indexes} @@ -321,25 +333,30 @@ Credentials can be provided in several ways: ## 使用私有仓库 {#using-a-private-registry} 从私有仓库读取镜像时可能需要密钥。 -凭证可以用以下方式提供: - - - 配置节点向私有仓库进行身份验证 - 所有 Pod 均可读取任何已配置的私有仓库 - 需要集群管理员配置节点 +- kubelet 凭据提供程序,动态获取私有仓库的凭据 + - kubelet 可以被配置为使用凭据提供程序 exec 插件来访问对应的私有镜像库 - 预拉镜像 - 所有 Pod 都可以使用节点上缓存的所有镜像 - 需要所有节点的 root 访问权限才能进行设置 @@ -356,7 +373,8 @@ These options are explained in more detail below. ### 配置 Node 对私有仓库认证 {#configuring-nodes-to-authenticate-to-a-private-registry} @@ -370,7 +388,27 @@ task. That example uses a private registry in Docker Hub. --> 有关配置私有容器镜像仓库的示例, 请参阅任务[从私有镜像库中拉取镜像](/zh-cn/docs/tasks/configure-pod-container/pull-image-private-registry)。 -该示例使用 Docker Hub 中的私有注册表。 +该示例使用 Docker Hub 中的私有镜像仓库。 + +{{< note >}} + +此方法尤其适合 kubelet 需要动态获取仓库凭据时。 +最常用于由云提供商提供的仓库,其中身份认证令牌的生命期是短暂的。 +{{< /note >}} + + +你可以配置 kubelet,以调用插件可执行文件的方式来动态获取容器镜像的仓库凭据。 +这是为私有仓库获取凭据最稳健和最通用的方法,但也需要 kubelet 级别的配置才能启用。 + +有关更多细节请参见[配置 kubelet 镜像凭据提供程序](/docs/tasks/administer-cluster/kubelet-credential-provider/)。 使用以下语法匹配根 URL (`*my-registry.io`): + ``` pattern: { term } @@ -440,12 +480,6 @@ term: Image pull operations would now pass the credentials to the CRI container runtime for every valid pattern. For example the following container image names would match successfully: - -- `my-registry.io/images` -- `my-registry.io/images/my-image` -- `my-registry.io/images/another-image` -- `sub.my-registry.io/images/my-image` -- `a.sub.my-registry.io/images/my-image` --> 现在镜像拉取操作会将每种有效模式的凭据都传递给 CRI 容器运行时。例如下面的容器镜像名称会匹配成功: @@ -459,7 +493,7 @@ would match successfully: The kubelet performs image pulls sequentially for every found credential. 
This means, that multiple entries in `config.json` are possible, too: --> -kubelet 为每个找到的凭证的镜像按顺序拉取。这意味着在 `config.json` 中可能有多项: +kubelet 为每个找到的凭据的镜像按顺序拉取。这意味着在 `config.json` 中可能有多项: ```json { @@ -510,7 +544,8 @@ then a local image is used (preferentially or exclusively, respectively). If you want to rely on pre-pulled images as a substitute for registry authentication, you must ensure all nodes in the cluster have the same pre-pulled images. -This can be used to preload certain images for speed or as an alternative to authenticating to a private registry. +This can be used to preload certain images for speed or as an alternative to authenticating to a +private registry. All pods will have read access to any pre-pulled images. --> @@ -555,14 +590,19 @@ Run the following command, substituting the appropriate uppercase values: 运行以下命令,注意替换适当的大写值: ```shell -kubectl create secret docker-registry --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL +kubectl create secret docker-registry \ + --docker-server=DOCKER_REGISTRY_SERVER \ + --docker-username=DOCKER_USER \ + --docker-password=DOCKER_PASSWORD \ + --docker-email=DOCKER_EMAIL ``` 如果你已经有 Docker 凭据文件,则可以将凭据文件导入为 Kubernetes {{< glossary_tooltip text="Secret" term_id="secret" >}}, @@ -629,7 +669,8 @@ This needs to be done for each pod that is using a private registry. However, setting of this field can be automated by setting the imagePullSecrets in a [ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/) resource. -Check [Add ImagePullSecrets to a Service Account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) for detailed instructions. +Check [Add ImagePullSecrets to a Service Account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) +for detailed instructions. You can use this in conjunction with a per-node `.docker/config.json`. The credentials will be merged. @@ -657,7 +698,8 @@ common use cases and suggested solutions. 1. Cluster running only non-proprietary (e.g. open-source) images. No need to hide images. - Use public images from a public registry - No configuration required. - - Some cloud providers automatically cache or mirror public images, which improves availability and reduces the time to pull images. + - Some cloud providers automatically cache or mirror public images, which improves + availability and reduces the time to pull images. --> 1. 集群运行非专有镜像(例如,开源镜像)。镜像不需要隐藏。 - 使用来自公共仓库的公共镜像 @@ -686,7 +728,8 @@ common use cases and suggested solutions. 3. 集群使用专有镜像,且有些镜像需要更严格的访问控制 @@ -695,9 +738,11 @@ common use cases and suggested solutions. 4. 集群是多租户的并且每个租户需要自己的私有仓库 @@ -711,7 +756,6 @@ If you need access to multiple registries, you can create one secret for each re --> 如果你需要访问多个仓库,可以为每个仓库创建一个 Secret。 - ## {{% heading "whatsnext" %}} -## 定制资源 +## 定制资源 {#custom-resources} **资源(Resource)** 是 [Kubernetes API](/zh-cn/docs/concepts/overview/kubernetes-api/) 中的一个端点, @@ -54,13 +52,12 @@ Once a custom resource is installed, users can create and access its objects usi like *Pods*. 
--> **定制资源(Custom Resource)** 是对 Kubernetes API 的扩展,不一定在默认的 -Kubernetes 安装中就可用。 -定制资源所代表的是对特定 Kubernetes 安装的一种定制。 +Kubernetes 安装中就可用。定制资源所代表的是对特定 Kubernetes 安装的一种定制。 不过,很多 Kubernetes 核心功能现在都用定制资源来实现,这使得 Kubernetes 更加模块化。 定制资源可以通过动态注册的方式在运行中的集群内或出现或消失,集群管理员可以独立于集群更新定制资源。 -一旦某定制资源被安装,用户可以使用 {{< glossary_tooltip text="kubectl" term_id="kubectl" >}} 来创建和访问其中的对象, -就像他们为 **Pod** 这种内置资源所做的一样。 +一旦某定制资源被安装,用户可以使用 {{< glossary_tooltip text="kubectl" term_id="kubectl" >}} +来创建和访问其中的对象,就像他们为 **Pod** 这种内置资源所做的一样。 ### 声明式 API {#declarative-apis} 典型地,在声明式 API 中: - - 你的 API 包含相对而言为数不多的、尺寸较小的对象(资源)。 - - 对象定义了应用或者基础设施的配置信息。 - - 对象更新操作频率较低。 - - 通常需要人来读取或写入对象。 - - 对象的主要操作是 CRUD 风格的(创建、读取、更新和删除)。 - - 不需要跨对象的事务支持:API 对象代表的是期望状态而非确切实际状态。 +- 你的 API 包含相对而言为数不多的、尺寸较小的对象(资源)。 +- 对象定义了应用或者基础设施的配置信息。 +- 对象更新操作频率较低。 +- 通常需要人来读取或写入对象。 +- 对象的主要操作是 CRUD 风格的(创建、读取、更新和删除)。 +- 不需要跨对象的事务支持:API 对象代表的是期望状态而非确切实际状态。 命令式 API(Imperative API)与声明式有所不同。 以下迹象表明你的 API 可能不是声明式的: @@ -189,10 +187,13 @@ Signs that your API might not be declarative include: Use a ConfigMap if any of the following apply: -* There is an existing, well-documented configuration file format, such as a `mysql.cnf` or `pom.xml`. -* You want to put the entire configuration file into one key of a configMap. -* The main use of the configuration file is for a program running in a Pod on your cluster to consume the file to configure itself. -* Consumers of the file prefer to consume via file in a Pod or environment variable in a pod, rather than the Kubernetes API. +* There is an existing, well-documented configuration file format, such as a `mysql.cnf` or + `pom.xml`. +* You want to put the entire configuration into one key of a ConfigMap. +* The main use of the configuration file is for a program running in a Pod on your cluster to + consume the file to configure itself. +* Consumers of the file prefer to consume via file in a Pod or environment variable in a pod, + rather than the Kubernetes API. * You want to perform rolling updates via Deployment, etc., when the file is updated. --> ## 我应该使用一个 ConfigMap 还是一个定制资源? {#should-i-use-a-configmap-or-a-cr} @@ -206,10 +207,11 @@ Use a ConfigMap if any of the following apply: 而不是通过 Kubernetes API。 * 你希望当文件被更新时通过类似 Deployment 之类的资源完成滚动更新操作。 +{{< note >}} -{{< note >}} 请使用 {{< glossary_tooltip text="Secret" term_id="secret" >}} 来保存敏感数据。 Secret 类似于 configMap,但更为安全。 {{< /note >}} @@ -219,10 +221,12 @@ Use a custom resource (CRD or Aggregated API) if most of the following apply: * You want to use Kubernetes client libraries and CLIs to create and update the new resource. * You want top-level support from `kubectl`; for example, `kubectl get my-object object-name`. -* You want to build new automation that watches for updates on the new object, and then CRUD other objects, or vice versa. +* You want to build new automation that watches for updates on the new object, and then CRUD other + objects, or vice versa. * You want to write automation that handles updates to the object. * You want to use Kubernetes API conventions like `.spec`, `.status`, and `.metadata`. -* You want the object to be an abstraction over a collection of controlled resources, or a summarization of other resources. +* You want the object to be an abstraction over a collection of controlled resources, or a + summarization of other resources. 
--> 如果以下条件中大多数都被满足,你应该使用定制资源(CRD 或者 聚合 API): @@ -240,7 +244,9 @@ Use a custom resource (CRD or Aggregated API) if most of the following apply: Kubernetes provides two ways to add custom resources to your cluster: - CRDs are simple and can be created without any programming. -- [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) requires programming, but allows more control over API behaviors like how data is stored and conversion between API versions. +- [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) + requires programming, but allows more control over API behaviors like how data is stored and + conversion between API versions. --> ## 添加定制资源 {#adding-custom-resources} @@ -251,13 +257,18 @@ Kubernetes 提供了两种方式供你向集群中添加定制资源: 但支持对 API 行为进行更多的控制,例如数据如何存储以及在不同 API 版本间如何转换等。 Kubernetes 提供这两种选项以满足不同用户的需求,这样就既不会牺牲易用性也不会牺牲灵活性。 @@ -290,6 +301,7 @@ This way, your workload does not rely on the Kubernetes API for its normal opera 如果部分工作负载需要支持服务来维持其日常运转,则这种支持服务应作为一个组件运行或作为一个外部服务来使用。 这样,工作负载的正常运转就不会依赖 Kubernetes API 了。 {{< /note >}} + -CRD 使得你不必编写自己的 API 服务器来处理定制资源,不过其背后实现的通用性也意味着 -你所获得的灵活性要比 [API 服务器聚合](#api-server-aggregation)少很多。 +CRD 使得你不必编写自己的 API 服务器来处理定制资源,不过其背后实现的通用性也意味着你所获得的灵活性要比 +[API 服务器聚合](#api-server-aggregation)少很多。 关于如何注册新的定制资源、使用新资源类别的实例以及如何使用控制器来处理事件, 相关的例子可参见[定制控制器示例](https://github.com/kubernetes/sample-controller)。 @@ -327,23 +339,27 @@ CRD 使得你不必编写自己的 API 服务器来处理定制资源,不过 ## API 服务器聚合 {#api-server-aggregation} 通常,Kubernetes API 中的每个资源都需要处理 REST 请求和管理对象持久性存储的代码。 -Kubernetes API 主服务器能够处理诸如 *pods* 和 *services* 这些内置资源,也可以 -按通用的方式通过 [CRD](#customresourcedefinitions) 来处理定制资源。 +Kubernetes API 主服务器能够处理诸如 **Pod** 和 **Service** 这些内置资源, +也可以按通用的方式通过 [CRD](#customresourcedefinitions) 来处理定制资源。 [聚合层(Aggregation Layer)](/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) 使得你可以通过编写和部署你自己的 API 服务器来为定制资源提供特殊的实现。 -主 API 服务器将针对你要处理的定制资源的请求全部委托给你自己的 API 服务器来处理,同时将这些资源 -提供给其所有客户端。 +主 API 服务器将针对你要处理的定制资源的请求全部委托给你自己的 API 服务器来处理, +同时将这些资源提供给其所有客户端。 ## 选择添加定制资源的方法 {#choosing-a-method-for-adding-cr} @@ -398,8 +415,8 @@ Aggregated APIs offer more advanced API features and customization of other feat 聚合 API 可提供更多的高级 API 特性,也可对其他特性实行定制;例如,对存储层进行定制。 ### 公共特性 {#common-features} @@ -437,8 +455,8 @@ When you create a custom resource, either via a CRD or an AA, you get many featu 无论是通过 CRD 还是通过聚合 API 来创建定制资源,你都会获得很多 API 特性: @@ -499,9 +520,11 @@ Failure),例如导致第三方代码被在 API 服务器上运行, ### 存储 {#storage} @@ -513,11 +536,16 @@ API 服务器上的存储空间超载。 ### 身份认证、鉴权授权以及审计 {#authentication-authorization-and-auditing} @@ -532,14 +560,18 @@ CRD 通常与 API 服务器上的内置资源一样使用相同的身份认证 ## 访问定制资源 {#accessing-a-custom-resources} diff --git a/content/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md b/content/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md index db0530d77f00f..751560ef429db 100644 --- a/content/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md +++ b/content/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md @@ -1,18 +1,21 @@ --- title: 设备插件 -description: 设备插件可以让你配置集群以支持需要特定于供应商设置的设备或资源,例如 GPU、NIC、FPGA 或非易失性主存储器。 +description: > + 设备插件可以让你配置集群以支持需要特定于供应商设置的设备或资源,例如 GPU、NIC、FPGA 或非易失性主存储器。 content_type: concept weight: 20 --- -{{< feature-state for_k8s_version="v1.10" state="beta" >}} +{{< feature-state for_k8s_version="v1.26" state="stable" >}} ### 示例 {#example-pod} ## 设备插件的实现 {#device-plugin-implementation} 设备插件的常规工作流程包括以下几个步骤: -* 
初始化。在这个阶段,设备插件将执行供应商特定的初始化和设置, - 以确保设备处于就绪状态。 -* 插件使用主机路径 `/var/lib/kubelet/device-plugins/` 下的 Unix 套接字启动一个 - gRPC 服务,该服务实现以下接口: - - - ```gRPC - service DevicePlugin { - // GetDevicePluginOptions 返回与设备管理器沟通的选项。 - rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} - - // ListAndWatch 返回 Device 列表构成的数据流。 - // 当 Device 状态发生变化或者 Device 消失时,ListAndWatch - // 会返回新的列表。 - rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} - - // Allocate 在容器创建期间调用,这样设备插件可以运行一些特定于设备的操作, - // 并告诉 kubelet 如何令 Device 可在容器中访问的所需执行的具体步骤 - rpc Allocate(AllocateRequest) returns (AllocateResponse) {} - - // GetPreferredAllocation 从一组可用的设备中返回一些优选的设备用来分配, - // 所返回的优选分配结果不一定会是设备管理器的最终分配方案。 - // 此接口的设计仅是为了让设备管理器能够在可能的情况下做出更有意义的决定。 - rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {} - - // PreStartContainer 在设备插件注册阶段根据需要被调用,调用发生在容器启动之前。 - // 在将设备提供给容器使用之前,设备插件可以运行一些诸如重置设备之类的特定于 - // 具体设备的操作, - rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {} - } - ``` - - {{< note >}} - - 插件并非必须为 `GetPreferredAllocation()` 或 `PreStartContainer()` 提供有用的实现逻辑, - 调用 `GetDevicePluginOptions()` 时所返回的 `DevicePluginOptions` - 消息中应该设置这些调用是否可用。`kubelet` 在真正调用这些函数之前,总会调用 - `GetDevicePluginOptions()` 来查看是否存在这些可选的函数。 - {{< /note >}} - - -* 插件通过 Unix socket 在主机路径 `/var/lib/kubelet/device-plugins/kubelet.sock` - 处向 kubelet 注册自身。 -* 成功注册自身后,设备插件将以服务模式运行,在此期间,它将持续监控设备运行状况, - 并在设备状态发生任何变化时向 kubelet 报告。它还负责响应 `Allocate` gRPC 请求。 - 在 `Allocate` 期间,设备插件可能还会做一些设备特定的准备;例如 GPU 清理或 QRNG 初始化。 - 如果操作成功,则设备插件将返回 `AllocateResponse`,其中包含用于访问被分配的设备容器运行时的配置。 - kubelet 将此信息传递到容器运行时。 +1. 初始化。在这个阶段,设备插件将执行特定于供应商的初始化和设置,以确保设备处于就绪状态。 + +2. 插件使用主机路径 `/var/lib/kubelet/device-plugins/` 下的 UNIX 套接字启动一个 + gRPC 服务,该服务实现以下接口: + + + ```gRPC + service DevicePlugin { + // GetDevicePluginOptions 返回与设备管理器沟通的选项。 + rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} + + // ListAndWatch 返回 Device 列表构成的数据流。 + // 当 Device 状态发生变化或者 Device 消失时,ListAndWatch + // 会返回新的列表。 + rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} + + // Allocate 在容器创建期间调用,这样设备插件可以运行一些特定于设备的操作, + // 并告诉 kubelet 如何令 Device 可在容器中访问的所需执行的具体步骤 + rpc Allocate(AllocateRequest) returns (AllocateResponse) {} + + // GetPreferredAllocation 从一组可用的设备中返回一些优选的设备用来分配, + // 所返回的优选分配结果不一定会是设备管理器的最终分配方案。 + // 此接口的设计仅是为了让设备管理器能够在可能的情况下做出更有意义的决定。 + rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {} + + // PreStartContainer 在设备插件注册阶段根据需要被调用,调用发生在容器启动之前。 + // 在将设备提供给容器使用之前,设备插件可以运行一些诸如重置设备之类的特定于 + // 具体设备的操作, + rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {} + } + ``` + + {{< note >}} + + 插件并非必须为 `GetPreferredAllocation()` 或 `PreStartContainer()` 提供有用的实现逻辑, + 调用 `GetDevicePluginOptions()` 时所返回的 `DevicePluginOptions` + 消息中应该设置一些标志,表明这些调用(如果有)是否可用。`kubelet` 在直接调用这些函数之前,总会调用 + `GetDevicePluginOptions()` 来查看哪些可选的函数可用。 + {{< /note >}} + + +3. 插件通过位于主机路径 `/var/lib/kubelet/device-plugins/kubelet.sock` 下的 UNIX 套接字 + 向 kubelet 注册自身。 + + {{< note >}} + + 工作流程的顺序很重要。插件必须在向 kubelet 注册自己之前开始提供 gRPC 服务,才能保证注册成功。 + {{< /note >}} + + +4. 
成功注册自身后,设备插件将以提供服务的模式运行,在此期间,它将持续监控设备运行状况, + 并在设备状态发生任何变化时向 kubelet 报告。它还负责响应 `Allocate` gRPC 请求。 + 在 `Allocate` 期间,设备插件可能还会做一些特定于设备的准备;例如 GPU 清理或 QRNG 初始化。 + 如果操作成功,则设备插件将返回 `AllocateResponse`,其中包含用于访问被分配的设备容器运行时的配置。 + kubelet 将此信息传递到容器运行时。 ### 处理 kubelet 重启 {#handling-kubelet-restarts} 设备插件应能监测到 kubelet 重启,并且向新的 kubelet 实例来重新注册自己。 -在当前实现中,当 kubelet 重启的时候,新的 kubelet 实例会删除 `/var/lib/kubelet/device-plugins` -下所有已经存在的 Unix 套接字。 +新的 kubelet 实例启动时会删除 `/var/lib/kubelet/device-plugins` 下所有已经存在的 Unix 套接字。 设备插件需要能够监控到它的 Unix 套接字被删除,并且当发生此类事件时重新注册自己。 +## API 兼容性 {#api-compatibility} -* Watch for changes in future releases. -* Support multiple versions of the device plugin API for backward/forward compatibility. +之前版本控制方案要求设备插件的 API 版本与 Kubelet 的版本完全匹配。 +自从此特性在 v1.12 中进阶为 Beta 后,这不再是硬性要求。 +API 是版本化的,并且自此特性进阶 Beta 后一直表现稳定。 +因此,kubelet 升级应该是无缝的,但在稳定之前 API 仍然可能会有变更,还不能保证升级不会中断。 -If you enable the DevicePlugins feature and run device plugins on nodes that need to be upgraded to -a Kubernetes release with a newer device plugin API version, upgrade your device plugins -to support both versions before upgrading these nodes. Taking that approach will -ensure the continuous functioning of the device allocations during the upgrade. +{{< note >}} + -## API 兼容性 {#api-compatibility} +尽管 Kubernetes 的设备管理器(Device Manager)组件是正式发布的特性, +但**设备插件 API** 还不稳定。有关设备插件 API 和版本兼容性的信息, +请参阅[设备插件 API 版本](/zh-cn/docs/reference/node/device-plugin-api-versions/)。 +{{< /note >}} -Kubernetes 设备插件支持还处于 beta 版本。所以在稳定版本出来之前 API 会以不兼容的方式进行更改。 + 作为一个项目,Kubernetes 建议设备插件开发者: -* 注意未来版本的更改 +* 注意未来版本中设备插件 API 的变更。 * 支持多个版本的设备插件 API,以实现向后/向前兼容性。 -如果你启用 DevicePlugins 功能,并在需要升级到 Kubernetes 版本来获得较新的设备插件 API -版本的节点上运行设备插件,请在升级这些节点之前先升级设备插件以支持这两个版本。 + +若在需要升级到具有较新设备插件 API 版本的某个 Kubernetes 版本的节点上运行这些设备插件, +请在升级这些节点之前先升级设备插件以支持这两个版本。 采用该方法将确保升级期间设备分配的连续运行。 这一 `List` 端点提供运行中 Pod 的资源信息,包括类似独占式分配的 CPU ID、设备插件所报告的设备 ID 以及这些设备分配所处的 NUMA 节点 ID。 @@ -406,7 +444,7 @@ message ContainerDevices { {{< note >}} @@ -450,6 +489,7 @@ update and Kubelet needs to be restarted to reflect the correct resource capacit 如果目标是评估空闲/未分配的资源,此调用应该与 List() 端点一起使用。 除非暴露给 kubelet 的底层资源发生变化,否则 `GetAllocatableResources` 得到的结果将保持不变。 这种情况很少发生,但当发生时(例如:热插拔,设备健康状况改变),客户端应该调用 `GetAlloctableResources` 端点。 + 然而,调用 `GetAllocatableResources` 端点在 cpu、内存被更新的情况下是不够的, Kubelet 需要重新启动以获取正确的资源容量和可分配的资源。 {{< /note >}} @@ -461,17 +501,14 @@ message AllocatableResourcesResponse { repeated int64 cpu_ids = 2; repeated ContainerMemory memory = 3; } - ``` 从 Kubernetes v1.23 开始,`GetAllocatableResources` 被默认启用。 你可以通过关闭 `KubeletPodResourcesGetAllocatable` @@ -479,12 +516,15 @@ Preceding Kubernetes v1.23, to enable this feature `kubelet` must be started wit 在 Kubernetes v1.23 之前,要启用这一功能,`kubelet` 必须用以下标志启动: -`--feature-gates=KubeletPodResourcesGetAllocatable=true` +``` +--feature-gates=KubeletPodResourcesGetAllocatable=true +``` `ContainerDevices` 会向外提供各个设备所隶属的 NUMA 单元这类拓扑信息。 NUMA 单元通过一个整数 ID 来标识,其取值与设备插件所报告的一致。 @@ -500,7 +540,8 @@ DaemonSet, `/var/lib/kubelet/pod-resources` must be mounted as a {{< glossary_tooltip term_id="volume" >}} in the device monitoring agent's [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core). -Support for the `PodResourcesLister service` requires `KubeletPodResources` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled. +Support for the `PodResourcesLister service` requires `KubeletPodResources` +[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled. 
It is enabled by default starting with Kubernetes 1.15 and is v1 since Kubernetes 1.20. --> gRPC 服务通过 `/var/lib/kubelet/pod-resources/kubelet.sock` 的 UNIX 套接字来提供服务。 @@ -524,7 +565,9 @@ gRPC 服务通过 `/var/lib/kubelet/pod-resources/kubelet.sock` 的 UNIX 套接 {{< feature-state for_k8s_version="v1.18" state="beta" >}} 拓扑管理器是 Kubelet 的一个组件,它允许以拓扑对齐方式来调度资源。 为了做到这一点,设备插件 API 进行了扩展来包括一个 `TopologyInfo` 结构体。 @@ -540,17 +583,18 @@ message NUMANode { ``` 设备插件希望拓扑管理器可以将填充的 TopologyInfo 结构体作为设备注册的一部分以及设备 ID 和设备的运行状况发送回去。然后设备管理器将使用此信息来咨询拓扑管理器并做出资源分配决策。 @@ -566,6 +610,7 @@ NUMA 节点列表表示设备插件没有该设备的 NUMA 亲和偏好。 ``` pluginapi.Device{ID: "25102017", Health: pluginapi.Healthy, Topology:&pluginapi.TopologyInfo{Nodes: []*pluginapi.NUMANode{&pluginapi.NUMANode{ID: 0,},}}} ``` + @@ -577,8 +622,10 @@ pluginapi.Device{ID: "25102017", Health: pluginapi.Healthy, Topology:&pluginapi. Here are some examples of device plugin implementations: * The [AMD GPU device plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin) -* The [Intel device plugins](https://github.com/intel/intel-device-plugins-for-kubernetes) for Intel GPU, FPGA, QAT, VPU, SGX, DSA, DLB and IAA devices -* The [KubeVirt device plugins](https://github.com/kubevirt/kubernetes-device-plugins) for hardware-assisted virtualization +* The [Intel device plugins](https://github.com/intel/intel-device-plugins-for-kubernetes) for + Intel GPU, FPGA, QAT, VPU, SGX, DSA, DLB and IAA devices +* The [KubeVirt device plugins](https://github.com/kubevirt/kubernetes-device-plugins) for + hardware-assisted virtualization * The [NVIDIA GPU device plugin for Container-Optimized OS](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu) * The [RDMA device plugin](https://github.com/hustcat/k8s-rdma-device-plugin) * The [SocketCAN device plugin](https://github.com/collabora/k8s-socketcan) @@ -601,12 +648,15 @@ Here are some examples of device plugin implementations: ## {{% heading "whatsnext" %}} * 查看[调度 GPU 资源](/zh-cn/docs/tasks/manage-gpus/scheduling-gpus/)来学习使用设备插件 -* 查看在上如何[公布节点上的扩展资源](/zh-cn/docs/tasks/administer-cluster/extended-resource-node/) +* 查看在节点上如何[公布扩展资源](/zh-cn/docs/tasks/administer-cluster/extended-resource-node/) * 学习[拓扑管理器](/zh-cn/docs/tasks/administer-cluster/topology-manager/) * 阅读如何在 Kubernetes 中使用 [TLS Ingress 的硬件加速](/zh-cn/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/) diff --git a/content/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md index b342c7c6681fb..8118e24d32b75 100644 --- a/content/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md +++ b/content/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md @@ -123,7 +123,7 @@ work correctly with the iptables proxy. 
--> 默认情况下,如果未指定 kubelet 网络插件,则使用 `noop` 插件, 该插件设置 `net/bridge/bridge-nf-call-iptables=1`,以确保简单的配置 -(如带网桥的 Docker )与 iptables 代理正常工作。 +(如带网桥的 Docker)与 iptables 代理正常工作。 @@ -232,7 +233,8 @@ you implement yourself * [kubebuilder](https://book.kubebuilder.io/) * [KubeOps](https://buehler.github.io/dotnet-operator-sdk/) (.NET operator SDK) * [KUDO](https://kudo.dev/)(Kubernetes 通用声明式 Operator) -* [Metacontroller](https://metacontroller.github.io/metacontroller/intro.html),可与 Webhooks 结合使用,以实现自己的功能。 +* [Mast](https://docs.ansi.services/mast/user_guide/operator/) +* [Metacontroller](https://metacontroller.github.io/metacontroller/intro.html),可与 Webhook 结合使用,以实现自己的功能。 * [Operator Framework](https://operatorframework.io) * [shell-operator](https://github.com/flant/shell-operator) diff --git a/content/zh-cn/docs/concepts/overview/components.md b/content/zh-cn/docs/concepts/overview/components.md index 8d3af794bc806..97c4bbcc17b67 100644 --- a/content/zh-cn/docs/concepts/overview/components.md +++ b/content/zh-cn/docs/concepts/overview/components.md @@ -3,7 +3,7 @@ title: Kubernetes 组件 content_type: concept description: > Kubernetes 集群由控制平面的组件和一组称为节点的机器组成。 -weight: 20 +weight: 30 card: name: concepts weight: 20 @@ -16,7 +16,7 @@ content_type: concept description: > A Kubernetes cluster consists of the components that are a part of the control plane and a set of machines called nodes. -weight: 20 +weight: 30 card: name: concepts weight: 20 diff --git a/content/zh-cn/docs/concepts/overview/kubernetes-api.md b/content/zh-cn/docs/concepts/overview/kubernetes-api.md index be6515458f54a..b4272751b7ea8 100644 --- a/content/zh-cn/docs/concepts/overview/kubernetes-api.md +++ b/content/zh-cn/docs/concepts/overview/kubernetes-api.md @@ -1,7 +1,7 @@ --- title: Kubernetes API content_type: concept -weight: 30 +weight: 40 description: > Kubernetes API 使你可以查询和操纵 Kubernetes 中对象的状态。 Kubernetes 控制平面的核心是 API 服务器和它暴露的 HTTP API。 @@ -15,7 +15,7 @@ reviewers: - chenopis title: The Kubernetes API content_type: concept -weight: 30 +weight: 40 description: > The Kubernetes API lets you query and manipulate the state of objects in Kubernetes. The core of Kubernetes' control plane is the API server and the HTTP API that it exposes. Users, the different parts of your cluster, and external components all communicate with one another through the API server. diff --git a/content/zh-cn/docs/concepts/overview/working-with-objects/common-labels.md b/content/zh-cn/docs/concepts/overview/working-with-objects/common-labels.md index 5596ecdb0f1c3..eb56faae4469f 100644 --- a/content/zh-cn/docs/concepts/overview/working-with-objects/common-labels.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/common-labels.md @@ -70,7 +70,7 @@ on every resource object. | ----------------------------------- | --------------------- | -------- | ---- | | `app.kubernetes.io/name` | The name of the application | `mysql` | string | | `app.kubernetes.io/instance` | A unique name identifying the instance of an application | `mysql-abcxzy` | string | -| `app.kubernetes.io/version` | The current version of the application (e.g., a semantic version, revision hash, etc.) | `5.7.21` | string | +| `app.kubernetes.io/version` | The current version of the application (e.g., a [SemVer 1.0](https://semver.org/spec/v1.0.0.html), revision hash, etc.) 
| `5.7.21` | string | | `app.kubernetes.io/component` | The component within the architecture | `database` | string | | `app.kubernetes.io/part-of` | The name of a higher level application this one is part of | `wordpress` | string | | `app.kubernetes.io/managed-by` | The tool being used to manage the operation of an application | `helm` | string | @@ -79,7 +79,7 @@ on every resource object. | ----------------------------------- | --------------------- | -------- | ---- | | `app.kubernetes.io/name` | 应用程序的名称 | `mysql` | 字符串 | | `app.kubernetes.io/instance` | 用于唯一确定应用实例的名称 | `mysql-abcxzy` | 字符串 | -| `app.kubernetes.io/version` | 应用程序的当前版本(例如语义版本、修订版哈希等) | `5.7.21` | 字符串 | +| `app.kubernetes.io/version` | 应用程序的当前版本(例如[语义版本 1.0](https://semver.org/spec/v1.0.0.html)、修订版哈希等) | `5.7.21` | 字符串 | | `app.kubernetes.io/component` | 架构中的组件 | `database` | 字符串 | | `app.kubernetes.io/part-of` | 此级别的更高级别应用程序的名称 | `wordpress` | 字符串 | | `app.kubernetes.io/managed-by` | 用于管理应用程序的工具 | `helm` | 字符串 | diff --git a/content/zh-cn/docs/concepts/overview/working-with-objects/field-selectors.md b/content/zh-cn/docs/concepts/overview/working-with-objects/field-selectors.md index 09a85ee73bd24..2e9f2f0d0b8f7 100644 --- a/content/zh-cn/docs/concepts/overview/working-with-objects/field-selectors.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/field-selectors.md @@ -10,7 +10,7 @@ weight: 70 --> “字段选择器(Field selectors)”允许你根据一个或多个资源字段的值 [筛选 Kubernetes 资源](/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects)。 @@ -29,18 +29,15 @@ This `kubectl` command selects all Pods for which the value of the [`status.phas ```shell kubectl get pods --field-selector status.phase=Running ``` + +{{< note >}} -{{< note >}} 字段选择器本质上是资源“过滤器(Filters)”。默认情况下,字段选择器/过滤器是未被应用的, 这意味着指定类型的所有资源都会被筛选出来。 -这使得以下的两个 `kubectl` 查询是等价的: +这使得 `kubectl get pods` 和 `kubectl get pods --field-selector ""` 这两个 `kubectl` 查询是等价的。 -```shell -kubectl get pods -kubectl get pods --field-selector "" -``` {{< /note >}} Kubernetes 对象是“目标性记录” —— 一旦创建该对象,Kubernetes 系统将不断工作以确保该对象存在。 通过创建对象,你就是在告知 Kubernetes 系统,你想要的集群工作负载状态看起来应是什么样子的, @@ -68,7 +67,7 @@ Kubernetes 对象是“目标性记录” —— 一旦创建该对象,Kuberne 来直接调用 Kubernetes API。 例如,Kubernetes 中的 Deployment 对象能够表示运行在集群中的应用。 @@ -113,14 +112,14 @@ Kubernetes 系统读取 Deployment 的 `spec`, `spec` 和状态间的不一致 —— 意味着它会启动一个新的实例来替换。 关于对象 spec、status 和 metadata 的更多信息,可参阅 [Kubernetes API 约定](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md)。 进一步了解以下信息: * 最重要的 Kubernetes 基本对象 [Pod](/zh-cn/docs/concepts/workloads/pods/)。 diff --git a/content/zh-cn/docs/concepts/overview/working-with-objects/labels.md b/content/zh-cn/docs/concepts/overview/working-with-objects/labels.md index e824a2b98b02e..47daf31989a03 100644 --- a/content/zh-cn/docs/concepts/overview/working-with-objects/labels.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/labels.md @@ -433,3 +433,20 @@ See the documentation on [node selection](/docs/concepts/scheduling-eviction/ass 通过标签进行选择的一个用例是确定节点集,方便 Pod 调度。 有关更多信息,请参阅[选择节点](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/)文档。 + +## {{% heading "whatsnext" %}} + + +- 学习如何[给节点添加标签](/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node) +- 查阅[众所周知的标签、注解和污点](/zh-cn/docs/reference/labels-annotations-taints/) +- 参见[推荐使用的标签](/zh-cn/docs/concepts/overview/working-with-objects/common-labels/) +- [使用名字空间标签来实施 Pod 安全性标准](/zh-cn/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/) +- 
[有效使用标签](/zh-cn/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)管理 Deployment。 +- 阅读[为 Pod 标签编写控制器](/blog/2021/06/21/writing-a-controller-for-pod-labels/)的博文 diff --git a/content/zh-cn/docs/concepts/overview/working-with-objects/namespaces.md b/content/zh-cn/docs/concepts/overview/working-with-objects/namespaces.md index df549488184ad..a872a08ea9117 100644 --- a/content/zh-cn/docs/concepts/overview/working-with-objects/namespaces.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/namespaces.md @@ -268,7 +268,7 @@ kubectl api-resources --namespaced=false --> ## 自动打标签 {#automatic-labelling} -{{< feature-state state="beta" for_k8s_version="1.21" >}} +{{< feature-state state="beta" for_k8s_version="stable" >}} `kubectl` 命令行工具支持多种不同的方式来创建和管理 Kubernetes 对象。 本文档概述了不同的方法。 -阅读 [Kubectl book](https://kubectl.docs.kubernetes.io) 来了解 kubectl +阅读 [Kubectl book](https://kubectl.docs.kubernetes.io/zh/) 来了解 kubectl 管理对象的详细信息。 diff --git a/content/zh-cn/docs/concepts/policy/limit-range.md b/content/zh-cn/docs/concepts/policy/limit-range.md index 6f2149e5698cd..24551a912f310 100644 --- a/content/zh-cn/docs/concepts/policy/limit-range.md +++ b/content/zh-cn/docs/concepts/policy/limit-range.md @@ -91,7 +91,7 @@ LimitRange 的名称必须是合法的 diff --git a/content/zh-cn/docs/concepts/policy/pid-limiting.md b/content/zh-cn/docs/concepts/policy/pid-limiting.md index 61fed9674b36c..4faac45cc084e 100644 --- a/content/zh-cn/docs/concepts/policy/pid-limiting.md +++ b/content/zh-cn/docs/concepts/policy/pid-limiting.md @@ -22,9 +22,10 @@ Kubernetes allow you to limit the number of process IDs (PIDs) that a You can also reserve a number of allocatable PIDs for each {{< glossary_tooltip term_id="node" text="node" >}} for use by the operating system and daemons (rather than by Pods). --> -Kubernetes 允许你限制一个 {{< glossary_tooltip term_id="Pod" text="Pod" >}} 中可以使用的 -进程 ID(PID)数目。你也可以为每个 {{< glossary_tooltip term_id="node" text="节点" >}} -预留一定数量的可分配的 PID,供操作系统和守护进程(而非 Pod)使用。 +Kubernetes 允许你限制一个 {{< glossary_tooltip term_id="Pod" text="Pod" >}} +中可以使用的进程 ID(PID)数目。 +你也可以为每个{{< glossary_tooltip term_id="node" text="节点" >}}预留一定数量的可分配的 PID, +供操作系统和守护进程(而非 Pod)使用。 @@ -33,8 +34,8 @@ Process IDs (PIDs) are a fundamental resource on nodes. It is trivial to hit the task limit without hitting any other resource limits, which can then cause instability to a host machine. --> -进程 ID(PID)是节点上的一种基础资源。很容易就会在尚未超出其它资源约束的时候就 -已经触及任务个数上限,进而导致宿主机器不稳定。 +进程 ID(PID)是节点上的一种基础资源。很容易就会在尚未超出其它资源约束的时候就已经触及任务个数上限, +进而导致宿主机器不稳定。 -集群管理员需要一定的机制来确保集群中运行的 Pod 不会导致 PID 资源枯竭,甚而 -造成宿主机上的守护进程(例如 +集群管理员需要一定的机制来确保集群中运行的 Pod 不会导致 PID 资源枯竭, +甚而造成宿主机上的守护进程(例如 {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} 或者 {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}} 乃至包括容器运行时本身)无法正常运行。 @@ -72,13 +73,12 @@ the whole machine down. This kind of resource limiting helps to prevent simple fork bombs from affecting operation of an entire cluster. 
--> 你可以配置 kubelet 限制给定 Pod 能够使用的 PID 个数。 -例如,如果你的节点上的宿主操作系统被设置为最多可使用 `262144` 个 PID,同时预期 -节点上会运行的 Pod 个数不会超过 `250`,那么你可以为每个 Pod 设置 `1000` 个 PID +例如,如果你的节点上的宿主操作系统被设置为最多可使用 `262144` 个 PID, +同时预期节点上会运行的 Pod 个数不会超过 `250`,那么你可以为每个 Pod 设置 `1000` 个 PID 的预算,避免耗尽该节点上可用 PID 的总量。 -如果管理员系统像 CPU 或内存那样允许对 PID 进行过量分配(Overcommit),他们也可以 -这样做,只是会有一些额外的风险。不管怎样,任何一个 Pod 都不可以将整个机器的运行 -状态破坏。这类资源限制有助于避免简单的派生炸弹(Fork -Bomb)影响到整个集群的运行。 +如果管理员系统像 CPU 或内存那样允许对 PID 进行过量分配(Overcommit),他们也可以这样做, +只是会有一些额外的风险。不管怎样,任何一个 Pod 都不可以将整个机器的运行状态破坏。 +这类资源限制有助于避免简单的派生炸弹(Fork Bomb)影响到整个集群的运行。 -在 Pod 级别设置 PID 限制使得管理员能够保护 Pod 之间不会互相伤害,不过无法 -确保所有调度到该宿主机器上的所有 Pod 都不会影响到节点整体。 +在 Pod 级别设置 PID 限制使得管理员能够保护 Pod 之间不会互相伤害, +不过无法确保所有调度到该宿主机器上的所有 Pod 都不会影响到节点整体。 Pod 级别的限制也无法保护节点代理任务自身不会受到 PID 耗尽的影响。 你也可以预留一定量的 PID,作为节点的额外开销,与分配给 Pod 的 PID 集合独立。 @@ -110,7 +110,6 @@ PID 限制是与[计算资源](/zh-cn/docs/concepts/configuration/manage-resourc 你需要将其设置到 kubelet 上而不是在 Pod 的 `.spec` 中为 Pod 设置资源限制。 目前还不支持在 Pod 级别设置 PID 限制。 - {{< caution >}} -在 Kubernetes 1.20 版本之前,在节点级别通过 PID 资源限制预留 PID 的能力 -需要启用[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) -`SupportNodePidsLimit` 才行。 -{{< /note >}} +`pid=`。你所设置的参数值分别用来声明为整个系统和 Kubernetes +系统守护进程所保留的进程 ID 数目。 -在 Kubernetes 1.20 版本之前,为 Pod 设置 PID 资源限制的能力需要启用 -[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) -`SupportNodePidsLimit` 才行。 -{{< /note >}} +而不是为特定的 Pod 来将其设置为资源限制。每个节点都可以有不同的 PID 限制设置。 +要设置限制值,你可以设置 kubelet 的命令行参数 `--pod-max-pids`,或者在 kubelet +的[配置文件](/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/)中设置 +`PodPidsLimit`。 ## 基于 PID 的驱逐 {#pid-based-eviction} -你可以配置 kubelet 使之在 Pod 行为不正常或者消耗不正常数量资源的时候将其终止。 -这一特性称作驱逐。你可以针对不同的驱逐信号 -[配置资源不足的处理](/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/)。 +你可以配置 kubelet 使之在 Pod 行为不正常或者消耗不正常数量资源的时候将其终止。这一特性称作驱逐。 +你可以针对不同的驱逐信号[配置资源不足的处理](/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/)。 使用 `pid.available` 驱逐信号来配置 Pod 使用的 PID 个数的阈值。 你可以设置硬性的和软性的驱逐策略。不过,即使使用硬性的驱逐策略, 如果 PID 个数增长过快,节点仍然可能因为触及节点 PID 限制而进入一种不稳定状态。 @@ -214,8 +188,8 @@ when one Pod is misbehaving. --> Pod 级别和节点级别的 PID 限制会设置硬性限制。 一旦触及限制值,工作负载会在尝试获得新的 PID 时开始遇到问题。 -这可能会也可能不会导致 Pod 被重新调度,取决于工作负载如何应对这类失败 -以及 Pod 的存活性和就绪态探测是如何配置的。 +这可能会也可能不会导致 Pod 被重新调度,取决于工作负载如何应对这类失败以及 +Pod 的存活性和就绪态探测是如何配置的。 可是,如果限制值被正确设置,你可以确保其它 Pod 负载和系统进程不会因为某个 Pod 行为不正常而没有 PID 可用。 diff --git a/content/zh-cn/docs/concepts/scheduling-eviction/_index.md b/content/zh-cn/docs/concepts/scheduling-eviction/_index.md index f90a06410f870..de50c5def8b22 100644 --- a/content/zh-cn/docs/concepts/scheduling-eviction/_index.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/_index.md @@ -1,6 +1,6 @@ --- title: 调度、抢占和驱逐 -weight: 90 +weight: 95 content_type: concept description: > 在 Kubernetes 中,调度 (scheduling) 指的是确保 Pod 匹配到合适的节点, @@ -11,7 +11,7 @@ no_list: true ## 调度 @@ -57,9 +59,11 @@ of terminating one or more Pods on Nodes. 
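The node-level PID reservations and the per-Pod PID limit discussed in the PID-limiting section above are normally set in the kubelet configuration file rather than on individual Pods. A minimal sketch, with purely illustrative values:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# reserve PIDs for the OS and for Kubernetes daemons (example values)
systemReserved:
  pid: "1000"
kubeReserved:
  pid: "1000"
# per-Pod PID budget (example value)
podPidsLimit: 1000
# PID-based hard eviction threshold (example value)
evictionHard:
  pid.available: "10%"
```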
* [Pod 开销](/zh-cn/docs/concepts/scheduling-eviction/pod-overhead/) * [Pod 拓扑分布约束](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/) * [污点和容忍度](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/) +* [动态资源分配](/zh-cn/docs/concepts/scheduling-eviction/dynamic-resource-allocation) * [调度框架](/zh-cn/docs/concepts/scheduling-eviction/scheduling-framework) * [调度器性能调试](/zh-cn/docs/concepts/scheduling-eviction/scheduler-perf-tuning/) * [扩展资源的资源装箱](/zh-cn/docs/concepts/scheduling-eviction/resource-bin-packing/) +* [Pod 调度就绪](/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness/) + + + +{{< feature-state for_k8s_version="v1.26" state="alpha" >}} + + +动态资源分配是一个用于在 Pod 之间和 Pod 内部容器之间请求和共享资源的新 API。 +它是对为通用资源所提供的持久卷 API 的泛化。第三方资源驱动程序负责跟踪和分配资源。 +不同类型的资源支持用任意参数进行定义和初始化。 + +## {{% heading "prerequisites" %}} + + +Kubernetes v{{< skew currentVersion >}} 包含用于动态资源分配的集群级 API 支持, +但它需要被[显式启用](#enabling-dynamic-resource-allocation)。 +你还必须为此 API 要管理的特定资源安装资源驱动程序。 +如果你未运行 Kubernetes v{{< skew currentVersion>}}, +请查看对应版本的 Kubernetes 文档。 + + + +## API {#api} + +新的 `resource.k8s.io/v1alpha1` +{{< glossary_tooltip text="API 组" term_id="api-group" >}}提供四种新类型: + + +ResourceClass +: 定义由哪个资源驱动程序处理某种资源,并为其提供通用参数。 + 集群管理员在安装资源驱动程序时创建 ResourceClass。 + +ResourceClaim +: 定义工作负载所需的特定资源实例。 + 由用户创建(手动管理生命周期,可以在不同的 Pod 之间共享), + 或者由控制平面基于 ResourceClaimTemplate 为特定 Pod 创建 + (自动管理生命周期,通常仅由一个 Pod 使用)。 + +ResourceClaimTemplate +: 定义用于创建 ResourceClaim 的 spec 和一些元数据。 + 部署工作负载时由用户创建。 + +PodScheduling +: 供控制平面和资源驱动程序内部使用, + 在需要为 Pod 分配 ResourceClaim 时协调 Pod 调度。 + + +ResourceClass 和 ResourceClaim 的参数存储在单独的对象中, +通常使用安装资源驱动程序时创建的 {{< glossary_tooltip +term_id="CustomResourceDefinition" text="CRD" >}} 所定义的类型。 + + +`core/v1` 的 `PodSpec` 在新的 `resourceClaims` 字段中定义 Pod 所需的 ResourceClaim。 +该列表中的条目引用 ResourceClaim 或 ResourceClaimTemplate。 +当引用 ResourceClaim 时,使用此 PodSpec 的所有 Pod +(例如 Deployment 或 StatefulSet 中的 Pod)共享相同的 ResourceClaim 实例。 +引用 ResourceClaimTemplate 时,每个 Pod 都有自己的实例。 + + +容器资源的 `resources.claims` 列表定义容器可以访问的资源实例, +从而可以实现在一个或多个容器之间共享资源。 + +下面是一个虚构的资源驱动程序的示例。 +该示例将为此 Pod 创建两个 ResourceClaim 对象,每个容器都可以访问其中一个。 + +```yaml +apiVersion: resource.k8s.io/v1alpha1 +kind: ResourceClass +name: resource.example.com +driverName: resource-driver.example.com +--- +apiVersion: cats.resource.example.com/v1 +kind: ClaimParameters +name: large-black-cat-claim-parameters +spec: + color: black + size: large +--- +apiVersion: resource.k8s.io/v1alpha1 +kind: ResourceClaimTemplate +metadata: + name: large-black-cat-claim-template +spec: + spec: + resourceClassName: resource.example.com + parametersRef: + apiGroup: cats.resource.example.com + kind: ClaimParameters + name: large-black-cat-claim-parameters +–-- +apiVersion: v1 +kind: Pod +metadata: + name: pod-with-cats +spec: + containers: + - name: container0 + image: ubuntu:20.04 + command: ["sleep", "9999"] + resources: + claims: + - name: cat-0 + - name: container1 + image: ubuntu:20.04 + command: ["sleep", "9999"] + resources: + claims: + - name: cat-1 + resourceClaims: + - name: cat-0 + source: + resourceClaimTemplateName: large-black-cat-claim-template + - name: cat-1 + source: + resourceClaimTemplateName: large-black-cat-claim-template +``` + +## 调度 {#scheduling} + + +与原生资源(CPU、RAM)和扩展资源(由设备插件管理,并由 kubelet 公布)不同, +调度器不知道集群中有哪些动态资源, +也不知道如何将它们拆分以满足特定 ResourceClaim 的要求。 +资源驱动程序负责这些任务。 +资源驱动程序在为 ResourceClaim 保留资源后将其标记为“已分配(Allocated)”。 +然后告诉调度器集群中可用的 ResourceClaim 的位置。 + + +ResourceClaim 可以在创建时就进行分配(“立即分配”),不用考虑哪些 Pod 将使用它。 +默认情况下采用延迟分配,直到需要 ResourceClaim 的 Pod 
被调度时 +(即“等待第一个消费者”)再进行分配。 + + +在这种模式下,调度器检查 Pod 所需的所有 ResourceClaim,并创建一个 PodScheduling 对象, +通知负责这些 ResourceClaim 的资源驱动程序,告知它们调度器认为适合该 Pod 的节点。 +资源驱动程序通过排除没有足够剩余资源的节点来响应调度器。 +一旦调度器有了这些信息,它就会选择一个节点,并将该选择存储在 PodScheduling 对象中。 +然后,资源驱动程序为分配其 ResourceClaim,以便资源可用于该节点。 +完成后,Pod 就会被调度。 + + +作为此过程的一部分,ResourceClaim 会为 Pod 保留。 +目前,ResourceClaim 可以由单个 Pod 独占使用或不限数量的多个 Pod 使用。 + + +除非 Pod 的所有资源都已分配和保留,否则 Pod 不会被调度到节点,这是一个重要特性。 +这避免了 Pod 被调度到一个节点但无法在那里运行的情况, +这种情况很糟糕,因为被挂起 Pod 也会阻塞为其保留的其他资源,如 RAM 或 CPU。 + + +## 限制 {#limitations} + + +调度器插件必须参与调度那些使用 ResourceClaim 的 Pod。 +通过设置 `nodeName` 字段绕过调度器会导致 kubelet 拒绝启动 Pod, +因为 ResourceClaim 没有被保留或甚至根本没有被分配。 +未来可能[去除该限制](https://github.com/kubernetes/kubernetes/issues/114005)。 + + +## 启用动态资源分配 {#enabling-dynamic-resource-allocation} + + +动态资源分配是一个 **alpha 特性**,只有在启用 `DynamicResourceAllocation` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) +和 `resource.k8s.io/v1alpha1` {{< glossary_tooltip text="API 组" +term_id="api-group" >}} 时才启用。 +有关详细信息,参阅 `--feature-gates` 和 `--runtime-config` +[kube-apiserver 参数](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/)。 +kube-scheduler、kube-controller-manager 和 kubelet 也需要设置该特性门控。 + + +快速检查 Kubernetes 集群是否支持该功能的方法是列出 ResourceClass 对象: + +```shell +kubectl get resourceclasses +``` + + +如果你的集群支持动态资源分配,则响应是 ResourceClass 对象列表或: +``` +No resources found +``` + + +如果不支持,则会输出如下错误: +``` +error: the server doesn't have a resource type "resourceclasses" +``` + + +kube-scheduler 的默认配置仅在启用特性门控时才启用 "DynamicResources" 插件。 +自定义配置可能需要被修改才能启用它。 + + +除了在集群中启用该功能外,还必须安装资源驱动程序。 +欲了解详细信息,请参阅驱动程序的文档。 + +## {{% heading "whatsnext" %}} + + +- 了解更多该设计的信息, + 参阅[动态资源分配 KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/3063-dynamic-resource-allocation/README.md)。 \ No newline at end of file diff --git a/content/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md b/content/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md new file mode 100644 index 0000000000000..291a71c300480 --- /dev/null +++ b/content/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md @@ -0,0 +1,169 @@ +--- +title: Pod 调度就绪态 +content_type: concept +weight: 40 +--- + + + + + +{{< feature-state for_k8s_version="v1.26" state="alpha" >}} + + +Pod 一旦创建就被认为准备好进行调度。 +Kubernetes 调度程序尽职尽责地寻找节点来放置所有待处理的 Pod。 +然而,在实际环境中,会有一些 Pod 可能会长时间处于"缺少必要资源"状态。 +这些 Pod 实际上以一种不必要的方式扰乱了调度器(以及下游的集成方,如 Cluster AutoScaler)。 + +通过指定或删除 Pod 的 `.spec.schedulingGates`,可以控制 Pod 何时准备好被纳入考量进行调度。 + + + + +## 配置 Pod schedulingGates {#configuring-pod-schedulinggates} + +`schedulingGates` 字段包含一个字符串列表,每个字符串文字都被视为 Pod 在被认为可调度之前应该满足的标准。 +该字段只能在创建 Pod 时初始化(由客户端创建,或在准入期间更改)。 +创建后,每个 schedulingGate 可以按任意顺序删除,但不允许添加新的调度门控。 + +{{}} +stateDiagram-v2 + s1: 创建 Pod + s2: Pod 调度门控 + s3: Pod 调度就绪 + s4: Pod 运行 + if: 调度门控为空? 
+ [*] --> s1 + s1 --> if + s2 --> if: 移除了调度门控 + if --> s2: 否 + if --> s3: 是 + s3 --> s4 + s4 --> [*] +{{< /mermaid >}} + + +## 用法示例 {#usage-example} + +要将 Pod 标记为未准备好进行调度,你可以在创建 Pod 时附带一个或多个调度门控,如下所示: + +{{< codenew file="pods/pod-with-scheduling-gates.yaml" >}} + + +Pod 创建后,你可以使用以下方法检查其状态: + +```bash +kubectl get pod test-pod +``` + + +输出显示它处于 `SchedulingGated` 状态: + +```none +NAME READY STATUS RESTARTS AGE +test-pod 0/1 SchedulingGated 0 7s +``` + + +你还可以通过运行以下命令检查其 `schedulingGates` 字段: + +```bash +kubectl get pod test-pod -o jsonpath='{.spec.schedulingGates}' +``` + + +输出是: + +```none +[{"name":"foo"},{"name":"bar"}] +``` + + +要通知调度程序此 Pod 已准备好进行调度,你可以通过重新应用修改后的清单来完全删除其 `schedulingGates`: + +{{< codenew file="pods/pod-without-scheduling-gates.yaml" >}} + + +你可以通过运行以下命令检查 `schedulingGates` 是否已被清空: + +```bash +kubectl get pod test-pod -o jsonpath='{.spec.schedulingGates}' +``` + + +预计输出为空,你可以通过运行下面的命令来检查它的最新状态: + +```bash +kubectl get pod test-pod -o wide +``` + + +鉴于 test-pod 不请求任何 CPU/内存资源,预计此 Pod 的状态会从之前的 `SchedulingGated` 转变为 `Running`: + +```none +NAME READY STATUS RESTARTS AGE IP NODE +test-pod 1/1 Running 0 15s 10.0.0.4 node-2 +``` + + +## 可观测性 {#observability} + +指标 `scheduler_pending_pods` 带有一个新标签 `"gated"`, +以区分 Pod 是否已尝试调度但被宣称不可调度,或明确标记为未准备好调度。 +你可以使用 `scheduler_pending_pods{queue="gated"}` 来检查指标结果。 + +## {{% heading "whatsnext" %}} + + +* 阅读 [PodSchedulingReadiness KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/3521-pod-scheduling-readiness) 了解更多详情 diff --git a/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md b/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md index 0063a075ed55f..7d1943c9099ef 100644 --- a/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md @@ -106,8 +106,8 @@ spec: whenUnsatisfiable: labelSelector: matchLabelKeys: # 可选;自从 v1.25 开始成为 Alpha - nodeAffinityPolicy: [Honor|Ignore] # 可选;自从 v1.25 开始成为 Alpha - nodeTaintsPolicy: [Honor|Ignore] # 可选;自从 v1.25 开始成为 Alpha + nodeAffinityPolicy: [Honor|Ignore] # 可选;自从 v1.26 开始成为 Beta + nodeTaintsPolicy: [Honor|Ignore] # 可选;自从 v1.26 开始成为 Beta ### 其他 Pod 字段置于此处 ``` @@ -164,12 +164,12 @@ your cluster. 
Those fields are: {{< note >}} - `minDomains` 字段是一个 Alpha 字段,在 1.25 中默认被启用。 - 你可以通过禁用 `MinDomainsInPodToplogySpread` - [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)来禁用该字段。 + `minDomains` 字段是一个 Beta 字段,在 1.25 中默认被禁用。 + 你可以通过启用 `MinDomainsInPodTopologySpread` + [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)来启用该字段。 {{< /note >}} `nodeAffinityPolicy` 是 1.25 中新增的一个 Alpha 级别字段。 - 你必须启用 `NodeInclusionPolicyInPodTopologySpread` - [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)才能使用此字段。 + 你可以通过禁用 `NodeInclusionPolicyInPodTopologySpread` + [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)来禁用此字段。 {{< /note >}} - `nodeTaintsPolicy` 是 1.25 中新增的一个 Alpha 级别字段。 - 你必须启用 `NodeInclusionPolicyInPodTopologySpread` - [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)才能使用此字段。 + `nodeTaintsPolicy` 是一个 Beta 级别字段,在 1.26 版本默认启用。 + 你可以通过禁用 `NodeInclusionPolicyInPodTopologySpread` + [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)来禁用此字段。 {{< /note >}} 多租户的另一种主要形式通常涉及为客户运行多个工作负载实例的软件即服务 (SaaS) 供应商。 diff --git a/content/zh-cn/docs/concepts/security/pod-security-admission.md b/content/zh-cn/docs/concepts/security/pod-security-admission.md index c7fea290ee23f..8c02233f3c7cf 100644 --- a/content/zh-cn/docs/concepts/security/pod-security-admission.md +++ b/content/zh-cn/docs/concepts/security/pod-security-admission.md @@ -167,10 +167,10 @@ applied to workload resources, only to the resulting pod objects. Pod 通常是通过创建 {{< glossary_tooltip term_id="deployment" >}} 或 {{< glossary_tooltip term_id="job">}} 这类[工作负载对象](/zh-cn/docs/concepts/workloads/controllers/) -来间接创建的。工作负载对象为工作负载资源定义一个 **Pod 模板** -和一个对应的负责基于该模板来创建 Pod 的{{< glossary_tooltip term_id="controller" text="控制器" >}}。 +来间接创建的。工作负载对象为工作负载资源定义一个 **Pod 模板**和一个对应的负责基于该模板来创建 +Pod 的{{< glossary_tooltip term_id="controller" text="控制器" >}}。 为了尽早地捕获违例状况,`audit` 和 `warn` 模式都应用到负载资源。 -不过,`enforce` 模式并 **不** 应用到工作负载资源,仅应用到所生成的 Pod 对象上。 +不过,`enforce` 模式并**不**应用到工作负载资源,仅应用到所生成的 Pod 对象上。 ## 豁免 {#exemptions} -你可以为 Pod 安全性的实施设置 **豁免(Exemptions)** 规则, +你可以为 Pod 安全性的实施设置**豁免(Exemptions)**规则, 从而允许创建一些本来会被与给定名字空间相关的策略所禁止的 Pod。 豁免规则可以在[准入控制器配置](/zh-cn/docs/tasks/configure-pod-container/enforce-standards-admission-controller/#configure-the-admission-controller) 中静态配置。 @@ -191,7 +191,7 @@ Exemptions can be statically configured in the Exemptions must be explicitly enumerated. Requests meeting exemption criteria are _ignored_ by the Admission Controller (all `enforce`, `audit` and `warn` behaviors are skipped). Exemption dimensions include: --> -豁免规则可以显式枚举。满足豁免标准的请求会被准入控制器 **忽略** +豁免规则必须显式枚举。满足豁免标准的请求会被准入控制器**忽略** (所有 `enforce`、`audit` 和 `warn` 行为都会被略过)。 豁免的维度包括: diff --git a/content/zh-cn/docs/concepts/security/pod-security-standards.md b/content/zh-cn/docs/concepts/security/pod-security-standards.md index 6b0a461beaac4..f5e043d68dcd8 100644 --- a/content/zh-cn/docs/concepts/security/pod-security-standards.md +++ b/content/zh-cn/docs/concepts/security/pod-security-standards.md @@ -22,8 +22,8 @@ The Pod Security Standards define three different _policies_ to broadly cover th spectrum. These policies are _cumulative_ and range from highly-permissive to highly-restrictive. This guide outlines the requirements of each policy. 
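The `enforce`, `audit` and `warn` modes described above for Pod Security admission are selected per namespace through labels. A short sketch, assuming an illustrative namespace name and policy levels:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace   # illustrative name
  labels:
    # reject Pods that violate the baseline policy
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: v1.26
    # only warn and audit against the stricter restricted policy
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```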
--> -Pod 安全性标准定义了三种不同的 **策略(Policy)**,以广泛覆盖安全应用场景。 -这些策略是 **叠加式的(Cumulative)**,安全级别从高度宽松至高度受限。 +Pod 安全性标准定义了三种不同的**策略(Policy)**,以广泛覆盖安全应用场景。 +这些策略是**叠加式的(Cumulative)**,安全级别从高度宽松至高度受限。 本指南概述了每个策略的要求。 在下述表格中,通配符(`*`)意味着一个列表中的所有元素。 -例如 `spec.containers[*].securityContext` 表示 _所定义的所有容器_ 的安全性上下文对象。 +例如 `spec.containers[*].securityContext` 表示**所定义的所有容器**的安全性上下文对象。 如果所列出的任一容器不能满足要求,整个 Pod 将无法通过校验。 {{< /note >}} @@ -575,7 +575,7 @@ to a particular OS can be relaxed for the other OS. --> ### 限制性的 Pod Security Standard 变更 {#restricted-pod-security-standard-changes} -Kubernetes v1.25 中的另一个重要变化是 **限制性的(Restricted)** Pod 安全性已更新, +Kubernetes v1.25 中的另一个重要变化是**限制性的(Restricted)** Pod 安全性已更新, 能够处理 `pod.spec.os.name` 字段。根据 OS 名称,专用于特定 OS 的某些策略对其他 OS 可以放宽限制。 ### 持久卷的创建 {#persistent-volume-creation} -如 [PodSecurityPolicy](/zh-cn/docs/concepts/security/pod-security-policy/#volumes-and-file-systems) -文档中所述,创建 PersistentVolumes 的权限可以提权访问底层主机。 -如果需要访问 PersistentVolume,受信任的管理员应该创建 `PersistentVolume`, -受约束的用户应该使用 `PersistentVolumeClaim` 访问该存储。 +如果允许某人或某个应用创建任意的 PersistentVolume,则这种访问权限包括创建 `hostPath` 卷, +这意味着 Pod 将可以访问对应节点上的下层主机文件系统。授予该能力会带来安全风险。 + + +不受限制地访问主机文件系统的容器可以通过多种方式提升特权,包括从其他容器读取数据以及滥用系统服务(例如 Kubelet)的凭据。 + +你应该只允许以下实体具有创建 PersistentVolume 对象的访问权限: + + +- 需要此访问权限才能工作的用户(集群操作员)以及你信任的人, +- Kubernetes 控制平面组件,这些组件基于已配置为自动制备的 PersistentVolumeClaim 创建 PersistentVolume。 + 这通常由 Kubernetes 提供商或操作员在安装 CSI 驱动程序时进行设置。 + + +在需要访问持久存储的地方,受信任的管理员应创建 PersistentVolume,而受约束的用户应使用 +PersistentVolumeClaim 来访问该存储。 从历史背景看,请注意 Docker 自 2016 年以来一直使用[默认的 Seccomp 配置文件](https://docs.docker.com/engine/security/seccomp/), 仅允许来自 [Docker Engine 1.10](https://www.docker.com/blog/docker-engine-1-10-security/) 的很小的一组系统调用, @@ -342,7 +342,7 @@ violations. AppArmor profiles are enforced on a per-container basis, with an annotation, allowing for processes to gain just the right privileges. --> [AppArmor](https://apparmor.net/) 是一个 Linux 内核安全模块, -可以提供一种简单的方法来实现强制访问控制(Mandatory Access Control, MAC)并通过系统日志进行更好地审计。 +可以提供一种简单的方法来实现强制访问控制(Mandatory Access Control, MAC)并通过系统日志进行更好地审计。 要在 Kubernetes 中[启用 AppArmor](/zh-cn/docs/tutorials/security/apparmor/),至少需要 1.4 版本。 与 Seccomp 一样,AppArmor 也通过配置文件进行配置, 其中每个配置文件要么在强制(Enforcing)模式下运行,即阻止访问不允许的资源,要么在投诉(Complaining)模式下运行,只报告违规行为。 @@ -508,15 +508,16 @@ for time-bound service account credentials. - [ ] Container images are configured to be run as unprivileged user. - [ ] References to container images are made by sha256 digests (rather than tags) or the provenance of the image is validated by verifying the image's -digital signature at deploy time [via admission control](/docs/tasks/administer-cluster/verify-signed-images/#verifying-image-signatures-with-admission-controller). +digital signature at deploy time [via admission control](/docs/tasks/administer-cluster/verify-signed-artifacts/#verifying-image-signatures-with-admission-controller). - [ ] Container images are regularly scanned during creation and in deployment, and known vulnerable software is patched. 
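One way to satisfy the digest-pinning item in the checklist above is to reference the container image by its digest in the Pod spec. A sketch only; the registry, image name and digest below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: digest-pinned-example   # illustrative
spec:
  containers:
    - name: app
      # pin to a specific manifest digest instead of a mutable tag
      image: registry.example.com/app@sha256:0000000000000000000000000000000000000000000000000000000000000000
      securityContext:
        runAsNonRoot: true   # matches the "run as unprivileged user" checklist item
```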
--> ## 镜像 {#images} + - [ ] 尽量减少容器镜像中不必要的内容。 - [ ] 容器镜像配置为以非特权用户身份运行。 - [ ] 对容器镜像的引用是通过 Sha256 摘要实现的,而不是标签(tags), - 或者[通过准入控制器](/zh-cn/docs/tasks/administer-cluster/verify-signed-images/#verifying-image-signatures-with-admission-controller)在部署时验证镜像的数字签名来验证镜像的来源。 + 或者[通过准入控制器](/zh-cn/docs/tasks/administer-cluster/verify-signed-artifacts/#verifying-image-signatures-with-admission-controller)在部署时验证镜像的数字签名来验证镜像的来源。 - [ ] 在创建和部署过程中定期扫描容器镜像,并对已知的漏洞软件进行修补。 避免使用镜像标签来引用镜像,尤其是 `latest` 标签,因为标签对应的镜像可以在仓库中被轻松地修改。 首选使用完整的 `Sha256` 摘要,该摘要对特定镜像清单文件而言是唯一的。 可以通过 [ImagePolicyWebhook](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook) 强制执行此策略。 -镜像签名还可以在部署时由[准入控制器自动验证](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook), +镜像签名还可以在部署时由[准入控制器自动验证](/zh-cn/docs/tasks/administer-cluster/verify-signed-artifacts/#verifying-image-signatures-with-admission-controller), 以验证其真实性和完整性。 [`CertificateSubjectRestriction`](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#certificatesubjectrestriction) -: 拒绝将 `group`(或 `organization attribute` )设置为 `system:masters` 的所有证书请求。 +: 拒绝将 `group`(或 `organization attribute`)设置为 `system:masters` 的所有证书请求。 -## 接下来 +## 接下来 {#what-is-next} - [RBAC 良好实践](/zh-cn/docs/concepts/security/rbac-good-practices/)提供有关授权的更多信息。 - [集群多租户指南](/zh-cn/docs/concepts/security/multi-tenancy/)提供有关多租户的配置选项建议和最佳实践。 diff --git a/content/zh-cn/docs/concepts/services-networking/dns-pod-service.md b/content/zh-cn/docs/concepts/services-networking/dns-pod-service.md index a24fce895b2ea..c50637c213582 100644 --- a/content/zh-cn/docs/concepts/services-networking/dns-pod-service.md +++ b/content/zh-cn/docs/concepts/services-networking/dns-pod-service.md @@ -7,7 +7,8 @@ description: >- --- -Kubernetes DNS 除了在集群上调度 DNS Pod 和 Service, -还配置 kubelet 以告知各个容器使用 DNS Service 的 IP 来解析 DNS 名称。 +Kubernetes 发布有关 Pod 和 Service 的信息,这些信息被用来对 DNS 进行编程。 +Kubelet 配置 Pod 的 DNS,以便运行中的容器可以通过名称而不是 IP 来查找服务。 -集群中定义的每个 Service (包括 DNS 服务器自身)都被赋予一个 DNS 名称。 + +集群中定义的 Service 被赋予 DNS 名称。 默认情况下,客户端 Pod 的 DNS 搜索列表会包含 Pod 自身的名字空间和集群的默认域。 -DNS 查询可以使用 Pod 中的 `/etc/resolv.conf` 展开。kubelet 会为每个 Pod -生成此文件。例如,对 `data` 的查询可能被展开为 `data.test.svc.cluster.local`。 +DNS 查询可以使用 Pod 中的 `/etc/resolv.conf` 展开。 +Kubelet 为每个 Pod 配置此文件。 +例如,对 `data` 的查询可能被展开为 `data.test.svc.cluster.local`。 `search` 选项的取值会被用来展开查询。要进一步了解 DNS 查询,可参阅 [`resolv.conf` 手册页面](https://www.man7.org/linux/man-pages/man5/resolv.conf.5.html)。 @@ -91,10 +94,10 @@ options ndots:5 ``` -概括起来,名字空间 `test` 中的 Pod 可以成功地解析 `data.prod` 或者 +概括起来,名字空间 _test_ 中的 Pod 可以成功地解析 `data.prod` 或者 `data.prod.svc.cluster.local`。 @@ -143,40 +146,39 @@ selection from the set. 
#### A/AAAA 记录 {#a-aaaa-records} -“普通” Service(除了无头 Service)会以 `my-svc.my-namespace.svc.cluster-domain.example` -这种名字的形式被分配一个 DNS A 或 AAAA 记录,取决于 Service 的 IP 协议族。 +除了无头 Service 之外的 “普通” Service 会被赋予一个形如 `my-svc.my-namespace.svc.cluster-domain.example` +的 DNS A 和/或 AAAA 记录,取决于 Service 的 IP 协议族(可能有多个)设置。 该名称会解析成对应 Service 的集群 IP。 -“无头(Headless)” Service (没有集群 IP)也会以 -`my-svc.my-namespace.svc.cluster-domain.example` 这种名字的形式被指派一个 DNS A 或 AAAA 记录, -具体取决于 Service 的 IP 协议族。 +没有集群 IP 的[无头 Service](/zh-cn/docs/concepts/services-networking/service/#headless-services) +也会被赋予一个形如 `my-svc.my-namespace.svc.cluster-domain.example` 的 DNS A 和/或 AAAA 记录。 与普通 Service 不同,这一记录会被解析成对应 Service 所选择的 Pod IP 的集合。 客户端要能够使用这组 IP,或者使用标准的轮转策略从这组 IP 中进行选择。 #### SRV 记录 {#srv-records} -Kubernetes 根据普通 Service 或 -[Headless Service](/zh-cn/docs/concepts/services-networking/service/#headless-services) +Kubernetes 根据普通 Service 或无头 Service 中的命名端口创建 SRV 记录。每个命名端口, -SRV 记录格式为 `_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster-domain.example`。 +SRV 记录格式为 `_port-name._port-protocol.my-svc.my-namespace.svc.cluster-domain.example`。 普通 Service,该记录会被解析成端口号和域名:`my-svc.my-namespace.svc.cluster-domain.example`。 无头 Service,该记录会被解析成多个结果,及该服务的每个后端 Pod 各一个 SRV 记录, -其中包含 Pod 端口号和格式为 `auto-generated-name.my-svc.my-namespace.svc.cluster-domain.example` +其中包含 Pod 端口号和格式为 `hostname.my-svc.my-namespace.svc.cluster-domain.example` 的域名。 + ## Pod +### Pod 的 hostname 和 subdomain 字段 {#pod-s-hostname-and-subdomain-fields} -The Pod spec also has an optional `subdomain` field which can be used to specify -its subdomain. For example, a Pod with `hostname` set to "`foo`", and `subdomain` -set to "`bar`", in namespace "`my-namespace`", will have the fully qualified -domain name (FQDN) "`foo.bar.my-namespace.svc.cluster-domain.example`". 
+当前,创建 Pod 时其主机名(从 Pod 内部观察)取自 Pod 的 `metadata.name` 值。 -Example: + -### Pod 的 hostname 和 subdomain 字段 {#pod-s-hostname-and-subdomain-fields} -当前,创建 Pod 时其主机名取自 Pod 的 `metadata.name` 值。 +Pod 规约中包含一个可选的 `hostname` 字段,可以用来指定一个不同的主机名。 +当这个字段被设置时,它将优先于 Pod 的名字成为该 Pod 的主机名(同样是从 Pod 内部观察)。 +举个例子,给定一个 `spec.hostname` 设置为 `“my-host”` 的 Pod, +该 Pod 的主机名将被设置为 `“my-host”`。 + + -Pod 规约中包含一个可选的 `hostname` 字段,可以用来指定 Pod 的主机名。 -当这个字段被设置时,它将优先于 Pod 的名字成为该 Pod 的主机名。 -举个例子,给定一个 `hostname` 设置为 "`my-host`" 的 Pod, -该 Pod 的主机名将被设置为 "`my-host`"。 +Pod 规约还有一个可选的 `subdomain` 字段,可以用来表明该 Pod 是名字空间的子组的一部分。 +举个例子,某 Pod 的 `spec.hostname` 设置为 `“foo”`,`spec.subdomain` 设置为 `“bar”`, +在名字空间 `“my-namespace”` 中,主机名称被设置成 `“foo”` 并且对应的完全限定域名(FQDN)为 +“`foo.bar.my-namespace.svc.cluster-domain.example`”(还是从 Pod 内部观察)。 -Pod 规约还有一个可选的 `subdomain` 字段,可以用来指定 Pod 的子域名。 -举个例子,某 Pod 的 `hostname` 设置为 “`foo`”,`subdomain` 设置为 “`bar`”, -在名字空间 “`my-namespace`” 中对应的完全限定域名(FQDN)为 -“`foo.bar.my-namespace.svc.cluster-domain.example`”。 + +如果 Pod 所在的名字空间中存在一个无头服务,其名称与子域相同, +则集群的 DNS 服务器还会为 Pod 的完全限定主机名返回 A 和/或 AAAA 记录。 示例: @@ -247,7 +264,7 @@ Pod 规约还有一个可选的 `subdomain` 字段,可以用来指定 Pod 的 apiVersion: v1 kind: Service metadata: - name: default-subdomain + name: busybox-subdomain spec: selector: name: busybox @@ -255,7 +272,6 @@ spec: ports: - name: foo # 实际上不需要指定端口号 port: 1234 - targetPort: 1234 --- apiVersion: v1 kind: Pod @@ -265,7 +281,7 @@ metadata: name: busybox spec: hostname: busybox-1 - subdomain: default-subdomain + subdomain: busybox-subdomain containers: - image: busybox:1.28 command: @@ -281,7 +297,7 @@ metadata: name: busybox spec: hostname: busybox-2 - subdomain: default-subdomain + subdomain: busybox-subdomain containers: - image: busybox:1.28 command: @@ -291,24 +307,16 @@ spec: ``` -如果某无头 Service 与某 Pod 在同一个名字空间中,且它们具有相同的子域名, -集群的 DNS 服务器也会为该 Pod 的全限定主机名返回 A 记录或 AAAA 记录。 -例如,在同一个名字空间中,给定一个主机名为 “busybox-1”、 -子域名设置为 “default-subdomain” 的 Pod,和一个名称为 “`default-subdomain`” -的无头 Service,Pod 将看到自己的 FQDN 为 -"`busybox-1.default-subdomain.my-namespace.svc.cluster-domain.example`"。 -DNS 会为此名字提供一个 A 记录或 AAAA 记录,指向该 Pod 的 IP。 -“`busybox1`” 和 “`busybox2`” 这两个 Pod 分别具有它们自己的 A 或 AAAA 记录。 +鉴于上述服务 `“busybox-subdomain”` 和将 `spec.subdomain` 设置为 `“busybox-subdomain”` 的 Pod, +第一个 Pod 将看到自己的 FQDN 为 `“busybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example”`。 +DNS 会为此名字提供一个 A 记录和/或 AAAA 记录,指向该 Pod 的 IP。 +Pod “`busybox1`” 和 “`busybox2`” 都将有自己的地址记录。 {{< note >}} -由于不是为 Pod 名称创建 A 或 AAAA 记录的,因此 Pod 的 A 或 AAAA 需要 `hostname`。 +由于 A 和 AAAA 记录不是基于 Pod 名称创建,因此需要设置了 `hostname` 才会生成 Pod 的 A 或 AAAA 记录。 没有设置 `hostname` 但设置了 `subdomain` 的 Pod 只会为 -无头 Service 创建 A 或 AAAA 记录(`default-subdomain.my-namespace.svc.cluster-domain.example`) +无头 Service 创建 A 或 AAAA 记录(`busybox-subdomain.my-namespace.svc.cluster-domain.example`) 指向 Pod 的 IP 地址。 -另外,除非在服务上设置了 `publishNotReadyAddresses=True`,否则只有 Pod 进入就绪状态 +另外,除非在服务上设置了 `publishNotReadyAddresses=True`,否则只有 Pod 准备就绪 才会有与之对应的记录。 {{< /note >}} @@ -341,12 +349,16 @@ record unless `publishNotReadyAddresses=True` is set on the Service. 
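The note above states that per-Pod records are only published for ready Pods unless `publishNotReadyAddresses` is set on the Service. Applied to the `busybox-subdomain` Service, that would look roughly like this (a sketch, not an addition to the manifest above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: busybox-subdomain
spec:
  selector:
    name: busybox
  clusterIP: None                  # headless Service
  publishNotReadyAddresses: true   # also publish records for Pods that are not yet ready
  ports:
    - name: foo
      port: 1234
```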
{{< feature-state for_k8s_version="v1.22" state="stable" >}} 当 Pod 配置为具有全限定域名 (FQDN) 时,其主机名是短主机名。 -例如,如果你有一个具有完全限定域名 `busybox-1.default-subdomain.my-namespace.svc.cluster-domain.example` 的 Pod, +例如,如果你有一个具有完全限定域名 `busybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example` 的 Pod, 则默认情况下,该 Pod 内的 `hostname` 命令返回 `busybox-1`,而 `hostname --fqdn` 命令返回 FQDN。 当你在 Pod 规约中设置了 `setHostnameAsFQDN: true` 时,kubelet 会将 Pod @@ -526,7 +538,7 @@ options ndots:2 edns0 ``` 对于 IPv6 设置,搜索路径和名称服务器应按以下方式设置: diff --git a/content/zh-cn/docs/concepts/services-networking/endpoint-slices.md b/content/zh-cn/docs/concepts/services-networking/endpoint-slices.md index 7831c400a5fe5..ec47a4954ed4e 100644 --- a/content/zh-cn/docs/concepts/services-networking/endpoint-slices.md +++ b/content/zh-cn/docs/concepts/services-networking/endpoint-slices.md @@ -24,13 +24,13 @@ description: >- {{< feature-state for_k8s_version="v1.21" state="stable" >}} -Kubernetes 的**端点切片(EndpointSlices)** 提供了一种简单的方法来跟踪 +Kubernetes 的 _EndpointSlice_ API 提供了一种简单的方法来跟踪 Kubernetes 集群中的网络端点(network endpoints)。EndpointSlices 为 -Endpoints(/zh-cn/docs/concepts/services-networking/service/#endpoints) +[Endpoints](/zh-cn/docs/concepts/services-networking/service/#endpoints) 提供了一种可扩缩和可拓展的替代方案。 @@ -500,11 +500,12 @@ EndpointSlices 还支持围绕双栈网络和拓扑感知路由等新功能的 ## {{% heading "whatsnext" %}} -* 遵循[使用 Service 连接到应用](/zh-cn/docs/concepts/services-networking/connect-applications-service/)教程 +* 遵循[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/)教程 * 阅读 EndpointSlice API 的 [API 参考](/zh-cn/docs/reference/kubernetes-api/service-resources/endpoint-slice-v1/) * 阅读 Endpoints API 的 [API 参考](/zh-cn/docs/reference/kubernetes-api/service-resources/endpoints-v1/) + diff --git a/content/zh-cn/docs/concepts/services-networking/ingress-controllers.md b/content/zh-cn/docs/concepts/services-networking/ingress-controllers.md index 599190fec29fa..36ea9abd1182e 100644 --- a/content/zh-cn/docs/concepts/services-networking/ingress-controllers.md +++ b/content/zh-cn/docs/concepts/services-networking/ingress-controllers.md @@ -5,7 +5,7 @@ description: >- 必须有一个 Ingress 控制器正在运行。你需要选择至少一个 Ingress 控制器并确保其已被部署到你的集群中。 本页列出了你可以部署的常见 Ingress 控制器。 content_type: concept -weight: 30 +weight: 50 --- @@ -136,6 +136,7 @@ Kubernetes 作为一个项目,目前支持和维护 * [Tyk Operator](https://github.com/TykTechnologies/tyk-operator) extends Ingress with Custom Resources to bring API Management capabilities to Ingress. Tyk Operator works with the Open Source Tyk Gateway & Tyk Cloud control plane. * [Voyager](https://appscode.com/products/voyager) is an ingress controller for [HAProxy](https://www.haproxy.org/#desc). +* [Wallarm Ingress Controller](https://www.wallarm.com/solutions/waf-for-kubernetes) is an Ingress Controller that provides WAAP (WAF) and API Security capabilities. 
--> * [Traefik Kubernetes Ingress 提供程序](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) 是一个用于 [Traefik](https://traefik.io/traefik/) 代理的 Ingress 控制器。 @@ -144,6 +145,8 @@ Kubernetes 作为一个项目,目前支持和维护 使用开源的 Tyk Gateway & Tyk Cloud 控制面。 * [Voyager](https://appscode.com/products/voyager) 是一个针对 [HAProxy](https://www.haproxy.org/#desc) 的 Ingress 控制器。 +* [Wallarm Ingress Controller](https://www.wallarm.com/solutions/waf-for-kubernetes) 是提供 WAAP(WAF) + 和 API 安全功能的 Ingress Controller。 你可以使用 [Ingress 类](/zh-cn/docs/concepts/services-networking/ingress/#ingress-class)在集群中部署任意数量的 diff --git a/content/zh-cn/docs/concepts/services-networking/ingress.md b/content/zh-cn/docs/concepts/services-networking/ingress.md index 2ee41f688fa58..04cb94245b913 100644 --- a/content/zh-cn/docs/concepts/services-networking/ingress.md +++ b/content/zh-cn/docs/concepts/services-networking/ingress.md @@ -54,13 +54,13 @@ For clarity, this guide defines the following terms: ## Ingress 是什么? {#what-is-ingress} -[Ingress](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1beta1-networking-k8s-io) +[Ingress](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1-networking-k8s-io) 公开从集群外部到集群内[服务](/zh-cn/docs/concepts/services-networking/service/)的 HTTP 和 HTTPS 路由。 流量路由由 Ingress 资源上定义的规则控制。 @@ -73,7 +73,7 @@ Here is a simple example where an Ingress sends all its traffic to one Service: {{< figure src="/zh-cn/docs/images/ingress.svg" alt="ingress-diagram" class="diagram-large" caption="图. Ingress" link="https://mermaid.live/edit#pako:eNqNkktLAzEQgP9KSC8Ku6XWBxKlJz0IHsQeuz1kN7M2uC-SrA9sb6X26MFLFZGKoCC0CIIn_Td1139halZq8eJlE2a--TI7yRn2YgaYYCc6EDRpod39DSdCyAs4RGqhMRndffRfs6dxc9Euox0NgZR2NhpmF73sqos2XVFD-ctt_vY2uTnPh8PJ4BGV7Ro3ZKOoaH5Li6Bt19r56zi7fM4fupP-oC1BHHEPGnWzGlimruno87qXvd__qjdpw2pXErOlxl7Mmn_j1VkcImb-i0q5BT5KAsoj5PMgICXGmCWViA-BlHzfL_b2MWeqRVaSE8uLg1iQUqVS2ZiTHK7LQrFcXfNg9V8WnZu3eEEqFYjCNCslJdd15zXVmcacODP9TMcqJmBN5zL9VKdt_uLM1ZoBzIVNF8WqM06ELRyCCCln-oWcTVkHqxaE4GCitwx8mgbK0Y-no9E0YVTBNuMqFpj4NJBgYZqquH4aeZgokcIPtMWpvtywoDpfU3_yww" >}} Ingress 可为 Service 提供外部可访问的 URL、负载均衡流量、终止 SSL/TLS,以及基于名称的虚拟托管。 [Ingress 控制器](/zh-cn/docs/concepts/services-networking/ingress-controllers) @@ -93,7 +93,7 @@ Ingress 不会公开任意端口或协议。 ## 环境准备 @@ -121,7 +121,7 @@ Make sure you review your Ingress controller's documentation to understand the c {{< /note >}} @@ -194,18 +194,23 @@ Each HTTP rule contains the following information: * An optional host. In this example, no host is specified, so the rule applies to all inbound HTTP traffic through the IP address specified. If a host is provided (for example, foo.bar.com), the rules apply to that host. -* A list of paths (for example, `/testpath`), each of which has an associated backend defined with a `serviceName` - and `servicePort`. Both the host and path must match the content of an incoming request before the - load balancer directs traffic to the referenced Service. +* A list of paths (for example, `/testpath`), each of which has an associated + backend defined with a `service.name` and a `service.port.name` or + `service.port.number`. Both the host and path must match the content of an + incoming request before the load balancer directs traffic to the referenced + Service. * A backend is a combination of Service and port names as described in the - [Service doc](/docs/concepts/services-networking/service/). 
HTTP (and HTTPS) requests to the + [Service doc](/docs/concepts/services-networking/service/) or a [custom resource backend](#resource-backend) by way of a {{< glossary_tooltip term_id="CustomResourceDefinition" text="CRD" >}}. HTTP (and HTTPS) requests to the Ingress that match the host and path of the rule are sent to the listed backend. --> * 可选的 `host`。在此示例中,未指定 `host`,因此该规则适用于通过指定 IP 地址的所有入站 HTTP 通信。 如果提供了 `host`(例如 foo.bar.com),则 `rules` 适用于该 `host`。 -* 路径列表 paths(例如,`/testpath`),每个路径都有一个由 `serviceName` 和 `servicePort` 定义的关联后端。 +* 路径列表(例如 `/testpath`),每个路径都有一个由 `service.name` 和 `service.port.name` + 或 `service.port.number` 定义的关联后端。 在负载均衡器将流量定向到引用的服务之前,主机和路径都必须匹配传入请求的内容。 -* `backend`(后端)是 [Service 文档](/zh-cn/docs/concepts/services-networking/service/)中所述的服务和端口名称的组合。 +* `backend`(后端)是 [Service 文档](/zh-cn/docs/concepts/services-networking/service/)中所述的服务和端口名称的组合, + 或者是通过 {{< glossary_tooltip term_id="CustomResourceDefinition" text="CRD" >}} + 方式来实现的[自定义资源后端](#resource-backend)。 与规则的 `host` 和 `path` 匹配的对 Ingress 的 HTTP(和 HTTPS )请求将发送到列出的 `backend`。 {{< caution >}} 如果集群中有多个 IngressClass 被标记为默认,准入控制器将阻止创建新的未指定 @@ -743,12 +747,12 @@ Ingress 控制器将提供实现特定的负载均衡器来满足 Ingress, 当它这样做时,你会在 Address 字段看到负载均衡器的地址。 {{< note >}} -取决于你所使用的 [Ingress 控制器](/zh-cn/docs/concepts/services-networking/ingress-controllers), +取决于你所使用的 [Ingress 控制器](/zh-cn/docs/concepts/services-networking/ingress-controllers/), 你可能需要创建默认 HTTP 后端[服务](/zh-cn/docs/concepts/services-networking/service/)。 {{< /note >}} @@ -863,7 +867,7 @@ platform specific Ingress controller to understand how TLS works in your environ {{< /note >}} 值得注意的是,尽管健康检查不是通过 Ingress 直接暴露的,在 Kubernetes 中存在并行的概念,比如 [就绪检查](/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/), 允许你实现相同的目的。 -请检查特定控制器的说明文档([nginx](https://git.k8s.io/ingress-nginx/README.md)、 +请检查特定控制器的说明文档(例如:[nginx](https://git.k8s.io/ingress-nginx/README.md)、 [GCE](https://git.k8s.io/ingress-gce/README.md#health-checks))以了解它们是怎样处理健康检查的。 ## 跨可用区失败 {#failing-across-availability-zones} @@ -1031,7 +1039,7 @@ You can expose a Service in multiple ways that don't directly involve the Ingres * 进一步了解 [Ingress](/zh-cn/docs/reference/kubernetes-api/service-resources/ingress-v1/) API diff --git a/content/zh-cn/docs/concepts/services-networking/network-policies.md b/content/zh-cn/docs/concepts/services-networking/network-policies.md index 15f89295d03e3..d6d9c6be3c526 100644 --- a/content/zh-cn/docs/concepts/services-networking/network-policies.md +++ b/content/zh-cn/docs/concepts/services-networking/network-policies.md @@ -9,6 +9,10 @@ description: >- --- 如果你希望在 IP 地址或端口层面(OSI 第 3 层或第 4 层)控制网络流量, 则你可以考虑为集群中特定应用使用 Kubernetes 网络策略(NetworkPolicy)。 @@ -30,14 +41,16 @@ NetworkPolicy 是一种以应用为中心的结构,允许你设置如何允许 {{< glossary_tooltip text="Pod" term_id="pod">}} 与网络上的各类网络“实体” (我们这里使用实体以避免过度使用诸如“端点”和“服务”这类常用术语, 这些术语在 Kubernetes 中有特定含义)通信。 -NetworkPolicies 适用于一端或两端与 Pod 的连接,与其他连接无关。 +NetworkPolicy 适用于一端或两端与 Pod 的连接,与其他连接无关。 Pod 可以通信的 Pod 是通过如下三个标识符的组合来辩识的: @@ -47,7 +60,9 @@ Pod 可以通信的 Pod 是通过如下三个标识符的组合来辩识的: 无论 Pod 或节点的 IP 地址) @@ -60,7 +75,9 @@ Meanwhile, when IP based NetworkPolicies are created, we define policies based o ## 前置条件 {#prerequisites} @@ -71,9 +88,12 @@ Network policies are implemented by the [network plugin](/docs/concepts/extend-k - ## Pod 隔离的两种类型 {#the-two-sorts-of-pod-isolation} Pod 有两种隔离: 出口的隔离和入口的隔离。它们涉及到可以建立哪些连接。 @@ -82,9 +102,13 @@ Pod 有两种隔离: 出口的隔离和入口的隔离。它们涉及到可以 并且都与从一个 Pod 到另一个 Pod 的连接有关。 - 默认情况下,一个 Pod 的出口是非隔离的,即所有外向连接都是被允许的。如果有任何的 NetworkPolicy 选择该 Pod 并在其 `policyTypes` 
中包含 “Egress”,则该 Pod 是出口隔离的, 我们称这样的策略适用于该 Pod 的出口。当一个 Pod 的出口被隔离时, @@ -92,9 +116,13 @@ By default, a pod is non-isolated for egress; all outbound connections are allow 这些 `egress` 列表的效果是相加的。 - 默认情况下,一个 Pod 对入口是非隔离的,即所有入站连接都是被允许的。如果有任何的 NetworkPolicy 选择该 Pod 并在其 `policyTypes` 中包含 “Ingress”,则该 Pod 被隔离入口, 我们称这种策略适用于该 Pod 的入口。当一个 Pod 的入口被隔离时,唯一允许进入该 Pod @@ -102,11 +130,14 @@ By default, a pod is non-isolated for ingress; all inbound connections are allow 列表所允许的连接。这些 `ingress` 列表的效果是相加的。 - 网络策略是相加的,所以不会产生冲突。如果策略适用于 Pod 某一特定方向的流量, Pod 在对应方向所允许的连接是适用的网络策略所允许的集合。 因此,评估的顺序不影响策略的结果。 @@ -117,7 +148,8 @@ Pod 在对应方向所允许的连接是适用的网络策略所允许的集合 @@ -130,23 +162,26 @@ An example NetworkPolicy might look like this: {{< codenew file="service/networking/networkpolicy.yaml" >}} - {{< note >}} + 除非选择支持网络策略的网络解决方案,否则将上述示例发送到API服务器没有任何效果。 {{< /note >}} **必需字段**:与所有其他的 Kubernetes 配置一样,NetworkPolicy 需要 `apiVersion`、 `kind` 和 `metadata` 字段。关于配置文件操作的一般信息, @@ -161,13 +196,21 @@ __podSelector__: Each NetworkPolicy includes a `podSelector` which selects the g 空的 `podSelector` 选择名字空间下的所有 Pod。 - **policyTypes**:每个 NetworkPolicy 都包含一个 `policyTypes` 列表,其中包含 `Ingress` 或 `Egress` 或两者兼具。`policyTypes` 字段表示给定的策略是应用于进入所选 Pod 的入站流量还是来自所选 Pod 的出站流量,或两者兼有。 @@ -186,28 +229,34 @@ Pod 的入站流量还是来自所选 Pod 的出站流量,或两者兼有。 所以,该网络策略示例: -1. 隔离 "default" 名字空间下 "role=db" 的 Pod (如果它们不是已经被隔离的话)。 -2. (Ingress 规则)允许以下 Pod 连接到 "default" 名字空间下的带有 "role=db" +1. 隔离 `default` 名字空间下 `role=db` 的 Pod (如果它们不是已经被隔离的话)。 +2. (Ingress 规则)允许以下 Pod 连接到 `default` 名字空间下的带有 `role=db` 标签的所有 Pod 的 6379 TCP 端口: - * "default" 名字空间下带有 "role=frontend" 标签的所有 Pod - * 带有 "project=myproject" 标签的所有名字空间中的 Pod + * `default` 名字空间下带有 `role=frontend` 标签的所有 Pod + * 带有 `project=myproject` 标签的所有名字空间中的 Pod * IP 地址范围为 172.17.0.0–172.17.0.255 和 172.17.2.0–172.17.255.255 (即,除了 172.17.1.0/24 之外的所有 172.17.0.0/16) -3. (Egress 规则)允许 “default” 命名空间中任何带有标签 “role=db” 的 Pod 到 CIDR +3. (Egress 规则)允许 `default` 名字空间中任何带有标签 `role=db` 的 Pod 到 CIDR 10.0.0.0/24 下 5978 TCP 端口的连接。 参阅[声明网络策略](/zh-cn/docs/tasks/administer-cluster/declare-network-policy/)演练了解更多示例。 @@ -215,13 +264,18 @@ See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network- ## 选择器 `to` 和 `from` 的行为 {#behavior-of-to-and-from-selectors} @@ -251,9 +305,10 @@ Pod,应将其允许作为入站流量来源或出站流量目的地。 ``` -在 `from` 数组中仅包含一个元素,只允许来自标有 `role=client` 的 Pod +This policy contains a single `from` element allowing connections from Pods with the label +`role=client` in namespaces with the label `user=alice`. But the following policy is different: +--> +此策略在 `from` 数组中仅包含一个元素,只允许来自标有 `role=client` 的 Pod 且该 Pod 所在的名字空间中标有 `user=alice` 的连接。但是 **这项** 策略: ```yaml @@ -270,16 +325,19 @@ contains a single `from` element allowing connections from Pods with the label ` ``` -在 `from` 数组中包含两个元素,允许来自本地名字空间中标有 `role=client` 的 +它在 `from` 数组中包含两个元素,允许来自本地名字空间中标有 `role=client` 的 Pod 的连接,**或** 来自任何名字空间中标有 `user=alice` 的任何 Pod 的连接。 ## 默认策略 {#default-policies} @@ -326,7 +385,8 @@ in that namespace. 
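To make the `to`/`from` selector behavior described just above easier to compare, here is a hedged sketch of the two `from` variants side by side. Only the labels `role=client` and `user=alice` come from the surrounding text; the policy names, the empty `podSelector`, and every other field are illustrative assumptions rather than the exact manifests referenced on this page.

```yaml
# Variant 1: ONE `from` element combining namespaceSelector AND podSelector.
# Only Pods labeled role=client that also live in a namespace labeled
# user=alice may connect.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-alice-clients            # illustrative name
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              user: alice
          podSelector:
            matchLabels:
              role: client
---
# Variant 2: TWO `from` elements, evaluated as an OR.
# Pods labeled role=client in the policy's own namespace, OR any Pod in a
# namespace labeled user=alice, may connect.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-clients-or-alice         # illustrative name
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              user: alice
        - podSelector:
            matchLabels:
              role: client
```

The only structural difference is whether `podSelector` starts a new list entry under `from`: in the first variant the two selectors are ANDed, in the second they are ORed.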
### 默认拒绝所有入站流量 {#default-deny-all-ingress-traffic} 你可以通过创建选择所有容器但不允许任何进入这些容器的入站流量的 NetworkPolicy 来为名字空间创建 “default” 隔离策略。 @@ -334,7 +394,8 @@ You can create a "default" ingress isolation policy for a namespace by creating {{< codenew file="service/networking/network-policy-default-deny-ingress.yaml" >}} 这确保即使没有被任何其他 NetworkPolicy 选择的 Pod 仍将被隔离以进行入口。 此策略不影响任何 Pod 的出口隔离。 @@ -345,14 +406,16 @@ This ensures that even pods that aren't selected by any other NetworkPolicy will ### 允许所有入站流量 {#allow-all-ingress-traffic} -如果你想允许一个命名空间中所有 Pod 的所有入站连接,你可以创建一个明确允许的策略。 +如果你想允许一个名字空间中所有 Pod 的所有入站连接,你可以创建一个明确允许的策略。 {{< codenew file="service/networking/network-policy-allow-all-ingress.yaml" >}} 有了这个策略,任何额外的策略都不会导致到这些 Pod 的任何入站连接被拒绝。 此策略对任何 Pod 的出口隔离没有影响。 @@ -360,7 +423,8 @@ With this policy in place, no additional policy or policies can cause any incomi ### 默认拒绝所有出站流量 {#default-deny-all-egress-traffic} @@ -370,8 +434,8 @@ You can create a "default" egress isolation policy for a namespace by creating a {{< codenew file="service/networking/network-policy-default-deny-egress.yaml" >}} 此策略可以确保即使没有被其他任何 NetworkPolicy 选择的 Pod 也不会被允许流出流量。 此策略不会更改任何 Pod 的入站流量隔离行为。 @@ -382,15 +446,17 @@ change the ingress isolation behavior of any pod. ### 允许所有出站流量 {#allow-all-egress-traffic} -如果要允许来自命名空间中所有 Pod 的所有连接, -则可以创建一个明确允许来自该命名空间中 Pod 的所有出站连接的策略。 +如果要允许来自名字空间中所有 Pod 的所有连接, +则可以创建一个明确允许来自该名字空间中 Pod 的所有出站连接的策略。 {{< codenew file="service/networking/network-policy-allow-all-egress.yaml" >}} 有了这个策略,任何额外的策略都不会导致来自这些 Pod 的任何出站连接被拒绝。 此策略对进入任何 Pod 的隔离没有影响。 @@ -398,7 +464,8 @@ With this policy in place, no additional policy or policies can cause any outgoi ### 默认拒绝所有入站和所有出站流量 {#default-deny-all-ingress-and-all-egress-traffic} @@ -408,7 +475,8 @@ You can create a "default" policy for a namespace which prevents all ingress AND {{< codenew file="service/networking/network-policy-default-deny-all.yaml" >}} 此策略可以确保即使没有被其他任何 NetworkPolicy 选择的 Pod 也不会被允许入站或出站流量。 @@ -420,7 +488,10 @@ This ensures that even pods that aren't selected by any other NetworkPolicy will {{< feature-state for_k8s_version="v1.20" state="stable" >}} 作为一个稳定特性,SCTP 支持默认是被启用的。 @@ -431,9 +502,10 @@ When the feature gate is enabled, you can set the `protocol` field of a NetworkP {{< note >}} -你必须使用支持 SCTP 协议网络策略的 {{< glossary_tooltip text="CNI" term_id="cni" >}} 插件。 +You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin that supports SCTP +protocol NetworkPolicies. +--> +你必须使用支持 SCTP 协议 NetworkPolicy 的 {{< glossary_tooltip text="CNI" term_id="cni" >}} 插件。 {{< /note >}} 上面的规则允许名字空间 `default` 中所有带有标签 `role=db` 的 Pod 使用 TCP 协议与 @@ -484,6 +536,7 @@ port is between the range 32000 and 32768. 
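The wording above about allowing traffic when the "port is between the range 32000 and 32768" relies on the `endPort` field of a NetworkPolicy port entry. The following is a hedged reconstruction from that description: the `default` namespace, the `role=db` label, TCP, and the 32000–32768 range come from the text, while the policy name and the 10.0.0.0/24 destination are assumptions added for illustration.

```yaml
# A hedged sketch of an egress rule that targets a whole port range via endPort.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: multi-port-egress              # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24          # assumed destination, for illustration only
      ports:
        - protocol: TCP
          port: 32000                  # start of the range
          endPort: 32768               # any TCP port from 32000 through 32768 is allowed
```

Note that `endPort` is only meaningful when `port` is a number rather than a named port, which is why the range is expressed with two numeric fields.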
@@ -536,7 +589,12 @@ Kubernetes 控制面会在所有名字空间上设置一个不可变更的标签 ## 通过网络策略(至少目前还)无法完成的工作 {#what-you-can-t-do-with-network-policies-at-least-not-yet} @@ -547,10 +605,13 @@ As of Kubernetes {{< skew currentVersion >}}, the following functionality does n 还无法实现下面的用户场景是很值得的。 - 强制集群内部流量经过某公用网关(这种场景最好通过服务网格或其他代理来实现); @@ -561,11 +622,14 @@ As of Kubernetes {{< skew currentVersion >}}, the following functionality does n 来选择目标 Pod 或名字空间,这也通常是一种可靠的替代方案); - 创建或管理由第三方来实际完成的“策略请求”; - 实现适用于所有名字空间或 Pods 的默认策略(某些第三方 Kubernetes 发行版本或项目可以做到这点); - 高级的策略查询或者可达性相关工具; @@ -580,7 +644,8 @@ As of Kubernetes {{< skew currentVersion >}}, the following functionality does n - 参阅[声明网络策略](/zh-cn/docs/tasks/administer-cluster/declare-network-policy/)演练了解更多示例; - 有关 NetworkPolicy 资源所支持的常见场景的更多信息, diff --git a/content/zh-cn/docs/concepts/services-networking/service-traffic-policy.md b/content/zh-cn/docs/concepts/services-networking/service-traffic-policy.md index f822f32eb013e..dfe52073367a7 100644 --- a/content/zh-cn/docs/concepts/services-networking/service-traffic-policy.md +++ b/content/zh-cn/docs/concepts/services-networking/service-traffic-policy.md @@ -1,7 +1,7 @@ --- title: 服务内部流量策略 content_type: concept -weight: 75 +weight: 120 description: >- 如果集群中的两个 Pod 想要通信,并且两个 Pod 实际上都在同一节点运行, **服务内部流量策略** 可以将网络流量限制在该节点内。 @@ -13,7 +13,7 @@ reviewers: - maplain title: Service Internal Traffic Policy content_type: concept -weight: 75 +weight: 120 description: >- If two Pods in your cluster want to communicate, and both Pods are actually running on the same node, _Service Internal Traffic Policy_ to keep network traffic within that node. @@ -24,7 +24,7 @@ description: >- -{{< feature-state for_k8s_version="v1.23" state="beta" >}} +{{< feature-state for_k8s_version="v1.26" state="stable" >}} ## 使用服务内部流量策略 {#using-service-internal-traffic-policy} - -`ServiceInternalTrafficPolicy` -[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) 是 Beta 功能,默认启用。 -启用该功能后,你就可以通过将 {{< glossary_tooltip text="Service" term_id="service" >}} 的 +你可以通过将 {{< glossary_tooltip text="Service" term_id="service" >}} 的 `.spec.internalTrafficPolicy` 项设置为 `Local`, 来为它指定一个内部专用的流量策略。 -此设置就相当于告诉 kube-proxy 对于集群内部流量只能使用本地的服务端口。 +此设置就相当于告诉 kube-proxy 对于集群内部流量只能使用节点本地的服务端口。 ## 工作原理 {#how-it-works} - kube-proxy 基于 `spec.internalTrafficPolicy` 的设置来过滤路由的目标服务端点。 -当它的值设为 `Local` 时,只选择节点本地的服务端点。 -当它的值设为 `Cluster` 或缺省时,则选择所有的服务端点。 -启用[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) -`ServiceInternalTrafficPolicy` 后, -`spec.internalTrafficPolicy` 的值默认设为 `Cluster`。 +当它的值设为 `Local` 时,只会选择节点本地的服务端点。 +当它的值设为 `Cluster` 或缺省时,Kubernetes 会选择所有的服务端点。 ## {{% heading "whatsnext" %}} * 请阅读[拓扑感知提示](/zh-cn/docs/concepts/services-networking/topology-aware-hints) * 请阅读 [Service 的外部流量策略](/zh-cn/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip) -* 请阅读[用 Service 连接应用](/zh-cn/docs/concepts/services-networking/connect-applications-service/) +* 遵循[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/)教程 \ No newline at end of file diff --git a/content/zh-cn/docs/concepts/services-networking/service.md b/content/zh-cn/docs/concepts/services-networking/service.md index db8907865d028..a7529a2c92f4d 100644 --- a/content/zh-cn/docs/concepts/services-networking/service.md +++ b/content/zh-cn/docs/concepts/services-networking/service.md @@ -3,7 +3,7 @@ title: 服务(Service) feature: title: 服务发现与负载均衡 description: > - 无需修改你的应用程序即可使用陌生的服务发现机制。Kubernetes 为容器提供了自己的 IP 地址和一个 DNS 名称,并且可以在它们之间实现负载均衡。 + 
无需修改你的应用程序去使用陌生的服务发现机制。Kubernetes 为容器提供了自己的 IP 地址和一个 DNS 名称,并且可以在它们之间实现负载均衡。 description: >- 将在集群中运行的应用程序暴露在单个外向端点后面,即使工作负载分散到多个后端也是如此。 content_type: concept @@ -33,7 +33,7 @@ With Kubernetes you don't need to modify your application to use an unfamiliar s Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them. --> -使用 Kubernetes,你无需修改应用程序即可使用不熟悉的服务发现机制。 +使用 Kubernetes,你无需修改应用程序去使用不熟悉的服务发现机制。 Kubernetes 为 Pod 提供自己的 IP 地址,并为一组 Pod 提供相同的 DNS 名, 并且可以在它们之间进行负载均衡。 @@ -248,7 +248,7 @@ As many Services need to expose more than one port, Kubernetes supports multiple port definitions on a Service object. Each port definition can have the same `protocol`, or a different one. --> -服务的默认协议是 TCP(/zh-cn/docs/reference/networking/service-protocols/#protocol-tcp); +服务的默认协议是 [TCP](/zh-cn/docs/reference/networking/service-protocols/#protocol-tcp); 你还可以使用任何其他[受支持的协议](/zh-cn/docs/reference/networking/service-protocols/)。 由于许多服务需要公开多个端口,因此 Kubernetes 在服务对象上支持多个端口定义。 @@ -305,12 +305,12 @@ spec: 由于此服务没有选择算符,因此不会自动创建相应的 EndpointSlice(和旧版 Endpoint)对象。 -你可以通过手动添加 EndpointSlice 对象,将服务手动映射到运行该服务的网络地址和端口: +你可以通过手动添加 EndpointSlice 对象,将服务映射到运行该服务的网络地址和端口: ```yaml apiVersion: discovery.k8s.io/v1 @@ -402,6 +402,18 @@ the EndpointSlice manifest: a TCP connection to 10.1.2.3 or 10.4.5.6, on port 93 流量被路由到 EndpointSlice 清单中定义的两个端点之一: 通过 TCP 协议连接到 10.1.2.3 或 10.4.5.6 的端口 9376。 +{{< note >}} + +Kubernetes API 服务器不允许代理到未被映射至 Pod 上的端点。由于此约束,当 Service +没有选择算符时,诸如 `kubectl proxy ` 之类的操作将会失败。这可以防止 +Kubernetes API 服务器被用作调用者可能无权访问的端点的代理。 +{{< /note >}} + -#### 腾讯 Kubernetes 引擎(TKE)上的 CLB 注解 - -以下是在 TKE 上管理云负载均衡器的注解。 - -```yaml - metadata: - name: my-service - annotations: - # 绑定负载均衡器到指定的节点。 - service.kubernetes.io/qcloud-loadbalancer-backends-label: key in (value1, value2) - - # 为已有负载均衡器添加 ID。 - service.kubernetes.io/tke-existed-lbid:lb-6swtxxxx - - # 负载均衡器(LB)的自定义参数尚不支持修改 LB 类型。 - service.kubernetes.io/service.extensiveParameters: "" - - # 自定义负载均衡监听器。 - service.kubernetes.io/service.listenerParameters: "" - - # 指定负载均衡类型。 - # 可用参数: classic (Classic Cloud Load Balancer) 或 application (Application Cloud Load Balancer) - service.kubernetes.io/loadbalance-type: xxxxx - - # 指定公用网络带宽计费方法。 - # 可用参数: TRAFFIC_POSTPAID_BY_HOUR(bill-by-traffic) 和 BANDWIDTH_POSTPAID_BY_HOUR (bill-by-bandwidth). - service.kubernetes.io/qcloud-loadbalancer-internet-charge-type: xxxxxx - - # 指定带宽参数 (取值范围: [1,2000] Mbps). - service.kubernetes.io/qcloud-loadbalancer-internet-max-bandwidth-out: "10" - - # 当设置该注解时,负载均衡器将只注册正在运行 Pod 的节点, - # 否则所有节点将会被注册。 - service.kubernetes.io/local-svc-only-bind-node-with-pod: true -``` -* 参阅[通过服务连通应用](/zh-cn/docs/concepts/services-networking/connect-applications-service/) +* 参阅[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/)教程 diff --git a/content/zh-cn/docs/concepts/services-networking/windows-networking.md b/content/zh-cn/docs/concepts/services-networking/windows-networking.md index ea2df2e8154ec..9feb413991746 100644 --- a/content/zh-cn/docs/concepts/services-networking/windows-networking.md +++ b/content/zh-cn/docs/concepts/services-networking/windows-networking.md @@ -1,7 +1,7 @@ --- title: Windows 网络 content_type: concept -weight: 75 +weight: 110 --- 如果你选择使用 `WaitForFirstConsumer`,请不要在 Pod 规约中使用 `nodeName` 来指定节点亲和性。 如果在这种情况下使用 `nodeName`,Pod 将会绕过调度程序,PVC 将停留在 `pending` 状态。 - - 相反,在这种情况下,你可以使用节点选择器作为主机名,如下所示 + + 相反,在这种情况下,你可以使用节点选择器作为主机名,如下所示。 {{< /note >}} @@ -538,145 +536,6 @@ using `allowedTopologies`. 
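The note above ends with “如下所示” (as shown below), but the snippet it points to falls outside this diff's context. As a hedged sketch only — the node hostname `kube-01`, the PVC name, and the image are assumptions — a Pod that pins itself to a node via the hostname label instead of `nodeName` looks roughly like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod                      # illustrative name
spec:
  # A node selector on the well-known hostname label keeps the scheduler in the
  # loop, so a WaitForFirstConsumer PVC can still be bound; setting spec.nodeName
  # would bypass scheduling and leave the PVC pending.
  nodeSelector:
    kubernetes.io/hostname: kube-01      # assumed node hostname
  containers:
    - name: app
      image: nginx                       # placeholder image
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: task-pv-claim         # assumed claim using the WaitForFirstConsumer StorageClass
```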
`zone` 和 `zones` 已被弃用并被 [allowedTopologies](#allowed-topologies) 取代。 {{< /note >}} - -### Glusterfs(已弃用) {#glusterfs} - -```yaml -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: slow -provisioner: kubernetes.io/glusterfs -parameters: - resturl: "http://127.0.0.1:8081" - clusterid: "630372ccdc720a92c681fb928f27b53f" - restauthenabled: "true" - restuser: "admin" - secretNamespace: "default" - secretName: "heketi-secret" - gidMin: "40000" - gidMax: "50000" - volumetype: "replicate:3" -``` - - -* `resturl`:制备 gluster 卷的需求的 Gluster REST 服务/Heketi 服务 url。 - 通用格式应该是 `IPaddress:Port`,这是 GlusterFS 动态制备器的必需参数。 - 如果 Heketi 服务在 OpenShift/kubernetes 中安装并暴露为可路由服务,则可以使用类似于 - `http://heketi-storage-project.cloudapps.mystorage.com` 的格式,其中 fqdn 是可解析的 heketi 服务网址。 -* `restauthenabled`:Gluster REST 服务身份验证布尔值,用于启用对 REST 服务器的身份验证。 - 如果此值为 'true',则必须填写 `restuser` 和 `restuserkey` 或 `secretNamespace` + `secretName`。 - 此选项已弃用,当在指定 `restuser`、`restuserkey`、`secretName` 或 `secretNamespace` 时,身份验证被启用。 -* `restuser`:在 Gluster 可信池中有权创建卷的 Gluster REST服务/Heketi 用户。 -* `restuserkey`:Gluster REST 服务/Heketi 用户的密码将被用于对 REST 服务器进行身份验证。 - 此参数已弃用,取而代之的是 `secretNamespace` + `secretName`。 - - -* `secretNamespace`,`secretName`:Secret 实例的标识,包含与 Gluster - REST 服务交互时使用的用户密码。 - 这些参数是可选的,`secretNamespace` 和 `secretName` 都省略时使用空密码。 - 所提供的 Secret 必须将类型设置为 "kubernetes.io/glusterfs",例如以这种方式创建: - - ``` - kubectl create secret generic heketi-secret \ - --type="kubernetes.io/glusterfs" --from-literal=key='opensesame' \ - --namespace=default - ``` - - Secret 的例子可以在 - [glusterfs-provisioning-secret.yaml](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/glusterfs/glusterfs-secret.yaml) 中找到。 - - -* `clusterid`:`630372ccdc720a92c681fb928f27b53f` 是集群的 ID,当制备卷时, - Heketi 将会使用这个文件。它也可以是一个 clusterid 列表,例如: - `"8452344e2becec931ece4e33c4674e4e,42982310de6c63381718ccfa6d8cf397"`。这个是可选参数。 -* `gidMin`,`gidMax`:StorageClass GID 范围的最小值和最大值。 - 在此范围(gidMin-gidMax)内的唯一值(GID)将用于动态制备卷。这些是可选的值。 - 如果不指定,所制备的卷为一个 2000-2147483647 之间的值,这是 gidMin 和 - gidMax 的默认值。 - - -* `volumetype`:卷的类型及其参数可以用这个可选值进行配置。如果未声明卷类型,则由制备器决定卷的类型。 - - 例如: - * 'Replica volume':`volumetype: replicate:3` 其中 '3' 是 replica 数量。 - * 'Disperse/EC volume':`volumetype: disperse:4:2` 其中 '4' 是数据,'2' 是冗余数量。 - * 'Distribute volume':`volumetype: none` - - 有关可用的卷类型和管理选项, - 请参阅[管理指南](https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/)。 - - 更多相关的参考信息, - 请参阅[如何配置 Heketi](https://github.com/heketi/heketi/wiki/Setting-up-the-topology)。 - - 当动态制备持久卷时,Gluster 插件自动创建名为 `gluster-dynamic-` - 的端点和无头服务。在 PVC 被删除时动态端点和无头服务会自动被删除。 - ### NFS {#nfs} ```yaml diff --git a/content/zh-cn/docs/concepts/storage/storage-limits.md b/content/zh-cn/docs/concepts/storage/storage-limits.md index 6943ba3ed6a00..f9eb0410722f0 100644 --- a/content/zh-cn/docs/concepts/storage/storage-limits.md +++ b/content/zh-cn/docs/concepts/storage/storage-limits.md @@ -1,6 +1,7 @@ --- title: 特定于节点的卷数限制 content_type: concept +weight: 90 --- diff --git a/content/zh-cn/docs/concepts/storage/volume-health-monitoring.md b/content/zh-cn/docs/concepts/storage/volume-health-monitoring.md index bf1c732fd65dd..3bc0a8fe26e28 100644 --- a/content/zh-cn/docs/concepts/storage/volume-health-monitoring.md +++ b/content/zh-cn/docs/concepts/storage/volume-health-monitoring.md @@ -1,6 +1,7 @@ --- title: 卷健康监测 content_type: concept +weight: 100 --- diff --git a/content/zh-cn/docs/concepts/storage/volume-pvc-datasource.md b/content/zh-cn/docs/concepts/storage/volume-pvc-datasource.md index 
412b0c1d5bb0c..f75db0abf0422 100644 --- a/content/zh-cn/docs/concepts/storage/volume-pvc-datasource.md +++ b/content/zh-cn/docs/concepts/storage/volume-pvc-datasource.md @@ -1,7 +1,7 @@ --- title: CSI 卷克隆 content_type: concept -weight: 60 +weight: 70 --- diff --git a/content/zh-cn/docs/concepts/storage/volume-snapshots.md b/content/zh-cn/docs/concepts/storage/volume-snapshots.md index 111a6179224ad..27bdef16d3b6b 100644 --- a/content/zh-cn/docs/concepts/storage/volume-snapshots.md +++ b/content/zh-cn/docs/concepts/storage/volume-snapshots.md @@ -390,12 +390,12 @@ the `VolumeSnapshotContent` that corresponds to the `VolumeSnapshot`. 到对应 `VolumeSnapshot` 的 `VolumeSnapshotContent` 中。 -对于预制备的快照,`Spec.SourceVolumeMode` 需要由集群管理员填充。 +对于预制备的快照,`spec.SourceVolumeMode` 需要由集群管理员填充。 启用此特性的 `VolumeSnapshotContent` 资源示例如下所示: diff --git a/content/zh-cn/docs/concepts/storage/volumes.md b/content/zh-cn/docs/concepts/storage/volumes.md index 7486107d391ba..e2a23ab95346e 100644 --- a/content/zh-cn/docs/concepts/storage/volumes.md +++ b/content/zh-cn/docs/concepts/storage/volumes.md @@ -304,7 +304,7 @@ For more details, see the [`azureFile` volume plugin](https://github.com/kuberne --> #### azureFile CSI 迁移 {#azurefile-csi-migration} -{{< feature-state for_k8s_version="v1.21" state="beta" >}} +{{< feature-state for_k8s_version="v1.26" state="stable" >}} @@ -663,7 +663,8 @@ Kubernetes 主机才可以访问它们。 {{< /note >}} 更多详情请参考 [FC 示例](https://github.com/kubernetes/examples/tree/master/staging/volumes/fibre_channel)。 @@ -863,7 +864,9 @@ and the kubelet, set the `InTreePluginGCEUnregister` flag to `true`. {{< warning >}} `gitRepo` 卷类型已经被废弃。如果需要在容器中提供 git 仓库,请将一个 [EmptyDir](#emptydir) 卷挂载到 InitContainer 中,使用 git @@ -902,39 +905,23 @@ spec: ``` -### glusterfs(已弃用) {#glusterfs} +### glusterfs(已移除) {#glusterfs} -{{< feature-state for_k8s_version="v1.25" state="deprecated" >}} + - -`glusterfs` 卷能将 [Glusterfs](https://www.gluster.org) (一个开源的网络文件系统) -挂载到你的 Pod 中。不像 `emptyDir` 那样会在删除 Pod 的同时也会被删除,`glusterfs` -卷的内容在删除 Pod 时会被保存,卷只是被卸载。 -这意味着 `glusterfs` 卷可以被预先填充数据,并且这些数据可以在 Pod 之间共享。 -GlusterFS 可以被多个写者同时挂载。 - -{{< note >}} - -在使用前你必须先安装运行自己的 GlusterFS。 -{{< /note >}} + -更多详情请参考 [GlusterFS 示例](https://github.com/kubernetes/examples/tree/master/volumes/glusterfs)。 +Kubernetes {{< skew currentVersion >}} 不包含 `glusterfs` 卷类型。 +GlusterFS 树内存储驱动程序在 Kubernetes v1.25 版本中被弃用,然后在 v1.26 版本中被完全移除。 + ### hostPath {#hostpath} {{< warning >}} @@ -1293,7 +1280,9 @@ spec: 在使用 NFS 卷之前,你必须运行自己的 NFS 服务器并将目标 share 导出备用。 @@ -1304,7 +1293,8 @@ Also note that you can't specify NFS mount options in a Pod spec. 
You can either {{< /note >}} 如需了解用持久卷挂载 NFS 卷的示例,请参考 [NFS 示例](https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs)。 @@ -1575,37 +1565,34 @@ For more information, see the [vSphere volume](https://github.com/kubernetes/exa --> #### vSphere CSI 迁移 {#vsphere-csi-migration} -{{< feature-state for_k8s_version="v1.19" state="beta" >}} +{{< feature-state for_k8s_version="v1.26" state="stable" >}} + -从 Kubernetes v1.25 开始,针对 `vsphereVolume` 的 `CSIMigrationvSphere` 特性默认被启用。 -来自树内 `vspherevolume` 的所有插件操作将被重新指向到 -`csi.vsphere.vmware.com` {{< glossary_tooltip text="CSI" term_id="csi" >}} 驱动, -除非 `CSIMigrationvSphere` 特性门控被禁用。 +在 Kubernetes {{< skew currentVersion >}} 中,对树内 `vsphereVolume` +类的所有操作都会被重定向至 `csi.vsphere.vmware.com` {{< glossary_tooltip text="CSI" term_id="csi" >}} 驱动程序。 [vSphere CSI 驱动](https://github.com/kubernetes-sigs/vsphere-csi-driver)必须安装到集群上。 你可以在 VMware 的文档页面[迁移树内 vSphere 卷插件到 vSphere 容器存储插件](https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-968D421F-D464-4E22-8127-6CB9FF54423F.html) 中找到有关如何迁移树内 `vsphereVolume` 的其他建议。 +如果未安装 vSphere CSI 驱动程序,则无法对由树内 `vsphereVolume` 类型创建的 PV 执行卷操作。 -从 Kubernetes v1.25 开始,(已弃用)树内 vSphere 存储驱动不支持低于 7.0u2 的 vSphere 版本。 -你必须运行 vSphere 7.0u2 或更高版本才能继续使用这个已弃用的驱动,或迁移到替代的 CSI 驱动。 +你必须运行 vSphere 7.0u2 或更高版本才能迁移到 vSphere CSI 驱动程序。 如果你正在运行 Kubernetes v{{< skew currentVersion >}},请查阅该 Kubernetes 版本的文档。 @@ -1957,7 +1944,7 @@ persistent volume: volume expansion, the kubelet passes that data via the `NodeExpandVolume()` call to the CSI driver. In order to use the `nodeExpandSecretRef` field, your cluster should be running Kubernetes version 1.25 or later and you must enable - the [feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) + the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) named `CSINodeExpandSecret` for each kube-apiserver and for the kubelet on every node. You must also be using a CSI driver that supports or requires secret data during node-initiated storage resize operations. diff --git a/content/zh-cn/docs/concepts/windows/intro.md b/content/zh-cn/docs/concepts/windows/intro.md index 794b1755a7cbb..4322f9085f874 100644 --- a/content/zh-cn/docs/concepts/windows/intro.md +++ b/content/zh-cn/docs/concepts/windows/intro.md @@ -432,11 +432,11 @@ work between Windows and Linux: The following list documents differences between how Pod specifications work between Windows and Linux: * `hostIPC` and `hostpid` - host namespace sharing is not possible on Windows -* `hostNetwork` - There is no Windows OS support to share the host network +* `hostNetwork` - [see below](/docs/concepts/windows/intro#compatibility-v1-pod-spec-containers-hostnetwork) * `dnsPolicy` - setting the Pod `dnsPolicy` to `ClusterFirstWithHostNet` is not supported on Windows because host networking is not provided. Pods always run with a container network. -* `podSecurityContext` (see below) +* `podSecurityContext` [see below](/docs/concepts/windows/intro#compatibility-v1-pod-spec-containers-securitycontext) * `shareProcessNamespace` - this is a beta feature, and depends on Linux namespaces which are not implemented on Windows. Windows cannot share process namespaces or the container's root filesystem. Only the network can be shared. 
@@ -446,10 +446,10 @@ The following list documents differences between how Pod specifications work bet 以下列表记录了 Pod 规范在 Windows 和 Linux 之间的工作方式差异: * `hostIPC` 和 `hostpid` - 不能在 Windows 上共享主机命名空间。 -* `hostNetwork` - Windows 操作系统不支持共享主机网络。 +* `hostNetwork` - [参见下文](#compatibility-v1-pod-spec-containers-hostnetwork) * `dnsPolicy` - Windows 不支持将 Pod `dnsPolicy` 设为 `ClusterFirstWithHostNet`, 因为未提供主机网络。Pod 始终用容器网络运行。 -* `podSecurityContext`(参见下文) +* `podSecurityContext` [参见下文](#compatibility-v1-pod-spec-containers-securitycontext) * `shareProcessNamespace` - 这是一个 beta 版功能特性,依赖于 Windows 上未实现的 Linux 命名空间。 Windows 无法共享进程命名空间或容器的根文件系统(root filesystem)。 只能共享网络。 @@ -482,11 +482,33 @@ The following list documents differences between how Pod specifications work bet * 你无法为卷挂载启用 `mountPropagation`,因为这在 Windows 上不支持。 +#### hostNetwork 的字段兼容性 {#compatibility-v1-pod-spec-containers-hostnetwork} + +{{< feature-state for_k8s_version="v1.26" state="alpha" >}} + +现在,kubelet 可以请求在 Windows 节点上运行的 Pod 使用主机的网络命名空间,而不是创建新的 Pod 网络命名空间。 +要启用此功能,请将 `--feature-gates=WindowsHostNetwork=true` 传递给 kubelet。 + +{{< note >}} + +此功能需要支持该功能的容器运行时。 +{{< /note >}} + + -##### Pod 安全上下文的字段兼容性 {#compatibility-v1-pod-spec-containers-securitycontext} +#### Pod 安全上下文的字段兼容性 {#compatibility-v1-pod-spec-containers-securitycontext} Pod 的所有 [`securityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) 字段都无法在 Windows 上生效。 @@ -693,16 +715,12 @@ Kubernetes Slack 上的 SIG Windows 频道也是一个很好的途径, The kubeadm tool helps you to deploy a Kubernetes cluster, providing the control plane to manage the cluster it, and nodes to run your workloads. -[Adding Windows nodes](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/) -explains how to deploy Windows nodes to your cluster using kubeadm. The Kubernetes [cluster API](https://cluster-api.sigs.k8s.io/) project also provides means to automate deployment of Windows nodes. --> ### 部署工具 {#deployment-tools} kubeadm 工具帮助你部署 Kubernetes 集群,提供管理集群的控制平面以及运行工作负载的节点。 -[添加 Windows 节点](/zh-cn/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/)阐述了如何使用 -kubeadm 将 Windows 节点部署到你的集群。 Kubernetes [集群 API](https://cluster-api.sigs.k8s.io/) 项目也提供了自动部署 Windows 节点的方式。 diff --git a/content/zh-cn/docs/concepts/windows/user-guide.md b/content/zh-cn/docs/concepts/windows/user-guide.md index 38a4b288b530f..bc9326ae2fc22 100644 --- a/content/zh-cn/docs/concepts/windows/user-guide.md +++ b/content/zh-cn/docs/concepts/windows/user-guide.md @@ -28,7 +28,7 @@ This guide walks you through the steps to configure and deploy Windows container ## Objectives * Configure an example deployment to run Windows containers on the Windows node -* Highlight Windows specific funcationality in Kubernetes +* Highlight Windows specific functionality in Kubernetes --> ## 目标 {#objectives} @@ -38,16 +38,15 @@ This guide walks you through the steps to configure and deploy Windows container ## 在你开始之前 {#before-you-begin} -* 创建一个 Kubernetes 集群,其中包含一个控制平面和一个[运行 Windows Server 的工作节点](/zh-cn/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/) +* 创建一个 Kubernetes 集群,其中包含一个控制平面和一个运行 Windows Server 的工作节点。 * 务必请注意,在 Kubernetes 上创建和部署服务和工作负载的行为方式与 Linux 和 Windows 容器的行为方式大致相同。 与集群交互的 [kubectl 命令](/zh-cn/docs/reference/kubectl/)是一致的。 下一小节的示例旨在帮助你快速开始使用 Windows 容器。 @@ -164,7 +163,7 @@ port 80 of the container directly to the Service. 
命令进入容器,并在 Pod 之间(以及跨主机,如果你有多个 Windows 节点)相互进行 ping 操作。 * Service 到 Pod 的通信,在 Linux 控制平面所在的节点以及独立的 Pod 中执行 `curl` 命令来访问虚拟的服务 IP(在 `kubectl get services` 命令下查看)。 - * 服务发现,执行 `curl` 命令来访问带有 Kubernetes + * 服务发现,执行 `curl` 命令来访问带有 Kubernetes [默认 DNS 后缀](/zh-cn/docs/concepts/services-networking/dns-pod-service/#services)的服务名称。 * 入站连接,在 Linux 控制平面所在的节点上或集群外的机器上执行 `curl` 命令来访问 NodePort 服务。 * 出站连接,使用 `kubectl exec`,从 Pod 内部执行 `curl` 访问外部 IP。 @@ -242,7 +241,8 @@ Windows 容器工作负载可以配置为使用组托管服务帐户(Group Man 组托管服务帐户是一种特定类型的活动目录(Active Directory)帐户,可提供自动密码管理、 简化的服务主体名称(Service Principal Name,SPN)管理,以及将管理委派给多个服务器上的其他管理员的能力。 配置了 GMSA 的容器可以携带使用 GMSA 配置的身份访问外部活动目录域资源。 -在[此处](/zh-cn/docs/tasks/configure-pod-container/configure-gmsa/)了解有关为 Windows 容器配置和使用 GMSA 的更多信息。 +在[此处](/zh-cn/docs/tasks/configure-pod-container/configure-gmsa/)了解有关为 Windows +容器配置和使用 GMSA 的更多信息。 **CronJob** 创建基于时隔重复调度的 {{< glossary_tooltip term_id="job" text="Job" >}}。 -一个 CronJob 对象就像 **crontab** (cron table) 文件中的一行。 -它用 [Cron](https://en.wikipedia.org/wiki/Cron) 格式进行编写, +CronJob 用于执行排期操作,例如备份、生成报告等。 +一个 CronJob 对象就像 Unix 系统上的 **crontab**(cron table)文件中的一行。 +它用 [Cron](https://zh.wikipedia.org/wiki/Cron) 格式进行编写, 并周期性地在给定的调度时间执行 Job。 -{{< caution >}} - -所有 **CronJob** 的 `schedule:` 时间都是基于 -{{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}} -的时区。 - -如果你的控制平面在 Pod 或是裸容器中运行了 kube-controller-manager, -那么为该容器所设置的时区将会决定 Cron Job 的控制器所使用的时区。 -{{< /caution >}} - -{{< caution >}} -如 [v1 CronJob API](/zh-cn/docs/reference/kubernetes-api/workload-resources/cron-job-v1/) 所述,官方并不支持设置时区。 - -Kubernetes 项目官方并不支持设置如 `CRON_TZ` 或者 `TZ` 等变量。 -`CRON_TZ` 或者 `TZ` 是用于解析和计算下一个 Job 创建时间所使用的内部库中一个实现细节。 -不建议在生产集群中使用它。 -{{< /caution>}} +CronJob 有所限制,也比较特殊。 +例如在某些情况下,单个 CronJob 可以创建多个并发任务。 +请参阅下面的[限制](#cron-job-limitations)。 -为 CronJob 资源创建清单时,请确保所提供的名称是一个合法的 -[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 -名称不能超过 52 个字符。 -这是因为 CronJob 控制器将自动在提供的 Job 名称后附加 11 个字符,并且存在一个限制, -即 Job 名称的最大长度不能超过 63 个字符。 +当控制平面为 CronJob 创建新的 Job 和(间接)Pod 时,CronJob 的 `.metadata.name` 是命名这些 Pod 的部分基础。 +CronJob 的名称必须是一个合法的 +[DNS 子域](/zh-cn/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names)值, +但这会对 Pod 的主机名产生意外的结果。为获得最佳兼容性,名称应遵循更严格的 +[DNS 标签](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-label-names)规则。 +即使名称是一个 DNS 子域,它也不能超过 52 个字符。这是因为 CronJob 控制器将自动在你所提供的 Job 名称后附加 +11 个字符,并且存在 Job 名称的最大长度不能超过 63 个字符的限制。 -## CronJob {#cronjob} - -CronJob 用于执行周期性的动作,例如备份、报告生成等。 -这些任务中的每一个都应该配置为周期性重复的(例如:每天/每周/每月一次); -你可以定义任务开始执行的时间间隔。 - - -### 示例 {#example} +## 示例 {#example} 下面的 CronJob 示例清单会在每分钟打印出当前时间和问候消息: {{< codenew file="application/job/cronjob.yaml" >}} + [使用 CronJob 运行自动化任务](/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs/)一文会为你详细讲解此例。 +## 编写 CronJob 声明信息 {#writing-a-cronjob-spec} + ### Cron 时间表语法 {#cron-schedule-syntax} +`.spec.schedule` 字段是必需的。该字段的值遵循 [Cron](https://zh.wikipedia.org/wiki/Cron) 语法: + + ``` # ┌───────────── 分钟 (0 - 59) # │ ┌───────────── 小时 (0 - 23) @@ -122,6 +116,44 @@ This example CronJob manifest prints the current time and a hello message every # * * * * * ``` + +例如 `0 0 13 * 5` 表示此任务必须在每个星期五的午夜以及每个月的 13 日的午夜开始。 + + +该格式也包含了扩展的 “Vixie cron” 步长值。 +[FreeBSD 手册](https://www.freebsd.org/cgi/man.cgi?crontab%285%29)中解释如下: + + +> 步长可被用于范围组合。范围后面带有 `/<数字>` 可以声明范围内的步幅数值。 +> 例如,`0-23/2` 可被用在小时字段来声明命令在其他数值的小时数执行 +> (V7 标准中对应的方法是 `0,2,4,6,8,10,12,14,16,18,20,22`)。 +> 步长也可以放在通配符后面,因此如果你想表达 “每两小时”,就用 `*/2` 。 + +{{< note >}} + +时间表中的问号 (`?`) 和星号 `*` 含义相同,它们用来表示给定字段的任何可用值。 +{{< /note 
>}} + + +除了标准语法,还可以使用一些类似 `@monthly` 的宏: + -例如,下面这行指出必须在每个星期五的午夜以及每个月 13 号的午夜开始任务: +为了生成 CronJob 时间表的表达式,你还可以使用 [crontab.guru](https://crontab.guru/) 这类 Web 工具。 -`0 0 13 * 5` + +### 任务模板 {#job-template} - -要生成 CronJob 时间表表达式,你还可以使用 [crontab.guru](https://crontab.guru/) 之类的 Web 工具。 +### 任务延迟开始的最后期限 {#starting-deadline} + +`.spec.startingDeadlineSeconds` 字段是可选的。 +它表示任务如果由于某种原因错过了调度时间,开始该任务的截止时间的秒数。 + +过了截止时间,CronJob 就不会开始该任务的实例(未来的任务仍在调度之中)。 +例如,如果你有一个每天运行两次的备份任务,你可能会允许它最多延迟 8 小时开始,但不能更晚, +因为更晚进行的备份将变得没有意义:你宁愿等待下一次计划的运行。 + + +对于错过已配置的最后期限的 Job,Kubernetes 将其视为失败的任务。 +如果你没有为 CronJob 指定 `startingDeadlineSeconds`,那 Job 就没有最后期限。 + +如果 `.spec.startingDeadlineSeconds` 字段被设置(非空), +CronJob 控制器将会计算从预期创建 Job 到当前时间的时间差。 +如果时间差大于该限制,则跳过此次执行。 + +例如,如果将其设置为 `200`,则 Job 控制器允许在实际调度之后最多 200 秒内创建 Job。 + + +### 并发性规则 {#concurrency-policy} + +`.spec.concurrencyPolicy` 也是可选的。它声明了 CronJob 创建的任务执行时发生重叠如何处理。 +spec 仅能声明下列规则中的一种: + +* `Allow`(默认):CronJob 允许并发任务执行。 +* `Forbid`: CronJob 不允许并发任务执行;如果新任务的执行时间到了而老任务没有执行完,CronJob 会忽略新任务的执行。 +* `Replace`:如果新任务的执行时间到了而老任务没有执行完,CronJob 会用新任务替换当前正在运行的任务。 + +请注意,并发性规则仅适用于相同 CronJob 创建的任务。如果有多个 CronJob,它们相应的任务总是允许并发执行的。 + + +### 调度挂起 {#schedule-suspension} + +通过将可选的 `.spec.suspend` 字段设置为 `true`,可以挂起针对 CronJob 执行的任务。 + +这个设置**不**会影响 CronJob 已经开始的任务。 + + +如果你将此字段设置为 `true`,后续发生的执行都会被挂起 +(这些任务仍然在调度中,但 CronJob 控制器不会启动这些 Job 来运行任务),直到你取消挂起 CronJob 为止。 + +{{< caution >}} + +在调度时间内挂起的执行都会被统计为错过的任务。当现有的 CronJob 将 `.spec.suspend` 从 `true` 改为 `false` 时, +且没有[开始的最后期限](#starting-deadline),错过的任务会被立即调度。 +{{< /caution >}} + + +### 任务历史限制 {#jobs-history-limits} + +`.spec.successfulJobsHistoryLimit` 和 `.spec.failedJobsHistoryLimit` 字段是可选的。 +这两个字段指定应保留多少已完成和失败的任务。 +默认设置分别为 3 和 1。将限制设置为 `0` 代表相应类型的任务完成后不会保留。 + +有关自动清理任务的其他方式, +请参见[自动清理完成的 Job](/zh-cn/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically)。 ## 时区 {#time-zones} -对于没有指定时区的 CronJob,kube-controller-manager 基于本地时区解释排期表(Schedule)。 +对于没有指定时区的 CronJob, +{{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}} +基于本地时区解释排期表(Schedule)。 {{< feature-state for_k8s_version="v1.25" state="beta" >}} @@ -167,10 +335,8 @@ you can specify a time zone for a CronJob (if you don't enable that feature gate Kubernetes that does not have experimental time zone support, all CronJobs in your cluster have an unspecified timezone). -When you have the feature enabled, you can set `spec.timeZone` to the name of a valid [time zone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). For example, setting -`spec.timeZone: "Etc/UTC"` instructs Kubernetes to interpret the schedule relative to Coordinated Universal Time. - -A time zone database from the Go standard library is included in the binaries and used as a fallback in case an external database is not available on the system. +When you have the feature enabled, you can set `.spec.timeZone` to the name of a valid [time zone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). For example, setting +`.spec.timeZone: "Etc/UTC"` instructs Kubernetes to interpret the schedule relative to Coordinated Universal Time. 
--> 如果启用了 `CronJobTimeZone` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), 你可以为 CronJob 指定一个时区(如果你没有启用该特性门控,或者你使用的是不支持试验性时区功能的 @@ -180,17 +346,63 @@ Kubernetes 版本,集群中所有 CronJob 的时区都是未指定的)。 设置为有效[时区](https://zh.wikipedia.org/wiki/%E6%97%B6%E5%8C%BA%E4%BF%A1%E6%81%AF%E6%95%B0%E6%8D%AE%E5%BA%93)名称。 例如,设置 `spec.timeZone: "Etc/UTC"` 指示 Kubernetes 采用 UTC 来解释排期表。 +{{< caution >}} + +Kubernetes {{< skew currentVersion >}} 中 CronJob API 的实现允许你设置 +`.spec.schedule` 字段以包含时区;例如:`CRON_TZ=UTC * * * * *` 或 `TZ=UTC * * * * *`。 + +以这种方式指定时区是**未正式支持**(而且从来没有)。 + +如果你尝试设置包含 `TZ` 或 `CRON_TZ` 时区规范的排期表, +Kubernetes 会向客户端报告[警告](/zh-cn/blog/2020/09/03/warnings/)。 +Kubernetes 的未来版本可能根本不会实现这种非正式的时区机制。 +{{< /caution >}} + + Go 标准库中的时区数据库包含在二进制文件中,并用作备用数据库,以防系统上没有可用的外部数据库。 ## CronJob 限制 {#cronjob-limitations} +### 修改 CronJob {#modifying-a-cronjob} + +按照设计,CronJob 包含一个用于**新** Job 的模板。 +如果你修改现有的 CronJob,你所做的更改将应用于修改完成后开始运行的新任务。 +已经开始的任务(及其 Pod)将继续运行而不会发生任何变化。 +也就是说,CronJob **不** 会更新现有任务,即使这些任务仍在运行。 + + +### Job 创建 {#job-creation} + CronJob 根据其计划编排,在每次该执行任务的时候大约会创建一个 Job。 我们之所以说 "大约",是因为在某些情况下,可能会创建两个 Job,或者不会创建任何 Job。 我们试图使这些情况尽量少发生,但不能完全杜绝。因此,Job 应该是 **幂等的**。 @@ -264,50 +476,24 @@ the Job in turn is responsible for the management of the Pods it represents. --> CronJob 仅负责创建与其调度时间相匹配的 Job,而 Job 又负责管理其代表的 Pod。 - -## 控制器版本 {#new-controller} - -从 Kubernetes v1.21 版本开始,CronJob 控制器的第二个版本被用作默认实现。 -要禁用此默认 CronJob 控制器而使用原来的 CronJob 控制器,请在 -{{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}} -中设置[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) -`CronJobControllerV2`,将此标志设置为 `false`。例如: - -``` ---feature-gates="CronJobControllerV2=false" -``` - ## {{% heading "whatsnext" %}} * 了解 CronJob 所依赖的 [Pod](/zh-cn/docs/concepts/workloads/pods/) 与 [Job](/zh-cn/docs/concepts/workloads/controllers/job/) 的概念。 -* 阅读 CronJob `.spec.schedule` 字段的[格式](https://pkg.go.dev/github.com/robfig/cron/v3#hdr-CRON_Expression_Format)。 -* 有关创建和使用 CronJob 的说明及示例规约文件, +* 阅读 CronJob `.spec.schedule` 字段的详细[格式](https://pkg.go.dev/github.com/robfig/cron/v3#hdr-CRON_Expression_Format)。 +* 有关创建和使用 CronJob 的说明及 CronJob 清单的示例, 请参见[使用 CronJob 运行自动化任务](/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs/)。 -* 有关自动清理失败或完成作业的说明,请参阅[自动清理作业](/zh-cn/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically) * `CronJob` 是 Kubernetes REST API 的一部分, - 阅读 {{< api-reference page="workload-resources/cron-job-v1" >}} - 对象定义以了解关于该资源的 API。 + 阅读 {{< api-reference page="workload-resources/cron-job-v1" >}} API 参考了解更多细节。 diff --git a/content/zh-cn/docs/concepts/workloads/controllers/deployment.md b/content/zh-cn/docs/concepts/workloads/controllers/deployment.md index 343b0896b3573..8f64bfeb3a5fa 100644 --- a/content/zh-cn/docs/concepts/workloads/controllers/deployment.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/deployment.md @@ -96,20 +96,21 @@ In this example: 在该例中: -* 创建名为 `nginx-deployment`(由 `.metadata.name` 字段标明)的 Deployment。 -* 该 Deployment 创建三个(由 `.spec.replicas` 字段标明)Pod 副本。 - - -* `selector` 字段定义 Deployment 如何查找要管理的 Pod。 +* 创建名为 `nginx-deployment`(由 `.metadata.name` 字段标明)的 Deployment。 + 该名称将成为后续创建 ReplicaSet 和 Pod 的命名基础。 + 参阅[编写 Deployment 规约](#writing-a-deployment-spec)获取更多详细信息。 +* 该 Deployment 创建一个 ReplicaSet,它创建三个(由 `.spec.replicas` 字段标明)Pod 副本。 +* `.spec.selector` 字段定义所创建的 ReplicaSet 如何查找要管理的 Pod。 在这里,你选择在 Pod 模板中定义的标签(`app: nginx`)。 不过,更复杂的选择规则是也可能的,只要 Pod 模板本身满足所给规则即可。 @@ -253,10 +254,13 @@ Follow the steps given below to create the above 
Deployment: * `AGE` 显示应用已经运行的时间长度。 - 注意 ReplicaSet 的名称始终被格式化为`[Deployment名称]-[哈希]`。 + 注意 ReplicaSet 的名称格式始终为 `[Deployment 名称]-[哈希]`。 + 该名称将成为所创建的 Pod 的命名基础。 其中的`哈希`字符串与 ReplicaSet 上的 `pod-template-hash` 标签一致。 -Deployment 对象的名称必须是合法的 -[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 -Deployment 还需要 [`.spec` 部分](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)。 +当控制面为 Deployment 创建新的 Pod 时,Deployment 的 `.metadata.name` 是命名这些 Pod 的部分基础。 +Deployment 的名称必须是一个合法的 +[DNS 子域](/zh-cn/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names)值, +但这会对 Pod 的主机名产生意外的结果。为获得最佳兼容性,名称应遵循更严格的 +[DNS 标签](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-label-names)规则。 + +Deployment 还需要 +[`.spec` 部分](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)。 + 使用 `kubectl` 来检查 Job 的状态: -```shell -kubectl describe jobs/pi -``` - - -输出类似于: - -``` -Name: pi -Namespace: default -Selector: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c -Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c - job-name=pi -Annotations: kubectl.kubernetes.io/last-applied-configuration: - {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":... -Parallelism: 1 -Completions: 1 -Start Time: Mon, 02 Dec 2019 15:20:11 +0200 -Completed At: Mon, 02 Dec 2019 15:21:16 +0200 -Duration: 65s -Pods Statuses: 0 Running / 1 Succeeded / 0 Failed +{{< tabs name="Check status of Job" >}} +{{< tab name="kubectl describe job pi" codelang="bash" >}} +Name: pi +Namespace: default +Selector: controller-uid=0cd26dd5-88a2-4a5f-a203-ea19a1d5d578 +Labels: controller-uid=0cd26dd5-88a2-4a5f-a203-ea19a1d5d578 + job-name=pi +Annotations: batch.kubernetes.io/job-tracking: +Parallelism: 1 +Completions: 1 +Completion Mode: NonIndexed +Start Time: Fri, 28 Oct 2022 13:05:18 +0530 +Completed At: Fri, 28 Oct 2022 13:05:21 +0530 +Duration: 3s +Pods Statuses: 0 Active / 1 Succeeded / 0 Failed Pod Template: - Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c + Labels: controller-uid=0cd26dd5-88a2-4a5f-a203-ea19a1d5d578 job-name=pi Containers: pi: @@ -132,8 +126,66 @@ Pod Template: Events: Type Reason Age From Message ---- ------ ---- ---- ------- - Normal SuccessfulCreate 14m job-controller Created pod: pi-5rwd7 -``` + Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4 + Normal Completed 18s job-controller Job completed +{{< /tab >}} +{{< tab name="kubectl get job pi -o yaml" codelang="bash" >}} +apiVersion: batch/v1 +kind: Job +metadata: + annotations: + batch.kubernetes.io/job-tracking: "" + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":{"spec":{"containers":[{"command":["perl","-Mbignum=bpi","-wle","print bpi(2000)"],"image":"perl:5.34.0","name":"pi"}],"restartPolicy":"Never"}}}} + creationTimestamp: "2022-11-10T17:53:53Z" + generation: 1 + labels: + controller-uid: 204fb678-040b-497f-9266-35ffa8716d14 + job-name: pi + name: pi + namespace: default + resourceVersion: "4751" + uid: 204fb678-040b-497f-9266-35ffa8716d14 +spec: + backoffLimit: 4 + completionMode: NonIndexed + completions: 1 + parallelism: 1 + selector: + matchLabels: + controller-uid: 204fb678-040b-497f-9266-35ffa8716d14 + suspend: false + template: + metadata: + creationTimestamp: null + 
labels: + controller-uid: 204fb678-040b-497f-9266-35ffa8716d14 + job-name: pi + spec: + containers: + - command: + - perl + - -Mbignum=bpi + - -wle + - print bpi(2000) + image: perl:5.34.0 + imagePullPolicy: IfNotPresent + name: pi + resources: {} + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + dnsPolicy: ClusterFirst + restartPolicy: Never + schedulerName: default-scheduler + securityContext: {} + terminationGracePeriodSeconds: 30 +status: + active: 1 + ready: 0 + startTime: "2022-11-10T17:53:57Z" + uncountedTerminatedPods: {} +{{< /tab >}} +{{< /tabs >}} ## 编写 Job 规约 {#writing-a-job-spec} 与 Kubernetes 中其他资源的配置类似,Job 也需要 `apiVersion`、`kind` 和 `metadata` 字段。 -Job 的名字必须是合法的 [DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 + +当控制面为 Job 创建新的 Pod 时,Job 的 `.metadata.name` 是命名这些 Pod 的基础组成部分。 +Job 的名字必须是合法的 [DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)值, +但这可能对 Pod 主机名产生意料之外的结果。为了获得最佳兼容性,此名字应遵循更严格的 +[DNS 标签](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-label-names)规则。 +即使该名字被要求遵循 DNS 子域名规则,也不得超过 63 个字符。 Job 配置还需要一个 [`.spec` 节](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)。 @@ -367,7 +432,7 @@ Jobs with _fixed completion count_ - that is, jobs that have non null the deterministic hostnames to address each other via DNS. For more information about how to configure this, see [Job with Pod-to-Pod Communication](/docs/tasks/job/job-with-pod-to-pod-communication/). - From the containerized task, in the environment variable `JOB_COMPLETION_INDEX`. - + The Job is considered complete when there is one successfully completed Pod for each index. For more information about how to use this mode, see [Indexed Job for Parallel Processing with Static Work Assignment](/docs/tasks/job/indexed-parallel-processing-static/). @@ -402,7 +467,6 @@ on the node, but the container is re-run. Therefore, your program needs to hand restarted locally, or else specify `.spec.template.spec.restartPolicy = "Never"`. See [pod lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/#example-states) for more information on `restartPolicy`. --> - ## 处理 Pod 和容器失效 {#handling-pod-and-container-failures} Pod 中的容器可能因为多种不同原因失效,例如因为其中的进程退出时返回值非零, @@ -429,6 +493,15 @@ caused by previous runs. 这意味着,你的应用需要处理在一个新 Pod 中被重启的情况。 尤其是应用需要处理之前运行所产生的临时文件、锁、不完整的输出等问题。 + +默认情况下,每个 Pod 失效都被计入 `.spec.backoffLimit` 限制, +请参阅 [Pod 回退失效策略](#pod-backoff-failure-policy)。 +但你可以通过设置 Job 的 [Pod 失效策略](#pod-failure-policy)自定义对 Pod 失效的处理方式。 + +当[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) +`PodDisruptionConditions` 和 `JobPodFailurePolicy` 都被启用且 `.spec.podFailurePolicy` 字段被设置时, +Job 控制器不会将终止过程中的 Pod(已设置 `.metadata.deletionTimestamp` 字段的 Pod)视为失效 Pod, +直到该 Pod 完全终止(其 `.status.phase` 为 `Failed` 或 `Succeeded`)。 +但只要终止变得显而易见,Job 控制器就会创建一个替代的 Pod。一旦 Pod 终止,Job 控制器将把这个刚终止的 +Pod 考虑在内,评估相关 Job 的 `.backoffLimit` 和 `.podFailurePolicy`。 + +如果不满足任一要求,即使 Pod 稍后以 `phase: "Succeeded"` 终止,Job 控制器也会将此即将终止的 Pod 计为立即失效。 + ### Pod 回退失效策略 {#pod-backoff-failure-policy} @@ -482,9 +574,6 @@ in the API. 
如果两种方式其中一个的值达到 `.spec.backoffLimit`,则 Job 被判定为失败。 -当 [`JobTrackingWithFinalizers`](#job-tracking-with-finalizers) 特性被禁用时, -失败的 Pod 数目仅基于 API 中仍然存在的 Pod。 - {{< note >}} -## Job 终止与清理 {#clean-up-finished-jobs-automatically} +## Job 终止与清理 {#job-termination-and-cleanup} Job 完成时不会再创建新的 Pod,不过已有的 Pod [通常](#pod-backoff-failure-policy)也不会被删除。 保留这些 Pod 使得你可以查看已完成的 Pod 的日志输出,以便检查错误、警告或者其它诊断性输出。 @@ -658,6 +747,37 @@ Job `pi-with-ttl` 在结束 100 秒之后,可以成为被自动删除的对象 如果该字段设置为 `0`,Job 在结束之后立即成为可被自动删除的对象。 如果该字段没有设置,Job 不会在结束之后被 TTL 控制器自动清除。 +{{< note >}} + +建议设置 `ttlSecondsAfterFinished` 字段,因为非托管任务 +(是你直接创建的 Job,而不是通过其他工作负载 API(如 CronJob)间接创建的 Job) +的默认删除策略是 `orphanDependents`,这会导致非托管 Job 创建的 Pod 在该 Job 被完全删除后被保留。 +即使{{< glossary_tooltip text="控制面" term_id="control-plane" >}}最终在 Pod 失效或完成后 +对已删除 Job 中的这些 Pod 执行[垃圾收集](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection)操作, +这些残留的 Pod 有时可能会导致集群性能下降,或者在最坏的情况下会导致集群因这种性能下降而离线。 + + +你可以使用 [LimitRange](/zh-cn/docs/concepts/policy/limit-range/) 和 +[ResourceQuota](/zh-cn/docs/concepts/policy/resource-quotas/), +设定一个特定名字空间可以消耗的资源上限。 +{{< /note >}} + 下面是对这些权衡的汇总,第 2 到 4 列对应上面的权衡比较。 模式的名称对应了相关示例和更详细描述的链接。 @@ -742,6 +870,15 @@ Here, `W` is the number of work items. 下表显示的是每种模式下 `.spec.parallelism` 和 `.spec.completions` 所需要的设置。 其中,`W` 表示的是工作条目的个数。 + | 模式 | `.spec.completions` | `.spec.parallelism` | | ----- |:-------------------:|:--------------------:| | [每工作条目一 Pod 的队列](/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue/) | W | 任意值 | @@ -1090,7 +1227,7 @@ mismatch. --> ### Pod 失效策略 {#pod-failure-policy} -{{< feature-state for_k8s_version="v1.25" state="alpha" >}} +{{< feature-state for_k8s_version="v1.26" state="beta" >}} {{< note >}} 只有你在集群中启用了 `JobPodFailurePolicy` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), 你才能为某个 Job 配置 Pod 失效策略。 此外,建议启用 `PodDisruptionConditions` 特性门控以便在 Pod 失效策略中检测和处理 Pod 干扰状况 (参考:[Pod 干扰状况](/zh-cn/docs/concepts/workloads/pods/disruptions#pod-disruption-conditions))。 -这两个特性门控都是在 Kubernetes v1.25 中提供的。 +这两个特性门控都是在 Kubernetes {{< skew currentVersion >}} 中提供的。 {{< /note >}} + that they don't count towards the `.spec.backoffLimit` limit of retries. +--> * 通过避免不必要的 Pod 重启来优化工作负载的运行成本, 你可以在某 Job 中一个 Pod 失效且其退出码表明存在软件错误时立即终止该 Job。 * 为了保证即使有干扰也能完成 Job,你可以忽略由干扰导致的 Pod 失效 @@ -1172,7 +1309,7 @@ Job 将被标记为失败。以下是 `main` 容器的具体规则: - an exit code of 42 means that the **entire Job** failed - any other exit code represents that the container failed, and hence the entire Pod. The Pod will be re-created if the total number of restarts is - below `backoffLimit`. If the `backoffLimit` is reached the **entire Job** failed. + below `backoffLimit`. If the `backoffLimit` is reached the **entire Job** failed. --> - 退出码 0 代表容器成功 - 退出码 42 代表 **整个 Job** 失败 @@ -1227,7 +1364,7 @@ These are some requirements and semantics of the API: - `Ignore`: use to indicate that the counter towards the `.spec.backoffLimit` should not be incremented and a replacement Pod should be created. - `Count`: use to indicate that the Pod should be handled in the default way. - The counter towards the `.spec.backoffLimit` should be incremented. + The counter towards the `.spec.backoffLimit` should be incremented. 
--> 下面是此 API 的一些要求和语义: - 如果你想在 Job 中使用 `.spec.podFailurePolicy` 字段, @@ -1250,75 +1387,53 @@ These are some requirements and semantics of the API: --> ### 使用 Finalizer 追踪 Job {#job-tracking-with-finalizers} -{{< feature-state for_k8s_version="v1.23" state="beta" >}} +{{< feature-state for_k8s_version="v1.26" state="stable" >}} {{< note >}} -要使用该行为,你必须为 [API 服务器](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/) -和[控制器管理器](/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/)启用 -`JobTrackingWithFinalizers` -[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 -该特性默认是启用的。 - - -启用后,控制面基于下述行为追踪新的 Job。在启用该特性之前创建的 Job 不受影响。 -作为用户,你会看到的唯一区别是控制面对 Job 完成情况的跟踪更加准确。 +如果 Job 是在特性门控 `JobTrackingWithFinalizers` 被禁用时创建的,即使你将控制面升级到 1.26, +控制面也不会使用 Finalizer 跟踪 Job。 {{< /note >}} -该功能未启用时,Job {{< glossary_tooltip term_id="controller" >}} 依靠计算集群中存在的 Pod 来跟踪作业状态。 -也就是说,维持一个统计 `succeeded` 和 `failed` 的 Pod 的计数器。 -然而,Pod 可以因为一些原因被移除,包括: -- 当一个节点宕机时,垃圾收集器会删除孤立(Orphan)Pod。 -- 垃圾收集器在某个阈值后删除已完成的 Pod(处于 `Succeeded` 或 `Failed` 阶段)。 -- 人工干预删除 Job 的 Pod。 -- 一个外部控制器(不包含于 Kubernetes)来删除或取代 Pod。 - - -如果你为你的集群启用了 `JobTrackingWithFinalizers` 特性,控制面会跟踪属于任何 Job 的 Pod。 -并注意是否有任何这样的 Pod 被从 API 服务器上删除。 +The control plane keeps track of the Pods that belong to any Job and notices if +any such Pod is removed from the API server. To do that, the Job controller +creates Pods with the finalizer `batch.kubernetes.io/job-tracking`. The +controller removes the finalizer only after the Pod has been accounted for in +the Job status, allowing the Pod to be removed by other controllers or users. + +Jobs created before upgrading to Kubernetes 1.26 or before the feature gate +`JobTrackingWithFinalizers` is enabled are tracked without the use of Pod +finalizers. +The Job {{< glossary_tooltip term_id="controller" text="controller" >}} updates +the status counters for `succeeded` and `failed` Pods based only on the Pods +that exist in the cluster. The contol plane can lose track of the progress of +the Job if Pods are deleted from the cluster. 
+--> +控制面会跟踪属于任何 Job 的 Pod,并通知是否有任何这样的 Pod 被从 API 服务器中移除。 为了实现这一点,Job 控制器创建的 Pod 带有 Finalizer `batch.kubernetes.io/job-tracking`。 -控制器只有在 Pod 被记入 Job 状态后才会移除 Finalizer,允许 Pod 可以被其他控制器或用户删除。 +控制器只有在 Pod 被记入 Job 状态后才会移除 Finalizer,允许 Pod 可以被其他控制器或用户移除。 -Job 控制器只对新的 Job 使用新的算法。在启用该特性之前创建的 Job 不受影响。 +在升级到 Kubernetes 1.26 之前或在启用特性门控 `JobTrackingWithFinalizers` +之前创建的 Job 被跟踪时不使用 Pod Finalizer。 +Job {{< glossary_tooltip term_id="controller" text="控制器" >}}仅根据集群中存在的 Pod +更新 `succeeded` 和 `failed` Pod 的状态计数器。如果 Pod 被从集群中删除,控制面可能无法跟踪 Job 的进度。 + + 你可以根据检查 Job 是否含有 `batch.kubernetes.io/job-tracking` 注解, -来确定 Job 控制器是否正在使用 Pod Finalizer 追踪 Job。 +来确定控制面是否正在使用 Pod Finalizer 追踪 Job。 你**不**应该给 Job 手动添加或删除该注解。 +取而代之的是你可以重新创建 Job 以确保使用 Pod Finalizer 跟踪这些 Job。 * 了解 [Pod](/zh-cn/docs/concepts/workloads/pods)。 * 了解运行 Job 的不同的方式: @@ -1418,3 +1535,5 @@ object, but maintains complete control over what Pods are created and how work i 对象定义理解关于该资源的 API。 * 阅读 [`CronJob`](/zh-cn/docs/concepts/workloads/controllers/cron-jobs/), 它允许你定义一系列定期运行的 Job,类似于 UNIX 工具 `cron`。 +* 根据循序渐进的[示例](/zh-cn/docs/tasks/job/pod-failure-policy/), + 练习如何使用 `podFailurePolicy` 配置处理可重试和不可重试的 Pod 失效。 diff --git a/content/zh-cn/docs/concepts/workloads/controllers/replicaset.md b/content/zh-cn/docs/concepts/workloads/controllers/replicaset.md index ddf1c38cf53ae..5fef6703a111f 100644 --- a/content/zh-cn/docs/concepts/workloads/controllers/replicaset.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/replicaset.md @@ -357,8 +357,12 @@ In this manner, a ReplicaSet can own a non-homogenous set of Pods As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields. For ReplicaSets, the `kind` is always a ReplicaSet. -The name of a ReplicaSet object must be a valid -[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). +When the control plane creates new Pods for a ReplicaSet, the `.metadata.name` of the +ReplicaSet is part of the basis for naming those Pods. The name of a ReplicaSet must be a valid +[DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) +value, but this can produce unexpected results for the Pod hostnames. For best compatibility, +the name should follow the more restrictive rules for a +[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names). A ReplicaSet also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). 
--> @@ -367,8 +371,11 @@ A ReplicaSet also needs a [`.spec` section](https://git.k8s.io/community/contrib 与所有其他 Kubernetes API 对象一样,ReplicaSet 也需要 `apiVersion`、`kind`、和 `metadata` 字段。 对于 ReplicaSet 而言,其 `kind` 始终是 ReplicaSet。 -ReplicaSet 对象的名称必须是合法的 -[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 +当控制平面为 ReplicaSet 创建新的 Pod 时,ReplicaSet +的 `.metadata.name` 是命名这些 Pod 的部分基础。ReplicaSet 的名称必须是一个合法的 +[DNS 子域](/zh-cn/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names)值, +但这可能对 Pod 的主机名产生意外的结果。为获得最佳兼容性,名称应遵循更严格的 +[DNS 标签](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-label-names)规则。 ReplicaSet 也需要 [`.spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) diff --git a/content/zh-cn/docs/concepts/workloads/controllers/replicationcontroller.md b/content/zh-cn/docs/concepts/workloads/controllers/replicationcontroller.md index cb443b9905b56..695af22acff13 100644 --- a/content/zh-cn/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/replicationcontroller.md @@ -173,20 +173,31 @@ specifies an expression with the name from each pod in the returned list. `--output=jsonpath` 选项指定了一个表达式,仅从返回列表中的每个 Pod 中获取名称。 -## 编写一个 ReplicationController 规约 {#writing-a-replicationcontroller-spec} +## 编写一个 ReplicationController 清单 {#writing-a-replicationcontroller-manifest} 与所有其它 Kubernetes 配置一样,ReplicationController 需要 `apiVersion`、`kind` 和 `metadata` 字段。 -ReplicationController 对象的名称必须是有效的 -[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 + +当控制平面为 ReplicationController 创建新的 Pod 时,ReplicationController +的 `.metadata.name` 是命名这些 Pod 的部分基础。ReplicationController 的名称必须是一个合法的 +[DNS 子域](/zh-cn/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names)值, +但这可能对 Pod 的主机名产生意外的结果。为获得最佳兼容性,名称应遵循更严格的 +[DNS 标签](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-label-names)规则。 + 有关使用配置文件的常规信息, 参考[对象管理](/zh-cn/docs/concepts/overview/working-with-objects/object-management/)。 diff --git a/content/zh-cn/docs/concepts/workloads/controllers/statefulset.md b/content/zh-cn/docs/concepts/workloads/controllers/statefulset.md index 2f330ae4c370f..7ff1d685926aa 100644 --- a/content/zh-cn/docs/concepts/workloads/controllers/statefulset.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/statefulset.md @@ -242,13 +242,41 @@ StatefulSet Pod 具有唯一的标识,该标识包括顺序标识、稳定的 ### 有序索引 {#ordinal-index} -对于具有 N 个副本的 StatefulSet,该 StatefulSet 中的每个 Pod 将被分配一个从 0 到 N-1 -的整数序号,该序号在此 StatefulSet 上是唯一的。 +对于具有 N 个[副本](#replicas)的 StatefulSet,该 StatefulSet 中的每个 Pod 将被分配一个整数序号, +该序号在此 StatefulSet 上是唯一的。默认情况下,这些 Pod 将被从 0 到 N-1 的序号。 + + +### 起始序号 {#start-ordinal} + +{{< feature-state for_k8s_version="v1.26" state="alpha" >}} + + +`.spec.ordinals` 是一个可选的字段,允许你配置分配给每个 Pod 的整数序号。 +该字段默认为 nil 值。你必须启用 `StatefulSetStartOrdinal` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)才能使用此字段。 +一旦启用,你就可以配置以下选项: + + +* `.spec.ordinals.start`:如果 `.spec.ordinals.start` 字段被设置,则 Pod 将被分配从 + `.spec.ordinals.start` 到 `.spec.ordinals.start + .spec.replicas - 1` 的序号。 @@ -14,101 +20,123 @@ weight: 70 {{< feature-state for_k8s_version="v1.23" state="stable" >}} -TTL-after-finished {{}} 提供了一种 TTL 机制来限制已完成执行的资源对象的生命周期。 -TTL 控制器目前只处理 {{< glossary_tooltip text="Job" term_id="job" >}}。 +当你的 Job 已结束时,将 Job 保留在 API 中(而不是立即删除 Job)很有用, +这样你就可以判断 Job 是成功还是失败。 +Kubernetes TTL-after-finished {{}}提供了一种 +TTL 机制来限制已完成执行的 Job 
对象的生命期。 -## TTL-after-finished 控制器 +## 清理已完成的 Job {#cleanup-for-finished-jobs} -TTL-after-finished 控制器只支持 Job。集群操作员可以通过指定 Job 的 `.spec.ttlSecondsAfterFinished` -字段来自动清理已结束的作业(`Complete` 或 `Failed`),如 -[示例](/zh-cn/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically) -所示。 +TTL-after-finished 控制器只支持 Job。你可以通过指定 Job 的 `.spec.ttlSecondsAfterFinished` +字段来自动清理已结束的 Job(`Complete` 或 `Failed`), +如[示例](/zh-cn/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically)所示。 -TTL-after-finished 控制器假设作业能在执行完成后的 TTL 秒内被清理,也就是当 TTL 过期后。 -当 TTL 控制器清理作业时,它将做级联删除操作,即删除资源对象的同时也删除其依赖对象。 -注意,当资源被删除时,由该资源的生命周期保证其终结器(Finalizers)等被执行。 +TTL-after-finished 控制器假设 Job 能在执行完成后的 TTL 秒内被清理。一旦 Job +的状态条件发生变化表明该 Job 是 `Complete` 或 `Failed`,计时器就会启动;一旦 TTL 已过期,该 Job +就能被[级联删除](/zh-cn/docs/concepts/architecture/garbage-collection/#cascading-deletion)。 +当 TTL 控制器清理作业时,它将做级联删除操作,即删除 Job 的同时也删除其依赖对象。 -可以随时设置 TTL 秒。以下是设置 Job 的 `.spec.ttlSecondsAfterFinished` 字段的一些示例: +Kubernetes 尊重 Job 对象的生命周期保证,例如等待 +[Finalizer](/zh-cn/docs/concepts/overview/working-with-objects/finalizers/)。 + +你可以随时设置 TTL 秒。以下是设置 Job 的 `.spec.ttlSecondsAfterFinished` 字段的一些示例: +* 在 Job 清单(manifest)中指定此字段,以便 Job 在完成后的某个时间被自动清理。 +* 手动设置现有的、已完成的 Job 的此字段,以便这些 Job 可被清理。 +* 在创建 Job 时使用[修改性质的准入 Webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) + 动态设置该字段。集群管理员可以使用它对已完成的作业强制执行 TTL 策略。 + -* 在作业清单(manifest)中指定此字段,以便 Job 在完成后的某个时间被自动清除。 -* 将此字段设置为现有的、已完成的作业,以采用此新功能。 -* 在创建作业时使用 [mutating admission webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) - 动态设置该字段。集群管理员可以使用它对完成的作业强制执行 TTL 策略。 -* 使用 [mutating admission webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) - 在作业完成后动态设置该字段,并根据作业状态、标签等选择不同的 TTL 值。 +* 使用[修改性质的准入 Webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) + 在 Job 完成后动态设置该字段,并根据 Job 状态、标签等选择不同的 TTL 值。 + 对于这种情况,Webhook 需要检测 Job 的 `.status` 变化,并且仅在 Job 被标记为已完成时设置 TTL。 +* 编写你自己的控制器来管理与特定{{< glossary_tooltip term_id="selector" text="选择算符" >}}匹配的 + Job 的清理 TTL。 -## 警告 +## 警告 {#caveats} -### 更新 TTL 秒数 +### 更新已完成 Job 的 TTL {#updating-ttl-for-finished-jobs} -请注意,在创建 Job 或已经执行结束后,仍可以修改其 TTL 周期,例如 Job 的 +在创建 Job 或已经执行结束后,你仍可以修改其 TTL 周期,例如 Job 的 `.spec.ttlSecondsAfterFinished` 字段。 -但是一旦 Job 变为可被删除状态(当其 TTL 已过期时),即使你通过 API 增加其 TTL -时长得到了成功的响应,系统也不保证 Job 将被保留。 +如果你在当前 `ttlSecondsAfterFinished` 时长已过期后延长 TTL 周期, +即使延长 TTL 的更新得到了成功的 API 响应,Kubernetes 也不保证保留此 Job, ### 时间偏差 {#time-skew} -由于 TTL-after-finished 控制器使用存储在 Kubernetes 资源中的时间戳来确定 TTL 是否已过期, -因此该功能对集群中的时间偏差很敏感,这可能导致 TTL-after-finished 控制器在错误的时间清理资源对象。 +由于 TTL-after-finished 控制器使用存储在 Kubernetes Job 中的时间戳来确定 TTL 是否已过期, +因此该功能对集群中的时间偏差很敏感,这可能导致控制平面在错误的时间清理 Job 对象。 -* [自动清理 Job](/zh-cn/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically) -* [设计文档](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/592-ttl-after-finish/README.md) +* 阅读[自动清理 Job](/zh-cn/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically) + +* 参阅 [Kubernetes 增强提案](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/592-ttl-after-finish/README.md) + (KEP) 了解此机制的演进过程。 diff --git a/content/zh-cn/docs/concepts/workloads/pods/_index.md b/content/zh-cn/docs/concepts/workloads/pods/_index.md index 54d74d319a76f..59908afe7f81b 100644 --- a/content/zh-cn/docs/concepts/workloads/pods/_index.md +++ 
b/content/zh-cn/docs/concepts/workloads/pods/_index.md @@ -257,11 +257,16 @@ Pod 不是进程,而是容器运行的环境。 {{< /note >}} -当你为 Pod 对象创建清单时,要确保所指定的 Pod 名称是合法的 -[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 +The name of a Pod must be a valid +[DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) +value, but this can produce unexpected results for the Pod hostname. For best compatibility, +the name should follow the more restrictive rules for a +[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names). +--> +Pod 的名称必须是一个合法的 +[DNS 子域](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)值, +但这可能对 Pod 的主机名产生意外的结果。为获得最佳兼容性,名称应遵循更严格的 +[DNS 标签](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-label-names)规则。 -{{< caution >}} 并非所有的自愿干扰都会受到 Pod 干扰预算的限制。 例如,删除 Deployment 或 Pod 的删除操作就会跳过 Pod 干扰预算检查。 {{< /caution >}} @@ -234,7 +234,7 @@ PDB 指定应用可以容忍的副本数量(相当于应该有多少副本) The group of pods that comprise the application is specified using a label selector, the same as the one used by the application's controller (deployment, stateful-set, etc). --> -使用标签选择器来指定构成应用的一组 Pod,这与应用的控制器(Deployment,StatefulSet 等) +使用标签选择器来指定构成应用的一组 Pod,这与应用的控制器(Deployment、StatefulSet 等) 选择 Pod 的逻辑一样。 ## Pod 干扰状况 {#pod-disruption-conditions} -{{< feature-state for_k8s_version="v1.25" state="alpha" >}} +{{< feature-state for_k8s_version="v1.26" state="beta" >}} + +{{< note >}} + +如果你正使用的 Kubernetes 版本早于 {{< skew currentVersion >}},请参阅对应版本的文档。 +{{< /note >}} {{< note >}} -要使用此行为,你必须在集群中启用 `PodDisruptionCondition` +要使用此行为,你必须在集群中启用 `PodDisruptionConditions` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 {{< /note >}} @@ -487,6 +495,15 @@ Taint Manager(`kube-controller-manager` 中节点生命周期控制器的一 : 绑定到一个不再存在的 Node 上的 Pod 将被 [Pod 垃圾收集](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection)删除。 + +`TerminationByKubelet` +: Pod + 由于{{}}或[节点体面关闭](/zh-cn/docs/concepts/architecture/nodes/#graceful-node-shutdown)而被 + kubelet 终止。 + {{< note >}} +当 `PodDisruptionConditions` 特性门控被启用时,在清理 Pod 的同时,如果这些 Pod 处于非终止阶段, +则 Pod 垃圾回收器 (PodGC) 也会将这些 Pod 标记为失效 +(另见 [Pod 垃圾回收](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection))。 + - - 接受升级期间的停机时间。 - 故障转移到另一个完整的副本集群。 - - 没有停机时间,但是对于重复的节点和人工协调成本可能是昂贵的。 + - 没有停机时间,但是对于重复的节点和人工协调成本可能是昂贵的。 - 编写可容忍干扰的应用和使用 PDB。 - - 不停机。 - - 最小的资源重复。 - - 允许更多的集群管理自动化。 - - 编写可容忍干扰的应用是棘手的,但对于支持容忍自愿干扰所做的工作,和支持自动扩缩和容忍非 - 自愿干扰所做工作相比,有大量的重叠 + - 不停机。 + - 最小的资源重复。 + - 允许更多的集群管理自动化。 + - 编写可容忍干扰的应用是棘手的,但对于支持容忍自愿干扰所做的工作,和支持自动扩缩和容忍非 + 自愿干扰所做工作相比,有大量的重叠 ## {{% heading "whatsnext" %}} diff --git a/content/zh-cn/docs/concepts/workloads/pods/pod-lifecycle.md b/content/zh-cn/docs/concepts/workloads/pods/pod-lifecycle.md index a97cfad55947c..0fb350715b023 100644 --- a/content/zh-cn/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/zh-cn/docs/concepts/workloads/pods/pod-lifecycle.md @@ -977,27 +977,51 @@ documentation for 请参阅[从 StatefulSet 中删除 Pod](/zh-cn/docs/tasks/run-application/force-delete-stateful-set-pod/) 的任务文档。 -### 已终止 Pod 的垃圾收集 {#pod-garbage-collection} +### Pod 的垃圾收集 {#pod-garbage-collection} 对于已失败的 Pod 而言,对应的 API 对象仍然会保留在集群的 API 服务器上, 直到用户或者{{< glossary_tooltip term_id="controller" text="控制器" >}}进程显式地将其删除。 -控制面组件会在 Pod 个数超出所配置的阈值 +Pod 的垃圾收集器(PodGC)是控制平面的控制器,它会在 Pod 个数超出所配置的阈值 (根据 `kube-controller-manager` 的 `terminated-pod-gc-threshold` 设置)时删除已终止的 Pod(阶段值为 `Succeeded` 或 `Failed`)。 这一行为会避免随着时间演进不断创建和终止 Pod 而引起的资源泄露问题。 + +此外,PodGC 会清理满足以下任一条件的所有 
Pod: +1. 孤儿 Pod - 绑定到不再存在的节点, +2. 计划外终止的 Pod +3. 终止过程中的 Pod,当启用 `NodeOutOfServiceVolumeDetach` 特性门控时, + 绑定到有 [`node.kubernetes.io/out-of-service`](/zh-cn/docs/reference/labels-annotations-taints/#node-kubernetes-io-out-of-service) + 污点的未就绪节点。 + +若启用 `PodDisruptionConditions` 特性门控,在清理 Pod 的同时, +如果它们处于非终止状态阶段,PodGC 也会将它们标记为失败。 +此外,PodGC 在清理孤儿 Pod 时会添加 Pod 干扰状况(另请参阅: +[Pod 干扰状况](/zh-cn/docs/concepts/workloads/pods/disruptions#pod-disruption-conditions))。 + ## {{% heading "whatsnext" %}} + + +{{< feature-state for_k8s_version="v1.25" state="alpha" >}} + +本页解释了在 Kubernetes pods 中如何使用用户命名空间。 +用户命名空间允许将容器内运行的用户与主机内的用户隔离开来。 + +在容器中以 root 身份运行的进程可以在主机中以不同的(非 root)用户身份运行; +换句话说,该进程在用户命名空间内的操作具有完全的权限, +但在命名空间外的操作是无特权的。 + + +你可以使用这个功能来减少被破坏的容器对主机或同一节点中的其他 Pod 的破坏。 +有[几个安全漏洞][KEP-vulns]被评为 **高** 或 **重要**, +当用户命名空间处于激活状态时,这些漏洞是无法被利用的。 +预计用户命名空间也会减轻一些未来的漏洞。 + +[KEP-vulns]: https://github.com/kubernetes/enhancements/tree/217d790720c5aef09b8bd4d6ca96284a0affe6c2/keps/sig-node/127-user-namespaces#motivation + + +## {{% heading "prerequisites" %}} + +{{% thirdparty-content single="true" %}} + + + + +这是一个只对 Linux 有效的功能特性。此外,需要在{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}提供支持, +才能在 Kubernetes 无状态 Pod 中使用这一功能: + +* CRI-O:v1.25 版已经支持用户命名空间。 +* containerd:计划在 1.7 版本中支持。更多细节请参见 containerd 问题 [#7063][containerd-userns-issue]。 + +目前 [cri-dockerd 没有计划][CRI-dockerd-issue]支持此功能。 + +[CRI-dockerd-issue]: https://github.com/Mirantis/cri-dockerd/issues/74 +[containerd-userns-issue]: https://github.com/containerd/containerd/issues/7063 + + +## 介绍 {#introduction} + + +用户命名空间是一个 Linux 功能,允许将容器中的用户映射到主机中的不同用户。 +此外,在某用户命名空间中授予 Pod 的权能只在该命名空间中有效,在该命名空间之外无效。 + +一个 Pod 可以通过将 `pod.spec.hostUsers` 字段设置为 `false` 来选择使用用户命名空间。 + + +kubelet 将挑选 Pod 所映射的主机 UID/GID, +并将以保证同一节点上没有两个无状态 Pod 使用相同的映射的方式进行。 + +`pod.spec` 中的 `runAsUser`、`runAsGroup`、`fsGroup` 等字段总是指的是容器内的用户。 +启用该功能时,有效的 UID/GID 在 0-65535 范围内。这以限制适用于文件和进程(`runAsUser`、`runAsGroup` 等)。 + + +使用这个范围之外的 UID/GID 的文件将被视为属于溢出 ID, +通常是 65534(配置在 `/proc/sys/kernel/overflowuid和/proc/sys/kernel/overflowgid`)。 +然而,即使以 65534 用户/组的身份运行,也不可能修改这些文件。 + +大多数需要以 root 身份运行但不访问其他主机命名空间或资源的应用程序, +在用户命名空间被启用时,应该可以继续正常运行,不需要做任何改变。 + + +## 了解无状态 Pod 的用户命名空间 {#understanding-user-namespaces-for-stateless-pods} + + +一些容器运行时的默认配置(如 Docker Engine、containerd、CRI-O)使用 Linux 命名空间进行隔离。 +其他技术也存在,也可以与这些运行时(例如,Kata Containers 使用虚拟机而不是 Linux 命名空间)结合使用。 +本页适用于使用 Linux 命名空间进行隔离的容器运行时。 + +在创建 Pod 时,默认情况下会使用几个新的命名空间进行隔离: +一个网络命名空间来隔离容器网络,一个 PID 命名空间来隔离进程视图等等。 +如果使用了一个用户命名空间,这将把容器中的用户与节点中的用户隔离开来。 + + +这意味着容器可以以 root 身份运行,并将该身份映射到主机上的一个非 root 用户。 +在容器内,进程会认为它是以 root 身份运行的(因此像 `apt`、`yum` 等工具可以正常工作), +而实际上该进程在主机上没有权限。 +你可以验证这一点,例如,如果你从主机上执行 `ps aux` 来检查容器进程是以哪个用户运行的。 +`ps` 显示的用户与你在容器内执行 `id` 命令时看到的用户是不一样的。 + +这种抽象限制了可能发生的情况,例如,容器设法逃逸到主机上时的后果。 +鉴于容器是作为主机上的一个非特权用户运行的,它能对主机做的事情是有限的。 + + +此外,由于每个 Pod 上的用户将被映射到主机中不同的非重叠用户, +他们对其他 Pod 可以执行的操作也是有限的。 + +授予一个 Pod 的权能也被限制在 Pod 的用户命名空间内, +并且在这一命名空间之外大多无效,有些甚至完全无效。这里有两个例子: + + - `CAP_SYS_MODULE` 若被授予一个使用用户命名空间的 Pod 则没有任何效果,这个 Pod 不能加载内核模块。 + - `CAP_SYS_ADMIN` 只限于 Pod 所在的用户命名空间,在该命名空间之外无效。 + + +在不使用用户命名空间的情况下,以 root 账号运行的容器,在容器逃逸时,在节点上有 root 权限。 +而且如果某些权能被授予了某容器,这些权能在宿主机上也是有效的。 +当我们使用用户命名空间时,这些都不再成立。 + +如果你想知道关于使用用户命名空间时的更多变化细节,请参见 `man 7 user_namespaces`。 + + +## 设置一个节点以支持用户命名空间 {#set-up-a-node-to-support-user-namespaces} + + +建议主机的文件和主机的进程使用 0-65535 范围内的 UID/GID。 + +kubelet 会把高于这个范围的 UID/GID 分配给 Pod。 +因此,为了保证尽可能多的隔离,主机的文件和主机的进程所使用的 UID/GID 应该在 0-65535 范围内。 + +请注意,这个建议对减轻 [CVE-2021-25741][CVE-2021-25741] 等 CVE 的影响很重要; +在这些 CVE 中,Pod 有可能读取主机中的任意文件。 +如果 Pod 和主机的 
UID/GID 不重叠,Pod 能够做的事情就会受到限制: +Pod的 UID/GID 不会与主机的文件所有者/组相匹配。 + +[CVE-2021-25741]: https://github.com/kubernetes/kubernetes/issues/104980 + + +## 限制 {#limitations} + +当 Pod 使用用户命名空间时,不允许 Pod 使用其他主机命名空间。 +特别是,如果你设置了 `hostUsers: false`,那么你就不可以设置如下属性: + + * `hostNetwork: true` + * `hostIPC: true` + * `hostPID: true` + + +Pod 完全不使用卷是被允许的;如果使用卷,只允许使用以下卷类型: + + * configmap + * secret + * projected + * downwardAPI + * emptyDir + + +为了保证 Pod 可以读取这些卷中的文件,卷的创建操作就像你为 Pod 指定了 `.spec.securityContext.fsGroup` 为 `0` 一样。 +如果该属性被设定为不同值,那么这个不同值当然也会被使用。 + +作为一个副产品,这些卷的文件夹和文件将具有所给组的权限, +即使 `defaultMode` 或 volumes 的特定项目的 `mode` 被指定为没有组的权限。 +例如,不可以在挂载这些卷时使其文件只允许所有者访问。 \ No newline at end of file diff --git a/content/zh-cn/docs/contribute/advanced.md b/content/zh-cn/docs/contribute/advanced.md index 1fceb38050dd3..bdba79982752a 100644 --- a/content/zh-cn/docs/contribute/advanced.md +++ b/content/zh-cn/docs/contribute/advanced.md @@ -20,7 +20,6 @@ This page assumes that you understand how to to learn about more ways to contribute. You need to use the Git command line client and other tools for some of these tasks. --> - 如果你已经了解如何[贡献新内容](/zh-cn/docs/contribute/new-content/)和 [评阅他人工作](/zh-cn/docs/contribute/review/reviewing-prs/),并准备了解更多贡献的途径, 请阅读此文。你需要使用 Git 命令行工具和其他工具做这些工作。 @@ -35,7 +34,7 @@ can propose improvements. --> ## 提出改进建议 {#propose-improvements} -SIG Docs 的[成员](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#members) 可以提出改进建议。 +SIG Docs 的[成员](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#members)可以提出改进建议。 ## 为 Kubernetes 版本发布协调文档工作 {#coordinate-docs-for-a-kubernetes-release} -SIG Docs 的[批准者(approvers)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#approvers) +SIG Docs 的[批准人(Approver)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#approvers) 可以为 Kubernetes 版本发布协调文档工作。 - SIG Docs 团队的代表需要为一个指定的版本协调以下工作: - 通过特性跟踪表来监视新功能特性或现有功能特性的修改。 @@ -140,10 +138,9 @@ few PR submissions. Responsibilities for New Contributor Ambassadors include: --> - ## 担任新的贡献者大使 {#serve-as-a-new-contributor-ambassador} -SIG Docs [批准人(Approvers)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#approvers) +SIG Docs [批准人(Approver)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#approvers) 可以担任新的贡献者大使。 新的贡献者大使欢迎 SIG-Docs 的新贡献者,对新贡献者的 PR 提出建议, @@ -152,7 +149,7 @@ SIG Docs [批准人(Approvers)](/zh-cn/docs/contribute/participate/roles-and 新的贡献者大使的职责包括: - 监听 [Kubernetes #sig-docs 频道](https://kubernetes.slack.com) 上新贡献者的 Issue。 -- 与 PR 管理者合作为新参与者寻找[合适的第一个 issues](https://kubernetes.dev/docs/guide/help-wanted/#good-first-issue)。 +- 与 PR 管理者合作为新参与者寻找[合适的第一个 issue](https://kubernetes.dev/docs/guide/help-wanted/#good-first-issue)。 - 通过前几个 PR 指导新贡献者为文档存储库作贡献。 - 帮助新的贡献者创建成为 Kubernetes 成员所需的更复杂的 PR。 - [为贡献者提供保荐](#sponsor-a-new-contributor),使其成为 Kubernetes 成员。 @@ -179,7 +176,7 @@ can sponsor new contributors. 
--> ## 为新的贡献者提供保荐 {#sponsor-a-new-contributor} -SIG Docs 的[评审人(Reviewers)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#reviewers) +SIG Docs 的[评审人(Reviewer)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#reviewers) 可以为新的贡献者提供保荐。 ## 担任 SIG 联合主席 {#sponsor-a-new-contributor} -SIG Docs [成员(Members)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#members) +SIG Docs [成员(Member)](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#members) 可以担任 SIG Docs 的联合主席。 ### 前提条件 {#prerequisites} @@ -228,10 +225,10 @@ A Kubernetes member must meet the following requirements to be a co-chair: - Understand SIG Docs workflows and tooling: git, Hugo, localization, blog subproject - Understand how other Kubernetes SIGs and repositories affect the SIG Docs workflow, including: - [teams in k/org](https://github.com/kubernetes/org/blob/main/config/kubernetes/sig-docs/teams.yaml), the - [process in k/community](https://github.com/kubernetes/community/tree/main/sig-docs), + [teams in k/org](https://github.com/kubernetes/org/blob/master/config/kubernetes/sig-docs/teams.yaml), the + [process in k/community](https://github.com/kubernetes/community/tree/master/sig-docs), plugins in [k/test-infra](https://github.com/kubernetes/test-infra/), and the role of - [SIG Architecture](https://github.com/kubernetes/community/tree/main/sig-architecture). + [SIG Architecture](https://github.com/kubernetes/community/tree/master/sig-architecture). In addition, understand how the [Kubernetes docs release process](/docs/contribute/advanced/#coordinate-docs-for-a-kubernetes-release) works. - Approved by the SIG Docs community either directly or via lazy consensus. - Commit at least 5 hours per week (and often more) to the role for a minimum of 6 months @@ -240,8 +237,8 @@ Kubernetes 成员必须满足以下要求才能成为联合主席: - 理解 SIG Docs 工作流程和工具:git、Hugo、本地化、博客子项目 - 理解其他 Kubernetes SIG 和仓库会如何影响 SIG Docs 工作流程,包括: - [k/org 中的团队](https://github.com/kubernetes/org/blob/main/config/kubernetes/sig-docs/teams.yaml)、 - [k/community 中的流程](https://github.com/kubernetes/community/tree/main/sig-docs)、 + [k/org 中的团队](https://github.com/kubernetes/org/blob/master/config/kubernetes/sig-docs/teams.yaml)、 + [k/community 中的流程](https://github.com/kubernetes/community/tree/master/sig-docs)、 [k/test-infra](https://github.com/kubernetes/test-infra/) 中的插件、 [SIG Architecture](https://github.com/kubernetes/community/tree/main/sig-architecture) 中的角色。 此外,了解 [Kubernetes 文档发布流程](/zh-cn/docs/contribute/advanced/#coordinate-docs-for-a-kubernetes-release)的工作原理。 @@ -251,14 +248,21 @@ Kubernetes 成员必须满足以下要求才能成为联合主席: ### 职责范围 {#responsibilities} -联合主席主要提供以下服务: -联合主席负责处理流程和政策、时间安排和召开会议、安排 PR 管理员、以及一些其他人不想做的事情,目的是增长贡献者团队。 +联合主席的角色提供以下服务: + +- 拓展贡献者规模 +- 处理流程和政策 +- 安排时间和召开会议 +- 安排 PR 管理员 +- 在 Kubernetes 社区中提出文档倡议 +- 确保文档在 Kubernetes 发布周期中符合预期 +- 让 SIG Docs 专注于有效的优先事项 职责范围包括: @@ -354,17 +358,17 @@ Begin and end meetings on time. 
**有效利用 Zoom**: -- 熟悉 [ Kubernetes Zoom 指南](https://github.com/kubernetes/community/blob/main/communication/zoom-guidelines.md) +- 熟悉 [Kubernetes Zoom 指南](https://github.com/kubernetes/community/blob/master/communication/zoom-guidelines.md) - 输入主持人密钥登录时声明主持人角色 -声明 Zoom 角色 +声明 Zoom 主持人角色 +### SIG 联合主席 (Emeritus) 离职 {#offboarding-a-sig-cochair} + +参见 [k/community/sig-docs/offboarding.md](https://github.com/kubernetes/community/blob/master/sig-docs/offboarding.md) diff --git a/content/zh-cn/docs/reference/_index.md b/content/zh-cn/docs/reference/_index.md index 60f3d8746aa62..b42011c811f14 100644 --- a/content/zh-cn/docs/reference/_index.md +++ b/content/zh-cn/docs/reference/_index.md @@ -77,7 +77,7 @@ client libraries: ## CLI * [kubectl](/docs/reference/kubectl/) - Main CLI tool for running commands and managing Kubernetes clusters. - * [JSONPath](/docs/reference/kubectl/jsonpath/) - Syntax guide for using [JSONPath expressions](https://goessner.net/articles/JsonPath/) with kubectl. + * [JSONPath](/docs/reference/kubectl/jsonpath/) - Syntax guide for using [JSONPath expressions](https://goessner.net/articles/JsonPath/) with kubectl. * [kubeadm](/docs/reference/setup-tools/kubeadm/) - CLI tool to easily provision a secure Kubernetes cluster. --> ## CLI @@ -96,16 +96,19 @@ client libraries: * [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) - REST API that validates and configures data for API objects such as pods, services, replication controllers. -* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - Daemon that embeds the core control loops shipped with Kubernetes. +* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - + Daemon that embeds the core control loops shipped with Kubernetes. * [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - Can do simple TCP/UDP stream forwarding or round-robin TCP/UDP forwarding across a set of back-ends. -* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - Scheduler that manages availability, performance, and capacity. +* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - + Scheduler that manages availability, performance, and capacity. 
* [Scheduler Policies](/docs/reference/scheduling/policies) * [Scheduler Profiles](/docs/reference/scheduling/config#profiles) - * List of [ports and protocols](/docs/reference/networking/ports-and-protocols/) that - should be open on control plane and worker nodes + +* List of [ports and protocols](/docs/reference/networking/ports-and-protocols/) that + should be open on control plane and worker nodes --> ## 组件 {#components} @@ -122,7 +125,8 @@ client libraries: * [调度策略](/zh-cn/docs/reference/scheduling/policies) * [调度配置](/zh-cn/docs/reference/scheduling/config#profiles) - * 应该在控制平面和工作节点上打开的[端口和协议](/zh-cn/docs/reference/networking/ports-and-protocols/)列表 + +* 应该在控制平面和工作节点上打开的[端口和协议](/zh-cn/docs/reference/networking/ports-and-protocols/)列表 +[`ValidatingAdmissionPolicy`](#validatingadmissionpolicy) 准入插件默认被启用, +但只有启用 `ValidatingAdmissionPolicy` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) **和** +`admissionregistration.k8s.io/v1alpha1` API 时才会激活。 +{{< note >}} + @@ -897,8 +910,8 @@ and enforces kubelet modification of labels under the `kubernetes.io/` or `k8s.i * `kubernetes.io/os` * `beta.kubernetes.io/instance-type` * `node.kubernetes.io/instance-type` - * `failure-domain.beta.kubernetes.io/region` (已弃用) - * `failure-domain.beta.kubernetes.io/zone` (已弃用) + * `failure-domain.beta.kubernetes.io/region`(已弃用) + * `failure-domain.beta.kubernetes.io/zone`(已弃用) * `topology.kubernetes.io/region` * `topology.kubernetes.io/zone` * `kubelet.kubernetes.io/` 为前缀的标签 @@ -974,7 +987,7 @@ For more information about persistent volume claims, see [PersistentVolumeClaims 关于持久化卷申领的更多信息,请参见 [PersistentVolumeClaim](/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)。 -### PersistentVolumeLabel {#persistentvolumelabel} +### PersistentVolumeLabel {#persistentvolumelabel} {{< feature-state for_k8s_version="v1.13" state="deprecated" >}} @@ -1120,8 +1133,7 @@ for more information. --> 这是下节所讨论的已被废弃的 [PodSecurityPolicy](#podsecuritypolicy) 准入控制器的替代品。 此准入控制器负责在创建和修改 Pod 时,根据请求的安全上下文和 -[Pod 安全标准](/zh-cn/docs/concepts/security/pod-security-standards/) -来确定是否可以执行请求。 +[Pod 安全标准](/zh-cn/docs/concepts/security/pod-security-standards/)来确定是否可以执行请求。 更多信息请参阅 [Pod 安全性准入控制器](/zh-cn/docs/concepts/security/pod-security-admission/)。 @@ -1320,6 +1332,17 @@ conditions. 这些污点能够避免一些竞态条件的发生,而这类竞态条件可能导致 Pod 在更新节点污点以准确反映其所报告状况之前,就被调度到新节点上。 +### ValidatingAdmissionPolicy {#validatingadmissionpolicy} + + +[此准入控制器](/zh-cn/docs/reference/access-authn-authz/validating-admission-policy/)针对传入的匹配请求实现 +CEL 校验。当 `validatingadmissionpolicy` 和 `admissionregistration.k8s.io/v1alpha1` 特性门控组/版本被启用时, +此特性被启用。如果任意 ValidatingAdmissionPolicy 失败,则请求失败。 + ### ValidatingAdmissionWebhook {#validatingadmissionwebhook} 你必须在 API 服务器上设置 `--enable-bootstrap-token-auth` 标志来启用基于启动引导令牌的身份认证组件。 @@ -495,26 +495,26 @@ sequenceDiagram {{< /mermaid >}} -1. 登录到你的身份服务(Identity Provider) -2. 你的身份服务将为你提供 `access_token`、`id_token` 和 `refresh_token` -3. 在使用 `kubectl` 时,将 `id_token` 设置为 `--token` 标志值,或者将其直接添加到 - `kubeconfig` 中 -4. `kubectl` 将你的 `id_token` 放到一个称作 `Authorization` 的头部,发送给 API 服务器 -5. API 服务器将负责通过检查配置中引用的证书来确认 JWT 的签名是合法的 -6. 检查确认 `id_token` 尚未过期 -7. 确认用户有权限执行操作 -8. 鉴权成功之后,API 服务器向 `kubectl` 返回响应 -9. `kubectl` 向用户提供反馈信息 +1. Login to your identity provider +2. Your identity provider will provide you with an `access_token`, `id_token` and a `refresh_token` +3. When using `kubectl`, use your `id_token` with the `--token` flag or add it directly to your `kubeconfig` +4. 
`kubectl` sends your `id_token` in a header called Authorization to the API server +5. The API server will make sure the JWT signature is valid by checking against the certificate named in the configuration +6. Check to make sure the `id_token` hasn't expired +7. Make sure the user is authorized +8. Once authorized the API server returns a response to `kubectl` +9. `kubectl` provides feedback to the user +--> +1. 登录到你的身份服务(Identity Provider) +2. 你的身份服务将为你提供 `access_token`、`id_token` 和 `refresh_token` +3. 在使用 `kubectl` 时,将 `id_token` 设置为 `--token` 标志值,或者将其直接添加到 + `kubeconfig` 中 +4. `kubectl` 将你的 `id_token` 放到一个称作 `Authorization` 的头部,发送给 API 服务器 +5. API 服务器将负责通过检查配置中引用的证书来确认 JWT 的签名是合法的 +6. 检查确认 `id_token` 尚未过期 +7. 确认用户有权限执行操作 +8. 鉴权成功之后,API 服务器向 `kubectl` 返回响应 +9. `kubectl` 向用户提供反馈信息 ### Webhook 令牌身份认证 {#webhook-token-authentication} @@ -744,6 +746,9 @@ Webhook 身份认证是一种用来验证持有者令牌的回调机制。 其中描述如何访问远程的 Webhook 服务。 * `--authentication-token-webhook-cache-ttl` 用来设定身份认证决定的缓存时间。 默认时长为 2 分钟。 +* `--authentication-token-webhook-version` 决定是使用 `authentication.k8s.io/v1beta1` 还是 + `authenticationk8s.io/v1` 版本的 `TokenReview` 对象从 webhook 发送/接收信息。 + 默认为“v1beta1”。 +## 为客户端提供的对身份验证信息的 API 访问 {#self-subject-review} + +{{< feature-state for_k8s_version="v1.26" state="alpha" >}} + + +如果集群启用了此 API,你可以使用 `SelfSubjectReview` API 来了解 Kubernetes +集群如何映射你的身份验证信息从而将你识别为某客户端。无论你是作为用户(通常代表一个真的人)还是作为 +ServiceAccount 进行身份验证,这一 API 都可以使用。 + +`SelfSubjectReview` 对象没有任何可配置的字段。 +Kubernetes API 服务器收到请求后,将使用用户属性填充 status 字段并将其返回给用户。 + +请求示例(主体将是 `SelfSubjectReview`): + +``` +POST /apis/authentication.k8s.io/v1alpha1/selfsubjectreviews +``` + +```json +{ + "apiVersion": "authentication.k8s.io/v1alpha1", + "kind": "SelfSubjectReview" +} +``` + + +响应示例: + +```json +{ + "apiVersion": "authentication.k8s.io/v1alpha1", + "kind": "SelfSubjectReview", + "status": { + "userInfo": { + "name": "jane.doe", + "uid": "b6c7cfd4-f166-11ec-8ea0-0242ac120002", + "groups": [ + "viewers", + "editors", + "system:authenticated" + ], + "extra": { + "provider_id": ["token.company.example"] + } + } + } +} +``` + + +为了方便,Kubernetes 提供了 `kubectl alpha auth whoami` 命令。 +执行此命令将产生以下输出(但将显示不同的用户属性): + +* 简单的输出示例 + + ``` + ATTRIBUTE VALUE + Username jane.doe + Groups [system:authenticated] + ``` + + +* 包括额外属性的复杂示例 + + ``` + ATTRIBUTE VALUE + Username jane.doe + UID b79dbf30-0c6a-11ed-861d-0242ac120002 + Groups [students teachers system:authenticated] + Extra: skills [reading learning] + Extra: subjects [math sports] + ``` + + +通过提供 output 标志,也可以打印结果的 JSON 或 YAML 表现形式: + +{{< tabs name="self_subject_attributes_review_Example_1" >}} +{{% tab name="JSON" %}} +```json +{ + "apiVersion": "authentication.k8s.io/v1alpha1", + "kind": "SelfSubjectReview", + "status": { + "userInfo": { + "username": "jane.doe", + "uid": "b79dbf30-0c6a-11ed-861d-0242ac120002", + "groups": [ + "students", + "teachers", + "system:authenticated" + ], + "extra": { + "skills": [ + "reading", + "learning" + ], + "subjects": [ + "math", + "sports" + ] + } + } + } +} +``` +{{% /tab %}} + +{{% tab name="YAML" %}} +```yaml +apiVersion: authentication.k8s.io/v1alpha1 +kind: SelfSubjectReview +status: + userInfo: + username: jane.doe + uid: b79dbf30-0c6a-11ed-861d-0242ac120002 + groups: + - students + - teachers + - system:authenticated + extra: + skills: + - reading + - learning + subjects: + - math + - sports +``` +{{% /tab %}} +{{< /tabs >}} + + +在 Kubernetes 集群中使用复杂的身份验证流程时,例如如果你使用 +[Webhook 
令牌身份验证](/zh-cn/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)或[身份验证代理](/zh-cn/docs/reference/access-authn-authz/authentication/#authenticating-proxy)时, +此特性极其有用。 + +{{< note >}} + +Kubernetes API 服务器在所有身份验证机制 +(包括[伪装](/zh-cn/docs/reference/access-authn-authz/authentication/#user-impersonation)), +被应用后填充 `userInfo`, +如果你或某个身份验证代理使用伪装进行 SelfSubjectReview,你会看到被伪装用户的用户详情和属性。 +{{< /note >}} + + +默认情况下,所有经过身份验证的用户都可以在 `APISelfSubjectReview` 特性被启用时创建 `SelfSubjectReview` 对象。 +这是 `system:basic-user` 集群角色允许的操作。 + +{{< note >}} + +你只能在以下情况下进行 `SelfSubjectReview` 请求: + +* 集群启用了 `APISelfSubjectReview` + [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) +* 集群的 API 服务器已启用 `authentication.k8s.io/v1alpha1` + {{< glossary_tooltip term_id="api-group" text="API 组" >}}。。 +{{< /note >}} + ## {{% heading "whatsnext" %}} -scope 字段指定是仅集群范围的资源(Cluster)还是名字空间范围的资源资源(Namespaced)将与此规则匹配。 +`scope` 字段指定是仅集群范围的资源(Cluster)还是名字空间范围的资源资源(Namespaced)将与此规则匹配。 `*` 表示没有范围限制。 1. kubelet 启动 @@ -271,7 +271,7 @@ of provisioning. 2. [令牌认证文件](#token-authentication-file) 启动引导令牌是一种对 kubelet 进行身份认证的方法,相对简单且容易管理, 且不需要在启动 kube-apiserver 时设置额外的标志。 @@ -374,7 +374,7 @@ head -c 16 /dev/urandom | od -An -t x | tr -d ' ' ``` @@ -499,7 +499,7 @@ kubelet 身份认证,很重要的一点是为控制器管理器所提供的 CA ``` 例如: @@ -594,7 +594,7 @@ by default. The controller uses the [`SubjectAccessReview` API](/docs/reference/access-authn-authz/authorization/#checking-api-access) to determine if a given user is authorized to request a CSR, then approves based on the authorization outcome. To prevent conflicts with other approvers, the -builtin approver doesn't explicitly deny CSRs. It only ignores unauthorized +built-in approver doesn't explicitly deny CSRs. It only ignores unauthorized requests. The controller also prunes expired certificates as part of garbage collection. --> @@ -760,37 +760,41 @@ TLS 启动引导所提供的客户端证书默认被签名为仅用于 `client a ### 证书轮换 {#certificate-rotation} -Kubernetes v1.8 和更高版本的 kubelet 实现了对客户端证书与/或服务证书进行轮换 -这一 Beta 特性。这一特性通过 kubelet 对应的 `RotateKubeletClientCertificate` 和 -`RotateKubeletServerCertificate` 特性门控标志来控制,并且是默认启用的。 +Kubernetes v1.8 和更高版本的 kubelet 实现了对客户端证书与/或服务证书进行轮换这一特性。 +请注意,服务证书轮换是一项 **Beta** 特性,需要 kubelet 上 `RotateKubeletServerCertificate` 特性的支持(默认启用) -`RotateKubeletClientCertificate` 会导致 kubelet 在其现有凭据即将过期时通过创建新的 -CSR 来轮换其客户端证书。要启用此功能特性,可将下面的标志传递给 kubelet: +你可以配置 kubelet 使其在现有凭据过期时通过创建新的 CSR 来轮换其客户端证书。 +要启用此功能,请使用 [kubelet 配置文件](/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/)的 +`rotateCertificates` 字段或将以下命令行参数传递给 kubelet(已弃用): ``` --rotate-certificates ``` -`RotateKubeletServerCertificate` 会让 kubelet 在启动引导其客户端凭据之后请求一个服务证书 -**且** 对该服务证书执行轮换操作。要启用此功能特性,将下面的标志传递给 kubelet: +启用 `RotateKubeletServerCertificate` 会让 kubelet +在启动引导其客户端凭据之后请求一个服务证书**且**对该服务证书执行轮换操作。 +要启用此特性,请使用 [kubelet 配置文件](/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/)的 +`serverTLSBootstrap` 字段将以下命令行参数传递给 kubelet(已弃用): ``` --rotate-server-certificates @@ -812,12 +816,12 @@ CSR 批复控制器并不会自动批复节点的**服务**证书。 对 kubelet 服务证书的批复过程因集群部署而异,通常应该仅批复如下 CSR: @@ -865,7 +869,7 @@ You have several options for generating these credentials: ## kubectl 批复 {#kubectl-approval} @@ -879,9 +883,9 @@ appropriately-privileged user. This flow is intended to allow for automated approval handled by an external approval controller or the approval controller implemented in the core controller-manager. However cluster administrators can also manually approve certificate requests using kubectl. 
An administrator can -list CSRs with `kubectl get csr` and describe one in detail with `kubectl -describe csr `. An administrator can approve or deny a CSR with `kubectl -certificate approve ` and `kubectl certificate deny `. +list CSRs with `kubectl get csr` and describe one in detail with +`kubectl describe csr `. An administrator can approve or deny a CSR with +`kubectl certificate approve ` and `kubectl certificate deny `. --> 签名控制器并不会立即对所有证书请求执行签名操作。相反, 它会等待这些请求被某具有适当特权的用户标记为 “Approved(已批准)”状态。 diff --git a/content/zh-cn/docs/reference/access-authn-authz/rbac.md b/content/zh-cn/docs/reference/access-authn-authz/rbac.md index 1346e60485981..482be53b4dce8 100644 --- a/content/zh-cn/docs/reference/access-authn-authz/rbac.md +++ b/content/zh-cn/docs/reference/access-authn-authz/rbac.md @@ -610,8 +610,6 @@ objects with an `aggregationRule` set. The `aggregationRule` defines a label {{< glossary_tooltip text="selector" term_id="selector" >}} that the controller uses to match other ClusterRole objects that should be combined into the `rules` field of this one. - -Here is an example aggregated ClusterRole: --> ### 聚合的 ClusterRole {#aggregated-clusterroles} @@ -620,9 +618,19 @@ Here is an example aggregated ClusterRole: 为控制器定义一个标签{{< glossary_tooltip text="选择算符" term_id="selector" >}}供后者匹配应该组合到当前 ClusterRole 的 `roles` 字段中的 ClusterRole 对象。 -下面是一个聚合 ClusterRole 的示例: +{{< caution >}} + +控制平面会覆盖你在聚合 ClusterRole 的 `rules` 字段中手动指定的所有值。 +如果你想更改或添加规则,请在被 `aggregationRule` 所选中的 `ClusterRole` 对象上执行变更。 +{{< /caution >}} +下面是一个聚合 ClusterRole 的示例: + ```yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole diff --git a/content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md b/content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md index d9d6673592170..6d2f971a13e3d 100644 --- a/content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md +++ b/content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md @@ -160,7 +160,7 @@ each source also represents a single path within that volume. The three sources 1. A `configMap` source. The ConfigMap contains a bundle of certificate authority data. Pods can use these certificates to make sure that they are connecting to your cluster's kube-apiserver (and not to middlebox or an accidentally misconfigured peer). -1. A `downwardAPI` source that looks up the name of thhe namespace containing the Pod, and makes +1. A `downwardAPI` source that looks up the name of the namespace containing the Pod, and makes that name information available to application code running inside the Pod. --> 该清单片段定义了由三个数据源组成的投射卷。在当前场景中,每个数据源也代表该卷内的一条独立路径。这三个数据源是: @@ -315,7 +315,7 @@ it does the following when a Pod is created: `/var/run/secrets/kubernetes.io/serviceaccount`. For Linux containers, that volume is mounted at `/var/run/secrets/kubernetes.io/serviceaccount`; on Windows nodes, the mount is at the equivalent path. -1. If the spec of the incoming Pod does already contain any `imagePullSecrets`, then the +1. If the spec of the incoming Pod doesn't already contain any `imagePullSecrets`, then the admission controller adds `imagePullSecrets`, copying them from the `ServiceAccount`. --> 3. 如果服务账号的 `automountServiceAccountToken` 字段或 Pod 的 @@ -326,7 +326,7 @@ it does the following when a Pod is created: 忽略已为 `/var/run/secrets/kubernetes.io/serviceaccount` 路径定义的卷挂载的所有容器。 对于 Linux 容器,此卷挂载在 `/var/run/secrets/kubernetes.io/serviceaccount`; 在 Windows 节点上,此卷挂载在等价的路径上。 -4. 
如果新来 Pod 的规约已包含任何 `imagePullSecrets`,则准入控制器添加 `imagePullSecrets`, +4. 如果新来 Pod 的规约不包含任何 `imagePullSecrets`,则准入控制器添加 `imagePullSecrets`, 并从 `ServiceAccount` 进行复制。 ### TokenRequest API @@ -392,14 +392,14 @@ kubelet 确保该卷包含允许容器作为正确 ServiceAccount 进行身份 该清单片段定义了由三个数据源信息组成的投射卷。 @@ -536,7 +536,7 @@ metadata: selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing uid: f23fd170-66f2-4697-b049-e1e266b7f835 secrets: -- name: example-automated-thing-token-zyxwv + - name: example-automated-thing-token-zyxwv ``` 有关 Kubernetes 组件仍可识别的特性门控,请参阅 -[Alpha 和 Beta 状态的特性门控](/zh-cn/docs/reference/command-line-tools/reference/feature-gates/#feature-gates-for-alpha-or-beta-features)或 -[已毕业和已废弃的特性门控](/zh-cn/docs/reference/command-line-tools/reference/feature-gates/#feature-gates-for-graduated-or-deprecated-features)。 +[Alpha 和 Beta 状态的特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features)或 +[已毕业和已废弃的特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-graduated-or-deprecated-features)。 - `Accelerators`:使用 Docker Engine 时启用 Nvidia GPU 支持。这一特性不再提供。 关于替代方案,请参阅[设备插件](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)。 @@ -347,6 +387,8 @@ In the following table: - `AllowExtTrafficLocalEndpoints`:启用服务用于将外部请求路由到节点本地终端。 +- `AllowInsecureBackendProxy`:允许用户在请求 Pod 日志时跳过 kubelet 的 TLS 验证。 + -- `CSIMigrationGCEComplete`:停止在 kubelet 和卷控制器中注册 GCE-PD 内嵌插件, - 并启用 shims 和转换逻辑以将卷操作从 GCE-PD 内嵌插件路由到 PD CSI 插件。 - 这需要启用 CSIMigration 和 CSIMigrationGCE 特性标志,并在集群中的所有节点上安装和配置 - PD CSI 插件。该特性标志已被废弃,取而代之的是能防止注册内嵌 GCE PD 插件的 - `InTreePluginGCEUnregister` 特性标志。 +- `CSIMigrationOpenStack`:确保填充和转换逻辑能够将卷操作从 Cinder 内嵌插件路由到 + Cinder CSI 插件。对于禁用了此特性的节点或者没有安装并配置 Cinder CSI 插件的节点, + 支持回退到内嵌(in-tree)Cinder 插件来执行挂载操作。 + 不支持回退到内嵌插件来执行制备操作,因为对应的 CSI 插件必须已安装且正确配置。 + 此磁特性需要启用 CSIMigration 特性标志。 + +- `CSIMigrationOpenStack`:启用垫片和转换逻辑以将卷操作从 Cinder in-tree + 插件路由到 Cinder CSI 插件。支持回退到树内 Cinder 插件, + 以便对禁用该功能或未安装和配置 Cinder CSI 插件的节点进行挂载操作。 + 不支持回退供应操作,对于那些必须安装和配置 CSI 插件。需要启用 CSIMigration 功能标志。 @@ -582,6 +645,9 @@ In the following table: [CustomResourceDefinition](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/) 创建的资源启用基于模式的验证。 +- `DefaultPodTopologySpread`:启用 `PodTopologySpread` 调度插件来完成 + [默认的调度传播](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints)。 + - `CustomResourceWebhookConversion`:对于用 [CustomResourceDefinition](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/) 创建的资源启用基于 Webhook 的转换。 @@ -589,6 +655,10 @@ In the following table: - `DynamicAuditing`:在 v1.19 版本前用于启用动态审计。 -- `DynamicProvisioningScheduling`:扩展默认调度器以了解卷拓扑并处理 PV 配置。 +- `DynamicKubeletConfig`:启用 kubelet 的动态配置。 + 除偏差策略场景外,不再支持该功能。该特性门控在 kubelet 1.24 版本中已被移除。 + 请参阅[重新配置 kubelet](/zh-cn/docs/tasks/administer-cluster/reconfigure-kubelet/)。 + +- `DynamicProvisioningScheduling`:扩展默认调度器以了解卷拓扑并处理 PV 制备。 此特性已在 v1.12 中完全被 `VolumeScheduling` 特性取代。 - `DynamicVolumeProvisioning`:启用持久化卷到 Pod - 的[动态预配置](/zh-cn/docs/concepts/storage/dynamic-provisioning/)。 + 的[动态制备](/zh-cn/docs/concepts/storage/dynamic-provisioning/)。 - `HyperVContainer`:为 Windows 容器启用 [Hyper-V 隔离](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container)。 @@ -701,6 +778,9 @@ In the following table: - `ImmutableEphemeralVolumes`:允许将各个 Secret 和 ConfigMap 标记为不可变更的, 以提高安全性和性能。 +- `IndexedJob`:允许 [Job](/zh-cn/docs/concepts/workloads/controllers/job/) + 控制器根据完成索引来管理 Pod 完成。 + 
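作为对上面 `IndexedJob` 条目的补充,下面给出一个使用索引完成模式的 Job 的最小清单,仅作示意(名称、镜像等均为假设值):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo        # 假设的名称,仅用于演示
spec:
  completions: 3
  parallelism: 3
  completionMode: Indexed   # 按完成索引(0、1、2)分别跟踪各个 Pod 的完成情况
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox      # 假设的镜像
        command: ["sh", "-c", "echo 正在处理索引 $JOB_COMPLETION_INDEX"]
```

每个 Pod 可以通过 `JOB_COMPLETION_INDEX` 环境变量读取自己的完成索引。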
@@ -770,14 +852,24 @@ In the following table: - `NodeLease`:启用新的 Lease(租期)API 以报告节点心跳,可用作节点运行状况信号。 +- `NonPreemptingPriority`:为 PriorityClass 和 Pod 启用 `preemptionPolicy` 选项。 + - `PVCProtection`:当 PersistentVolumeClaim (PVC) 仍然在 Pod 使用时被删除,启用保护。 - `SelectorIndex`: 允许使用 API 服务器的 watch 缓存中基于标签和字段的索引来加速 list 操作。 -- `ServiceAccountIssuerDiscovery`:在 API 服务器中为服务帐户颁发者启用 OIDC 发现端点 +- `ServiceAccountIssuerDiscovery`:在 API 服务器中为服务账号颁发者启用 OIDC 发现端点 (颁发者和 JWKS URL)。详情参见 - [为 Pod 配置服务账户](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery)。 + [为 Pod 配置服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery)。 - `ServiceAppProtocol`:为 Service 和 Endpoints 启用 `appProtocol` 字段。 +- `ServiceLoadBalancerClass`: 为服务启用 `loadBalancerClass` 字段。 + 有关更多信息,请参见[指定负载均衡器实现类](/zh-cn/docs/concepts/services-networking/service/#load-balancer-class)。 + - `ServiceLoadBalancerFinalizer`:为服务负载均衡启用终结器(finalizers)保护。 +- `ServiceLBNodePortControl`:为服务启用 `allocateLoadBalancerNodePorts` 字段。 - `SupportPodPidsLimit`:启用支持限制 Pod 中的进程 PID。 +- `SuspendJob`:启用对追加和恢复 Job 的支持。更多细节请参阅 + [Job 文档](/zh-cn/docs/concepts/workloads/controllers/job/)。 + - `Sysctls`:允许为每个 Pod 设置的名字空间内核参数(sysctls)。 更多详细信息,请参见 [sysctls](/zh-cn/docs/tasks/administer-cluster/sysctl-cluster/)。 @@ -987,12 +1109,12 @@ In the following table: - `TaintNodesByCondition`: 根据[节点状况](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/)启用自动为节点标记污点。 -- `TokenRequest`:在服务帐户资源上启用 `TokenRequest` 端点。 +- `TokenRequest`:在服务账号资源上启用 `TokenRequest` 端点。 - `TokenRequestProjection`:启用通过 - [`projected` 卷](/zh-cn/docs/concepts/storage/volumes/#projected)将服务帐户令牌注入到 Pod 中的特性。 + [`projected` 卷](/zh-cn/docs/concepts/storage/volumes/#projected)将服务账号令牌注入到 Pod 中的特性。 -- `ValidateProxyRedirects`: 这个标志控制 API 服务器是否应该验证只跟随到相同的主机的重定向。 +- `ValidateProxyRedirects`:这个标志控制 API 服务器是否应该验证只跟随到相同的主机的重定向。 仅在启用 `StreamingProxyRedirects` 标志时被使用。 - `AdvancedAuditing`:启用[高级审计功能](/zh-cn/docs/tasks/debug/debug-cluster/audit/#advanced-audit)。 -- `AllowExtTrafficLocalEndpoints`:启用服务用于将外部请求路由到节点本地终端。 +- `AggregatedDiscoveryEndpoint`:启用单个 HTTP 端点 `/discovery/`, + 支持用 ETag 进行原生 HTTP 缓存,包含 API 服务器已知的所有 APIResource。 - `AnyVolumeDataSource`:允许使用任何自定义的资源来做作为 {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}} 中的 `DataSource`。 - `AppArmor`:在 Linux 节点上为 Pod 启用 AppArmor 机制的强制访问控制。 @@ -604,28 +614,14 @@ Each feature gate is designed for enabling/disabling a specific feature: 不支持回退到内嵌插件来执行制备操作,因为对应的 CSI 插件必须已安装且正确配置。 此特性需要启用 CSIMigration 特性标志。 -- `CSIMigrationOpenStack`:确保填充和转换逻辑能够将卷操作从 Cinder 内嵌插件路由到 - Cinder CSI 插件。对于禁用了此特性的节点或者没有安装并配置 Cinder CSI 插件的节点, - 支持回退到内嵌(in-tree)Cinder 插件来执行挂载操作。 - 不支持回退到内嵌插件来执行制备操作,因为对应的 CSI 插件必须已安装且正确配置。 - 此磁特性需要启用 CSIMigration 特性标志。 - -- `csiMigrationRBD`:启用填充和转换逻辑,将卷操作从 RBD 的内嵌插件路由到 Ceph RBD +- `CSIMigrationRBD`:启用填充和转换逻辑,将卷操作从 RBD 的内嵌插件路由到 Ceph RBD CSI 插件。此特性要求 CSIMigration 和 csiMigrationRBD 特性标志均被启用, 且集群中安装并配置了 Ceph CSI 插件。此标志已被弃用,以鼓励使用 `InTreePluginRBDUnregister` 特性标志。后者会禁止注册内嵌的 RBD 插件。 @@ -665,16 +661,21 @@ Each feature gate is designed for enabling/disabling a specific feature: 详情请参见 [`csi` 卷类型](/zh-cn/docs/concepts/storage/volumes/#csi)。 -- `CSIVolumeHealth`:启用对节点上的 CSI volume 运行状况监控的支持 -- `CSRDuration`:允许客户端来通过请求 Kubernetes CSR API 签署的证书的持续时间。 +- `CSIVolumeHealth`:启用对节点上的 CSI volume 运行状况监控的支持。 +- `ComponentSLIs`: 在 kubelet、kube-scheduler、kube-proxy、kube-controller-manager、cloud-controller-manager + 等 Kubernetes 组件上启用 `/metrics/slis` 端点,从而允许你抓取健康检查指标。 +- 
`ConsistentHTTPGetHandlers`:使用探测器为生命周期处理程序规范化 HTTP get URL 和标头传递。 - `ContextualLogging`:当你启用这个特性门控,支持日志上下文记录的 Kubernetes 组件会为日志输出添加额外的详细内容。 - `ControllerManagerLeaderMigration`:为 `kube-controller-manager` 和 `cloud-controller-manager` @@ -682,12 +683,17 @@ Each feature gate is designed for enabling/disabling a specific feature: - `CronJobTimeZone`:允许在 [CronJobs](/zh-cn/docs/concepts/workloads/controllers/cron-jobs/) 中使用 `timeZone` 可选字段。 +- `CrossNamespaceVolumeDataSource`:启用跨名字空间卷数据源,以允许你在 PersistentVolumeClaim + 的 `dataSourceRef` 字段中指定一个源名字空间。 - `CustomCPUCFSQuotaPeriod`:使节点能够更改 [kubelet 配置](/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/)中的 `cpuCFSQuotaPeriod`。 - `CustomResourceValidationExpressions`:启用 CRD 中的表达式语言合法性检查, @@ -700,8 +706,6 @@ Each feature gate is designed for enabling/disabling a specific feature: - `DaemonSetUpdateSurge`:使 DaemonSet 工作负载在每个节点的更新期间保持可用性。 参阅[对 DaemonSet 执行滚动更新](/zh-cn/docs/tasks/manage-daemon/update-daemon-set/)。 -- `DefaultPodTopologySpread`:启用 `PodTopologySpread` 调度插件来完成 - [默认的调度传播](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints)。 - `DelegateFSGroupToCSIDriver`:如果 CSI 驱动程序支持,则通过 NodeStageVolume 和 NodePublishVolume CSI 调用传递 `fsGroup`,将应用 `fsGroup` 从 Pod 的 `securityContext` 的角色委托给驱动。 @@ -739,19 +741,14 @@ Each feature gate is designed for enabling/disabling a specific feature: - `DryRun`:启用在服务器端对请求进行[试运行(Dry Run)](/zh-cn/docs/reference/using-api/api-concepts/#dry-run), 以便测试验证、合并和修改,同时避免提交更改。 -- `DynamicKubeletConfig`:启用 kubelet 的动态配置。 - 除偏差策略场景外,不再支持该功能。该特性门控在 kubelet 1.24 版本中已被移除。 - 请参阅[重新配置 kubelet](/zh-cn/docs/tasks/administer-cluster/reconfigure-kubelet/)。 - +- `DynamicResourceAllocation`:启用对具有自定义参数和独立于 Pod 生命周期的资源的支持。 - `EndpointSliceTerminatingCondition`:允许使用 EndpointSlice 的 `terminating` 和 `serving` 状况字段。 - `EfficientWatchResumption`:允许将存储发起的书签(进度通知)事件传递给用户。 @@ -760,6 +757,14 @@ Each feature gate is designed for enabling/disabling a specific feature: - `EphemeralContainers`: Enable the ability to add {{< glossary_tooltip text="ephemeral containers" term_id="ephemeral-container" >}} to running pods. +- `EventedPLEG`: Enable support for the kubelet to receive container life cycle events from the + {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}} via + an extension to {{}}. + (PLEG is an abbreviation for “Pod lifecycle event generator”). + For this feature to be useful, you also need to enable support for container lifecycle events + in each container runtime running in your cluster. If the container runtime does not announce + support for container lifecycle events then the kubelet automatically switches to the legacy + generic PLEG mechanism, even if you have this feature gate enabled. - `ExecProbeTimeout`: Ensure kubelet respects exec probe timeouts. This feature gate exists in case any of your existing workloads depend on a now-corrected fault where Kubernetes ignored exec probe timeouts. 
See @@ -768,6 +773,11 @@ Each feature gate is designed for enabling/disabling a specific feature: - `EphemeralContainers`:启用添加 {{< glossary_tooltip text="临时容器" term_id="ephemeral-container" >}} 到正在运行的 Pod 的特性。 +- `EventedPLEG`:启用此特性后,kubelet 能够通过 {{}} + 扩展从{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}接收容器生命周期事件。 + (PLEG 是 “Pod lifecycle event generator” 的缩写,即 Pod 生命周期事件生成器)。 + 要使用此特性,你还需要在集群中运行的每个容器运行时中启用对容器生命周期事件的支持。 + 如果容器运行时未宣布支持容器生命周期事件,即使你已启用了此特性门控,kubelet 也会自动切换到原有的通用 PLEG 机制。 - `ExecProbeTimeout`:确保 kubelet 会遵从 exec 探针的超时值设置。 此特性门控的主要目的是方便你处理现有的、依赖于已被修复的缺陷的工作负载; 该缺陷导致 Kubernetes 会忽略 exec 探针的超时值设置。 @@ -838,16 +848,14 @@ Each feature gate is designed for enabling/disabling a specific feature: - `HPAScaleToZero`:使用自定义指标或外部指标时,可将 `HorizontalPodAutoscaler` 资源的 `minReplicas` 设置为 0。 -- `IPTablesOwnershipCleanup`:这使得 kubelet 不再创建传统的 IPTables 规则。 +- `IPTablesOwnershipCleanup`:这使得 kubelet 不再创建传统的 iptables 规则。 - `KubeletPodResources`:启用 kubelet 上 Pod 资源 GRPC 端点。更多详细信息, 请参见[支持设备监控](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md)。 @@ -967,6 +974,8 @@ Each feature gate is designed for enabling/disabling a specific feature: 获取更多详细信息。 - `LegacyServiceAccountTokenNoAutoGeneration`:停止基于 Secret 自动生成[服务账号令牌](/zh-cn/docs/reference/access-authn-authz/authentication/#service-account-tokens)。 +- `LegacyServiceAccountTokenTracking`:跟踪使用基于 Secret + 的[服务账号令牌](/zh-cn/docs/reference/access-authn-authz/authentication/#service-account-tokens)。 - `LogarithmicScaleDown`:启用 Pod 的半随机(semi-random)选择,控制器将根据 Pod 时间戳的对数桶按比例缩小去驱逐 Pod。 +- `LoggingAlphaOptions`:允许微调实验性的、Alpha 质量的日志选项。 +- `LoggingBetaOptions`:允许微调实验性的、Beta 质量的日志选项。 - `MatchLabelKeysInPodTopologySpread`:为 - [Pod 拓扑分布约束](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/)启用 `matchLabelKeys` 字段。 + [Pod 拓扑分布约束](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/) + 启用 `matchLabelKeys` 字段。 - `MaxUnavailableStatefulSet`:启用为 StatefulSet 的[滚动更新策略](/zh-cn/docs/concepts/workloads/controllers/statefulset/#rolling-updates)设置 `maxUnavailable` 字段。该字段指定更新过程中不可用 Pod 个数的上限。 @@ -1019,6 +1035,7 @@ Each feature gate is designed for enabling/disabling a specific feature: - `MemoryQoS`:使用 cgroup v2 内存控制器在 Pod / 容器上启用内存保护和使用限制。 - `MinDomainsInPodTopologySpread`:在 [Pod 拓扑分布约束](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/)中启用 `minDomains`。 +- `MinimizeIPTablesRestore`:在 kube-proxy iptables 模式中启用新的性能改进逻辑。 - `MixedProtocolLBService`:允许在同一 `LoadBalancer` 类型的 Service 实例中使用不同的协议。 - `NodeOutOfServiceVolumeDetach`:当使用 `node.kubernetes.io/out-of-service` 污点将节点标记为停止服务时,节点上不能容忍这个污点的 Pod 将被强制删除, @@ -1054,43 +1073,32 @@ Each feature gate is designed for enabling/disabling a specific feature: - `NodeSwap`:启用 kubelet 为节点上的 Kubernetes 工作负载分配交换内存的能力。 必须将 `KubeletConfiguration.failSwapOn` 设置为 false 的情况下才能使用。 更多详细信息,请参见[交换内存](/zh-cn/docs/concepts/architecture/nodes/#swap-memory)。 -- `NonPreemptingPriority`:为 PriorityClass 和 Pod 启用 `preemptionPolicy` 选项。 - `OpenAPIEnums`:允许在从 API 服务器返回的 spec 中填充 OpenAPI 模式的 "enum" 字段。 - `OpenAPIV3`:允许 API 服务器发布 OpenAPI V3。 +- `PDBUnhealthyPodEvictionPolicy`:启用 `PodDisruptionBudget` 的 `unhealthyPodEvictionPolicy` 字段。 + 此字段指定何时应考虑驱逐不健康的 Pod。 + 更多细节请参阅[不健康 Pod 驱逐策略](/zh-cn/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy)。 - `PodDeletionCost`:启用 [Pod 删除成本](/zh-cn/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost)功能。 该功能使用户可以影响 ReplicaSet 的降序顺序。 -- `PodAffinityNamespaceSelector`:启用 [Pod 
亲和性名字空间选择算符](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#namespace-selector)和 - [CrossNamespacePodAffinity](/zh-cn/docs/concepts/policy/resource-quotas/#cross-namespace-pod-affinity-quota) - 资源配额功能。 - `PodAndContainerStatsFromCRI`:配置 kubelet 从 CRI 容器运行时中而不是从 cAdvisor 中采集容器和 Pod 统计信息。 + 从 1.26 开始,这还包括从 CRI 收集指标并通过 `/metrics/cadvisor` 输出这些指标(而不是让 cAdvisor 直接输出)。 - `PodDisruptionConditions`:启用支持追加一个专用的 Pod 状况,以表示 Pod 由于某个干扰正在被删除。 - `PodHasNetworkCondition`:使得 kubelet 能够对 Pod 标记 [PodHasNetwork](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-has-network) 状况。 -- `PodOverhead`:启用 [PodOverhead](/zh-cn/docs/concepts/scheduling-eviction/pod-overhead/) - 特性以考虑 Pod 开销。 +- `PodSchedulingReadiness`:启用设置 `schedulingGates` 字段以控制 Pod 的[调度就绪](/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness)。 - `PodSecurity`: 开启 `PodSecurity` 准入控制插件。 -- `PreferNominatedNode`: 这个标志告诉调度器在循环遍历集群中的所有其他节点之前, - 是否首先检查指定的节点。 -- `SELinuxMountReadWriteOncePod`:通过使用正确的 SELinux - 标签挂载卷而不是以递归方式更改这些卷上的每个文件来加速容器启动。最初的实现侧重 ReadWriteOncePod 卷。 +- `SELinuxMountReadWriteOncePod`:通过允许 kubelet 直接用正确的 SELinux + 标签为 Pod 挂载卷而不是以递归方式更改这些卷上的每个文件来加速容器启动。最初的实现侧重 ReadWriteOncePod 卷。 - `SeccompDefault`: 允许将所有工作负载的默认 seccomp 配置文件为 `RuntimeDefault`。 seccomp 配置在 Pod 或者容器的 `securityContext` 字段中指定。 -- `SELinuxMountReadWriteOncePod`:允许 kubelet 直接用合适的 SELinux 标签为 Pod 挂载卷, - 而不是将 SELinux 标签以递归方式应用到卷上的每个文件。 - `ServerSideApply`:在 API 服务器上启用[服务器端应用(SSA)](/zh-cn/docs/reference/using-api/server-side-apply/)。 - `ServerSideFieldValidation`:启用服务器端字段验证。 这意味着验证资源模式在 API 服务器端而不是客户端执行 (例如,`kubectl create` 或 `kubectl apply` 命令行)。 - `ServiceInternalTrafficPolicy`:为服务启用 `internalTrafficPolicy` 字段。 -- `ServiceLBNodePortControl`:为服务启用 `allocateLoadBalancerNodePorts` 字段。 -- `ServiceLoadBalancerClass`: 为服务启用 `loadBalancerClass` 字段。 - 有关更多信息,请参见[指定负载均衡器实现类](/zh-cn/docs/concepts/services-networking/service/#load-balancer-class)。 - `ServiceIPStaticSubrange`:启用服务 ClusterIP 分配策略,从而细分 ClusterIP 范围。 动态分配的 ClusterIP 地址将优先从较高范围分配,以低冲突风险允许用户从较低范围分配静态 ClusterIP。 更多详细信息请参阅[避免冲突](/zh-cn/docs/concepts/services-networking/service/#avoiding-collisions) @@ -1203,22 +1199,24 @@ Each feature gate is designed for enabling/disabling a specific feature: memory-backed volumes (mainly `emptyDir` volumes). - `StatefulSetMinReadySeconds`: Allows `minReadySeconds` to be respected by the StatefulSet controller. +- `StatefulSetStartOrdinal`: Allow configuration of the start ordinal in a + StatefulSet. See + [Start ordinal](/docs/concepts/workloads/controllers/statefulset/#start-ordinal) + for more details. 
--> - `SizeMemoryBackedVolumes`:允许 kubelet 检查基于内存制备的卷的尺寸约束(目前主要针对 `emptyDir` 卷)。 - `StatefulSetMinReadySeconds`: 允许 StatefulSet 控制器采纳 `minReadySeconds` 设置。 +- `StatefulSetStartOrdinal`:允许在 StatefulSet 中配置起始序号。 + 更多细节请参阅[起始序号](/zh-cn/docs/concepts/workloads/controllers/statefulset/#start-ordinal)。 - `StorageVersionAPI`: 启用[存储版本 API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#storageversion-v1alpha1-internal-apiserver-k8s-io)。 - `StorageVersionHash`:允许 API 服务器在版本发现中公开存储版本的哈希值。 -- `SuspendJob`:启用对追加和恢复 Job 的支持。更多细节请参阅 - [Job 文档](/docs/concepts/workloads/controllers/job/)。 - `TopologyAwareHints`: 在 EndpointSlices 中启用基于拓扑提示的拓扑感知路由。 更多详细信息可参见[拓扑感知提示](/zh-cn/docs/concepts/services-networking/topology-aware-hints/)。 - `TopologyManager`:启用一种机制来协调 Kubernetes 不同组件的细粒度硬件资源分配。 详见[控制节点上的拓扑管理策略](/zh-cn/docs/tasks/administer-cluster/topology-manager/)。 +- `TopologyManagerPolicyAlphaOptions`:允许微调拓扑管理器策略的实验性的、Alpha 质量的选项。 + 此特性门控守护 **一组** 质量级别为 Alpha 的拓扑管理器选项。 + 此特性门控绝对不会进阶至 Beta 或稳定版。 +- `TopologyManagerPolicyBetaOptions`:允许微调拓扑管理器策略的实验性的、Beta 质量的选项。 + 此特性门控守护 **一组** 质量级别为 Beta 的拓扑管理器选项。 + 此特性门控绝对不会进阶至稳定版。 +- `TopologyManagerPolicyOptions`: Allow fine-tuning of topology manager policies, - `UserNamespacesStatelessPodsSupport`:为无状态 Pod 启用用户名字空间的支持。 +- `ValidatingAdmissionPolicy`:启用准入控制中所用的对 CEL 校验的 [ValidatingAdmissionPolicy](/zh-cn/docs/reference/access-authn-authz/validating-admission-policy/) 支持。 - `VolumeCapacityPriority`: 基于可用 PV 容量的拓扑,启用对不同节点的优先级支持。 +- `WindowsHostNetwork`:启用对 Windows 容器接入主机网络名字空间的支持。 - `WindowsHostProcessContainers`:启用对 Windows HostProcess 容器的支持。 ## {{% heading "whatsnext" %}} diff --git a/content/zh-cn/docs/reference/config-api/apiserver-audit.v1.md b/content/zh-cn/docs/reference/config-api/apiserver-audit.v1.md index 920e8e0146852..6d2fba5492f9d 100644 --- a/content/zh-cn/docs/reference/config-api/apiserver-audit.v1.md +++ b/content/zh-cn/docs/reference/config-api/apiserver-audit.v1.md @@ -2,9 +2,7 @@ title: kube-apiserver Audit 配置 (v1) content_type: tool-reference package: audit.k8s.io/v1 -auto_generated: true --- - [必需]
    -authentication/v1.UserInfo +authentication/v1.UserInfo @@ -114,7 +112,7 @@ Event 结构包含可出现在 API 审计日志中的所有信息。 impersonatedUser
    -authentication/v1.UserInfo +authentication/v1.UserInfo @@ -189,7 +187,7 @@ Note: All but the last IP can be arbitrarily set by the client. responseStatus
    -meta/v1.Status +meta/v1.Status (可能会采用 JSON 重新编码),之后会进入版本转换、默认值填充、准入控制以及 配置信息合并等阶段。此对象为外部版本化的对象类型,甚至其自身可能并不是一个 合法的对象。对于非资源请求,此字段被忽略。 - 只有当审计级别为 Request 或更高的时候才会记录。 + 只有当审计级别为 Request 或更高的时候才会记录。

    - - + + responseObject
    k8s.io/apimachinery/pkg/runtime.Unknown @@ -239,7 +237,7 @@ at Response Level.--> requestReceivedTimestamp
    -meta/v1.MicroTime +meta/v1.MicroTime @@ -250,7 +248,7 @@ at Response Level.--> stageTimestamp
    -meta/v1.MicroTime +meta/v1.MicroTime @@ -285,7 +283,7 @@ at Response Level.--> - + ## `EventList` {#audit-k8s-io-v1-EventList} 列表结构元数据 @@ -319,9 +317,9 @@ EventList 是审计事件(Event)的列表。 - + ## `Policy` {#audit-k8s-io-v1-Policy} - + @@ -345,7 +343,7 @@ Policy 定义的是审计日志的配置以及不同类型请求的日志记录 kind
    stringPolicy metadata
    -meta/v1.ObjectMeta +meta/v1.ObjectMeta @@ -412,7 +410,7 @@ omitManagedFields 标明将请求和响应主体写入 API 审计日志时,是 - + ## `PolicyList` {#audit-k8s-io-v1-PolicyList} 列表结构元数据。 @@ -446,9 +444,9 @@ PolicyList 是由审计策略(Policy)组成的列表。 - + ## `GroupResources` {#audit-k8s-io-v1-GroupResources} - + @@ -534,7 +532,7 @@ For example: - + ## `Level` {#audit-k8s-io-v1-Level} +

    disable-compression 允许客户端针对到服务器的所有请求选择取消响应压缩。 + 当客户端服务器网络带宽充足时,这有助于通过节省压缩(服务器端)和解压缩(客户端)时间来加快请求(特别是列表)的速度: + https://github.com/kubernetes/kubernetes/issues/112296。

    + + + config
    k8s.io/apimachinery/pkg/runtime.RawExtension @@ -176,7 +189,7 @@ clusters: - + ## `ExecCredentialSpec` {#client-authentication-k8s-io-v1-ExecCredentialSpec} 字段描述 expirationTimestamp
    -meta/v1.Time +meta/v1.Time diff --git a/content/zh-cn/docs/reference/config-api/client-authentication.v1beta1.md b/content/zh-cn/docs/reference/config-api/client-authentication.v1beta1.md index b683ed5736e73..6c005e2c2b2c8 100644 --- a/content/zh-cn/docs/reference/config-api/client-authentication.v1beta1.md +++ b/content/zh-cn/docs/reference/config-api/client-authentication.v1beta1.md @@ -2,9 +2,7 @@ title: 客户端身份认证(Client Authentication)(v1beta1) content_type: tool-reference package: client.authentication.k8s.io/v1beta1 -auto_generated: true --- - - ## 资源类型 {#resource-types} - - [ExecCredential](#client-authentication-k8s-io-v1beta1-ExecCredential) - - - ## `ExecCredential` {#client-authentication-k8s-io-v1beta1-ExecCredential} - - - - [必需]
    ExecCredentialSpec @@ -166,8 +153,22 @@ If empty, system roots should be used. 此字段用来设置向集群发送所有请求时要使用的代理服务器。 - - + +disable-compression
    +bool + + + +

    disable-compression 允许客户端针对到服务器的所有请求选择取消响应压缩。 + 当客户端服务器网络带宽充足时,这有助于通过节省压缩(服务器端)和解压缩(客户端)时间来加快请求(特别是列表)的速度: + https://github.com/kubernetes/kubernetes/issues/112296。

    + + + config
    k8s.io/apimachinery/pkg/runtime.RawExtension @@ -289,15 +290,15 @@ exec 插件本身至少应通过文件访问许可来实施保护。

    expirationTimestamp
    -meta/v1.Time +meta/v1.Time 给出所提供的凭据到期的时间。 - - + + token [必需]
    string diff --git a/content/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1.md b/content/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1.md index 0d2606a4217e7..ce8e62f7fd4d6 100644 --- a/content/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1.md +++ b/content/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1.md @@ -194,18 +194,6 @@ in order to proxy service traffic. If unspecified (0-0) then ports will be rando 用来设置代理服务所使用的端口。如果未指定(即 ‘0-0’),则代理服务会随机选择端口号。

    -udpIdleTimeout [必需]
    -meta/v1.Duration - - - -

    udpIdleTimeout 字段用来设置 UDP 链接保持活跃的时长(例如,'250ms'、'2s')。 - 此值必须大于 0。此字段仅适用于 mode 值为 'userspace' 的场合。

    - - conntrack [必需]
    KubeProxyConntrackConfiguration @@ -458,6 +446,15 @@ the pure iptables proxy mode. Values must be within the range [0, 31]. 在使用纯 iptables 代理模式时对所有流量执行 SNAT 操作。

    +localhostNodePorts [必需]
    +bool + + + +

    localhostNodePorts 告知 kube-proxy 允许通过 localhost 访问服务 NodePorts(仅 iptables 模式)

    + + syncPeriod [必需]
    meta/v1.Duration @@ -711,40 +708,22 @@ LocalMode 代表的是对节点上本地流量进行检测的模式。 -ProxyMode 表示的是 Kubernetes 代理服务器所使用的模式。 +

    ProxyMode 表示的是 Kubernetes 代理服务器所使用的模式。

    -目前 Linux 平台上有三种可用的代理模式:'userspace'(相对较老,即将被淘汰)、 -'iptables'(相对较新,速度较快)、'ipvs'(最新,在性能和可扩缩性上表现好)。 - -在 Windows 平台上有两种可用的代理模式:'userspace'(相对较老,但稳定)和 -'kernelspace'(相对较新,速度更快)。 - - -在 Linux 平台上,如果代理的 mode 为空,则使用可用的最佳代理(目前是 iptables, -将来可能会发生变化)。如果选择的是 iptables 代理(无论原因如何),但系统的内核 -或者 iptables 的版本不够高,kube-proxy 也会回退为 userspace 代理服务器所使用的模式。 -当代理的 mode 设置为 'ipvs' 时会启用 IPVS 模式,对应的回退路径是先尝试 iptables, -最后回退到 userspace。 +

    目前 Linux 平台上有两种可用的代理模式:'iptables' 和 'ipvs'。 +在 Windows 平台上可用的一种代理模式是:'kernelspace'。

    -在 Windows 平台上,如果代理 mode 为空,则使用可用的最佳代理(目前是 userspace, -不过将来可能会发生变化)。如果所选择的是 winkernel 代理(无论原因如何), -但 Windows 内核不支持此代理模式,则 kube-proxy 会回退到 userspace 代理。 - +

    如果代理模式未被指定,将使用最佳可用的代理模式(目前在 Linux 上是 iptables,在 Windows 上是 kernelspace)。 +如果不能使用选定的代理模式(由于缺少内核支持、缺少用户空间组件等),则 kube-proxy 将出错并退出。
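例如,下面是一个显式选择 'ipvs' 模式的最小 KubeProxyConfiguration 片段,仅作示意:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# 若留空,则按上文所述使用平台上最佳可用的代理模式
mode: "ipvs"
```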

    ## `ClientConnectionConfiguration` {#ClientConnectionConfiguration} @@ -755,10 +734,12 @@ this always falls back to the userspace proxy. - [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration) -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration) - [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + - [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration) **出现在:** -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration) - [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + - [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)

    percentageOfNodesToScore 字段为所有节点的百分比,一旦调度器找到所设置比例的、能够运行 Pod 的节点, @@ -190,6 +190,7 @@ nodes will be scored. 例如:当集群规模为 500 个节点,而此字段的取值为 30, 则调度器在找到 150 个合适的节点后会停止继续寻找合适的节点。当此值为 0 时, 调度器会使用默认节点数百分比(基于集群规模确定的值,在 5% 到 50% 之间)来执行打分操作。 + 它可被配置文件级别的 PercentageofNodesToScore 覆盖。
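下面是一个设置全局 percentageOfNodesToScore 的最小 KubeSchedulerConfiguration 示例,取值仅作示意:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
# 找到 50% 的可行节点后即停止继续搜索;设置为 0 表示使用默认的自适应百分比
percentageOfNodesToScore: 50
```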

    @@ -267,7 +268,7 @@ NodeAffinityArgs holds arguments to configure the NodeAffinity plugin. kind
    stringNodeAffinityArgs addedAffinity
    -core/v1.NodeAffinity +core/v1.NodeAffinity [必需]
    +int32 + + + +

    percentageOfNodesToScore 是已发现可运行 Pod 的节点与所有节点的百分比, + 调度器所发现的可行节点到达此阈值时,将停止在集群中继续搜索可行节点。 +这有助于提高调度器的性能。无论此标志的值是多少,调度器总是尝试至少找到 “minFeasibleNodesToFind” 个可行的节点。 +例如:如果集群大小为 500 个节点并且此标志的值为 30,则调度器在找到 150 个可行节点后将停止寻找更多可行的节点。 +当值为 0 时,默认百分比(根据集群大小为 5% - 50%)的节点将被评分。此设置值将覆盖全局的 PercentageOfNodesToScore 值。 +如果为空,将使用全局 PercentageOfNodesToScore。

    + + plugins [必需]
    Plugins @@ -1054,6 +1078,14 @@ be invoked before default plugins, default plugins must be disabled and re-enabl 字段描述 +preEnqueue [必需]
    +PluginSet + + + +

    preEnqueue 是在将 Pod 添加到调度队列之前应调用的插件的列表。

    + + queueSort [必需]
    PluginSet diff --git a/content/zh-cn/docs/reference/config-api/kubelet-config.v1beta1.md b/content/zh-cn/docs/reference/config-api/kubelet-config.v1beta1.md index f2fac243dadfc..469453c04f5a1 100644 --- a/content/zh-cn/docs/reference/config-api/kubelet-config.v1beta1.md +++ b/content/zh-cn/docs/reference/config-api/kubelet-config.v1beta1.md @@ -844,6 +844,21 @@ Default: "container"

    +topologyManagerPolicyOptions
    +map[string]string + + + +

    TopologyManagerPolicyOptions 是一组 key=value 键值映射,容许设置额外的选项来微调拓扑管理器策略的行为。需要同时启用 "TopologyManager" 和 "TopologyManagerPolicyOptions" 特性门控。 +默认值:nil

    + + + qosReserved
    map[string]string @@ -994,13 +1009,13 @@ Default: true

    cpuCFSQuotaPeriod设置 CPU CFS 配额周期值,cpu.cfs_period_us。 -此值需要介于 1 微秒和 1 秒之间,包含 1 微秒和 1 秒。 -此功能要求启用CustomCPUCFSQuotaPeriod特性门控被启用。

    +此值需要介于 1 毫秒和 1 秒之间,包含 1 毫秒和 1 秒。 +此功能要求启用 CustomCPUCFSQuotaPeriod 特性门控。

    默认值:"100ms"
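作为示意,下面的 KubeletConfiguration 片段展示了如何设置 cpuCFSQuotaPeriod(取值为假设值,且需要启用 CustomCPUCFSQuotaPeriod 特性门控):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CustomCPUCFSQuotaPeriod: true
cpuCFSQuotaPeriod: "10ms"   # 必须介于 1 毫秒和 1 秒之间
```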

    @@ -1794,19 +1809,19 @@ Default: false when setting the cgroupv2 memory.high value to enforce MemoryQoS. Decreasing this factor will set lower high limit for container cgroups and put heavier reclaim pressure while increasing will put less reclaim pressure. -See http://kep.k8s.io/2570 for more details. +See https://kep.k8s.io/2570 for more details. Default: 0.8 -->

    当设置 cgroupv2 memory.high以实施MemoryQoS特性时, memoryThrottlingFactor用来作为内存限制或节点可分配内存的系数。

    减小此系数会为容器控制组设置较低的 high 限制值,从而增大回收压力;反之, -增大此系数会降低回收压力。更多细节参见 http://kep.k8s.io/2570。

    +增大此系数会降低回收压力。更多细节参见 https://kep.k8s.io/2570。

    默认值:0.8

    registerWithTaints
    -[]core/v1.Taint +[]core/v1.Taint +See https://kep.k8s.io/2832 for more details. -->

    tracing 为 OpenTelemetry 追踪客户端设置版本化的配置信息。 -参阅 http://kep.k8s.io/2832 了解更多细节。

    +参阅 https://kep.k8s.io/2832 了解更多细节。

    localStorageCapacityIsolation
    @@ -1885,7 +1900,7 @@ SerializedNodeConfigSource 允许对 `v1.NodeConfigSource` 执行序列化操作 kind
    stringSerializedNodeConfigSource source
    -core/v1.NodeConfigSource +core/v1.NodeConfigSource diff --git a/content/zh-cn/docs/reference/glossary/container-runtime-interface.md b/content/zh-cn/docs/reference/glossary/container-runtime-interface.md index 1beb11e18eb56..4f8639a6627ef 100644 --- a/content/zh-cn/docs/reference/glossary/container-runtime-interface.md +++ b/content/zh-cn/docs/reference/glossary/container-runtime-interface.md @@ -40,4 +40,4 @@ The Kubernetes Container Runtime Interface (CRI) defines the main Kubernetes 容器运行时接口(Container Runtime Interface;CRI)定义了主要 [gRPC](https://grpc.io) 协议, 用于[集群组件](/zh-cn/docs/concepts/overview/components/#node-components) {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} 和 -{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}。 \ No newline at end of file +{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}之间的通信。 diff --git a/content/zh-cn/docs/reference/glossary/ephemeral-container.md b/content/zh-cn/docs/reference/glossary/ephemeral-container.md index 591c85b3f53a8..e97b8db7d1122 100644 --- a/content/zh-cn/docs/reference/glossary/ephemeral-container.md +++ b/content/zh-cn/docs/reference/glossary/ephemeral-container.md @@ -31,8 +31,10 @@ A {{< glossary_tooltip term_id="container" >}} type that you can temporarily run 如果想要调查运行中有问题的 Pod,可以向该 Pod 添加一个临时容器(Ephemeral Container)并进行诊断。 -临时容器没有资源或调度保证,因此不应该使用它们来运行任何部分的工作负荷本身。 -{{{< glossary_tooltip text="静态 Pod" term_id="static-pod" >}} 不支持临时容器。 +临时容器没有资源或调度保证,因此不应该使用它们来运行工作负载本身的任何部分。 + +{{< glossary_tooltip text="静态 Pod" term_id="static-pod" >}} 不支持临时容器。 diff --git a/content/zh-cn/docs/reference/glossary/istio.md b/content/zh-cn/docs/reference/glossary/istio.md index ab6d98f3601c5..e56bcff9052ac 100644 --- a/content/zh-cn/docs/reference/glossary/istio.md +++ b/content/zh-cn/docs/reference/glossary/istio.md @@ -2,7 +2,7 @@ title: Istio id: istio date: 2018-04-12 -full_link: https://istio.io/zh/docs/concepts/what-is-istio/ +full_link: https://istio.io/latest/about/service-mesh/#what-is-istio short_description: > Istio 是一个(非 Kubernetes 特有的)开放平台,提供了一种统一的方式来集成微服务、管理流量、实施策略和汇总度量数据。 aka: @@ -15,7 +15,7 @@ tags: title: Istio id: istio date: 2018-04-12 -full_link: https://istio.io/docs/concepts/what-is-istio/ +full_link: https://istio.io/latest/about/service-mesh/#what-is-istio short_description: > An open platform (not Kubernetes-specific) that provides a uniform way to integrate microservices, manage traffic flow, enforce policies, and aggregate telemetry data. 
diff --git a/content/zh-cn/docs/reference/glossary/kops.md b/content/zh-cn/docs/reference/glossary/kops.md index fca2e3a6a734e..c8d06da9a063a 100644 --- a/content/zh-cn/docs/reference/glossary/kops.md +++ b/content/zh-cn/docs/reference/glossary/kops.md @@ -1,10 +1,11 @@ --- -title: Kops +title: kOps (Kubernetes Operations) id: kops date: 2018-04-12 -full_link: /docs/getting-started-guides/kops/ +full_link: /docs/setup/production-environment/kops/ short_description: > - kops 是一个命令行工具,可以帮助你创建、销毁、升级和维护生产级,高可用性的 Kubernetes 集群。 + kOps 不仅会帮助你创建、销毁、升级和维护生产级、高可用性的 Kubernetes 集群, + 还会提供必要的云基础设施。 aka: tags: @@ -12,12 +13,12 @@ tags: - operation --- - -kops 是一个命令行工具,可以帮助你创建、销毁、升级和维护生产级,高可用性的 Kubernetes 集群。 +`kOps` 不仅会帮助你创建、销毁、升级和维护生产级、高可用性的 Kubernetes 集群, +还会提供必要的云基础设施。 {{< note >}} -官方仅支持 AWS。对 GCE 和 VMware vSphere 的支持还处于 Alpha 阶段。 +目前正式支持 AWS(Amazon Web Services),DigitalOcean、GCE 和 OpenStack +处于 beta 支持阶段,Azure 处于 alpha 阶段。 {{< /note >}} - -`kops` 为你的集群提供了: - - * 全自动化安装 - * 基于 DNS 的集群标识 - * 自愈功能:所有组件都在自动扩缩组(Auto-Scaling Groups)中运行 - * 有限的操作系统支持 (推荐使用 Debian,支持 Ubuntu 16.04,试验性支持 CentOS & RHEL) - * 高可用 (HA) 支持 - * 直接提供或者生成 Terraform 清单文件的能力 - - - -你也可以将自己的集群作为一个构造块,使用 {{< glossary_tooltip term_id="kubeadm" >}} 构造集群。 -`kops` 是建立在 kubeadm 之上的。 +`kOps` 是一个自动化的制备系统: + * 全自动安装流程 + * 使用 DNS 识别集群 + * 自我修复:一切都在自动扩缩组中运行 + * 支持多种操作系统(Amazon Linux、Debian、Flatcar、RHEL、Rocky 和 Ubuntu) + * 支持高可用 + * 可以直接提供或者生成 terraform 清单 \ No newline at end of file diff --git a/content/zh-cn/docs/reference/instrumentation/slis.md b/content/zh-cn/docs/reference/instrumentation/slis.md new file mode 100644 index 0000000000000..45e8fc2959e2e --- /dev/null +++ b/content/zh-cn/docs/reference/instrumentation/slis.md @@ -0,0 +1,111 @@ +--- +title: Kubernetes 组件 SLI 指标 +linkTitle: 服务水平指示器指标 +content_type: reference +weight: 20 +--- + + + + +{{< feature-state for_k8s_version="v1.26" state="alpha" >}} + + +作为一个 Alpha 特性,Kubernetes 允许你为每个 Kubernetes 组件二进制文件配置服务水平指示器 (SLI) 指标。 +此指标端点被暴露在每个组件提供 HTTPS 服务的端口上,路径为 `/metrics/slis`。 +你必须为想要抓取 SLI 指标的每个组件启用 `ComponentSLIs` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 + + + + +## SLI 指标 {#sli-metrics} + +启用 SLI 指标时,每个 Kubernetes 组件暴露两个指标,按照健康检查添加标签: + +- 计量值(表示健康检查的当前状态) +- 计数值(记录观察到的每个健康检查状态的累计次数) + + +你可以使用此指标信息计算每个组件的可用性统计信息。例如,API 服务器检查 etcd 的健康。 +你可以计算并报告 etcd 的可用或不可用情况,具体由其客户端(即 API 服务器)进行报告。 + +Prometheus 计量表数据看起来类似于: + +``` +# HELP kubernetes_healthcheck [ALPHA] This metric records the result of a single healthcheck. +# TYPE kubernetes_healthcheck gauge +kubernetes_healthcheck{name="autoregister-completion",type="healthz"} 1 +kubernetes_healthcheck{name="autoregister-completion",type="readyz"} 1 +kubernetes_healthcheck{name="etcd",type="healthz"} 1 +kubernetes_healthcheck{name="etcd",type="readyz"} 1 +kubernetes_healthcheck{name="etcd-readiness",type="readyz"} 1 +kubernetes_healthcheck{name="informer-sync",type="readyz"} 1 +kubernetes_healthcheck{name="log",type="healthz"} 1 +kubernetes_healthcheck{name="log",type="readyz"} 1 +kubernetes_healthcheck{name="ping",type="healthz"} 1 +kubernetes_healthcheck{name="ping",type="readyz"} 1 +``` + + +而计数器数据看起来类似于: + +``` +# HELP kubernetes_healthchecks_total [ALPHA] This metric records the results of all healthcheck. 
+# TYPE kubernetes_healthchecks_total counter +kubernetes_healthchecks_total{name="autoregister-completion",status="error",type="readyz"} 1 +kubernetes_healthchecks_total{name="autoregister-completion",status="success",type="healthz"} 15 +kubernetes_healthchecks_total{name="autoregister-completion",status="success",type="readyz"} 14 +kubernetes_healthchecks_total{name="etcd",status="success",type="healthz"} 15 +kubernetes_healthchecks_total{name="etcd",status="success",type="readyz"} 15 +kubernetes_healthchecks_total{name="etcd-readiness",status="success",type="readyz"} 15 +kubernetes_healthchecks_total{name="informer-sync",status="error",type="readyz"} 1 +kubernetes_healthchecks_total{name="informer-sync",status="success",type="readyz"} 14 +kubernetes_healthchecks_total{name="log",status="success",type="healthz"} 15 +kubernetes_healthchecks_total{name="log",status="success",type="readyz"} 15 +kubernetes_healthchecks_total{name="ping",status="success",type="healthz"} 15 +kubernetes_healthchecks_total{name="ping",status="success",type="readyz"} 15 +``` + + +## 使用此类数据 {#using-this-data} + +组件 SLI 指标端点旨在以高频率被抓取。 +高频率抓取意味着你最终会获得更细粒度的计量信号,然后可以将其用于计算 SLO。 +`/metrics/slis` 端点为各个 Kubernetes 组件提供了计算可用性 SLO 所需的原始数据。 diff --git a/content/zh-cn/docs/reference/kubectl/kubectl.md b/content/zh-cn/docs/reference/kubectl/kubectl.md index 5f8a4bc833344..f0e608fdc7e0f 100644 --- a/content/zh-cn/docs/reference/kubectl/kubectl.md +++ b/content/zh-cn/docs/reference/kubectl/kubectl.md @@ -512,7 +512,19 @@ kubectl 的配置 ("kubeconfig") 文件的路径。默认值: "$HOME/.kube/confi -设置为 false 时,将关闭额外的 HTTP 标头,不再详细说明被调用的 kubectl 命令 (此变量适用于 Kubernetes v1.22 或更高版本) +设置为 false 时,将关闭额外的 HTTP 标头,不再详细说明被调用的 kubectl 命令(此变量适用于 Kubernetes v1.22 或更高版本) + + + + +KUBECTL_EXPLAIN_OPENAPIV3 + + + + +切换对 `kubectl explain` 的调用是否使用可用的新 OpenAPIv3 数据源。OpenAPIV3 自 Kubernetes 1.24 起默认被启用。 @@ -566,6 +578,7 @@ When set to false, turns off extra HTTP headers detailing invoked kubectl comman * [kubectl diff](/docs/reference/generated/kubectl/kubectl-commands#diff) - Diff live version against would-be applied version * [kubectl drain](/docs/reference/generated/kubectl/kubectl-commands#drain) - Drain node in preparation for maintenance * [kubectl edit](/docs/reference/generated/kubectl/kubectl-commands#edit) - Edit a resource on the server +* [kubectl events](/docs/reference/generated/kubectl/kubectl-commands#events) - List events * [kubectl exec](/docs/reference/generated/kubectl/kubectl-commands#exec) - Execute a command in a container * [kubectl explain](/docs/reference/generated/kubectl/kubectl-commands#explain) - Documentation of resources * [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands#expose) - Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service @@ -574,6 +587,7 @@ When set to false, turns off extra HTTP headers detailing invoked kubectl comman * [kubectl diff](/docs/reference/generated/kubectl/kubectl-commands#diff) - 显示目前版本与将要应用的版本之间的差异 * [kubectl drain](/docs/reference/generated/kubectl/kubectl-commands#drain) - 腾空节点,准备维护 * [kubectl edit](/docs/reference/generated/kubectl/kubectl-commands#edit) - 修改服务器上的某资源 +* [kubectl events](/docs/reference/generated/kubectl/kubectl-commands#events) - 列举事件 * [kubectl exec](/docs/reference/generated/kubectl/kubectl-commands#exec) - 在容器中执行相关命令 * [kubectl explain](/docs/reference/generated/kubectl/kubectl-commands#explain) - 显示资源文档说明 * [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands#expose) - 给定副本控制器、服务、Deployment 或 Pod,将其暴露为新的 
kubernetes Service @@ -617,7 +631,7 @@ When set to false, turns off extra HTTP headers detailing invoked kubectl comman * [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands#scale) - 为一个 Deployment、ReplicaSet 或 ReplicationController 设置一个新的规模值 * [kubectl set](/docs/reference/generated/kubectl/kubectl-commands#set) - 为对象设置功能特性 * [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint) - 在一个或者多个节点上更新污点配置 -* [kubectl top](/docs/reference/generated/kubectl/kubectl-commands#top) - 显示资源(CPU /内存/存储)使用率 +* [kubectl top](/docs/reference/generated/kubectl/kubectl-commands#top) - 显示资源(CPU/内存/存储)使用率 * [kubectl uncordon](/docs/reference/generated/kubectl/kubectl-commands#uncordon) - 标记节点为可调度的 * [kubectl version](/docs/reference/generated/kubectl/kubectl-commands#version) - 打印客户端和服务器的版本信息 * [kubectl wait](/docs/reference/generated/kubectl/kubectl-commands#wait) - 实验性:等待一个或多个资源达到某种状态 diff --git a/content/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-claim-v1.md b/content/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-claim-v1.md index 6dd9cc19b8ed6..627873a128a8b 100644 --- a/content/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-claim-v1.md +++ b/content/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-claim-v1.md @@ -86,16 +86,11 @@ PersistentVolumeClaimSpec 描述存储设备的常用参数,并支持通过 so - **resources** (ResourceRequirements) @@ -106,7 +101,48 @@ PersistentVolumeClaimSpec 描述存储设备的常用参数,并支持通过 so **ResourceRequirements 描述计算资源要求。** - + + - **resources.claims** ([]ResourceClaim) + + + **集合:唯一值将在合并期间被保留** + + claims 列出了此容器使用的、在 spec.resourceClaims 中定义的资源的名称。 + + 这是一个 Alpha 字段,需要启用 DynamicResourceAllocation 特性门控。 + + 此字段是不可变的。 + + + + **ResourceClaim 引用 PodSpec.ResourceClaims 中的一个条目。** + + - **resources.claims.name** (string),必需 + + 对于使用此字段的 Pod,name 必须与 pod.spec.resourceClaims 中的一个条目的名称匹配。 + + - **resources.limits** (map[string]}}">Quantity) limits 描述允许的最大计算资源量。更多信息: @@ -146,9 +182,10 @@ PersistentVolumeClaimSpec 描述存储设备的常用参数,并支持通过 so - **dataSource** (}}">TypedLocalObjectReference) - dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. + dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. 
--> ### Beta 级别 + - **dataSource** (}}">TypedLocalObjectReference) dataSource 字段可用于二选一: @@ -158,33 +195,83 @@ PersistentVolumeClaimSpec 描述存储设备的常用参数,并支持通过 so * 现有的 PVC (PersistentVolumeClaim) 如果制备器或外部控制器可以支持指定的数据源,则它将根据指定数据源的内容创建新的卷。 - 如果 AnyVolumeDataSource 特性门控被启用,此字段的内容将始终与 dataSourceRef 字段内容相同。 + 当 AnyVolumeDataSource 特性门控被启用时,dataSource 内容将被复制到 dataSourceRef, + 当 dataSourceRef.namespace 未被指定时,dataSourceRef 内容将被复制到 dataSource。 + 如果名字空间被指定,则 dataSourceRef 不会被复制到 dataSource。 -- **dataSourceRef** (}}">TypedLocalObjectReference) +- **dataSourceRef** (TypedObjectReference) dataSourceRef 指定一个对象,当需要非空卷时,可以使用它来为卷填充数据。 - 此字段值可以是来自非空 API 组(非核心对象)的一个本地对象,或一个 PersistentVolumeClaim 对象。 + 此字段值可以是来自非空 API 组(非核心对象)的任意对象,或一个 PersistentVolumeClaim 对象。 如果设置了此字段,则仅当所指定对象的类型与所安装的某些卷填充器或动态制备器匹配时,卷绑定才会成功。 此字段将替换 dataSource 字段的功能,因此如果两个字段非空,其取值必须相同。 - 为了向后兼容,如果其中一个字段为空且另一个字段非空, - 则两个字段(dataSource 和 dataSourceRef)将被自动设为相同的值。 - dataSource 和 dataSourceRef 之间有两个重要的区别: + 为了向后兼容,当未在 dataSourceRef 中指定名字空间时, + 如果(dataSource 和 dataSourceRef)其中一个字段为空且另一个字段非空,则两个字段将被自动设为相同的值。 + 在 dataSourceRef 中指定名字空间时,dataSource 未被设置为相同的值且必须为空。 + dataSource 和 dataSourceRef 之间有三个重要的区别: - * dataSource 仅允许两个特定类型的对象,而 dataSourceRef 允许设置任何非核心对象以及 PersistentVolumeClaim 对象。 - - * dataSource 忽略不允许的值(这类值会被丢弃),dataSourceRef 保留所有值并在指定不允许的值时产生错误。 - - (Beta)使用此字段需要启用 AnyVolumeDataSource 特性门控。 + + * dataSource 仅允许两个特定类型的对象,而 dataSourceRef 允许任何非核心对象以及 PersistentVolumeClaim 对象。 + * dataSource 忽略不允许的值(这类值会被丢弃),而 dataSourceRef 保留所有值并在指定不允许的值时产生错误。 + * dataSource 仅允许本地对象,而 dataSourceRef 允许任意名字空间中的对象。 + + (Beta) 使用此字段需要启用 AnyVolumeDataSource 特性门控。 + (Alpha) 使用 dataSourceRef 的名字空间字段需要启用 CrossNamespaceVolumeDataSource 特性门控。 + + + ** + + + - **dataSourceRef.kind** (string),必需 + + kind 是正被引用的资源的类型。 + + - **dataSourceRef.name** (string),必需 + + name 是正被引用的资源的名称。 + + + - **dataSourceRef.apiGroup** (string) + + apiGroup 是正被引用的资源的组。如果 apiGroup 未被指定,则指定的 kind 必须在核心 API 组中。 + 对于任何第三方类型,apiGroup 是必需的。 + + - **dataSourceRef.namespace** (string) + + namespace 是正被引用的资源的名字空间。请注意,当指定一个名字空间时, + 在引用的名字空间中 gateway.networking.k8s.io/ReferenceGrant 对象是必需的, + 以允许该名字空间的所有者接受引用。有关详细信息,请参阅 ReferenceGrant 文档。 + (Alpha) 此字段需要启用 CrossNamespaceVolumeDataSource 特性门控。 ## PersistentVolumeClaimStatus {#PersistentVolumeClaimStatus} @@ -437,7 +438,7 @@ PersistentVolumeSpec 是持久卷的规约。 cephfs 表示在主机上挂载的 Ceph FS,该文件系统挂载与 Pod 的生命周期相同。 - + **表示在 Pod 的生命周期内持续的 Ceph Filesystem 挂载。cephfs 卷不支持所有权管理或 SELinux 重新打标签。** - -- **glusterfs** (GlusterfsPersistentVolumeSource) - - glusterfs 表示挂接到主机并暴露给 Pod 的 Glusterfs 卷。由管理员进行制备。更多信息: - https://examples.k8s.io/volumes/glusterfs/README.md - - - **表示与 Pod 生命周期相同的 Glusterfs 挂载。Glusterfs 卷不支持所有权管理或 SELinux 重新打标签。** - - - - - **glusterfs.endpoints** (string),必需 - - endpoints 是详细说明 Glusterfs 拓扑的端点名称。更多信息: - https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod - - - **glusterfs.path** (string),必需 - - path 是 Glusterfs 卷路径。更多信息: - https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod - - - **glusterfs.endpointsNamespace** (string) - - endpointsNamespace 是包含 Glusterfs 端点的名字空间。 - 如果此字段为空,则 EndpointNamespace 默认为与绑定的 PVC 相同的名字空间。 - 更多信息: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod - - - **glusterfs.readOnly** (boolean) - - 此处 readOnly 将强制使用只读权限挂载 Glusterfs 卷。默认为 false。更多信息: - https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod - + unhealthyPodEvictionPolicy 定义不健康的 Pod 应被考虑驱逐时的标准。 + 当前的实现将健康的 Pod 视为具有 status.conditions 项且 type="Ready"、status="True" 的 Pod。 + + 有效的策略是 IfHealthyBudget 和 AlwaysAllow。 + 如果没有策略被指定,则使用与 
IfHealthyBudget 策略对应的默认行为。 + + + IfHealthyBudget 策略意味着正在运行(status.phase="Running")但还不健康的 Pod + 只有在被守护的应用未受干扰(status.currentHealthy 至少等于 status.desiredHealthy) + 时才能被驱逐。健康的 Pod 将受到 PDB 的驱逐。 + + AlwaysAllow 策略意味着无论是否满足 PDB 中的条件,所有正在运行(status.phase="Running")但还不健康的 + Pod 都被视为受干扰且可以被驱逐。这意味着受干扰应用的透视运行 Pod 可能没有机会变得健康。 + 健康的 Pod 将受到 PDB 的驱逐。 + + + 将来可能会添加其他策略。如果客户端在该字段遇到未识别的策略,则做出驱逐决定的客户端应禁止驱逐不健康的 Pod。 + + 该字段是 Alpha 级别的。当特性门控 PDBUnhealthyPodEvictionPolicy 被启用(默认禁用)时,驱逐 API 使用此字段。 + ## PodDisruptionBudgetStatus {#PodDisruptionBudgetStatus} loadBalancer 包含负载均衡器的当前状态。 - - **LoadBalancerStatus 表示负载均衡器的状态。** + + **IngressLoadBalancerStatus 表示负载均衡器的状态。** - - **loadBalancer.ingress** ([]LoadBalancerIngress) + - **loadBalancer.ingress** ([]IngressLoadBalancerIngress) - ingress 是一个包含负载均衡器入口点的列表。用于服务的流量应发送到这些入口点。 + ingress 是一个包含负载均衡器入口点的列表。 - - **LoadBalancerIngress 表示负载均衡器入口点的状态:用于服务的流量应发送到入口点。** + + **IngressLoadBalancerIngress 表示负载均衡器入口点的状态。** - **loadBalancer.ingress.hostname** (string) - hostname 是为基于 DNS 的负载平衡器(通常为 AWS 负载平衡器)入口点所设置的主机名。 + hostname 是为基于 DNS 的负载平衡器入口点所设置的主机名。 - **loadBalancer.ingress.ip** (string) - ip 是为基于 IP 的负载平衡器(通常为 GCE 或 OpenStack 负载平衡器)入口点设置的 IP。 + ip 是为基于 IP 的负载平衡器入口点设置的 IP。 - - **loadBalancer.ingress.ports** ([]PortStatus) + - **loadBalancer.ingress.ports** ([]IngressPortStatus) **Atomic: 将在合并期间被替换** - - ports 是服务端口的记录列表。如果使用了此字段,服务中定义的每个端口中都应该有一个条目与之对应。 + + ports 提供有关此 LoadBalancer 公开端口的信息。 + + + **IngressPortStatus 表示服务端口的错误情况** - port 在此是所记录状态对应的服务端口的端口号。 + port 是入栈端口的端口号 - protocol 是服务端口的协议,其状态记录在此。支持的值为:“TCP”、“UDP”、“SCTP”。 + protocol 是入栈端口的协议。支持的值为:“TCP”、“UDP”、“SCTP”。 - **loadBalancer.ingress.ports.error** (string) diff --git a/content/zh-cn/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v1.md b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v1.md index d93b2a6fbef0f..42b11fe094acc 100644 --- a/content/zh-cn/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v1.md +++ b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v1.md @@ -120,13 +120,13 @@ specification of a horizontal pod autoscaler. 
- **scaleTargetRef.kind** (string),必填 被引用对象的类别; - 更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds" + 更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds **CrossVersionObjectReference 包含足够的信息来让你识别出所引用的资源。** - **scaleTargetRef.kind** (string),必需 - 被引用对象的类别;更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds" + 被引用对象的类别;更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds **CrossVersionObjectReference 包含足够的信息来让你识别所引用的资源。** - **metrics.object.describedObject.kind** (string),必需 - 被引用对象的类别;更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds"。 + 被引用对象的类别;更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds **CrossVersionObjectReference 包含足够的信息来让你识别所引用的资源。** - **currentMetrics.object.describedObject.kind** (string),必需 - 被引用对象的类别;更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds" + 被引用对象的类别;更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds - -`apiVersion: autoscaling/v2beta2` - -`import "k8s.io/api/autoscaling/v2beta2"` - - -## HorizontalPodAutoscaler {#HorizontalPodAutoscaler} - - -HorizontalPodAutoscaler 是水平 Pod 自动扩缩器的配置, -它根据指定的指标自动管理实现 scale 子资源的任何资源的副本数。 - -
    - -- **apiVersion**: autoscaling/v2beta2 - -- **kind**: HorizontalPodAutoscaler - -- **metadata** (}}">ObjectMeta) - - - - metadata 是标准的对象元数据。更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata - -- **spec** (}}">HorizontalPodAutoscalerSpec) - - - - spec 是自动扩缩器行为的规约。更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status. - -- **status** (}}">HorizontalPodAutoscalerStatus) - - - - status 是自动扩缩器的当前信息。 - -## HorizontalPodAutoscalerSpec {#HorizontalPodAutoscalerSpec} - - -HorizontalPodAutoscalerSpec 描述了 HorizontalPodAutoscaler 预期的功能。 - -
    - - - -- **maxReplicas** (int32),必需 - - maxReplicas 是自动扩缩器可以扩容的副本数的上限。不能小于 minReplicas。 - - - -- **scaleTargetRef** (CrossVersionObjectReference),必需 - - scaleTargetRef 指向要扩缩的目标资源,用于收集 Pod 的相关指标信息以及实际更改的副本数。 - - - - - - **CrossVersionObjectReference 包含足够的信息来让你识别出所引用的资源。** - - - **scaleTargetRef.kind** (string),必需 - - 被引用对象的类别;更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds" - - - - - **scaleTargetRef.name** (string),必需 - - 被引用对象的名称;更多信息: https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names/#names - - - - - **scaleTargetRef.apiVersion** (string) - - 被引用对象的 API 版本。 - - - -- **minReplicas** (int32) - - minReplicas 是自动扩缩器可以缩减的副本数的下限。它默认为 1 个 Pod。 - 如果启用了 Alpha 特性门控 HPAScaleToZero 并且配置了至少一个 Object 或 External 度量指标, - 则 minReplicas 允许为 0。只要至少有一个度量值可用,扩缩就处于活动状态。 - - - -- **behavior** (HorizontalPodAutoscalerBehavior) - - behavior 配置目标在扩容(Up)和缩容(Down)两个方向的扩缩行为(分别用 scaleUp 和 scaleDown 字段)。 - 如果未设置,则会使用默认的 HPAScalingRules 进行扩缩容。 - - - - - - **HorizontalPodAutoscalerBehavior 配置目标在扩容(Up)和缩容(Down)两个方向的扩缩行为 - (分别用 scaleUp 和 scaleDown 字段)。** - - - **behavior.scaleDown** (HPAScalingRules) - - scaleDown 是缩容策略。如果未设置,则默认值允许缩减到 minReplicas 数量的 Pod, - 具有 300 秒的稳定窗口(使用最近 300 秒的最高推荐值)。 - - - - - - HPAScalingRules 为一个方向配置扩缩行为。在根据 HPA 的指标计算 desiredReplicas 后应用这些规则。 - 可以通过指定扩缩策略来限制扩缩速度。可以通过指定稳定窗口来防止抖动, - 因此不会立即设置副本数,而是选择稳定窗口中最安全的值。 - - - **behavior.scaleDown.policies** ([]HPAScalingPolicy) - - policies 是可在扩缩容过程中使用的潜在扩缩策略的列表。必须至少指定一个策略,否则 HPAScalingRules 将被视为无效而丢弃。 - - - - - - **HPAScalingPolicy 是一个单一的策略,它必须在指定的过去时间间隔内保持为 true。** - - - **behavior.scaleDown.policies.type** (string),必需 - - type 用于指定扩缩策略。 - - - - - **behavior.scaleDown.policies.value** (int32),必需 - - value 包含策略允许的更改量。它必须大于零。 - - - - - **behavior.scaleDown.policies.periodSeconds** (int32),必需 - - periodSeconds 表示策略应该保持为 true 的时间窗口长度。 - periodSeconds 必须大于零且小于或等于 1800(30 分钟)。 - - - - - **behavior.scaleDown.selectPolicy** (string) - - selectPolicy 用于指定应该使用哪个策略。如果未设置,则使用默认值 MaxPolicySelect。 - - - - - **behavior.scaleDown.stabilizationWindowSeconds** (int32) - - stabilizationWindowSeconds 是在扩缩容时应考虑的之前建议的秒数。stabilizationWindowSeconds - 必须大于或等于零且小于或等于 3600(一小时)。如果未设置,则使用默认值: - - - 扩容:0(不设置稳定窗口)。 - - 缩容:300(即稳定窗口为 300 秒)。 - - - - - **behavior.scaleUp** (HPAScalingRules) - - scaleUp 是用于扩容的扩缩策略。如果未设置,则默认值为以下值中的较高者: - - * 每 60 秒增加不超过 4 个 Pod - * 每 60 秒 Pod 数量翻倍 - - 不使用稳定窗口。 - - - - - - HPAScalingRules 为一个方向配置扩缩行为。在根据 HPA 的指标计算 desiredReplicas 后应用这些规则。 - 可以通过指定扩缩策略来限制扩缩速度。可以通过指定稳定窗口来防止抖动, - 因此不会立即设置副本数,而是选择稳定窗口中最安全的值。 - - - **behavior.scaleUp.policies** ([]HPAScalingPolicy) - - policies 是可在扩缩容过程中使用的潜在扩缩策略的列表。必须至少指定一个策略,否则 HPAScalingRules 将被视为无效而丢弃。 - - - - - - **HPAScalingPolicy 是一个单一的策略,它必须在指定的过去时间间隔内保持为 true。** - - - **behavior.scaleUp.policies.type** (string),必需 - - type 用于指定扩缩策略。 - - - - - **behavior.scaleUp.policies.value** (int32),必需 - - value 包含策略允许的更改量。它必须大于零。 - - - - - **behavior.scaleUp.policies.periodSeconds** (int32),必需 - - periodSeconds 表示策略应该保持为 true 的时间窗口长度。 - periodSeconds 必须大于零且小于或等于 1800(30 分钟)。 - - - - - **behavior.scaleUp.selectPolicy** (string) - - selectPolicy 用于指定应该使用哪个策略。如果未设置,则使用默认值 MaxPolicySelect。 - - - - - **behavior.scaleUp.stabilizationWindowSeconds** (int32) - - stabilizationWindowSeconds 是在扩缩容时应考虑的之前建议的秒数。stabilizationWindowSeconds - 必须大于或等于零且小于或等于 3600(一小时)。如果未设置,则使用默认值: - - - 扩容:0(不设置稳定窗口)。 - - 缩容:300(即稳定窗口为 300 秒)。 - - - -- **metrics** ([]MetricSpec) - - metrics 包含用于计算预期副本数的规约(将使用所有指标的最大副本数)。 - 预期副本数是通过将目标值与当前值之间的比率乘以当前 Pod 数来计算的。 - 因此,使用的指标必须随着 Pod 
数量的增加而减少,反之亦然。 - 有关每种类别的指标必须如何响应的更多信息,请参阅各个指标源类别。 - 如果未设置,默认指标将设置为 80% 的平均 CPU 利用率。 - - - - - - **MetricSpec 指定如何基于单个指标进行扩缩容(一次只能设置 `type` 和一个其他匹配字段)** - - - **metrics.type** (string),必需 - - type 是指标源的类别。它取值是 “ContainerResource”、“External”、“Object”、“Pods” 或 “Resource” 之一, - 每个类别映射到对象中的一个对应的字段。注意:“ContainerResource” 类别在特性门控 HPAContainerMetrics 启用时可用。 - - - - - **metrics.containerResource** (ContainerResourceMetricSource) - - containerResource 是指 Kubernetes 已知的资源指标(例如在请求和限制中指定的那些), - 描述当前扩缩目标中每个 Pod 中的单个容器(例如 CPU 或内存)。 - 此类指标内置于 Kubernetes 中,在使用 “pods” 源的、按 Pod 计算的普通指标之外,还具有一些特殊的扩缩选项。 - 这是一个 Alpha 特性,可以通过 HPAContainerMetrics 特性标志启用。 - - - - - - ContainerResourceMetricSource 指示如何根据请求和限制中指定的 Kubernetes 已知的资源指标进行扩缩容, - 此结构描述当前扩缩目标中的每个 Pod(例如 CPU 或内存)。在与目标值比较之前,这些值先计算平均值。 - 此类指标内置于 Kubernetes 中,并且在使用 “Pods” 源的、按 Pod 统计的普通指标之外支持一些特殊的扩缩选项。 - 只应设置一种 “target” 类别。 - - - **metrics.containerResource.container** (string),必需 - - container 是扩缩目标的 Pod 中容器的名称。 - - - - - **metrics.containerResource.name** (string),必需 - - name 是相关资源的名称。 - - - - - **metrics.containerResource.target** (MetricTarget),必需 - - target 指定给定指标的目标值。 - - - - - - **MetricTarget 定义特定指标的目标值、平均值或平均利用率** - - - **metrics.containerResource.target.type** (string),必需 - - type 表示指标类别是 `Utilization`、`Value` 或 `AverageValue`。 - - - - - **metrics.containerResource.target.averageUtilization** (int32) - - averageUtilization 是跨所有相关 Pod 的资源指标均值的目标值, - 表示为 Pod 资源请求值的百分比。目前仅对 “Resource” 指标源类别有效。 - - - - - **metrics.containerResource.target.averageValue** (}}">Quantity) - - 是跨所有相关 Pod 的指标均值的目标值(以数量形式给出)。 - - - - - **metrics.containerResource.target.value** (}}">Quantity) - - value 是指标的目标值(以数量形式给出)。 - - - - - **metrics.external** (ExternalMetricSource) - - external 指的是不与任何 Kubernetes 对象关联的全局指标。 - 这一字段允许基于来自集群外部运行的组件(例如云消息服务中的队列长度,或来自运行在集群外部的负载均衡器的 QPS)的信息进行自动扩缩容。 - - - - - - ExternalMetricSource 指示如何基于 Kubernetes 对象无关的指标 - (例如云消息传递服务中的队列长度,或来自集群外部运行的负载均衡器的 QPS)执行扩缩操作。 - - - **metrics.external.metric** (MetricIdentifier),必需 - - metric 通过名称和选择算符识别目标指标。 - - - - - - **MetricIdentifier 定义指标的名称和可选的选择算符** - - - **metrics.external.metric.name** (string),必需 - - name 是给定指标的名称。 - - - - - **metrics.external.metric.selector** (}}">LabelSelector) - - selector 是给定指标的标准 Kubernetes 标签选择算符的字符串编码形式。 - 设置后,它作为附加参数传递给指标服务器,以获取更具体的指标范围。 - 未设置时,仅 metricName 参数将用于收集指标。 - - - - - **metrics.external.target** (MetricTarget),必需 - - target 指定给定指标的目标值。 - - - - - - **MetricTarget 定义特定指标的目标值、平均值或平均利用率** - - - **metrics.external.target.type** (string),必需 - - type 表示指标类别是 `Utilization`、`Value` 或 `AverageValue`。 - - - - - **metrics.external.target.averageUtilization** (int32) - - averageUtilization 是跨所有相关 Pod 得到的资源指标均值的目标值, - 表示为 Pod 资源请求值的百分比。目前仅对 “Resource” 指标源类别有效。 - - - - - **metrics.external.target.averageValue** (}}">Quantity) - - averageValue 是跨所有相关 Pod 得到的指标均值的目标值(以数量形式给出)。 - - - - - **metrics.external.target.value** (}}">Quantity) - - value 是指标的目标值(以数量形式给出)。 - - - - - **metrics.object** (ObjectMetricSource) - - object 是指描述单个 Kubernetes 对象的指标(例如,Ingress 对象上的 `hits-per-second`)。 - - - - - - **ObjectMetricSource 表示如何根据描述 Kubernetes 对象的指标进行扩缩容(例如,Ingress 对象的 `hits-per-second`)** - - - **metrics.object.describedObject** (CrossVersionObjectReference),必需 - - - - - - **CrossVersionObjectReference 包含足够的信息来让你识别所引用的资源。** - - - **metrics.object.describedObject.kind** (string),必需 - - 被引用对象的类别;更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds"。 - - - - - **metrics.object.describedObject.name** (string),必需 - - 被引用对象的名称;更多信息: 
https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names/#names - - - - - **metrics.object.describedObject.apiVersion** (string) - - 被引用对象的 API 版本。 - - - - - **metrics.object.metric** (MetricIdentifier),必需 - - metric 通过名称和选择算符识别目标指标。 - - - - - - **MetricIdentifier 定义指标的名称和可选的选择算符** - - - **metrics.object.metric.name** (string),必需 - - name 是给定指标的名称。 - - - - - **metrics.object.metric.selector** (}}">LabelSelector) - - selector 是给定指标的标准 Kubernetes 标签选择算符的字符串编码形式。 - 设置后,它作为附加参数传递给指标服务器,以获取更具体的指标范围。 - 未设置时,仅 metricName 参数将用于收集指标。 - - - - - **metrics.object.target** (MetricTarget),必需 - - target 表示给定指标的目标值。 - - - - - - **MetricTarget 定义特定指标的目标值、平均值或平均利用率** - - - **metrics.object.target.type** (string),必需 - - type 表示指标类别是 `Utilization`、`Value` 或 `AverageValue`。 - - - - - **metrics.object.target.averageUtilization** (int32) - - averageUtilization 是跨所有相关 Pod 得出的资源指标均值的目标值, - 表示为 Pod 资源请求值的百分比。目前仅对 “Resource” 指标源类别有效。 - - - - - **metrics.object.target.averageValue** (}}">Quantity) - - averageValue 是跨所有 Pod 得出的指标均值的目标值(以数量形式给出)。 - - - - - **metrics.object.target.value** (}}">Quantity) - - value 是指标的目标值(以数量形式给出)。 - - - - - **metrics.pods** (PodsMetricSource) - - pods 是指描述当前扩缩目标中每个 Pod 的指标(例如,`transactions-processed-per-second`)。 - 在与目标值进行比较之前,这些指标值将被平均。 - - - - - - PodsMetricSource 表示如何根据描述当前扩缩目标中每个 Pod 的指标进行扩缩容(例如,`transactions-processed-per-second`)。 - 在与目标值进行比较之前,这些指标值将被平均。 - - - **metrics.pods.metric** (MetricIdentifier),必需 - - metric 通过名称和选择算符识别目标指标。 - - - - - - **MetricIdentifier 定义指标的名称和可选的选择算符** - - - **metrics.pods.metric.name** (string),必需 - - name 是给定指标的名称。 - - - - - **metrics.pods.metric.selector** (}}">LabelSelector) - - selector 是给定指标的标准 Kubernetes 标签选择算符的字符串编码形式。 - 设置后,它作为附加参数传递给指标服务器,以获取更具体的指标范围。 - 未设置时,仅 metricName 参数将用于收集指标。 - - - - - **metrics.pods.target** (MetricTarget),必需 - - target 表示给定指标的目标值。 - - - - - - **MetricTarget 定义特定指标的目标值、平均值或平均利用率** - - - **metrics.pods.target.type** (string),必需 - - type 表示指标类别是 `Utilization`、`Value` 或 `AverageValue`。 - - - - - **metrics.pods.target.averageUtilization** (int32) - - averageUtilization 是跨所有相关 Pod 得出的资源指标均值的目标值, - 表示为 Pod 资源请求值的百分比。目前仅对 “Resource” 指标源类别有效。 - - - - - **metrics.pods.target.averageValue** (}}">Quantity) - - averageValue 是跨所有 Pod 得出的指标均值的目标值(以数量形式给出)。 - - - - - **metrics.pods.target.value** (}}">Quantity) - - value 是指标的目标值(以数量形式给出)。 - - - - - **metrics.resource** (ResourceMetricSource) - - resource 是指 Kubernetes 已知的资源指标(例如在请求和限制中指定的那些), - 此结构描述当前扩缩目标中的每个 Pod(例如 CPU 或内存)。此类指标内置于 Kubernetes 中, - 并且在使用 “Pods” 源的、按 Pod 统计的普通指标之外支持一些特殊的扩缩选项。 - - - - - - ResourceMetricSource 指示如何根据请求和限制中指定的 Kubernetes 已知的资源指标进行扩缩容, - 此结构描述当前扩缩目标中的每个 Pod(例如 CPU 或内存)。在与目标值比较之前,这些指标值将被平均。 - 此类指标内置于 Kubernetes 中,并且在使用 “Pods” 源的、按 Pod 统计的普通指标之外支持一些特殊的扩缩选项。 - 只应设置一种 “target” 类别。 - - - **metrics.resource.name** (string),必需 - - name 是相关资源的名称。 - - - - - **metrics.resource.target** (MetricTarget),必需 - - target 指定给定指标的目标值。 - - - - - - **MetricTarget 定义特定指标的目标值、平均值或平均利用率** - - - **metrics.resource.target.type** (string),必需 - - type 表示指标类别是 `Utilization`、`Value` 或 `AverageValue`。 - - - - - **metrics.resource.target.averageUtilization** (int32) - - averageUtilization 是跨所有相关 Pod 得出的资源指标均值的目标值, - 表示为 Pod 资源请求值的百分比。目前仅对 “Resource” 指标源类别有效。 - - - - - **metrics.resource.target.averageValue** (}}">Quantity) - - averageValue 是跨所有 Pod 得出的指标均值的目标值(以数量形式给出)。 - - - - - **metrics.resource.target.value** (}}">Quantity) - - value 是指标的目标值(以数量形式给出)。 - -## HorizontalPodAutoscalerStatus {#HorizontalPodAutoscalerStatus} - - -HorizontalPodAutoscalerStatus 描述了水平 
Pod 自动扩缩器的当前状态。 - -
    - - - -- **currentReplicas** (int32),必需 - - currentReplicas 是此自动扩缩器管理的 Pod 的当前副本数,如自动扩缩器最后一次看到的那样。 - - - -- **desiredReplicas** (int32),必需 - - desiredReplicas 是此自动扩缩器管理的 Pod 的所期望的副本数,由自动扩缩器最后计算。 - - - -- **conditions** ([]HorizontalPodAutoscalerCondition) - - conditions 是此自动扩缩器扩缩其目标所需的一组条件,并指示是否满足这些条件。 - - - - - - **HorizontalPodAutoscalerCondition 描述 HorizontalPodAutoscaler 在某一时间点的状态。** - - - **conditions.status** (string),必需 - - status 是状况的状态(True、False、Unknown)。 - - - - - **conditions.type** (string),必需 - - type 描述当前状况。 - - - - - **conditions.lastTransitionTime** (Time) - - lastTransitionTime 是状况最近一次从一种状态转换到另一种状态的时间。 - - - - - **Time 是对 time.Time 的封装。Time 支持对 YAML 和 JSON 进行正确封包。为 time 包的许多函数方法提供了封装器。** - - - - - **conditions.message** (string) - - message 是一个包含有关转换的可读的详细信息。 - - - - - **conditions.reason** (string) - - reason 是状况最后一次转换的原因。 - - - -- **currentMetrics** ([]MetricStatus) - - currentMetrics 是此自动扩缩器使用的指标的最后读取状态。 - - - - - - **MetricStatus 描述了单个指标的最后读取状态。** - - - **currentMetrics.type** (string),必需 - - type 是指标源的类别。它取值是 “ContainerResource”、“External”、“Object”、“Pods” 或 “Resource” 之一, - 每个类别映射到对象中的一个对应的字段。注意:“ContainerResource” 类别在特性门控 HPAContainerMetrics 启用时可用。 - - - - - **currentMetrics.containerResource** (ContainerResourceMetricStatus) - - containerResource 是指 Kubernetes 已知的一种资源指标(例如在请求和限制中指定的那些), - 描述当前扩缩目标中每个 Pod 中的单个容器(例如 CPU 或内存)。 - 此类指标内置于 Kubernetes 中,并且在使用 "Pods" 源的、按 Pod 统计的普通指标之外支持一些特殊的扩缩选项。 - - - - - - ContainerResourceMetricStatus 指示如何根据请求和限制中指定的 Kubernetes 已知的资源指标进行扩缩容, - 此结构描述当前扩缩目标中的每个 Pod(例如 CPU 或内存)。此类指标内置于 Kubernetes 中, - 并且在使用 “Pods” 源的、按 Pod 统计的普通指标之外支持一些特殊的扩缩选项。 - - - **currentMetrics.containerResource.container** (string),必需 - - container 是扩缩目标的 Pod 中的容器名称。 - - - - - **currentMetrics.containerResource.current** (MetricValueStatus),必需 - - current 包含给定指标的当前值。 - - - - - - **MetricValueStatus 保存指标的当前值** - - - **currentMetrics.containerResource.current.averageUtilization** (int32) - - averageUtilization 是跨所有相关 Pod 得出的资源指标均值的当前值,表示为 Pod 资源请求值的百分比。 - - - - - **currentMetrics.containerResource.current.averageValue** (}}">Quantity) - - averageValue 是跨所有相关 Pod 的指标均值的当前值(以数量形式给出)。 - - - - - **currentMetrics.containerResource.current.value** (}}">Quantity) - - value 是指标的当前值(以数量形式给出)。 - - - - - **currentMetrics.containerResource.name** (string),必需 - - name 是相关资源的名称。 - - - - - **currentMetrics.external** (ExternalMetricStatus) - - external 指的是不与任何 Kubernetes 对象关联的全局指标。这一字段允许基于来自集群外部运行的组件 - (例如云消息服务中的队列长度,或来自集群外部运行的负载均衡器的 QPS)的信息进行自动扩缩。 - - - - - - **ExternalMetricStatus 表示与任何 Kubernetes 对象无关的全局指标的当前值。** - - - **currentMetrics.external.current** (MetricValueStatus),必需 - - current 包含给定指标的当前值。 - - - - - - **MetricValueStatus 保存指标的当前值** - - - **currentMetrics.external.current.averageUtilization** (int32) - - averageUtilization 是跨所有相关 Pod 得出的资源指标均值的当前值,表示为 Pod 资源请求值的百分比。 - - - - - **currentMetrics.external.current.averageValue** (}}">Quantity) - - averageValue 是跨所有相关 Pod 的指标均值的当前值(以数量形式给出)。 - - - - - **currentMetrics.external.current.value** (}}">Quantity) - - value 是指标的当前值(以数量形式给出)。 - - - - - **currentMetrics.external.metric** (MetricIdentifier),必需 - - metric 通过名称和选择算符识别目标指标。 - - - - - - **MetricIdentifier 定义指标的名称和可选的选择算符** - - - **currentMetrics.external.metric.name** (string),必需 - - name 是给定指标的名称。 - - - - - **currentMetrics.external.metric.selector** (}}">LabelSelector) - - selector 是给定指标的标准 Kubernetes 标签选择算符的字符串编码形式。 - 设置后,它作为附加参数传递给指标服务器,以获取更具体的指标范围。 - 未设置时,仅 metricName 参数将用于收集指标。 - - - - - **currentMetrics.object** (ObjectMetricStatus) - - object 是指描述单个 Kubernetes 
对象的指标(例如,Ingress 对象的 `hits-per-second`)。 - - - - - - **ObjectMetricStatus 表示描述 Kubernetes 对象的指标的当前值(例如,Ingress 对象的 `hits-per-second`)。** - - - **currentMetrics.object.current** (MetricValueStatus),必需 - - current 包含给定指标的当前值。 - - - - - - **MetricValueStatus 保存指标的当前值** - - - **currentMetrics.object.current.averageUtilization** (int32) - - averageUtilization 是跨所有相关 Pod 得出的资源指标均值的当前值,表示为 Pod 资源请求值的百分比。 - - - - - **currentMetrics.object.current.averageValue** (}}">Quantity) - - averageValue 是跨所有相关 Pod 的指标均值的当前值(以数量形式给出)。 - - - - - **currentMetrics.object.current.value** (}}">Quantity) - - value 是指标的当前值(以数量形式给出)。 - - - - - **currentMetrics.object.describedObject** (CrossVersionObjectReference),必需 - - - - - - **CrossVersionObjectReference 包含足够的信息来让你识别所引用的资源。** - - - **currentMetrics.object.describedObject.kind** (string),必需 - - 被引用对象的类别;更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds" - - - - - **currentMetrics.object.describedObject.name** (string),必需 - - 被引用对象的名称;更多信息: https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names/#names - - - - - **currentMetrics.object.describedObject.apiVersion** (string) - - 被引用对象的 API 版本。 - - - - - **currentMetrics.object.metric** (MetricIdentifier),必需 - - metric 通过名称和选择算符识别目标指标。 - - - - - - **MetricIdentifier 定义指标的名称和可选的选择算符** - - - **currentMetrics.object.metric.name** (string),必需 - - name 是给定指标的名称。 - - - - - **currentMetrics.object.metric.selector** (}}">LabelSelector) - - selector 是给定指标的标准 Kubernetes 标签选择算符的字符串编码形式。 - 设置后,它作为附加参数传递给指标服务器,以获取更具体的指标范围。 - 未设置时,仅 metricName 参数将用于收集指标。 - - - - - **currentMetrics.pods** (PodsMetricStatus) - - pods 是指描述当前扩缩目标中每个 Pod 的指标(例如,`transactions-processed-per-second`)。 - 在与目标值进行比较之前,这些指标值将被平均。 - - - - - - **PodsMetricStatus 表示描述当前扩缩目标中每个 Pod 的指标的当前值(例如,`transactions-processed-per-second`)。** - - - **currentMetrics.pods.current** (MetricValueStatus),必需 - - current 包含给定指标的当前值。 - - - - - - **MetricValueStatus 保存指标的当前值** - - - **currentMetrics.pods.current.averageUtilization** (int32) - - averageUtilization 是跨所有相关 Pod 得出的资源指标均值的当前值,表示为 Pod 资源请求值的百分比。 - - - - - **currentMetrics.pods.current.averageValue** (}}">Quantity) - - averageValue 是跨所有相关 Pod 的指标均值的当前值(以数量形式给出)。 - - - - - **currentMetrics.pods.current.value** (}}">Quantity) - - value 是指标的当前值(以数量形式给出)。 - - - - - **currentMetrics.pods.metric** (MetricIdentifier),必需 - - metric 通过名称和选择算符识别目标指标。 - - - - - - **MetricIdentifier 定义指标的名称和可选的选择算符** - - - **currentMetrics.pods.metric.name** (string),必需 - - name 是给定指标的名称。 - - - - - **currentMetrics.pods.metric.selector** (}}">LabelSelector) - - selector 是给定指标的标准 Kubernetes 标签选择算符的字符串编码形式。 - 设置后,它作为附加参数传递给指标服务器,以获取更具体的指标范围。 - 未设置时,仅 metricName 参数将用于收集指标。 - - - - - **currentMetrics.resource** (ResourceMetricStatus) - - resource 是指 Kubernetes 已知的资源指标(例如在请求和限制中指定的那些), - 此结构描述当前扩缩目标中的每个 Pod(例如 CPU 或内存)。此类指标内置于 Kubernetes 中, - 并且在使用 “Pods” 源的、按 Pod 统计的普通指标之外支持一些特殊的扩缩选项。 - - - - - - ResourceMetricSource 指示如何根据请求和限制中指定的 Kubernetes 已知的资源指标进行扩缩容, - 此结构描述当前扩缩目标中的每个 Pod(例如 CPU 或内存)。在与目标值比较之前,这些指标值将被平均。 - 此类指标内置于 Kubernetes 中,并且在使用 “Pods” 源的、按 Pod 统计的普通指标之外支持一些特殊的扩缩选项。 - - - **currentMetrics.resource.current** (MetricValueStatus),必需 - - current 包含给定指标的当前值。 - - - - - - **MetricValueStatus 保存指标的当前值** - - - **currentMetrics.resource.current.averageUtilization** (int32) - - averageUtilization 是跨所有相关 Pod 得出的资源指标均值的当前值, - 表示为 Pod 资源请求值的百分比。 - - - - - **currentMetrics.resource.current.averageValue** (}}">Quantity) - - averageValue 是跨所有相关 Pod 的指标均值的当前值(以数量形式给出)。 - - - - - 
**currentMetrics.resource.current.value** (}}">Quantity) - - value 是指标的当前值(以数量形式给出)。 - - - - - **currentMetrics.resource.name** (string),必需 - - name 是相关资源的名称。 - - - -- **lastScaleTime** (Time) - - lastScaleTime 是 HorizontalPodAutoscaler 上次扩缩 Pod 数量的时间,自动扩缩器使用它来控制更改 Pod 数量的频率。 - - - - - - **Time 是对 time.Time 的封装。Time 支持对 YAML 和 JSON 进行正确封包。为 time 包的许多函数方法提供了封装器。** - - - -- **observedGeneration** (int64) - - observedGeneration 是此自动扩缩器观察到的最新一代。 - -## HorizontalPodAutoscalerList {#HorizontalPodAutoscalerList} - - -HorizontalPodAutoscalerList 是水平 Pod 自动扩缩器对象列表。 - -
    - -- **apiVersion**: autoscaling/v2beta2 - -- **kind**: HorizontalPodAutoscalerList - - - -- **metadata** (}}">ListMeta) - - metadata 是标准的列表元数据。 - - - -- **items** ([]}}">HorizontalPodAutoscaler),必需 - - items 是水平 Pod 自动扩缩器对象的列表。 - -## Operations {#Operations} - -
    - - -### `get` 读取指定的 HorizontalPodAutoscaler - -#### HTTP 请求 - -GET /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{name} - - -#### 参数 - -- **name** (**路径参数**): string,必需 - - HorizontalPodAutoscaler 的名称。 - - -- **namespace** (**路径参数**): string,必需 - - }}">namespace - -- **pretty** (**查询参数**): string - - }}">pretty - - -#### 响应 - -200 (}}">HorizontalPodAutoscaler): OK - -401: Unauthorized - - -### `get` 读取指定 HorizontalPodAutoscaler 的状态 - -#### HTTP 请求 - -GET /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{name}/status - - -#### 参数 - -- **name** (**路径参数**): string,必需 - - HorizontalPodAutoscaler 的名称。 - - -- **namespace** (**路径参数**): string,必需 - - }}">namespace - -- **pretty** (**查询参数**): string - - }}">pretty - - -#### 响应 - -200 (}}">HorizontalPodAutoscaler): OK - -401: Unauthorized - - -### `list` 列出或观察 HorizontalPodAutoscaler 类别的对象 - -#### HTTP 请求 - -GET /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers - - -#### 参数 - -- **namespace** (**路径参数**): string,必需 - - }}">namespace - -- **allowWatchBookmarks** (**查询参数**): boolean - - }}">allowWatchBookmarks - -- **continue** (**查询参数**): string - - }}">continue - -- **fieldSelector** (**查询参数**): string - - }}">fieldSelector - -- **labelSelector** (**查询参数**): string - - }}">labelSelector - -- **limit** (**查询参数**): integer - - }}">limit - -- **pretty** (**查询参数**): string - - }}">pretty - -- **resourceVersion** (**查询参数**): string - - }}">resourceVersion - -- **resourceVersionMatch** (**查询参数**): string - - }}">resourceVersionMatch - -- **timeoutSeconds** (**查询参数**): integer - - }}">timeoutSeconds - -- **watch** (**查询参数**): boolean - - }}">watch - - -#### 响应 - -200 (}}">HorizontalPodAutoscalerList): OK - -401: Unauthorized - - -### `list` 列出或观察 HorizontalPodAutoscaler 类别的对象 - -#### HTTP 请求 - -GET /apis/autoscaling/v2beta2/horizontalpodautoscalers - - -#### 参数 - -- **allowWatchBookmarks** (**查询参数**): boolean - - }}">allowWatchBookmarks - -- **continue** (**查询参数**): string - - }}">continue - -- **fieldSelector** (**查询参数**): string - - }}">fieldSelector - -- **labelSelector** (**查询参数**): string - - }}">labelSelector - -- **limit** (**查询参数**): integer - - }}">limit - -- **pretty** (**查询参数**): string - - }}">pretty - -- **resourceVersion** (**查询参数**): string - - }}">resourceVersion - -- **resourceVersionMatch** (**查询参数**): string - - }}">resourceVersionMatch - -- **timeoutSeconds** (**查询参数**): integer - - }}">timeoutSeconds - -- **watch** (**查询参数**): boolean - - }}">watch - - -#### 响应 - -200 (}}">HorizontalPodAutoscalerList): OK - -401: Unauthorized - - -### `create` 创建一个 HorizontalPodAutoscaler - -#### HTTP 请求 - -POST /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers - - -#### 参数 - -- **namespace** (**路径参数**): string,必需 - - }}">namespace - -- **body**: }}">HorizontalPodAutoscaler必需 - -- **dryRun** (**查询参数**): string - - }}">dryRun - -- **fieldManager** (**查询参数**): string - - }}">fieldManager - -- **fieldValidation** (**查询参数**): string - - }}">fieldValidation - -- **pretty** (**查询参数**): string - - }}">pretty - - -#### 响应 - -200 (}}">HorizontalPodAutoscaler): OK - -201 (}}">HorizontalPodAutoscaler): Created - -202 (}}">HorizontalPodAutoscaler): Accepted - -401: Unauthorized - - -### `update` 替换指定的 HorizontalPodAutoscaler - -#### HTTP 请求 - -PUT /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{name} - - -#### 参数 - -- **name** (**路径参数**): string,必需 - - HorizontalPodAutoscaler 的名称。 - - -- **namespace** (**路径参数**): string,必需 - - 
}}">namespace - -- **body**: }}">HorizontalPodAutoscaler必需 - -- **dryRun** (**查询参数**): string - - }}">dryRun - -- **fieldManager** (**查询参数**): string - - }}">fieldManager - -- **fieldValidation** (**查询参数**): string - - }}">fieldValidation - -- **pretty** (**查询参数**): string - - }}">pretty - - -#### 响应 - -200 (}}">HorizontalPodAutoscaler): OK - -201 (}}">HorizontalPodAutoscaler): Created - -401: Unauthorized - - -### `update` 替换指定 HorizontalPodAutoscaler 的状态 - -#### HTTP 请求 - -PUT /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{name}/status - - -#### 参数 - -- **name** (**路径参数**): string,必需 - - HorizontalPodAutoscaler 的名称。 - - -- **namespace** (**路径参数**): string,必需 - - }}">namespace - -- **body**: }}">HorizontalPodAutoscaler必需 - -- **dryRun** (**查询参数**): string - - }}">dryRun - -- **fieldManager** (**查询参数**): string - - }}">fieldManager - -- **fieldValidation** (**查询参数**): string - - }}">fieldValidation - -- **pretty** (**查询参数**): string - - }}">pretty - - -#### 响应 - -200 (}}">HorizontalPodAutoscaler): OK - -201 (}}">HorizontalPodAutoscaler): Created - -401: Unauthorized - - -### `patch` 部分更新指定的 HorizontalPodAutoscaler - -#### HTTP 请求 - -PATCH /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{name} - - -#### 参数 - -- **name** (**路径参数**): string,必需 - - HorizontalPodAutoscaler 的名称。 - - -- **namespace** (**路径参数**): string,必需 - - }}">namespace - -- **body**: }}">Patch必需 - -- **dryRun** (**查询参数**): string - - }}">dryRun - -- **fieldManager** (**查询参数**): string - - }}">fieldManager - -- **fieldValidation** (**查询参数**): string - - }}">fieldValidation - -- **force** (**查询参数**): boolean - - }}">force - -- **pretty** (**查询参数**): string - - }}">pretty - - -#### 响应 - -200 (}}">HorizontalPodAutoscaler): OK - -201 (}}">HorizontalPodAutoscaler): Created - -401: Unauthorized - - -### `patch` 部分更新指定 HorizontalPodAutoscaler 的状态 - -#### HTTP 请求 - -PATCH /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{name}/status - - -#### 参数 - -- **name** (**路径参数**): string,必需 - - HorizontalPodAutoscaler 的名称。 - - -- **namespace** (**路径参数**): string,必需 - - }}">namespace - -- **body**: }}">Patch必需 - -- **dryRun** (**查询参数**): string - - }}">dryRun - -- **fieldManager** (**查询参数**): string - - }}">fieldManager - -- **fieldValidation** (**查询参数**): string - - }}">fieldValidation - -- **force** (**查询参数**): boolean - - }}">force - -- **pretty** (**查询参数**): string - - }}">pretty - - -#### 响应 - -200 (}}">HorizontalPodAutoscaler): OK - -201 (}}">HorizontalPodAutoscaler): Created - -401: Unauthorized - - -### `delete` 删除一个 HorizontalPodAutoscaler - -#### HTTP 请求 - -DELETE /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{name} - - -#### 参数 - -- **name** (**路径参数**): string,必需 - - HorizontalPodAutoscaler 的名称。 - - -- **namespace** (**路径参数**): string,必需 - - }}">namespace - -- **body**: }}">DeleteOptions - -- **dryRun** (**查询参数**): string - - }}">dryRun - -- **gracePeriodSeconds** (**查询参数**): integer - - }}">gracePeriodSeconds - -- **pretty** (**查询参数**): string - - }}">pretty - -- **propagationPolicy** (**查询参数**): string - - }}">propagationPolicy - - -#### 响应 - -200 (}}">Status): OK - -202 (}}">Status): Accepted - -401: Unauthorized - - -### `deletecollection` 删除 HorizontalPodAutoscaler 的集合 - -#### HTTP 请求 - -DELETE /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers - - -#### 参数 - -- **namespace** (**路径参数**): string,必需 - - }}">namespace - -- **body**: }}">DeleteOptions - -- **continue** (**查询参数**): string - - }}">continue 
- -- **dryRun** (**查询参数**): string - - }}">dryRun - -- **fieldSelector** (**查询参数**): string - - }}">fieldSelector - -- **gracePeriodSeconds** (**查询参数**): integer - - }}">gracePeriodSeconds - -- **labelSelector** (**查询参数**): string - - }}">labelSelector - -- **limit** (**查询参数**): integer - - }}">limit - -- **pretty** (**查询参数**): string - - }}">pretty - -- **propagationPolicy** (**查询参数**): string - - }}">propagationPolicy - -- **resourceVersion** (**查询参数**): string - - }}">resourceVersion - -- **resourceVersionMatch** (**查询参数**): string - - }}">resourceVersionMatch - -- **timeoutSeconds** (**查询参数**): integer - - }}">timeoutSeconds - - -#### 响应 - -200 (}}">Status): OK - -401: Unauthorized - diff --git a/content/zh-cn/docs/reference/labels-annotations-taints/_index.md b/content/zh-cn/docs/reference/labels-annotations-taints/_index.md index 3748d199c37ae..0a0a304474a88 100644 --- a/content/zh-cn/docs/reference/labels-annotations-taints/_index.md +++ b/content/zh-cn/docs/reference/labels-annotations-taints/_index.md @@ -901,6 +901,32 @@ ServiceAccount that the token (stored in the Secret of type `kubernetes.io/servi 该注解记录了令牌(存储在 `kubernetes.io/service-account-token` 类型的 Secret 中)所代表的 ServiceAccount 的{{}}。 + +### kubernetes.io/legacy-token-last-used + +例子:`kubernetes.io/legacy-token-last-used: 2022-10-24` + +用于:Secret + +控制面仅为 `kubernetes.io/service-account-token` 类型的 Secret 添加此标签。 +该标签的值记录着控制面最近一次接到客户端使用服务帐户令牌进行身份验证请求的日期(ISO 8601 +格式,UTC 时区) + +如果上一次使用老的令牌的时间在集群获得此特性(添加于 Kubernetes v1.26)之前,则不会设置此标签。 + -### batch.kubernetes.io/job-tracking {#batch-kubernetes-io-job-tracking} +### batch.kubernetes.io/job-tracking (已弃用) {#batch-kubernetes-io-job-tracking} 例子:`batch.kubernetes.io/job-tracking: ""` 用于:Job Job 上存在此注解表明控制平面正在[使用 Finalizer 追踪 Job](/zh-cn/docs/concepts/workloads/controllers/job/#job-tracking-with-finalizers)。 +控制平面使用此注解来安全地转换为使用 Finalizer 追踪 Job,而此特性正在开发中。 你 **不** 可以手动添加或删除此注解。 +{{< note >}} + +从 Kubernetes 1.26 开始,该注解被弃用。 +Kubernetes 1.27 及以上版本将忽略此注解,并始终使用 Finalizer 追踪 Job。 +{{< /note >}} + +### scheduler.alpha.kubernetes.io/critical-pod(已弃用){#scheduler-alpha-kubernetes-io-critical-pod} + +例子:`scheduler.alpha.kubernetes.io/critical-pod: ""` + +用于:Pod + +此注解让 Kubernetes 控制平面知晓某个 Pod 是一个关键的 Pod,这样 descheduler +将不会移除该 Pod。 + +{{< note >}} + +从 v1.16 开始,此注解被移除,取而代之的是 [Pod 优先级](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/)。 +{{< /note >}} + + + + +Kubernetes 集群中的每个{{< glossary_tooltip term_id="node" text="节点" >}}会运行一个 +[kube-proxy](/zh-cn/docs/reference/command-line-tools-reference/kube-proxy/) +(除非你已经部署了自己的替换组件来替代 `kube-proxy`)。 + + +`kube-proxy` 组件负责除 `type` 为 +[`ExternalName`](/zh-cn/docs/concepts/services-networking/service/#externalname) +以外的{{< glossary_tooltip term_id="service" text="服务">}},实现**虚拟 IP** 机制。 + + +一个时不时出现的问题是,为什么 Kubernetes 依赖代理将入站流量转发到后端。 +其他方案呢?例如,是否可以配置具有多个 A 值(或 IPv6 的 AAAA)的 DNS 记录, +使用轮询域名解析? 
+ + +使用代理转发方式实现 Service 的原因有以下几个: + +* DNS 的实现不遵守记录的 TTL 约定的历史由来已久,在记录过期后可能仍有结果缓存。 +* 有些应用只做一次 DNS 查询,然后永久缓存结果。 +* 即使应用程序和库进行了适当的重新解析,TTL 取值较低或为零的 DNS 记录可能会给 DNS 带来很大的压力, + 从而变得难以管理。 + + +在下文中,你可以了解到 kube-proxy 各种实现方式的工作原理。 +总的来说,你应该注意到,在运行 `kube-proxy` 时, +可能会修改内核级别的规则(例如,可能会创建 iptables 规则), +在某些情况下,这些规则直到重启才会被清理。 +因此,运行 kube-proxy 这件事应该只由了解在计算机上使用低级别、特权网络代理服务会带来的后果的管理员执行。 +尽管 `kube-proxy` 可执行文件支持 `cleanup` 功能,但这个功能并不是官方特性,因此只能根据具体情况使用。 + + + +本文中的一些细节会引用这样一个例子: +运行了 3 个 Pod 副本的无状态图像处理后端工作负载。 +这些副本是可互换的;前端不需要关心它们调用了哪个后端副本。 +即使组成这一组后端程序的 Pod 实际上可能会发生变化, +前端客户端不应该也没必要知道,而且也不需要跟踪这一组后端的状态。 + + + + +## 代理模式 {#proxy-modes} + + +注意,kube-proxy 会根据不同配置以不同的模式启动。 + +- kube-proxy 的配置是通过 ConfigMap 完成的,kube-proxy 的 ConfigMap 实际上弃用了 kube-proxy 大部分标志的行为。 +- kube-proxy 的 ConfigMap 不支持配置的实时重新加载。 +- kube-proxy 不能在启动时验证和检查所有的 ConfigMap 参数。 + 例如,如果你的操作系统不允许你运行 iptables 命令,标准的 kube-proxy 内核实现将无法工作。 + 同样,如果你的操作系统不支持 `netsh`,它也无法在 Windows 用户空间模式下运行。 + + +### `iptables` 代理模式 {#proxy-mode-iptables} + + +在这种模式下,kube-proxy 监视 Kubernetes 控制平面,获知对 Service 和 EndpointSlice 对象的添加和删除操作。 +对于每个 Service,kube-proxy 会添加 iptables 规则,这些规则捕获流向 Service 的 `clusterIP` 和 `port` 的流量, +并将这些流量重定向到 Service 后端集合中的其中之一。 +对于每个端点,它会添加指向一个特定后端 Pod 的 iptables 规则。 + + +默认情况下,iptables 模式下的 kube-proxy 会随机选择一个后端。 + +使用 iptables 处理流量的系统开销较低,因为流量由 Linux netfilter 处理, +无需在用户空间和内核空间之间切换。这种方案也更为可靠。 + + +如果 kube-proxy 以 iptables 模式运行,并且它选择的第一个 Pod 没有响应, +那么连接会失败。这与用户空间模式不同: +在后者这种情况下,kube-proxy 会检测到与第一个 Pod 的连接失败, +并会自动用不同的后端 Pod 重试。 + + +你可以使用 Pod [就绪探针](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)来验证后端 Pod 是否健康。 +这样可以避免 kube-proxy 将流量发送到已知失败的 Pod 上。 + + +{{< figure src="/images/docs/services-iptables-overview.svg" title="iptables 模式下 Service 的虚拟 IP 机制" class="diagram-medium" >}} + + +#### 示例 {#packet-processing-iptables} + + +例如,考虑本页中[前面](#example)描述的图像处理应用程序。 +当创建后端 Service 时,Kubernetes 控制平面会分配一个虚拟 IP 地址,例如 10.0.0.1。 +对于这个例子而言,假设 Service 端口是 1234。 +集群中的所有 kube-proxy 实例都会观察到新 Service 的创建。 + + +当节点上的 kube-proxy 观察到新 Service 时,它会添加一系列 iptables 规则, +这些规则从虚拟 IP 地址重定向到更多 iptables 规则,每个 Service 都定义了这些规则。 +每个 Service 规则链接到每个后端端点的更多规则, +并且每个端点规则将流量重定向(使用目标 NAT)到后端。 + + +当客户端连接到 Service 的虚拟 IP 地址时,iptables 规则会生效。 +会选择一个后端(基于会话亲和性或随机选择),并将数据包重定向到后端,无需重写客户端 IP 地址。 + + +当流量通过节点端口或负载均衡器进入时,也会执行相同的基本流程, +只是在这些情况下,客户端 IP 地址会被更改。 + + +#### 优化 iptables 模式性能 {#optimizing-iptables-mode-performance} + +在大型集群(有数万个 Pod 和 Service)中,当 Service(或其 EndpointSlices)发生变化时 +iptables 模式的 kube-proxy 在更新内核中的规则时可能要用较长时间。 +你可以通过(`kube-proxy --config ` 指定的)kube-proxy +[配置文件](/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1/)的 +[`iptables` 节](/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1/#kubeproxy-config-k8s-io-v1alpha1-KubeProxyIPTablesConfiguration)中的选项来调整 +kube-proxy 的同步行为: + +```none +... +iptables: + minSyncPeriod: 1s + syncPeriod: 30s +... 
+``` + +##### `minSyncPeriod` + + +`minSyncPeriod` 参数设置尝试同步 iptables 规则与内核之间的最短时长。 +如果是 `0s`,那么每次有任一 Service 或 Endpoint 发生变更时,kube-proxy 都会立即同步这些规则。 +这种方式在较小的集群中可以工作得很好,但如果在很短的时间内很多东西发生变更时,它会导致大量冗余工作。 +例如,如果你有一个由 Deployment 支持的 Service,共有 100 个 Pod,你删除了这个 Deployment, +且设置了 `minSyncPeriod: 0s`,kube-proxy 最终会从 iptables 规则中逐个删除 Service 的 Endpoint, +总共更新 100 次。使用较大的 `minSyncPeriod` 值时,多个 Pod 删除事件将被聚合在一起, +因此 kube-proxy 最终可能会进行例如 5 次更新,每次移除 20 个端点, +这样在 CPU 利用率方面更有效率,能够更快地同步所有变更。 + + +`minSyncPeriod` 的值越大,可以聚合的工作越多, +但缺点是每个独立的变更可能最终要等待整个 `minSyncPeriod` 周期后才能被处理, +这意味着 iptables 规则要用更多时间才能与当前的 apiserver 状态同步。 + + +默认值 `1s` 对于中小型集群是一个很好的折衷方案。 +在大型集群中,可能需要将其设置为更大的值。 +(特别是,如果 kube-proxy 的 `sync_proxy_rules_duration_seconds` 指标表明平均时间远大于 1 秒, +那么提高 `minSyncPeriod` 可能会使更新更有效率。) + +##### `syncPeriod` + + +`syncPeriod` 参数控制与单次 Service 和 Endpoint 的变更没有直接关系的少数同步操作。 +特别是,它控制 kube-proxy 在外部组件已干涉 kube-proxy 的 iptables 规则时通知的速度。 +在大型集群中,kube-proxy 也仅在每隔 `syncPeriod` 时长执行某些清理操作,以避免不必要的工作。 + + +在大多数情况下,提高 `syncPeriod` 预计不会对性能产生太大影响, +但在过去,有时将其设置为非常大的值(例如 `1h`)很有用。 +现在不再推荐这种做法,因为它对功能的破坏可能会超过对性能的改进。 + + +##### 实验性的性能改进 {#minimize-iptables-restore} + +{{< feature-state for_k8s_version="v1.26" state="alpha" >}} + + +在 Kubernetes 1.26 中,社区对 iptables 代理模式进行了一些新的性能改进, +但默认未启用(并且可能还不应该在生产集群中启用)。要试用它们, +请使用 `--feature-gates=MinimizeIPTablesRestore=true,…` 为 kube-proxy 启用 `MinimizeIPTablesRestore` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 + + +如果你启用该特性门控并且之前覆盖了 `minSyncPeriod`, +你应该尝试移除该覆盖并让 kube-proxy 使用默认值 (`1s`) 或至少使用比之前更小的值。 + + +如果你注意到 kube-proxy 的 `sync_proxy_rules_iptables_restore_failures_total` 或 +`sync_proxy_rules_iptables_partial_restore_failures_total` 指标在启用此特性后升高, +这可能表明你发现了该特性的错误,你应该提交错误报告。 + + +### IPVS 代理模式 {#proxy-mode-ipvs} + + +在 `ipvs` 模式下,kube-proxy 监视 Kubernetes Service 和 EndpointSlice, +然后调用 `netlink` 接口创建 IPVS 规则, +并定期与 Kubernetes Service 和 EndpointSlice 同步 IPVS 规则。 +该控制回路确保 IPVS 状态与期望的状态保持一致。 +访问 Service 时,IPVS 会将流量导向到某一个后端 Pod。 + + +IPVS 代理模式基于 netfilter 回调函数,类似于 iptables 模式, +但它使用哈希表作为底层数据结构,在内核空间中生效。 +这意味着 IPVS 模式下的 kube-proxy 比 iptables 模式下的 kube-proxy +重定向流量的延迟更低,同步代理规则时性能也更好。 +与其他代理模式相比,IPVS 模式还支持更高的网络流量吞吐量。 + + +IPVS 为将流量均衡到后端 Pod 提供了更多选择: + +* `rr`:轮询 +* `lc`:最少连接(打开连接数最少) +* `dh`:目标地址哈希 +* `sh`:源地址哈希 +* `sed`:最短预期延迟 +* `nq`:最少队列 + +{{< note >}} + +要在 IPVS 模式下运行 kube-proxy,必须在启动 kube-proxy 之前确保节点上的 IPVS 可用。 + +当 kube-proxy 以 IPVS 代理模式启动时,它会验证 IPVS 内核模块是否可用。 +如果未检测到 IPVS 内核模块,则 kube-proxy 会退回到 iptables 代理模式运行。 +{{< /note >}} + + +{{< figure src="/images/docs/services-ipvs-overview.svg" title="IPVS 模式下 Service 的虚拟 IP 地址机制" class="diagram-medium" >}} + + +## 会话亲和性 {#session-affinity} + + +在这些代理模型中,绑定到 Service IP:Port 的流量被代理到合适的后端, +客户端不需要知道任何关于 Kubernetes、Service 或 Pod 的信息。 + + +如果要确保来自特定客户端的连接每次都传递给同一个 Pod, +你可以通过设置 Service 的 `.spec.sessionAffinity` 为 `ClientIP` +来设置基于客户端 IP 地址的会话亲和性(默认为 `None`)。 + + +### 会话粘性超时 {#session-stickiness-timeout} + + +你还可以通过设置 Service 的 `.spec.sessionAffinityConfig.clientIP.timeoutSeconds` +来设置最大会话粘性时间(默认值为 10800,即 3 小时)。 + +{{< note >}} + +在 Windows 上不支持为 Service 设置最大会话粘性时间。 +{{< /note >}} + + +## 将 IP 地址分配给 Service {#ip-address-assignment-to-services} + + +与实际路由到固定目标的 Pod IP 地址不同,Service IP 实际上不是由单个主机回答的。 +相反,kube-proxy 使用数据包处理逻辑(例如 Linux 的 iptables) +来定义**虚拟** IP 地址,这些地址会按需被透明重定向。 + + +当客户端连接到 VIP 时,其流量会自动传输到适当的端点。 +实际上,Service 的环境变量和 DNS 是根据 Service 的虚拟 IP 地址(和端口)填充的。 + + +### 避免冲突 {#avoiding-collisions} + + +Kubernetes 的主要哲学之一是, +你不应需要在完全不是你的问题的情况下面对可能导致你的操作失败的情形。 +对于 Service 资源的设计,也就是如果你选择的端口号可能与其他人的选择冲突, 
+就不应该让你自己选择端口号。这是一种失败隔离。 + + +为了允许你为 Service 选择端口号,我们必须确保没有任何两个 Service 会发生冲突。 +Kubernetes 通过从为 API 服务器配置的 `service-cluster-ip-range` +CIDR 范围内为每个 Service 分配自己的 IP 地址来实现这一点。 + + +为了确保每个 Service 都获得唯一的 IP,内部分配器在创建每个 Service +之前更新 {{< glossary_tooltip term_id="etcd" >}} 中的全局分配映射,这种更新操作具有原子性。 +映射对象必须存在于数据库中,这样 Service 才能获得 IP 地址分配, +否则创建将失败,并显示无法分配 IP 地址。 + + +在控制平面中,后台控制器负责创建该映射(从使用内存锁定的旧版本的 Kubernetes 迁移时需要这一映射)。 +Kubernetes 还使用控制器来检查无效的分配(例如,因管理员干预而导致无效分配) +以及清理已分配但没有 Service 使用的 IP 地址。 + + +#### Service 虚拟 IP 地址的地址段 {#service-ip-static-sub-range} + +{{< feature-state for_k8s_version="v1.25" state="beta" >}} + + +Kubernetes 根据配置的 `service-cluster-ip-range` 的大小使用公式 +`min(max(16, cidrSize / 16), 256)` 将 `ClusterIP` 范围分为两段。 +该公式可以解释为:介于 16 和 256 之间,并在上下界之间存在渐进阶梯函数的分配。 + + +Kubernetes 优先通过从高段中选择来为 Service 分配动态 IP 地址, +这意味着如果要将特定 IP 地址分配给 `type: ClusterIP` Service, +则应手动从**低**段中分配 IP 地址。 +该方法降低了分配导致冲突的风险。 + + +如果你禁用 `ServiceIPStaticSubrange` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), +则 Kubernetes 用于手动分配和动态分配的 IP 共享单个地址池,这适用于 `type: ClusterIP` 的 Service。 + + +## 流量策略 {#traffic-policies} + + +你可以设置 `.spec.internalTrafficPolicy` 和 `.spec.externalTrafficPolicy` +字段来控制 Kubernetes 如何将流量路由到健康(“就绪”)的后端。 + + +### 内部流量策略 {#internal-traffic-policy} + +{{< feature-state for_k8s_version="v1.22" state="beta" >}} + + +你可以设置 `.spec.internalTrafficPolicy` 字段来控制来自内部源的流量如何被路由。 +有效值为 `Cluster` 和 `Local`。 +将字段设置为 `Cluster` 会将内部流量路由到所有准备就绪的端点, +将字段设置为 `Local` 仅会将流量路由到本地节点准备就绪的端点。 +如果流量策略为 `Local` 但没有本地节点端点,那么 kube-proxy 会丢弃该流量。 + + +### 外部流量策略 {#external-traffic-policy} + + +你可以设置 `.spec.externalTrafficPolicy` 字段来控制从外部源路由的流量。 +有效值为 `Cluster` 和 `Local`。 +将字段设置为 `Cluster` 会将外部流量路由到所有准备就绪的端点, +将字段设置为 `Local` 仅会将流量路由到本地节点上准备就绪的端点。 +如果流量策略为 `Local` 并且没有本地节点端点, +那么 kube-proxy 不会转发与相关 Service 相关的任何流量。 + + +### 流向正终止的端点的流量 {#traffic-to-terminating-endpoints} + +{{< feature-state for_k8s_version="v1.26" state="beta" >}} + + +如果为 kube-proxy 启用了 `ProxyTerminatingEndpoints` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)且流量策略为 `Local`, +则节点的 kube-proxy 将使用更复杂的算法为 Service 选择端点。 +启用此特性时,kube-proxy 会检查节点是否具有本地端点以及是否所有本地端点都标记为正在终止过程中。 +如果有本地端点并且**所有**本地端点都被标记为处于终止过程中, +则 kube-proxy 会将转发流量到这些正在终止过程中的端点。 +否则,kube-proxy 会始终选择将流量转发到并未处于终止过程中的端点。 + + +这种对处于终止过程中的端点的转发行为使得 `NodePort` 和 `LoadBalancer` Service +能有条不紊地腾空设置了 `externalTrafficPolicy: Local` 时的连接。 + +当一个 Deployment 被滚动更新时,处于负载均衡器后端的节点可能会将该 Deployment 的 N 个副本缩减到 +0 个副本。在某些情况下,外部负载均衡器可能在两次执行健康检查探针之间将流量发送到具有 0 个副本的节点。 +将流量路由到处于终止过程中的端点可确保正在缩减 Pod 的节点能够正常接收流量, +并逐渐降低指向那些处于终止过程中的 Pod 的流量。 +到 Pod 完成终止时,外部负载均衡器应该已经发现节点的健康检查失败并从后端池中完全移除该节点。 + +## {{% heading "whatsnext" %}} + + +要了解有关 Service 的更多信息, +请阅读[使用 Service 连接应用](/zh-cn/docs/tutorials/services/connect-applications-service/)。 + + +也可以: + +* 阅读 [Service](/zh-cn/docs/concepts/services-networking/service/) 了解其概念 +* 阅读 [Ingress](/zh-cn/docs/concepts/services-networking/ingress/) 了解其概念 +* 阅读 [API 参考](/zh-cn/docs/reference/kubernetes-api/service-resources/service-v1/)进一步了解 Service API diff --git a/content/zh-cn/docs/reference/node/device-plugin-api-versions.md b/content/zh-cn/docs/reference/node/device-plugin-api-versions.md new file mode 100644 index 0000000000000..56c03d226ad28 --- /dev/null +++ b/content/zh-cn/docs/reference/node/device-plugin-api-versions.md @@ -0,0 +1,61 @@ +--- +content_type: "reference" +title: Kubelet 设备管理器 API 版本 +weight: 10 +--- + + + +本页详述了 Kubernetes +[设备插件 API](https://github.com/kubernetes/kubelet/tree/master/pkg/apis/deviceplugin) +与不同版本的 
Kubernetes 本身之间的版本兼容性。 + + +## 兼容性矩阵 {#compatibility-matrix} + +| | `v1alpha1` | `v1beta1` | +|-----------------|-------------|-------------| +| Kubernetes 1.21 | - | ✓ | +| Kubernetes 1.22 | - | ✓ | +| Kubernetes 1.23 | - | ✓ | +| Kubernetes 1.24 | - | ✓ | +| Kubernetes 1.25 | - | ✓ | +| Kubernetes 1.26 | - | ✓ | + + +简要说明: + +* `✓` 设备插件 API 和 Kubernetes 版本中的特性或 API 对象完全相同。 + +* `+` 设备插件 API 具有 Kubernetes 集群中可能不存在的特性或 API 对象, + 不是因为设备插件 API 添加了额外的新 API 调用,就是因为服务器移除了旧的 API 调用。 + 但它们的共同点是(大多数其他 API)都能工作。 + 请注意,Alpha API 可能会在次要版本的迭代过程中消失或出现重大变更。 + +* `-` Kubernetes 集群具有设备插件 API 无法使用的特性,不是因为服务器添加了额外的 API 调用, + 就是因为设备插件 API 移除了旧的 API 调用。但它们的共同点是(大多数 API)都能工作。 diff --git a/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md b/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md index fc7b1e50efc15..0e04f3713900a 100644 --- a/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md +++ b/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md @@ -3,7 +3,7 @@ The file is auto-generated from the Go source code of the component using a gene [generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how to generate the reference documentation, please read [Contributing to the reference documentation](/docs/contribute/generate-ref-docs/). -To update the reference conent, please follow the +To update the reference content, please follow the [Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/) guide. You can file document formatting bugs against the [reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project. @@ -20,9 +20,9 @@ Upload certificates to kubeadm-certs ### 概要 -此命令并非设计用来单独运行。请参阅可用子命令列表。 +将控制平面证书上传到 kubeadm-certs Secret ``` kubeadm init phase upload-certs [flags] @@ -68,6 +68,20 @@ kubeadm 配置文件的路径。 + +--dry-run + + + + +

    +不应用任何变更;只是输出将要执行的操作。 +

    + + + -h, --help diff --git a/content/zh-cn/docs/reference/using-api/api-concepts.md b/content/zh-cn/docs/reference/using-api/api-concepts.md index 8a9d25f67681b..588d08b1abc0a 100644 --- a/content/zh-cn/docs/reference/using-api/api-concepts.md +++ b/content/zh-cn/docs/reference/using-api/api-concepts.md @@ -148,6 +148,28 @@ namespace (`/apis/GROUP/VERSION/namespaces/NAMESPACE/*`). A namespace-scoped res type will be deleted when its namespace is deleted and access to that resource type is controlled by authorization checks on the namespace scope. +Note: core resources use `/api` instead of `/apis` and omit the GROUP path segment. + +Examples: +--> +## 资源 URI {#resource-uris} + +所有资源类型要么是集群作用域的(`/apis/GROUP/VERSION/*`), +要么是名字空间作用域的(`/apis/GROUP/VERSION/namespaces/NAMESPACE/*`)。 +名字空间作用域的资源类型会在其名字空间被删除时也被删除, +并且对该资源类型的访问是由定义在名字空间域中的授权检查来控制的。 + +注意: 核心资源使用 `/api` 而不是 `/apis`,并且不包含 GROUP 路径段。 + +例如: +* `/api/v1/namespaces` +* `/api/v1/pods` +* `/api/v1/namespaces/my-namespace/pods` +* `/apis/apps/v1/deployments` +* `/apis/apps/v1/namespaces/my-namespace/deployments` +* `/apis/apps/v1/namespaces/my-namespace/deployments/my-deployment` + + -## 资源 URI {#resource-uris} - -所有资源类型要么是集群作用域的(`/apis/GROUP/VERSION/*`), -要么是名字空间作用域的(`/apis/GROUP/VERSION/namespaces/NAMESPACE/*`)。 -名字空间作用域的资源类型会在其名字空间被删除时也被删除, -并且对该资源类型的访问是由定义在名字空间域中的授权检查来控制的。 - 你还可以访问资源集合(例如:列出所有 Node)。以下路径用于检索集合和资源: * 集群作用域的资源: @@ -1175,7 +1190,7 @@ by default. The `kubectl` tool uses the `--validate` flag to set the level of field validation. Historically `--validate` was used to toggle client-side validation on or off as a boolean flag. Since Kubernetes 1.25, kubectl uses -server-side field validation when sending requests to a serer with this feature +server-side field validation when sending requests to a server with this feature enabled. Validation will fall back to client-side only when it cannot connect to an API server with field validation enabled. --> diff --git a/content/zh-cn/docs/reference/using-api/deprecation-guide.md b/content/zh-cn/docs/reference/using-api/deprecation-guide.md index 2d2f7476c08d7..ef8324d275d17 100644 --- a/content/zh-cn/docs/reference/using-api/deprecation-guide.md +++ b/content/zh-cn/docs/reference/using-api/deprecation-guide.md @@ -34,6 +34,36 @@ deprecated API versions to newer and more stable API versions. 
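对于上文 API 概念部分介绍的服务器端字段校验,下面给出一个示意性的清单草稿(名称和多余字段均为假设):
在启用了字段校验的集群上以 `strict` 级别应用它(例如 `kubectl apply --validate=strict -f <文件>`)时,
未知字段会导致请求被拒绝,而 `warn` 级别只会给出警告。

```yaml
# 仅作示意:一个带有未知字段的假设清单,用于演示 strict 级别的字段校验
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config    # 假设的名称
data:
  key: "value"
unknownField: oops        # 故意写入的未知字段;strict 校验会拒绝整个请求
```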
--> ## 各发行版本中移除的 API {#removed-apis-by-release} +### v1.29 + + +**v1.29** 发行版本将停止提供以下已弃用的 API 版本: + + +#### 流控制资源 {#flowcontrol-resources-v129} + + +**flowcontrol.apiserver.k8s.io/v1beta2** API 版本的 FlowSchema +和 PriorityLevelConfiguration 将不会在 v1.29 中提供。 + +* 迁移清单和 API 客户端使用 **flowcontrol.apiserver.k8s.io/v1beta3** API 版本, + 此 API 从 v1.26 版本开始可用; +* 所有的已保存的对象都可以通过新的 API 来访问; +* **flowcontrol.apiserver.k8s.io/v1beta3** 中需要额外注意的变更: + * PriorityLevelConfiguration 的 `spec.limited.assuredConcurrencyShares` + 字段已被更名为 `spec.limited.nominalConcurrencyShares` + ### v1.27 -**storage.k8s.io/v1beta1** API版本的 CSIStorageCapacity 将不再在 v1.27 提供。 +**storage.k8s.io/v1beta1** API 版本的 CSIStorageCapacity 将不会在 v1.27 提供。 * 自 v1.24 版本起,迁移清单和 API 客户端使用 **storage.k8s.io/v1** API 版本 * 所有现有的持久化对象都可以通过新的 API 访问 @@ -59,7 +89,7 @@ The **storage.k8s.io/v1beta1** API version of CSIStorageCapacity will no longer ### v1.26 **v1.26** 发行版本中将去除以下已弃用的 API 版本: @@ -69,30 +99,30 @@ The **v1.26** release will stop serving the following deprecated API versions: #### 流控制资源 {#flowcontrol-resources-v126} -**flowcontrol.apiserver.k8s.io/v1beta1** API 版本的 FlowSchema -和 PriorityLevelConfiguration 将不会在 v1.26 中提供。 +从 v1.26 版本开始不再提供 **flowcontrol.apiserver.k8s.io/v1beta1** API 版本的 +FlowSchema 和 PriorityLevelConfiguration。 -* 迁移清单和 API 客户端使用 **flowcontrol.apiserver.k8s.io/v1beta2** API 版本, - 此 API 从 v1.23 版本开始可用; +* 迁移清单和 API 客户端使用 **flowcontrol.apiserver.k8s.io/v1beta3** API 版本, + 此 API 从 v1.26 版本开始可用; * 所有的已保存的对象都可以通过新的 API 来访问; * 没有需要额外注意的变更 #### HorizontalPodAutoscaler {#horizontalpodautoscaler-v126} -**autoscaling/v2beta2** API 版本的 HorizontalPodAutoscaler 将不会在 -v1.26 版本中提供。 +从 v1.26 版本开始不再提供 **autoscaling/v2beta2** API 版本的 +HorizontalPodAutoscaler。 * 迁移清单和 API 客户端使用 **autoscaling/v2** API 版本, 此 API 从 v1.23 版本开始可用; @@ -101,20 +131,20 @@ v1.26 版本中提供。 ### v1.25 **v1.25** 发行版本将停止提供以下已废弃 API 版本: #### CronJob {#cronjob-v125} -**batch/v1beta1** API 版本的 CronJob 将不会在 v1.25 版本中继续提供。 +从 v1.25 版本开始不再提供 **batch/v1beta1** API 版本的 CronJob。 * 迁移清单和 API 客户端使用 **batch/v1** API 版本,此 API 从 v1.21 版本开始可用; * 所有的已保存的对象都可以通过新的 API 来访问; @@ -123,7 +153,7 @@ The **batch/v1beta1** API version of CronJob will no longer be served in v1.25. 
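对于上面 v1.25 小节中描述的 CronJob 迁移,大多数情况下只需把清单中的 `apiVersion` 从
**batch/v1beta1** 改为 **batch/v1**,其余字段保持不变。
下面是一个示意性的迁移后清单草稿(名称、镜像和调度表达式均为假设):

```yaml
# 仅作示意:迁移到 batch/v1 之后的一个假设的 CronJob 清单
apiVersion: batch/v1              # 原为 batch/v1beta1
kind: CronJob
metadata:
  name: example-cronjob           # 假设的名称
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.36
            command: ["sh", "-c", "date"]
          restartPolicy: OnFailure
```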
#### EndpointSlice {#endpointslice-v125} -**discovery.k8s.io/v1beta1** API 版本的 EndpointSlice 将不会在 v1.25 版本中继续提供。 +从 v1.25 版本开始不再提供 **discovery.k8s.io/v1beta1** API 版本的 EndpointSlice。 * 迁移清单和 API 客户端使用 **discovery.k8s.io/v1** API 版本,此 API 从 v1.21 版本开始可用; * 所有的已保存的对象都可以通过新的 API 来访问; @@ -146,12 +176,12 @@ The **discovery.k8s.io/v1beta1** API version of EndpointSlice will no longer be #### Event {#event-v125} -**events.k8s.io/v1beta1** API 版本的 Event 将不会在 v1.25 版本中继续提供。 +从 v1.25 版本开始不再提供 **events.k8s.io/v1beta1** API 版本的 Event。 * 迁移清单和 API 客户端使用 **events.k8s.io/v1** API 版本,此 API 从 v1.19 版本开始可用; * 所有的已保存的对象都可以通过新的 API 来访问; @@ -186,12 +216,14 @@ The **events.k8s.io/v1beta1** API version of Event will no longer be served in v #### HorizontalPodAutoscaler {#horizontalpodautoscaler-v125} -**autoscaling/v2beta1** API 版本的 HorizontalPodAutoscaler 将不会在 v1.25 版本中继续提供。 +从 v1.25 版本开始不再提供 **autoscaling/v2beta1** API 版本的 +HorizontalPodAutoscaler。 * 迁移清单和 API 客户端使用 **autoscaling/v2** API 版本,此 API 从 v1.23 版本开始可用; * 所有的已保存的对象都可以通过新的 API 来访问; @@ -199,14 +231,14 @@ The **autoscaling/v2beta1** API version of HorizontalPodAutoscaler will no longe #### PodDisruptionBudget {#poddisruptionbudget-v125} -**policy/v1beta1** API 版本的 PodDisruptionBudget 将不会在 v1.25 版本中继续提供。 +从 v1.25 版本开始不再提供 **policy/v1beta1** API 版本的 PodDisruptionBudget。 * 迁移清单和 API 客户端使用 **policy/v1** API 版本,此 API 从 v1.21 版本开始可用; * 所有的已保存的对象都可以通过新的 API 来访问; @@ -219,14 +251,14 @@ The **policy/v1beta1** API version of PodDisruptionBudget will no longer be serv #### PodSecurityPolicy {#psp-v125} -**policy/v1beta1** API 版本中的 PodSecurityPolicy 将不会在 v1.25 中提供, +从 v1.25 版本开始不再提供 **policy/v1beta1** API 版本中的 PodSecurityPolicy, 并且 PodSecurityPolicy 准入控制器也会被删除。 迁移到 [Pod 安全准入](/zh-cn/docs/concepts/security/pod-security-admission/)或[第三方准入 webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/)。 @@ -236,13 +268,13 @@ For more information on the deprecation, see [PodSecurityPolicy Deprecation: Pas #### RuntimeClass {#runtimeclass-v125} -**node.k8s.io/v1beta1** API 版本中的 RuntimeClass 将不会在 v1.25 中提供。 +从 v1.25 版本开始不再提供 **node.k8s.io/v1beta1** API 版本中的 RuntimeClass。 * 迁移清单和 API 客户端使用 **node.k8s.io/v1** API 版本,此 API 从 v1.20 版本开始可用; * 所有的已保存的对象都可以通过新的 API 来访问; diff --git a/content/zh-cn/docs/reference/using-api/deprecation-policy.md b/content/zh-cn/docs/reference/using-api/deprecation-policy.md index 4b376014a144d..e85fcd7815a7d 100644 --- a/content/zh-cn/docs/reference/using-api/deprecation-policy.md +++ b/content/zh-cn/docs/reference/using-api/deprecation-policy.md @@ -424,7 +424,7 @@ behavior get removed. 
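与上面那些通常只需更新 `apiVersion` 的资源不同,前文 v1.29 小节中提到的 PriorityLevelConfiguration
在迁移到 **flowcontrol.apiserver.k8s.io/v1beta3** 时还涉及字段更名。
下面是一个示意性的 v1beta3 清单草稿(名称与取值均为假设):

```yaml
# 仅作示意:一个假设的 PriorityLevelConfiguration,使用 v1beta3 API 版本
apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
kind: PriorityLevelConfiguration
metadata:
  name: example-priority-level    # 假设的名称
spec:
  type: Limited
  limited:
    nominalConcurrencyShares: 30  # 在 v1beta2 中此字段名为 assuredConcurrencyShares
    limitResponse:
      type: Reject
```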
考虑一个假想的名为 Widget 的 REST 资源,在上述时间线中位于 API v1,而现在打算将其弃用。 我们会在文档和[公告](https://groups.google.com/forum/#!forum/kubernetes-announce)中与 X+1 版本的发布同步记述此弃用决定。 -Wdiget 资源仍会在 API 版本 v1(已弃用)中存在,但不会出现在 v2alpha1 中。 +Widget 资源仍会在 API 版本 v1(已弃用)中存在,但不会出现在 v2alpha1 中。 Widget 资源会 X+8 发布版本之前(含 X+8)一直存在并可用。 只有在发布版本 X+9 中,API v1 寿终正寝时,Widget 才彻底消失,相应的资源行为也被移除。 diff --git a/content/zh-cn/docs/reference/using-api/server-side-apply.md b/content/zh-cn/docs/reference/using-api/server-side-apply.md index 288b49ce43e9a..8fe599fba3150 100644 --- a/content/zh-cn/docs/reference/using-api/server-side-apply.md +++ b/content/zh-cn/docs/reference/using-api/server-side-apply.md @@ -645,7 +645,25 @@ First, the user defines a new configuration containing only the `replicas` field 首先,用户新定义一个只包含 `replicas` 字段的配置文件: -{{< codenew file="application/ssa/nginx-deployment-replicas-only.yaml" >}} +```yaml +# 将此文件另存为 'nginx-deployment-replicas-only.yaml' +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment +spec: + replicas: 3 +``` + +{{< note >}} + +此场景中针对 SSA 的 YAML 文件仅包含你要更改的字段。 +如果只想使用 SSA 来修改 `spec.replicas` 字段,你无需提供完全兼容的 Deployment 清单。 +{{< /note >}} Kubernetes 需要 PKI 证书才能进行基于 TLS 的身份验证。如果你是使用 @@ -60,7 +62,8 @@ Kubernetes 需要 PKI 才能执行以下操作: {{< note >}} 只有当你运行 kube-proxy 并要支持[扩展 API 服务器](/zh-cn/docs/tasks/extend-kubernetes/setup-extension-api-server/)时, @@ -75,7 +78,9 @@ etcd 还实现了双向 TLS 来对客户端和对其他对等节点进行身份 ## 证书存放的位置 {#where-certificates-are-stored} @@ -85,8 +90,11 @@ If you install Kubernetes with kubeadm, most certificates are stored in `/etc/ku ## 手动配置证书 {#configure-certificates-manually} @@ -98,7 +106,8 @@ See [Certificate Management with kubeadm](/docs/tasks/administer-cluster/kubeadm ### 单根 CA {#single-root-ca} @@ -113,7 +122,8 @@ Required CAs: | etcd/ca.crt,key | etcd-ca | For all etcd-related functions | | front-proxy-ca.crt,key | kubernetes-front-proxy-ca | For the [front-end proxy](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) | -On top of the above CAs, it is also necessary to get a public/private key pair for service account management, `sa.key` and `sa.pub`. +On top of the above CAs, it is also necessary to get a public/private key pair for service account +management, `sa.key` and `sa.pub`. 
--> 需要这些 CA: @@ -153,39 +163,43 @@ Required certificates: 需要这些证书: -| 默认 CN | 父级 CA | O (位于 Subject 中) | 类型 | 主机 (SAN) | -|-------------------------------|---------------------------|----------------|----------------------------------------|---------------------------------------------| -| kube-etcd | etcd-ca | | server, client | ``, ``, `localhost`, `127.0.0.1` | -| kube-etcd-peer | etcd-ca | | server, client | ``, ``, `localhost`, `127.0.0.1` | -| kube-etcd-healthcheck-client | etcd-ca | | client | | -| kube-apiserver-etcd-client | etcd-ca | system:masters | client | | -| kube-apiserver | kubernetes-ca | | server | ``, ``, ``, `[1]` | -| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | | -| front-proxy-client | kubernetes-front-proxy-ca | | client | | +| 默认 CN | 父级 CA |O(位于 Subject 中)| kind | 主机 (SAN) | +|-------------------------------|---------------------------|-------------------|------------------|-----------------------------------------------------| +| kube-etcd | etcd-ca | | server, client | ``, ``, `localhost`, `127.0.0.1` | +| kube-etcd-peer | etcd-ca | | server, client | ``, ``, `localhost`, `127.0.0.1` | +| kube-etcd-healthcheck-client | etcd-ca | | client | | +| kube-apiserver-etcd-client | etcd-ca | system:masters | client | | +| kube-apiserver | kubernetes-ca | | server | ``, ``, ``, `[1]` | +| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | | +| front-proxy-client | kubernetes-front-proxy-ca | | client | | [1]: 用来连接到集群的不同 IP 或 DNS 名 (就像 [kubeadm](/zh-cn/docs/reference/setup-tools/kubeadm/) 为负载均衡所使用的固定 IP 或 DNS 名:`kubernetes`、`kubernetes.default`、`kubernetes.default.svc`、 `kubernetes.default.svc.cluster`、`kubernetes.default.svc.cluster.local`)。 -其中,`kind` 对应一种或多种类型的 [x509 密钥用途](https://pkg.go.dev/k8s.io/api/certificates/v1beta1#KeyUsage): +其中 `kind` 对应一种或多种类型的 x509 密钥用途,也可记录在 +[CertificateSigningRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1#CertificateSigningRequest) +类型的 `.spec.usages` 中: | kind | 密钥用途 | |--------|---------------------------------------------------------------------------------| -| server | 数字签名、密钥加密、服务端认证 | -| client | 数字签名、密钥加密、客户端认证 | +| server | 数字签名、密钥加密、服务端认证 | +| client | 数字签名、密钥加密、客户端认证 | {{< note >}} 上面列出的 Hosts/SAN 是推荐的配置方式;如果需要特殊安装,则可以在所有服务器证书上添加其他 SAN。 {{< /note >}} @@ -209,9 +224,11 @@ Hosts/SAN listed above are the recommended ones for getting a working cluster; i 对于 kubeadm 用户: @@ -233,22 +250,22 @@ Paths should be specified using the given argument regardless of location. 
使用)。无论使用什么位置,都应使用给定的参数指定路径。 | 默认 CN | 建议的密钥路径 | 建议的证书路径 | 命令 | 密钥参数 | 证书参数 | |------------------------------|------------------------------|-----------------------------|----------------|------------------------------|-------------------------------------------| @@ -273,18 +290,19 @@ Same considerations apply for the service account key pair: 注意事项同样适用于服务帐户密钥对: -| 私钥路径 | 公钥路径 | 命令 | 参数 | -|------------------------------|-----------------------------|-------------------------|--------------------------------------| -| sa.key | | kube-controller-manager | --service-account-private-key-file | -| | sa.pub | kube-apiserver | --service-account-key-file | +| 私钥路径 | 公钥路径 | 命令 | 参数 | +|-------------------|------------------|-------------------------|--------------------------------------| +| sa.key | | kube-controller-manager | --service-account-private-key-file | +| | sa.pub | kube-apiserver | --service-account-key-file | 下面的例子展示了自行生成所有密钥和证书时所需要提供的文件路径。 这些路径基于[前面的表格](/zh-cn/docs/setup/best-practices/certificates/#certificate-paths)。 @@ -324,12 +342,12 @@ You must manually configure these administrator account and service accounts: 你必须手动配置以下管理员帐户和服务帐户: | 文件名 | 凭据名称 | 默认 CN | O (位于 Subject 中) | |-------------------------|----------------------------|--------------------------------|---------------------| @@ -340,7 +358,9 @@ You must manually configure these administrator account and service accounts: {{< note >}} `kubelet.conf` 中 `` 的值 **必须** 与 kubelet 向 apiserver 注册时提供的节点名称的值完全匹配。 有关更多详细信息,请阅读[节点授权](/zh-cn/docs/reference/access-authn-authz/node/)。 @@ -355,7 +375,7 @@ The value of `` for `kubelet.conf` **must** match precisely the value 1. 为每个配置运行下面的 `kubectl` 命令: -```shell +``` KUBECONFIG= kubectl config set-cluster default-cluster --server=https://:6443 --certificate-authority --embed-certs KUBECONFIG= kubectl config set-credentials --client-key .pem --client-certificate .pem --embed-certs KUBECONFIG= kubectl config set-context default-system --cluster default-cluster --user @@ -367,19 +387,19 @@ These files are used as follows: | filename | command | comment | |-------------------------|-------------------------|-----------------------------------------------------------------------| -| admin.conf | kubectl | Configures administrator user for the cluster | +| admin.conf | kubectl | Configures administrator user for the cluster | | kubelet.conf | kubelet | One required for each node in the cluster. 
| | controller-manager.conf | kube-controller-manager | Must be added to manifest in `manifests/kube-controller-manager.yaml` | | scheduler.conf | kube-scheduler | Must be added to manifest in `manifests/kube-scheduler.yaml` | --> 这些文件用途如下: -| 文件名 | 命令 | 说明 | +| 文件名 | 命令 | 说明 | |-------------------------|-------------------------|-----------------------------------------------------------------------| -| admin.conf | kubectl | 配置集群的管理员 | -| kubelet.conf | kubelet | 集群中的每个节点都需要一份 | -| controller-manager.conf | kube-controller-manager | 必需添加到 `manifests/kube-controller-manager.yaml` 清单中 | -| scheduler.conf | kube-scheduler | 必需添加到 `manifests/kube-scheduler.yaml` 清单中 | +| admin.conf | kubectl | 配置集群的管理员 | +| kubelet.conf | kubelet | 集群中的每个节点都需要一份 | +| controller-manager.conf | kube-controller-manager | 必需添加到 `manifests/kube-controller-manager.yaml` 清单中 | +| scheduler.conf | kube-scheduler | 必需添加到 `manifests/kube-scheduler.yaml` 清单中 | 集群是运行 Kubernetes 代理的、 由{{< glossary_tooltip text="控制平面" term_id="control-plane" >}}管理的一组 {{< glossary_tooltip text="节点" term_id="node" >}}(物理机或虚拟机)。 -Kubernetes {{< param "version" >}} 单个集群支持的最大节点数为 5000。 +Kubernetes {{< param "version" >}} 单个集群支持的最大节点数为 5,000。 更具体地说,Kubernetes 旨在适应满足以下**所有**标准的配置: * 每个节点的 Pod 数量不超过 110 -* 节点数不超过 5000 -* Pod 总数不超过 150000 -* 容器总数不超过 300000 +* 节点数不超过 5,000 +* Pod 总数不超过 150,000 +* 容器总数不超过 300,000 ## 云供应商资源配额 {#quota-issues} 为避免遇到云供应商配额问题,在创建具有大规模节点的集群时,请考虑以下事项: + * 请求增加云资源的配额,例如: - * 计算实例 - * CPUs - * 存储卷 - * 使用中的 IP 地址 - * 数据包过滤规则集 - * 负载均衡数量 - * 网络子网 - * 日志流 + * 计算实例 + * CPU + * 存储卷 + * 使用中的 IP 地址 + * 数据包过滤规则集 + * 负载均衡数量 + * 网络子网 + * 日志流 * 由于某些云供应商限制了创建新实例的速度,因此通过分批启动新节点来控制集群扩展操作,并在各批之间有一个暂停。 -`VerticalPodAutoscaler` 是一种自定义资源,你可以将其部署到集群中,帮助你管理资源请求和 Pod 的限制。 -访问 [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme) -以了解有关 `VerticalPodAutoscaler` 的更多信息, -以及如何使用它来扩展集群组件(包括对集群至关重要的插件)的信息。 +* `VerticalPodAutoscaler` 是一种自定义资源,你可以将其部署到集群中,帮助你管理 Pod 的资源请求和资源限制。 + 了解有关 [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme) + 的更多信息,了解如何用它扩展集群组件(包括对集群至关重要的插件)的信息。 -[集群自动扩缩器](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#readme) -与许多云供应商集成在一起,帮助你在你的集群中,按照资源需求级别运行正确数量的节点。 +* [集群自动扩缩器](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#readme) + 与许多云供应商集成在一起,帮助你在你的集群中,按照资源需求级别运行正确数量的节点。 -[addon resizer](https://github.com/kubernetes/autoscaler/tree/master/addon-resizer#readme) -可帮助你在集群规模变化时自动调整插件的大小。 +* [addon resizer](https://github.com/kubernetes/autoscaler/tree/master/addon-resizer#readme) + 可帮助你在集群规模变化时自动调整插件的大小。 diff --git a/content/zh-cn/docs/setup/production-environment/container-runtimes.md b/content/zh-cn/docs/setup/production-environment/container-runtimes.md index 79aca92840097..700cf596f2f90 100644 --- a/content/zh-cn/docs/setup/production-environment/container-runtimes.md +++ b/content/zh-cn/docs/setup/production-environment/container-runtimes.md @@ -61,9 +61,8 @@ v1.24 之前的 Kubernetes 版本直接集成了 Docker Engine 的一个组件 你可以阅读[检查 Dockershim 移除是否会影响你](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/)以了解此删除可能会如何影响你。 @@ -99,20 +98,11 @@ For more information, see [Network Plugin Requirements](/docs/concepts/extend-ku ### 转发 IPv4 并让 iptables 看到桥接流量 -通过运行 `lsmod | grep br_netfilter` 来验证 `br_netfilter` 模块是否已加载。 - -若要显式加载此模块,请运行 `sudo modprobe br_netfilter`。 - -为了让 Linux 节点的 iptables 能够正确查看桥接流量,请确认 `sysctl` 配置中的 
-`net.bridge.bridge-nf-call-iptables` 设置为 1。例如: +执行下述指令: ```bash cat < +通过运行以下指令确认 `br_netfilter` 和 `overlay` 模块被加载: + +```bash +lsmod | grep br_netfilter +lsmod | grep overlay +``` + + +通过运行以下指令确认 `net.bridge.bridge-nf-call-iptables`、`net.bridge.bridge-nf-call-ip6tables` +和 `net.ipv4.ip_forward` 系统变量在你的 `sysctl` 配置中被设置为 1: + +```bash +sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward +``` + 按照[开始使用 containerd](https://github.com/containerd/containerd/blob/main/docs/getting-started.md) 的说明进行操作。 @@ -326,11 +336,15 @@ Follow the instructions for [getting started with containerd](https://github.com {{< tabs name="找到 config.toml 文件" >}} {{% tab name="Linux" %}} - + 你可以在路径 `/etc/containerd/config.toml` 下找到此文件。 {{% /tab %}} {{% tab name="Windows" %}} - + 你可以在路径 `C:\Program Files\containerd\config.toml` 下找到此文件。 {{% /tab %}} {{< /tabs >}} @@ -378,6 +392,20 @@ CRI 集成插件。 你需要启用 CRI 支持才能在 Kubernetes 集群中使用 containerd。 要确保 `cri` 没有出现在 `/etc/containerd/config.toml` 文件中 `disabled_plugins` 列表内。如果你更改了这个文件,也请记得要重启 `containerd`。 + + +如果你在初次安装集群后或安装 CNI 后遇到容器崩溃循环,则随软件包提供的 containerd +配置可能包含不兼容的配置参数。考虑按照 +[getting-started.md](https://github.com/containerd/containerd/blob/main/docs/getting-started.md#advanced-topics) +中指定的 `containerd config default > /etc/containerd/config.toml` 重置 containerd +配置,然后相应地设置上述配置参数。 {{< /note >}} 以下操作假设你使用 [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd) 适配器来将 Docker Engine 与 Kubernetes 集成。 -{{< /note >}} +{{< /note >}} 1. 在你的每个节点上,遵循[安装 Docker Engine](https://docs.docker.com/engine/install/#server) 指南为你的 Linux 发行版安装 Docker。 @@ -539,7 +567,8 @@ visit [MCR Deployment Guide](https://docs.mirantis.com/mcr/20.10/install.html). 请访问 [MCR 部署指南](https://docs.mirantis.com/mcr/20.10/install.html)。 检查名为 `cri-docker.socket` 的 systemd 单元以找出 CRI 套接字的路径。 diff --git a/content/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability.md b/content/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability.md index a51bae45d60a0..deceb54251184 100644 --- a/content/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability.md +++ b/content/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability.md @@ -334,21 +334,21 @@ option. Your cluster requirements may need a different configuration. {{< /note >}} - - 输出类似于: + 输出类似于: - ```sh - ... - You can now join any number of control-plane node by running the following command on each as a root: - kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07 + ```sh + ... + You can now join any number of control-plane node by running the following command on each as a root: + kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07 - Please note that the certificate-key gives access to cluster sensitive data, keep it secret! - As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward. + Please note that the certificate-key gives access to cluster sensitive data, keep it secret! 
+ As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward. - Then you can join any number of worker nodes by running the following on each as root: + Then you can join any number of worker nodes by running the following on each as root: kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 - ``` + ``` 2. 应用你所选择的 CNI 插件: [请遵循以下指示](/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network) @@ -438,6 +438,7 @@ For each additional control plane node you should: - The `--certificate-key ...` will cause the control plane certificates to be downloaded from the `kubeadm-certs` Secret in the cluster and be decrypted using the given key. +You can join multiple control-plane nodes in parallel. --> 对于每个其他控制平面节点,你应该: @@ -452,6 +453,7 @@ For each additional control plane node you should: - `--certificate-key ...` 将导致从集群中的 `kubeadm-certs` Secret 下载控制平面证书并使用给定的密钥进行解密。 +你可以并行地加入多个控制面节点。 - 在你的集群中,将配置模板中的以下变量替换为适当值: diff --git a/content/zh-cn/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/zh-cn/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md index df6af3f768b33..37de6ac9add45 100644 --- a/content/zh-cn/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md +++ b/content/zh-cn/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md @@ -6,7 +6,6 @@ card: name: tasks weight: 40 --- - + 要检查 {{< glossary_tooltip text="kubectl" term_id="kubectl" >}} 是否安装, -执行 `kubectl version --client` 命令。 -kubectl 的版本应该与集群的 API +执行 `kubectl version --client` 命令。kubectl 的版本应该与集群的 API 服务器[使用同一次版本号](/zh-cn/releases/version-skew-policy/#kubectl)。 @@ -415,7 +414,7 @@ $Env:KUBECONFIG_SAVED=$ENV:KUBECONFIG ``` +## 检查 kubeconfig 所表示的主体 {#check-the-subject} + +你在通过集群的身份验证后将获得哪些属性(用户名、组),这一点并不总是很明显。 +如果你同时管理多个集群,这可能会更具挑战性。 + + +对于你所选择的 Kubernetes 客户端上下文,有一个 `kubectl` Alpha 子命令可以检查用户名等主体属性: +`kubectl alpha auth whoami`。 + +更多细节请参阅[通过 API 访问客户端的身份验证信息](/zh-cn/docs/reference/access-authn-authz/authentication/#self-subject-review)。 + ## {{% heading "whatsnext" %}} -## 列出所有命名空间下的所有容器 +## 列出所有命名空间下的所有容器镜像 - 使用 `kubectl get pods --all-namespaces` 获取所有命名空间下的所有 Pod - 使用 `-o jsonpath={.items[*].spec.containers[*].image}` 来格式化输出,以仅包含容器镜像名称。 @@ -80,7 +80,7 @@ jsonpath 解释如下: - `.image`: 获取镜像 @@ -105,12 +105,12 @@ sort ``` -## 列出以标签过滤后的 Pod 的所有容器 +## 列出以标签过滤后的 Pod 的所有容器镜像 要获取匹配特定标签的 Pod,请使用 -l 参数。以下匹配仅与标签 `app=nginx` 相符的 Pod。 @@ -119,12 +119,12 @@ kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].ima ``` -## 列出以命名空间过滤后的 Pod 的所有容器 +## 列出以命名空间过滤后的 Pod 的所有容器镜像 要获取匹配特定命名空间的 Pod,请使用 namespace 参数。以下仅匹配 `kube-system` 命名空间下的 Pod。 @@ -133,12 +133,12 @@ kubectl get pods --namespace kube-system -o jsonpath="{.items[*].spec.containers ``` -## 使用 go-template 代替 jsonpath 来获取容器 +## 使用 go-template 代替 jsonpath 来获取容器镜像 作为 jsonpath 的替代,Kubectl 支持使用 [go-templates](https://golang.org/pkg/text/template/) 来格式化输出: diff --git a/content/zh-cn/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/zh-cn/docs/tasks/access-application-cluster/web-ui-dashboard.md index b21a652db20f9..47375267f4627 100644 --- a/content/zh-cn/docs/tasks/access-application-cluster/web-ui-dashboard.md +++ b/content/zh-cn/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -6,6 +6,7 @@ card: name: tasks weight: 30 
title: 使用 Web 界面 Dashboard + description: 部署并访问 Web 界面(Kubernetes 仪表板)。 --- ## 访问 Dashboard 用户界面 为了保护你的集群数据,默认情况下,Dashboard 会使用最少的 RBAC 配置进行部署。 当前,Dashboard 仅支持使用 Bearer 令牌登录。 要为此样本演示创建令牌,你可以按照 -[创建示例用户](https://github.com/kubernetes/dashboard/wiki/Creating-sample-user) +[创建示例用户](https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md) 上的指南进行操作。 - **CPU 需求(核数)** 和 **内存需求(MiB)**:你可以为容器定义最小的 [资源限制](/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)。 默认情况下,Pod 没有 CPU 和内存限制。 - **运行命令**和**运行命令参数**:默认情况下,你的容器会运行 Docker 镜像的默认 [入口命令](/zh-cn/docs/tasks/inject-data-application/define-command-argument-container/)。 你可以使用 command 选项覆盖默认值。 - **以特权模式运行**:这个设置决定了在 [特权容器](/zh-cn/docs/concepts/workloads/pods/#privileged-mode-for-containers) @@ -344,7 +355,7 @@ If needed, you can expand the **Advanced options** section where you can specify Kubernetes supports declarative configuration. In this style, all configuration is stored in manifests (YAML or JSON configuration files). -The manifests use the Kubernetes [API](/docs/concepts/overview/kubernetes-api/) resource schemas. +The manifests use Kubernetes [API](/docs/concepts/overview/kubernetes-api/) resource schemas. --> ### 上传 YAML 或者 JSON 文件 @@ -354,7 +365,7 @@ Kubernetes 支持声明式配置。所有的配置都存储在清单文件 作为一种替代在部署向导中指定应用详情的方式,你可以在一个或多个清单文件中定义应用,并且使用 Dashboard 上传文件。 @@ -384,7 +395,7 @@ Dashboard shows most Kubernetes object kinds and groups them in a few menu categ Dashboard 展示大部分 Kubernetes 对象,并将它们分组放在几个菜单类别中。 工作负载的详情视图展示了对象的状态、详细信息和相互关系。 例如,ReplicaSet 所控制的 Pod,或者 Deployment 所关联的新 ReplicaSet 和 @@ -441,11 +452,11 @@ Storage view shows PersistentVolumeClaim resources which are used by application 存储视图展示持久卷申领(PVC)资源,这些资源被应用程序用来存储数据。 -#### ConfigMap 和 Secret +#### ConfigMap 和 Secret {#config-maps-and-secrets} 展示的所有 Kubernetes 资源是在集群中运行的应用程序的实时配置。 通过这个视图可以编辑和管理配置对象,并显示那些默认隐藏的 Secret。 @@ -453,7 +464,8 @@ Shows all Kubernetes resources that are used for live configuration of applicati #### 日志查看器 diff --git a/content/zh-cn/docs/tasks/administer-cluster/access-cluster-api.md b/content/zh-cn/docs/tasks/administer-cluster/access-cluster-api.md index 7813a5da9b95b..83e000a80b5e6 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/access-cluster-api.md +++ b/content/zh-cn/docs/tasks/administer-cluster/access-cluster-api.md @@ -1,10 +1,12 @@ --- title: 使用 Kubernetes API 访问集群 content_type: task +weight: 60 --- @@ -21,11 +23,11 @@ This page shows how to access clusters using the Kubernetes API. -## 访问集群 API +## 访问 Kubernetes API ### 使用 kubectl 进行首次访问 @@ -72,8 +74,8 @@ kubectl handles locating and authenticating to the API server. If you want to di kubectl 处理对 API 服务器的定位和身份验证。如果你想通过 http 客户端(如 `curl` 或 `wget`, 或浏览器)直接访问 REST API,你可以通过多种方式对 API 服务器进行定位和身份验证: - 1. 以代理模式运行 kubectl(推荐)。 @@ -84,7 +86,7 @@ kubectl 处理对 API 服务器的定位和身份验证。如果你想通过 htt 为防止中间人攻击,你需要将根证书导入浏览器。 使用 Go 或 Python 客户端库可以在代理模式下访问 kubectl。 @@ -98,7 +100,9 @@ locating the API server and authenticating. 
下列命令使 kubectl 运行在反向代理模式下。它处理 API 服务器的定位和身份认证。 - + 像这样运行它: ```shell @@ -119,7 +123,9 @@ Then you can explore the API with curl, wget, or a browser, like so: curl http://localhost:8080/api/ ``` - + 输出类似如下: ```json @@ -184,7 +190,9 @@ TOKEN=$(kubectl get secret default-token -o jsonpath='{.data.token}' | base64 -- curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure ``` - + 输出类似如下: ```json @@ -239,7 +247,9 @@ Kubernetes 官方支持 [Go](#go-client)、[Python](#python-client)、[Java](#ja 参考[客户端库](/zh-cn/docs/reference/using-api/client-libraries/)了解如何使用其他语言来访问 API 以及如何执行身份认证。 - + #### Go 客户端 {#go-client} @@ -252,16 +262,16 @@ Kubernetes 官方支持 [Go](#go-client)、[Python](#python-client)、[Java](#ja 参见 [https://github.com/kubernetes/client-go/releases](https://github.com/kubernetes/client-go/releases) 查看受支持的版本。 * 基于 client-go 客户端编写应用程序。 +{{< note >}} -{{< note >}} -注意 client-go 定义了自己的 API 对象,因此如果需要,请从 client-go 而不是主仓库导入 +client-go 定义了自己的 API 对象,因此如果需要,从 client-go 而不是主仓库导入 API 定义,例如 `import "k8s.io/client-go/kubernetes"` 是正确做法。 {{< /note >}} Go 客户端可以使用与 kubectl 命令行工具相同的 @@ -273,11 +283,11 @@ Go 客户端可以使用与 kubectl 命令行工具相同的 package main import ( - "context" - "fmt" - "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/tools/clientcmd" + "context" + "fmt" + "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/kubernetes" + "k8s.io/client-go/tools/clientcmd" ) func main() { @@ -298,7 +308,9 @@ If the application is deployed as a Pod in the cluster, see [Accessing the API f 如果该应用程序部署为集群中的一个 Pod,请参阅[从 Pod 内访问 API](/zh-cn/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod)。 - + #### Python 客户端 {#python-client} Python 客户端可以使用与 kubectl 命令行工具相同的 @@ -329,7 +341,9 @@ for i in ret.items: print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name)) ``` - + #### Java 客户端 {#java-client} - 你必须有一个集群。 -本页内容涉及从 Kubernetes {{< skew currentVersionAddMinor -1 >}} +本页内容涉及从 Kubernetes {{< skew currentVersionAddMinor -1 >}} 升级到 Kubernetes {{< skew currentVersion >}}。 如果你的集群未运行 Kubernetes {{< skew currentVersionAddMinor -1 >}}, 那请参考目标 Kubernetes 版本的文档。 - + ## 升级方法 {#upgrade-approaches} ### kubeadm {#upgrade-kubeadm} - 如果你的集群是使用 `kubeadm` 安装工具部署而来, -那么升级集群的详细信息,请参阅 -[升级 kubeadm 集群](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)。 +那么升级集群的详细信息,请参阅[升级 kubeadm 集群](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)。 -升级集群之后,要记得 -[安装最新版本的 `kubectl`](/zh-cn/docs/tasks/tools/). +升级集群之后,要记得[安装最新版本的 `kubectl`](/zh-cn/docs/tasks/tools/)。 - + ### 手动部署 {#manual-deployments} - -{{< caution >}} -这些步骤不考虑第三方扩展,例如网络和存储插件。 +这些步骤不考虑网络和存储插件等第三方扩展。 {{< /caution >}} -你应该跟随下面操作顺序,手动更新控制平面: + +你应该按照下面的操作顺序,手动更新控制平面: + - etcd (所有实例) - kube-apiserver (所有控制平面的宿主机) - kube-controller-manager - kube-scheduler - cloud controller manager (在你用到时) - -现在,你应该 -[安装最新版本的 `kubectl`](/zh-cn/docs/tasks/tools/). 
+现在,你应该[安装最新版本的 `kubectl`](/zh-cn/docs/tasks/tools/)。 对于集群中的每个节点, -首先需要[腾空](/zh-cn/docs/tasks/administer-cluster/safely-drain-node/) -节点,然后使用一个运行了 kubelet {{< skew currentVersion >}} 版本的新节点替换它; +首先需要[腾空](/zh-cn/docs/tasks/administer-cluster/safely-drain-node/)节点, +然后使用一个运行了 kubelet {{< skew currentVersion >}} 版本的新节点替换它; 或者升级此节点的 kubelet,并使节点恢复服务。 - `kubectl` 替换了 `pod.yaml` 的内容, 在新的清单文件中,`kind` 被设置为 Pod(未变), 但 `apiVersion` 则被修订了。 + + +### 设备插件 {#device-plugins} + +如果你的集群正在运行设备插件(Device Plugin)并且节点需要升级到具有更新的设备插件(Device Plugin) +API 版本的 Kubernetes 版本,则必须在升级节点之前升级设备插件以同时支持这两个插件 API 版本, +以确保升级过程中设备分配能够继续成功完成。 + +有关详细信息,请参阅 +[API 兼容性](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#api-compatibility)和 +[kubelet 设备管理器 API 版本](/zh-cn/docs/reference/node/device-plugin-api-versions/)。 diff --git a/content/zh-cn/docs/tasks/administer-cluster/controller-manager-leader-migration.md b/content/zh-cn/docs/tasks/administer-cluster/controller-manager-leader-migration.md index f4e54653b14e7..0c548ab635c06 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/controller-manager-leader-migration.md +++ b/content/zh-cn/docs/tasks/administer-cluster/controller-manager-leader-migration.md @@ -2,6 +2,7 @@ title: 迁移多副本的控制面以使用云控制器管理器 linkTitle: 迁移多副本的控制面以使用云控制器管理器 content_type: task +weight: 250 --- @@ -22,7 +24,7 @@ content_type: task ## 背景 -作为[云驱动提取工作](https://kubernetes.io/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/) +作为[云驱动提取工作](/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/) 的一部分,所有特定于云的控制器都必须移出 `kube-controller-manager`。 所有在 `kube-controller-manager` 中运行云控制器的现有集群必须迁移到特定于云厂商的 `cloud-controller-manager` 中运行这些控制器。 diff --git a/content/zh-cn/docs/tasks/administer-cluster/coredns.md b/content/zh-cn/docs/tasks/administer-cluster/coredns.md index fedc1585107e8..d1962837d4a9e 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/coredns.md +++ b/content/zh-cn/docs/tasks/administer-cluster/coredns.md @@ -2,6 +2,7 @@ title: 使用 CoreDNS 进行服务发现 min-kubernetes-server-version: v1.9 content_type: task +weight: 380 --- @@ -119,9 +121,9 @@ can take care of retaining the existing CoreDNS configuration automatically. 
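kubeadm 所保留(以及后文所调优)的 CoreDNS 配置保存在 `kube-system` 命名空间中名为 `coredns`
的 ConfigMap 里。下面是一个示意性的片段;实际的 Corefile 内容取决于你的集群和 CoreDNS 版本:

```yaml
# 仅作示意:kube-system/coredns ConfigMap 的一种常见形态(内容因集群而异)
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
```

如果 Corefile 中启用了 `reload` 插件(如上例所示),直接编辑这个 ConfigMap 后,
CoreDNS 会在变更传播到 Pod 之后自动重新加载配置。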
## CoreDNS 调优 diff --git a/content/zh-cn/docs/tasks/administer-cluster/cpu-management-policies.md b/content/zh-cn/docs/tasks/administer-cluster/cpu-management-policies.md index 4cef49f570b90..710dccef30d6d 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/cpu-management-policies.md +++ b/content/zh-cn/docs/tasks/administer-cluster/cpu-management-policies.md @@ -1,6 +1,8 @@ --- title: 控制节点上的 CPU 管理策略 content_type: task +min-kubernetes-server-version: v1.26 +weight: 140 --- -{{< feature-state for_k8s_version="v1.12" state="beta" >}} +{{< feature-state for_k8s_version="v1.26" state="stable" >}} +如果你正在运行一个旧版本的 Kubernetes,请参阅与该版本对应的文档。 + ## CPU 管理策略 {#cpu-management-policies} @@ -74,12 +84,12 @@ CPU 管理策略通过 kubelet 参数 `--cpu-manager-policy` 支持两种策略: -* `none`: 默认策略,表示现有的调度行为。 -* `static`: 允许为节点上具有某些资源特征的 Pod 赋予增强的 CPU 亲和性和独占性。 +* [`none`](#none-policy):默认策略。 +* [`static`](#static-policy):允许为节点上具有某些资源特征的 Pod 赋予增强的 CPU 亲和性和独占性。 Static 策略的行为可以使用 `--cpu-manager-policy-options` 参数来微调。 该参数采用一个逗号分隔的 `key=value` 策略选项列表。 -此特性可以通过 `CPUManagerPolicyOptions` 特性门控来完全禁用。 +如果你禁用 `CPUManagerPolicyOptions` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), +则你不能微调 CPU 管理器策略。这种情况下,CPU 管理器仅使用其默认设置运行。 -策略选项分为两组:alpha 质量(默认隐藏)和 beta 质量(默认可见)。 +除了顶级的 `CPUManagerPolicyOptions` 特性门控, +策略选项分为两组:Alpha 质量(默认隐藏)和 Beta 质量(默认可见)。 这些组分别由 `CPUManagerPolicyAlphaOptions` 和 `CPUManagerPolicyBetaOptions` 特性门控来管控。 不同于 Kubernetes 标准,这里是由这些特性门控来管控选项组,因为为每个单独选项都添加一个特性门控过于繁琐。 @@ -144,10 +161,6 @@ CPUManager so that the cpu-sets set up by the new policy won’t conflict with i 对需要更改其 CPU 管理器策略的每个节点重复此过程。 跳过此过程将导致 kubelet crashlooping 并出现以下错误: @@ -186,20 +199,20 @@ using the [cpuset cgroup controller](https://www.kernel.org/doc/Documentation/cg 它允许该类 Pod 中的容器访问节点上的独占 CPU 资源。这种独占性是使用 [cpuset cgroup 控制器](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt)来实现的。 +{{< note >}} -{{< note >}} 诸如容器运行时和 kubelet 本身的系统服务可以继续在这些独占 CPU 上运行。独占性仅针对其他 Pod。 {{< /note >}} +{{< note >}} -{{< note >}} CPU 管理器不支持运行时下线和上线 CPU。此外,如果节点上的在线 CPU 集合发生变化, 则必须驱逐节点上的 Pod,并通过删除 kubelet 根目录中的状态文件 `cpu_manager_state` 来手动重置 CPU 管理器。 @@ -231,13 +244,13 @@ exclusive CPUs. `Guaranteed` Pod 中的容器,如果声明了非整数值的 CPU `requests`,也将运行在共享池的 CPU 上。 只有 `Guaranteed` Pod 中,指定了整数型 CPU `requests` 的容器,才会被分配独占 CPU 资源。 +{{< note >}} -{{< note >}} 当启用 static 策略时,要求使用 `--kube-reserved` 和/或 `--system-reserved` 或 `--reserved-cpus` 来保证预留的 CPU 值大于零。 这是因为零预留 CPU 值可能使得共享池变空。 @@ -407,9 +420,9 @@ The following policy options exist for the static `CPUManager` policy: 你仍然必须使用 `CPUManagerPolicyOptions` kubelet 选项启用每个选项。 静态 `CPUManager` 策略存在以下策略选项: -* `full-pcpus-only`(beta,默认可见)(1.22 或更高版本) +* `full-pcpus-only`(Beta,默认可见)(1.22 或更高版本) * `distribute-cpus-across-numa`(alpha,默认隐藏)(1.23 或更高版本) -* `align-by-socket`(alpha,默认隐藏)(1.25 或更高版本) +* `align-by-socket`(Alpha,默认隐藏)(1.25 或更高版本) @@ -90,11 +92,11 @@ kubectl get svc,pod ``` ```none NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -svc/kubernetes 10.100.0.1 443/TCP 46m -svc/nginx 10.100.0.16 80/TCP 33s +service/kubernetes 10.100.0.1 443/TCP 46m +service/nginx 10.100.0.16 80/TCP 33s NAME READY STATUS RESTARTS AGE -po/nginx-701339712-e0qfq 1/1 Running 0 35s +pod/nginx-701339712-e0qfq 1/1 Running 0 35s ``` @@ -48,19 +50,19 @@ Kubernetes 核心代码导入软件包来实现一个 cloud-controller-manager ### 树外(Out of Tree) 要为你的云环境构建一个树外(Out-of-Tree)云控制器管理器: 1. 使用满足 [`cloudprovider.Interface`](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go) 接口的实现来创建一个 Go 语言包。 2. 
使用来自 Kubernetes 核心代码库的 - [cloud-controller-manager 中的 main.go](https://github.com/kubernetes/kubernetes/blob/master/cmd/cloud-controller-manager/main.go) + [cloud-controller-manager 中的 `main.go`](https://github.com/kubernetes/kubernetes/blob/master/cmd/cloud-controller-manager/main.go) 作为 `main.go` 的模板。如上所述,唯一的区别应该是将导入的云包不同。 3. 在 `main.go` 中导入你的云包,确保你的包有一个 `init` 块来运行 [`cloudprovider.RegisterCloudProvider`](https://github.com/kubernetes/cloud-provider/blob/master/plugins.go)。 diff --git a/content/zh-cn/docs/tasks/administer-cluster/dns-custom-nameservers.md b/content/zh-cn/docs/tasks/administer-cluster/dns-custom-nameservers.md index 22930e0a24494..cc2cd873a1429 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/dns-custom-nameservers.md +++ b/content/zh-cn/docs/tasks/administer-cluster/dns-custom-nameservers.md @@ -2,6 +2,7 @@ title: 自定义 DNS 服务 content_type: task min-kubernetes-server-version: v1.12 +weight: 160 --- @@ -169,7 +171,7 @@ Corefile 配置包括以下 CoreDNS [插件](https://coredns.io/plugins/): @@ -84,7 +85,7 @@ dnsutils 1/1 Running 0 ``` 一旦 Pod 处于运行状态,你就可以在该环境里执行 `nslookup`。 @@ -204,7 +205,7 @@ The value for label `k8s-app` is `kube-dns` for both CoreDNS and kube-dns deploy {{< /note >}} @@ -212,9 +213,9 @@ will have to deploy it manually. 那可能这个 DNS 插件在你当前的环境里并没有成功部署,你将需要手动去部署它。 ### 检查 DNS Pod 里的错误 {#check-for-errors-in-the-dns-pod} @@ -308,8 +309,8 @@ kube-dns 10.180.3.17:53,10.180.3.17:53 1h ``` 然后按下面的例子给 Corefile 添加 `log`。 @@ -377,7 +378,7 @@ CoreDNS 的 Pod 里。 接下来,发起一些查询并依照前文所述查看日志信息,如果 CoreDNS 的 Pod 接收到这些查询, 你将可以在日志信息里看到它们。 @@ -504,9 +505,9 @@ To learn more about name resolution, see @@ -527,45 +528,24 @@ This should probably be implemented eventually. Kubernetes 的安装并不会默认配置节点的 `resolv.conf` 文件来使用集群的 DNS 服务,因为这个配置对于不同的发行版本是不一样的。这个问题应该迟早会被解决的。 -Linux 的 libc 限制 `nameserver` 只能有三个记录。不仅如此,对于 glibc-2.17-222 -之前的版本([参见此 Issue 了解新版本的更新](https://access.redhat.com/solutions/58028)),`search` 的记录不能超过 6 个 -( [详情请查阅这个 2005 年的 bug](https://bugzilla.redhat.com/show_bug.cgi?id=168253))。 -Kubernetes 需要占用一个 `nameserver` 记录和三个`search`记录。 -这意味着如果一个本地的安装已经使用了三个 `nameserver` 或者使用了超过三个 -`search` 记录,而你的 glibc 版本也在有问题的版本列表中,那么有些配置很可能会丢失。 -为了绕过 DNS `nameserver` 个数限制,节点可以运行 `dnsmasq`,以提供更多的 -`nameserver` 记录。你也可以使用kubelet 的 `--resolv-conf` 标志来解决这个问题。 -要想修复 DNS `search` 记录个数限制问题,可以考虑升级你的 Linux 发行版本,或者 -升级 glibc 到一个不再受此困扰的版本。 - -{{< note >}} - -使用[扩展 DNS 设置](/zh-cn/docs/concepts/services-networking/dns-pod-service/#expanded-dns-configuration), -Kubernetes 允许更多的 `search` 记录。 -{{< /note >}} +Linux 的 libc(又名 glibc)默认将 DNS `nameserver` 记录限制为 3, +而 Kubernetes 需要使用 1 条 `nameserver` 记录。 +这意味着如果本地的安装已经使用了 3 个 `nameserver`,那么其中有些条目将会丢失。 +要解决此限制,节点可以运行 `dnsmasq`,以提供更多 `nameserver` 条目。 +你也可以使用 kubelet 的 `--resolv-conf` 标志来解决这个问题。 + 如果你使用 Alpine 3.3 或更早版本作为你的基础镜像,DNS 可能会由于 Alpine 中 一个已知的问题导致无法正常工作。 diff --git a/content/zh-cn/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md b/content/zh-cn/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md index d2f2b64bdf983..cff450b59e0e8 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md +++ b/content/zh-cn/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md @@ -1,18 +1,20 @@ --- title: 自动扩缩集群 DNS 服务 content_type: task +weight: 80 --- -本页展示了如何在集群中启用和配置 DNS 服务的自动扩缩功能。 +本页展示了如何在你的 Kubernetes 集群中启用和配置 DNS 服务的自动扩缩功能。 ## {{% heading "prerequisites" %}} @@ -21,78 +23,66 @@ Kubernetes cluster. 
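对于前文提到的通过 kubelet 的 `--resolv-conf` 标志指定自定义 `resolv.conf` 来绕过 `nameserver`
条目数量限制的做法,也可以改用 kubelet 配置文件中对应的 `resolvConf` 字段。
下面是一个示意性的 KubeletConfiguration 片段(文件路径为假设值):

```yaml
# 仅作示意:让 kubelet 使用自定义 resolv.conf 的 KubeletConfiguration 片段
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
resolvConf: /etc/kubernetes/custom-resolv.conf   # 假设的路径,文件中列出希望 Pod 使用的 nameserver
```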
* 本指南假设你的节点使用 AMD64 或 Intel 64 CPU 架构 -* 确保已启用 [DNS 功能](/zh-cn/docs/concepts/services-networking/dns-pod-service/)本身。 +* 确保 [Kubernetes DNS](/zh-cn/docs/concepts/services-networking/dns-pod-service/) 已启用。 -* 建议使用 Kubernetes 1.4.0 或更高版本。 ## 确定是否 DNS 水平自动扩缩特性已经启用 {#determining-whether-dns-horizontal-autoscaling-is-already-enabled} -在 kube-system 命名空间中列出集群中的 {{< glossary_tooltip text="Deployments" term_id="deployment" >}} : +在 kube-system {{< glossary_tooltip text="命名空间" term_id="namespace" >}}中列出集群中的 +{{< glossary_tooltip text="Deployment" term_id="deployment" >}}: ```shell kubectl get deployment --namespace=kube-system ``` + 输出类似如下这样: ``` -NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE +NAME READY UP-TO-DATE AVAILABLE AGE ... -dns-autoscaler 1 1 1 1 ... +dns-autoscaler 1/1 1 1 ... ... ``` -如果在输出中看到 “dns-autoscaler”,说明 DNS 水平自动扩缩已经启用,可以跳到 -[调优自动扩缩参数](#tuning-autoscaling-parameters)。 - +如果在输出中看到 “dns-autoscaler”,说明 DNS 水平自动扩缩已经启用, +可以跳到[调优 DNS 自动扩缩参数](#tuning-autoscaling-parameters)。 -```shell -kubectl get deployment --namespace=kube-system -``` + ## 获取 DNS Deployment 的名称 {#find-scaling-target} -列出集群内 kube-system 名字空间中的 DNS Deployment: +列出集群内 kube-system 命名空间中的 DNS Deployment: ```shell kubectl get deployment -l k8s-app=kube-dns --namespace=kube-system ``` + 输出类似如下这样: ``` @@ -117,7 +107,7 @@ and look for a deployment named `coredns` or `kube-dns`. 并在输出中寻找名称为 `coredns` 或 `kube-dns` 的 Deployment。 你的扩缩目标为: @@ -127,7 +117,7 @@ Deployment/ 其中 `` 是 DNS Deployment 的名称。 例如,如果你的 DNS Deployment 名称是 `coredns`,则你的扩展目标是 Deployment/coredns。 @@ -143,16 +133,16 @@ CoreDNS 是 Kubernetes 的默认 DNS 服务。CoreDNS 设置标签 `k8s-app=kube {{< /note >}} ## 启用 DNS 水平自动扩缩 {#enablng-dns-horizontal-autoscaling} -在本节,我们创建一个 Deployment。Deployment 中的 Pod 运行一个基于 +在本节,我们创建一个新的 Deployment。Deployment 中的 Pod 运行一个基于 `cluster-proportional-autoscaler-amd64` 镜像的容器。 创建文件 `dns-horizontal-autoscaler.yaml`,内容如下所示: @@ -188,11 +178,11 @@ DNS horizontal autoscaling is now enabled. DNS 水平自动扩缩在已经启用了。 -## 调优自动扩缩参数 {#tuning-autoscaling-parameters} +## 调优 DNS 自动扩缩参数 {#tuning-autoscaling-parameters} 验证 dns-autoscaler {{< glossary_tooltip text="ConfigMap" term_id="configmap" >}} 是否存在: @@ -232,7 +222,7 @@ linear: '{"coresPerReplica":256,"min":1,"nodesPerReplica":16}' @@ -240,12 +230,12 @@ calculated using this equation: 实际后端的数量通过使用如下公式来计算: ``` -replicas = max( ceil( cores * 1/coresPerReplica ) , ceil( nodes * 1/nodesPerReplica ) ) +replicas = max( ceil( cores × 1/coresPerReplica ) , ceil( nodes × 1/nodesPerReplica ) ) ``` -注意 `coresPerReplica` 和 `nodesPerReplica` 的值都是整数。 +注意 `coresPerReplica` 和 `nodesPerReplica` 的值都是浮点数。 背后的思想是,当一个集群使用具有很多核心的节点时,由 `coresPerReplica` 来控制。 当一个集群使用具有较少核心的节点时,由 `nodesPerReplica` 来控制。 @@ -285,7 +275,9 @@ This option works for all situations. Enter this command: kubectl scale deployment --replicas=0 dns-autoscaler --namespace=kube-system ``` - + 输出如下所示: ``` @@ -327,7 +319,9 @@ no one will re-create it: kubectl delete deployment dns-autoscaler --namespace=kube-system ``` - + 输出内容如下所示: ``` @@ -341,6 +335,7 @@ This option works if dns-autoscaler is under control of the (deprecated) [Addon Manager](https://git.k8s.io/kubernetes/cluster/addons/README.md), and you have write access to the master node. 
--> + ### 选项 3:从主控节点删除 dns-autoscaler 清单文件 如果 dns-autoscaler 在[插件管理器](https://git.k8s.io/kubernetes/cluster/addons/README.md) diff --git a/content/zh-cn/docs/tasks/administer-cluster/enable-disable-api.md b/content/zh-cn/docs/tasks/administer-cluster/enable-disable-api.md index c1eb5ec970902..42e98b3ab6563 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/enable-disable-api.md +++ b/content/zh-cn/docs/tasks/administer-cluster/enable-disable-api.md @@ -1,11 +1,13 @@ --- title: 启用/禁用 Kubernetes API content_type: task +weight: 200 --- @@ -40,7 +42,7 @@ The `runtime-config` command line argument also supports 2 special keys: - `api/legacy`, representing only legacy APIs. Legacy APIs are any APIs that have been explicitly [deprecated](/zh-cn/docs/reference/using-api/deprecation-policy/). -For example, to turning off all API versions except v1, pass `--runtime-config=api/all=false,api/v1=true` +For example, to turn off all API versions except v1, pass `--runtime-config=api/all=false,api/v1=true` to the `kube-apiserver`. --> - `api/all`:指所有已知的 API diff --git a/content/zh-cn/docs/tasks/administer-cluster/encrypt-data.md b/content/zh-cn/docs/tasks/administer-cluster/encrypt-data.md index 533a148109518..4a387565ebee8 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/encrypt-data.md +++ b/content/zh-cn/docs/tasks/administer-cluster/encrypt-data.md @@ -1,13 +1,17 @@ --- title: 静态加密 Secret 数据 content_type: task +min-kubernetes-server-version: 1.13 +weight: 210 --- - @@ -22,9 +26,13 @@ This page shows how to enable and configure encryption of secret data at rest. * 需要 etcd v3.0 或者更高版本 +* 要加密自定义资源,你的集群必须运行 Kubernetes v1.26 或更高版本。 + ## 配置并确定是否已启用静态数据加密 {#configuration-and-determing-wheter-encryption-at-rest-is-already-enabled} `kube-apiserver` 的参数 `--encryption-provider-config` 控制 API 数据在 etcd 中的加密方式。 该配置作为一个名为 [`EncryptionConfiguration`](/zh-cn/docs/reference/config-api/apiserver-encryption.v1/) 的 API 提供。 +`--encryption-provider-config-automatic-reload` 布尔参数决定了磁盘内容发生变化时是否应自动重新加载 +`--encryption-provider-config` 设置的文件。这样可以在不重启 API 服务器的情况下进行密钥轮换。 + 下面提供了一个示例配置。 +{{< caution >}} -{{< caution >}} **重要:** 对于高可用配置(有两个或多个控制平面节点),加密配置文件必须相同! 
否则,`kube-apiserver` 组件无法解密存储在 etcd 中的数据。 {{< /caution >}} ## 理解静态数据加密 {#understanding-the-encryption-at-rest-configuration} @@ -63,6 +73,8 @@ kind: EncryptionConfiguration resources: - resources: - secrets + - configmaps + - pandas.awesome.bears.example providers: - identity: {} - aesgcm: @@ -85,10 +97,29 @@ resources: +每个 `resources` 数组项目是一个单独的完整的配置。 +`resources.resources` 字段是应加密的 Kubernetes 资源(例如 Secret、ConfigMap 或其他资源)名称 +(`resource` 或 `resource.group`)的数组。 + +如果自定义资源被添加到 `EncryptionConfiguration` 并且集群版本为 1.26 或更高版本, +则 `EncryptionConfiguration` 中提到的任何新创建的自定义资源都将被加密。 +在该版本之前存在于 etcd 中的任何自定义资源和配置不会被加密,直到它们被下一次写入到存储为止。 +这与内置资源的行为相同。请参阅[确保所有 Secret 都已加密](#ensure-all-secrets-are-encrypted)一节。 + +`providers` 数组是可能的加密 provider 的有序列表,用于你所列出的 API。 + -每个 `resources` 数组项目是一个单独的完整的配置。 -`resources.resources` 字段是要加密的 Kubernetes 资源名称(`resource` 或 `resource.group`)的数组。 -`providers` 数组是可能的加密 provider 的有序列表。 - 每个条目只能指定一个 provider 类型(可以是 `identity` 或 `aescbc`,但不能在同一个项目中同时指定二者)。 列表中的第一个 provider 用于加密写入存储的资源。 当从存储器读取资源时,与存储的数据匹配的所有 provider 将按顺序尝试解密数据。 @@ -110,17 +137,17 @@ For more detailed information about the `EncryptionConfiguration` struct, please 有关 `EncryptionConfiguration` 结构体的更多详细信息,请参阅[加密配置 API](/zh-cn/docs/reference/config-api/apiserver-encryption.v1/)。 +{{< caution >}} -{{< caution >}} 如果通过加密配置无法读取资源(因为密钥已更改),唯一的方法是直接从底层 etcd 中删除该密钥。 任何尝试读取资源的调用将会失败,直到它被删除或提供有效的解密密钥。 {{< /caution >}} -### Providers: +### Providers -{{< caution >}} 在 EncryptionConfig 中保存原始的加密密钥与不加密相比只会略微地提升安全级别。 请使用 `kms` 驱动以获得更强的安全性。 {{< /caution >}} -默认情况下,`identity` 驱动被用来对 etcd 中的 Secret 提供保护,而这个驱动不提供加密能力。 +默认情况下,`identity` 驱动被用来对 etcd 中的 Secret 数据提供保护,而这个驱动不提供加密能力。 `EncryptionConfiguration` 的引入是为了能够使用本地管理的密钥来在本地加密 Secret 数据。 -使用本地管理的密钥来加密 Secret 能够保护数据免受 etcd 破坏的影响,不过无法针对主机被侵入提供防护。 +使用本地管理的密钥来加密 Secret 数据能够保护数据免受 etcd 破坏的影响,不过无法针对主机被侵入提供防护。 这是因为加密的密钥保存在主机上的 EncryptionConfig YAML 文件中, 有经验的入侵者仍能访问该文件并从中提取出加密密钥。 @@ -194,6 +221,8 @@ kind: EncryptionConfiguration resources: - resources: - secrets + - configmaps + - pandas.awesome.bears.example providers: - aescbc: keys: @@ -286,15 +315,17 @@ permissions on your control-plane nodes so only the user who runs the `kube-apis ## Verifying that data is encrypted Data is encrypted when written to etcd. After restarting your `kube-apiserver`, any newly created or -updated Secret should be encrypted when stored. To check this, you can use the `etcdctl` command line -program to retrieve the contents of your Secret. +updated Secret or other resource types configured in `EncryptionConfiguration` should be encrypted +when stored. To check this, you can use the `etcdctl` command line +program to retrieve the contents of your secret data. 1. Create a new Secret called `secret1` in the `default` namespace: --> ## 验证数据已被加密 {#verifying-that-data-is-encryped} -数据在写入 etcd 时会被加密。重新启动你的 `kube-apiserver` 后,任何新创建或更新的密码在存储时都应该被加密。 -如果想要检查,你可以使用 `etcdctl` 命令行程序来检索你的加密内容。 +数据在写入 etcd 时会被加密。重新启动你的 `kube-apiserver` 后,任何新创建或更新的 Secret +或在 `EncryptionConfiguration` 中配置的其他资源类型都应在存储时被加密。 +如果想要检查,你可以使用 `etcdctl` 命令行程序来检索你的 Secret 数据内容。 1. 创建一个新的 secret,名称为 `secret1`,命名空间为 `default`: @@ -307,7 +338,7 @@ program to retrieve the contents of your Secret. --> 2. 使用 etcdctl 命令行,从 etcd 中读取 Secret: - ```shell + ``` ETCDCTL_API=3 etcdctl get /registry/secrets/default/secret1 [...] | hexdump -C ``` @@ -363,7 +394,7 @@ program to retrieve the contents of your Secret. 4. 
通过 API 检索,验证 Secret 是否被正确解密: ```shell - kubectl describe secret secret1 -n default + kubectl get secret secret1 -n default -o yaml ``` 上面的命令读取所有 Secret,然后使用服务端加密来更新其内容。 +{{< note >}} -{{< note >}} 如果由于冲突写入而发生错误,请重试该命令。 对于较大的集群,你可能希望通过命名空间或更新脚本来对 Secret 进行划分。 {{< /note >}} @@ -460,8 +491,7 @@ resources: ``` 然后运行以下命令以强制解密所有 Secret: diff --git a/content/zh-cn/docs/tasks/administer-cluster/extended-resource-node.md b/content/zh-cn/docs/tasks/administer-cluster/extended-resource-node.md index 007c9021eb397..052f1b148cf45 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/extended-resource-node.md +++ b/content/zh-cn/docs/tasks/administer-cluster/extended-resource-node.md @@ -1,10 +1,12 @@ --- title: 为节点发布扩展资源 content_type: task +weight: 70 --- @@ -26,7 +28,6 @@ resources that would otherwise be unknown to Kubernetes. ## 获取你的节点名称 @@ -34,6 +35,9 @@ Choose one of your Nodes to use for this exercise. kubectl get nodes ``` + 选择一个节点用于此练习。 -{{< note >}} 在前面的请求中,`~1` 为 patch 路径中 “/” 符号的编码。 JSON-Patch 中的操作路径值被解析为 JSON 指针。 更多细节,请查看 [IETF RFC 6901](https://tools.ietf.org/html/rfc6901) 的第 3 节。 @@ -119,21 +123,25 @@ The output shows that the Node has a capacity of 4 dongles: "example.com/dongle": "4", ``` - + 描述你的节点: -```shell +``` kubectl describe node ``` - + 输出再次展示了 dongle 资源: ```yaml Capacity: - cpu: 2 - memory: 2049008Ki - example.com/dongle: 4 + cpu: 2 + memory: 2049008Ki + example.com/dongle: 4 ``` (你应该看不到任何输出) - ## {{% heading "whatsnext" %}} ### 针对应用开发人员 -* [将扩展资源分配给容器](/zh-cn/docs/tasks/configure-pod-container/extended-resource/) - -### 针对集群管理员 +- [将扩展资源分配给容器](/zh-cn/docs/tasks/configure-pod-container/extended-resource/) -* [为名字空间配置最小和最大内存约束](/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) -* [为名字空间配置最小和最大 CPU 约束](/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) +### 针对集群管理员 +- [为名字空间配置最小和最大内存约束](/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) +- [为名字空间配置最小和最大 CPU 约束](/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) diff --git a/content/zh-cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md b/content/zh-cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md index d5ce305d88add..5e9d7e6eafa79 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md +++ b/content/zh-cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md @@ -1,6 +1,7 @@ --- title: 关键插件 Pod 的调度保证 content_type: concept +weight: 220 --- diff --git a/content/zh-cn/docs/tasks/administer-cluster/ip-masq-agent.md b/content/zh-cn/docs/tasks/administer-cluster/ip-masq-agent.md index ce7835b23ea7c..89a68ea78c649 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/ip-masq-agent.md +++ b/content/zh-cn/docs/tasks/administer-cluster/ip-masq-agent.md @@ -1,10 +1,12 @@ --- title: IP Masquerade Agent 用户指南 content_type: task +weight: 230 --- @@ -171,7 +173,7 @@ You must also apply the appropriate node label to any nodes in your cluster that 你必须同时将适当的节点标签应用于集群中希望代理运行的任何节点。 ```shell -kubectl label nodes my-node beta.kubernetes.io/masq-agent-ds-ready=true +kubectl label nodes my-node node.kubernetes.io/masq-agent-ds-ready=true ``` + - 本页展示了如何配置密钥管理服务(Key Management Service,KMS)驱动和插件以启用 Secret 数据加密。 目前有两个 KMS API 版本。KMS v1 将继续工作,而 KMS v2 将开发得逐渐成熟。 如果你不确定要选用哪个 KMS API 版本,可选择 v1。 @@ -82,7 +83,7 @@ as the Kubernetes control plane, is responsible for all communication with the r KMS 加密驱动使用封套加密模型来加密 
etcd 中的数据。 数据使用数据加密密钥(DEK)加密;每次加密都生成一个新的 DEK。 这些 DEK 经一个密钥加密密钥(KEK)加密后在一个远端的 KMS 中存储和管理。 -KMS 驱动使用 gRPC 与一个特定的 KMS 插件通信。这个 KMS 插件作为一个 gRPC +KMS 驱动使用 gRPC 与一个特定的 KMS 插件通信。这个 KMS 插件作为一个 gRPC 服务器被部署在 Kubernetes 控制平面的相同主机上,负责与远端 KMS 的通信。 1. 使用适合于 `kms` 驱动的属性创建一个新的 `EncryptionConfiguration` 文件,以加密 Secret 和 ConfigMap 等资源。 + 如果要加密使用 CustomResourceDefinition 定义的扩展 API,你的集群必须运行 Kubernetes v1.26 或更高版本。 2. 设置 kube-apiserver 的 `--encryption-provider-config` 参数指向配置文件的位置。 -3. 重启你的 API 服务器。 + +3. `--encryption-provider-config-automatic-reload` 布尔参数决定了磁盘内容发生变化时是否应自动重新加载 + 通过 `--encryption-provider-config` 设置的文件。这样可以在不重启 API 服务器的情况下进行密钥轮换。 + +4. 重启你的 API 服务器。 ### KMS v1 {#encrypting-your-data-with-the-kms-provider-kms-v1} @@ -340,6 +350,8 @@ To encrypt the data: resources: - resources: - secrets + - configmaps + - pandas.awesome.bears.example providers: - kms: name: myKmsPluginFoo @@ -361,6 +373,8 @@ To encrypt the data: resources: - resources: - secrets + - configmaps + - pandas.awesome.bears.example providers: - kms: apiVersion: v2 @@ -375,6 +389,46 @@ To encrypt the data: timeout: 3s ``` + +`--encryption-provider-config-automatic-reload` 设置为 `true` 会将所有健康检查集中到同一个健康检查端点。 +只有 KMS v1 驱动正使用且加密配置未被自动重新加载时,才能进行独立的健康检查。 + +下表总结了每个 KMS 版本的健康检查端点: + + +| KMS 配置 | 没有自动重新加载 | 有自动重新加载 | +| ------------ | ----------------------- | ------------------ | +| 仅 KMS v1 | Individual Healthchecks | Single Healthcheck | +| 仅 KMS v2 | Single Healthcheck | Single Healthcheck | +| KMS v1 和 v2 | Individual Healthchecks | Single Healthcheck | +| 没有 KMS | 无 | Single Healthcheck | + + +`Single Healthcheck` 意味着唯一的健康检查端点是 `/healthz/kms-providers`。 + +`Individual Healthchecks` 意味着每个 KMS 插件都有一个对应的健康检查端点, +并且这一端点基于插件在加密配置中的位置确定,例如 `/healthz/kms-provider-0`、`/healthz/kms-provider-1` 等。 + +这些健康检查端点路径是由服务器硬编码、生成并控制的。 +`Individual Healthchecks` 的索引序号对应于 KMS 加密配置被处理的顺序。 + ## 验证数据已经加密 {#verifying-that-the-data-is-encrypted} -写入 etcd 时数据被加密。重启 `kube-apiserver` 后,任何新建或更新的 Secret 在存储时应该已被加密。 -要验证这点,你可以用 `etcdctl` 命令行程序获取 Secret 内容。 +写入 etcd 时数据被加密。重启 `kube-apiserver` 后,所有新建或更新的 Secret 或在 +`EncryptionConfiguration` 中配置的其他资源类型在存储时应该已被加密。 +要验证这点,你可以用 `etcdctl` 命令行程序获取私密数据的内容。 + Secret 应包含 `mykey: mydata`。 此页还详述了如何安装若干不同的容器运行时,并将 `systemd` 设为其默认驱动。 @@ -62,12 +62,12 @@ kubeadm 支持在执行 `kubeadm init` 时,传递一个 `KubeletConfiguration` {{< note >}} 在版本 1.22 中,如果用户没有在 `KubeletConfiguration` 中设置 `cgroupDriver` 字段, -`kubeadm init` 会将它设置为默认值 `systemd`。 +`kubeadm` 会将它设置为默认值 `systemd`。 {{< /note >}} 该命令显示 `/etc/kubernetes/pki` 文件夹中的客户端证书以及 kubeadm(`admin.conf`、`controller-manager.conf` 和 `scheduler.conf`) -使用的 KUBECONFIG 文件中嵌入的客户端证书的到期时间/剩余时间。 +使用的 kubeconfig 文件中嵌入的客户端证书的到期时间/剩余时间。 另外,kubeadm 会通知用户证书是否由外部管理; 在这种情况下,用户应该小心的手动/使用其他工具来管理证书更新。 +{{< warning >}} -{{< warning >}} -`kubeadm` 不能管理由外部 CA 签名的证书 +--> +`kubeadm` 不能管理由外部 CA 签名的证书。 {{< /warning >}} +{{< note >}} -{{< note >}} -上面的列表中没有包含 `kubelet.conf`,因为 kubeadm 将 kubelet 配置为 -[自动更新证书](/zh-cn/docs/tasks/tls/certificate-rotation/)。 +上面的列表中没有包含 `kubelet.conf`,因为 kubeadm 将 kubelet +配置为[自动更新证书](/zh-cn/docs/tasks/tls/certificate-rotation/)。 轮换的证书位于目录 `/var/lib/kubelet/pki`。 要修复过期的 kubelet 客户端证书,请参阅 [kubelet 客户端证书轮换失败](/zh-cn/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#kubelet-client-cert)。 {{< /note >}} +{{< warning >}} -{{< warning >}} -在通过 `kubeadm init` 创建的节点上,在 kubeadm 1.17 版本之前有一个 -[缺陷](https://github.com/kubernetes/kubeadm/issues/1753),该缺陷 -使得你必须手动修改 `kubelet.conf` 文件的内容。 -`kubeadm init` 操作结束之后,你必须更新 `kubelet.conf` 文件 -将 `client-certificate-data` 和 `client-key-data` 改为如下所示的内容 
-以便使用轮换后的 kubelet 客户端证书: +在通过 `kubeadm init` 创建的节点上,在 kubeadm 1.17 +版本之前有一个[缺陷](https://github.com/kubernetes/kubeadm/issues/1753), +该缺陷使得你必须手动修改 `kubelet.conf` 文件的内容。 +`kubeadm init` 操作结束之后,你必须更新 `kubelet.conf` 文件将 `client-certificate-data` +和 `client-key-data` 改为如下所示的内容以便使用轮换后的 kubelet 客户端证书: ```yaml client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem @@ -186,42 +190,45 @@ client-key: /var/lib/kubelet/pki/kubelet-client-current.pem - ## 自动更新证书 {#automatic-certificate-renewal} -kubeadm 会在控制面 -[升级](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) -的时候更新所有证书。 +kubeadm +会在控制面[升级](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)的时候更新所有证书。 这个功能旨在解决最简单的用例;如果你对此类证书的更新没有特殊要求, 并且定期执行 Kubernetes 版本升级(每次升级之间的间隔时间少于 1 年), 则 kubeadm 将确保你的集群保持最新状态并保持合理的安全性。 +{{< note >}} -{{< note >}} 最佳的做法是经常升级集群以确保安全。 {{< /note >}} 如果你对证书更新有更复杂的需求,则可通过将 `--certificate-renewal=false` 传递给 `kubeadm upgrade apply` 或者 `kubeadm upgrade node`,从而选择不采用默认行为。 +{{< warning >}} -{{< warning >}} kubeadm 在 1.17 版本之前有一个[缺陷](https://github.com/kubernetes/kubeadm/issues/1818), 该缺陷导致 `kubeadm update node` 执行时 `--certificate-renewal` 的默认值被设置为 `false`。 在这种情况下,你需要显式地设置 `--certificate-renewal=true`。 @@ -259,19 +266,21 @@ the Pod and the certificate renewal for the component can complete. 如果 Pod 不在清单目录里,kubelet 将会终止它。 在另一个 `fileCheckFrequency` 周期之后你可以将文件移回去,为了组件可以完成 kubelet 将重新创建 Pod 和证书更新。 +{{< warning >}} -{{< warning >}} 如果你运行了一个 HA 集群,这个命令需要在所有控制面板节点上执行。 {{< /warning >}} +{{< note >}} -{{< note >}} `certs renew` 使用现有的证书作为属性(Common Name、Organization、SAN 等)的权威来源, -而不是 kubeadm-config ConfigMap。强烈建议使它们保持同步。 +而不是 `kubeadm-config` ConfigMap。强烈建议使它们保持同步。 {{< /note >}} -Kubernetes 证书通常在一年后到期。 +- Kubernetes 证书通常在一年后到期。 +- `--csr-only` can be used to renew certificates with an external CA by generating certificate + signing requests (without actually renewing certificates in place); see next paragraph for more + information. +- It's also possible to renew a single certificate instead of all. +--> - `--csr-only` 可用于经过一个外部 CA 生成的证书签名请求来更新证书(无需实际替换更新证书); 更多信息请参见下节。 + - 可以更新单个证书而不是全部证书。 -{{< caution >}} 这些是针对需要将其组织的证书基础结构集成到 kubeadm 构建的集群中的用户的高级主题。 如果默认的 kubeadm 配置满足了你的需求,则应让 kubeadm 管理证书。 {{< /caution >}} @@ -314,24 +328,33 @@ These are advanced topics for users who need to integrate their organization's c ### Set up a signer The Kubernetes Certificate Authority does not work out of the box. -You can configure an external signer such as [cert-manager](https://cert-manager.io/docs/configuration/ca/), or you can use the built-in signer. +You can configure an external signer such as [cert-manager](https://cert-manager.io/docs/configuration/ca/), +or you can use the built-in signer. + The built-in signer is part of [`kube-controller-manager`](/docs/reference/command-line-tools-reference/kube-controller-manager/). -To activate the built-in signer, you must pass the `--cluster-signing-cert-file` and `--cluster-signing-key-file` flags. + +To activate the built-in signer, you must pass the `--cluster-signing-cert-file` and +`--cluster-signing-key-file` flags. 
--> ### 设置一个签名者(Signer) {#set-up-a-signer} -Kubernetes 证书颁发机构不是开箱即用。你可以配置外部签名者,例如 [cert-manager](https://cert-manager.io/docs/configuration/ca/), +Kubernetes 证书颁发机构不是开箱即用。你可以配置外部签名者,例如 +[cert-manager](https://cert-manager.io/docs/configuration/ca/), 也可以使用内置签名者。 + 内置签名者是 -[`kube-controller-manager`](/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/) 的一部分。 +[`kube-controller-manager`](/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/) +的一部分。 + 要激活内置签名者,请传递 `--cluster-signing-cert-file` 和 `--cluster-signing-key-file` 参数。 -如果你正在创建一个新的集群,你可以使用 kubeadm 的 -[配置文件](/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3/)。 +如果你正在创建一个新的集群,你可以使用 kubeadm +的[配置文件](/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3/)。 ```yaml apiVersion: kubeadm.k8s.io/v1beta3 @@ -348,7 +371,8 @@ controllerManager: ### 创建证书签名请求 (CSR) {#create-certificate-signing-requests-csr} 有关使用 Kubernetes API 创建 CSR 的信息, 请参见[创建 CertificateSigningRequest](/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificatesigningrequest)。 @@ -365,7 +389,8 @@ This section provide more details about how to execute manual certificate renewa 为了更好的与外部 CA 集成,kubeadm 还可以生成证书签名请求(CSR)。 CSR 表示向 CA 请求客户的签名证书。 @@ -395,8 +420,6 @@ As with `kubeadm init`, an output directory can be specified with the `--csr-dir 证书可以通过 `kubeadm certs renew --csr-only` 来续订。 和 `kubeadm init` 一样,可以使用 `--csr-dir` 标志指定一个输出目录。 -CSR 签署证书后,必须将证书和私钥复制到 PKI 目录(默认情况下为 `/etc/kubernetes/pki`)。 - -使用首选方法对证书签名后,必须将证书和私钥复制到 PKI 目录(默认为 `/etc/kubernetes/pki` )。 +使用首选方法对证书签名后,必须将证书和私钥复制到 PKI 目录(默认为 `/etc/kubernetes/pki`)。 如果你已经创建了集群,你必须通过执行下面的操作来完成适配: -- 找到 `kube-system` 名字空间中名为 `kubelet-config-{{< skew currentVersion>}}` +- 找到 `kube-system` 名字空间中名为 `kubelet-config-{{< skew currentVersion >}}` 的 ConfigMap 并编辑之。 在该 ConfigMap 中,`kubelet` 键下面有一个 [KubeletConfiguration](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) @@ -497,10 +521,9 @@ This will require action from the user or a third party controller. These CSRs can be viewed using: --> -字段 `serverTLSBootstrap` 将允许启动引导 kubelet 的服务证书,方式 -是从 `certificates.k8s.io` API 处读取。这种方式的一种局限在于这些 -证书的 CSR(证书签名请求)不能被 kube-controller-manager 中默认的 -签名组件 +字段 `serverTLSBootstrap` 将允许启动引导 kubelet 的服务证书,方式是从 +`certificates.k8s.io` API 处读取。这种方式的一种局限在于这些证书的 +CSR(证书签名请求)不能被 kube-controller-manager 中默认的签名组件 [`kubernetes.io/kubelet-serving`](/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers) 批准。需要用户或者第三方控制器来执行此操作。 @@ -534,11 +557,9 @@ be approved to complete the rotation. 
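Because the default `kubernetes.io/kubelet-serving` signer does not approve these serving-certificate CSRs, an administrator (or a controller acting on their behalf) has to approve them. A minimal manual flow is sketched below; the CSR name is a placeholder:

```shell
# List CSRs; pending kubelet serving CSRs reference the
# kubernetes.io/kubelet-serving signer
kubectl get csr

# Approve one of them
kubectl certificate approve <csr-name>
```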
To understand more see --> 默认情况下,这些服务证书会在一年后过期。 kubeadm 将 `KubeletConfiguration` 的 `rotateCertificates` 字段设置为 -`true`;这意味着证书快要过期时,会生成一组针对服务证书的新的 CSR,而 -这些 CSR 也要被批准才能完成证书轮换。 -要进一步了解这里的细节,可参阅 -[证书轮换](/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#certificate-rotation) -文档。 +`true`;这意味着证书快要过期时,会生成一组针对服务证书的新的 CSR,而这些 +CSR 也要被批准才能完成证书轮换。要进一步了解这里的细节, +可参阅[证书轮换](/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#certificate-rotation)文档。 你要使用 [`kubeadm kubeconfig user`](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-kubeconfig) 命令为其他用户生成 kubeconfig 文件,这个命令支持命令行参数和 diff --git a/content/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md b/content/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md index 78fa556b39790..e6f51807be3fa 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md +++ b/content/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md @@ -1,14 +1,14 @@ --- title: 重新配置 kubeadm 集群 content_type: task -weight: 10 +weight: 30 --- @@ -107,7 +107,6 @@ in a ConfigMap called `kubeadm-config` in the `kube-system` namespace. To change a particular option in the `ClusterConfiguration` you can edit the ConfigMap with this command: -The configuration is located under the `data.ClusterConfiguration` key. --> ### 应用集群配置更改 @@ -123,6 +122,9 @@ The configuration is located under the `data.ClusterConfiguration` key. kubectl edit cm -n kube-system kubeadm-config ``` + 配置位于 `data.ClusterConfiguration` 键下。 {{< note >}} @@ -170,7 +172,6 @@ Before proceeding with these changes, make sure you have backed up the directory 要编写新证书,你可以使用: @@ -179,6 +180,9 @@ To write new manifest files in `/etc/kubernetes/manifests` you can use: kubeadm init phase certs --config ``` + 要在 `/etc/kubernetes/manifests` 中编写新的清单文件,你可以使用: ```shell @@ -212,7 +216,6 @@ in a ConfigMap called `kubelet-config` in the `kube-system` namespace. You can edit the ConfigMap with this command: -The configuration is located under the `data.kubelet` key. --> ### 应用 kubelet 配置更改 @@ -227,6 +230,9 @@ The configuration is located under the `data.kubelet` key. kubectl edit cm -n kube-system kubelet-config ``` + 配置位于 `data.kubelet` 键下。 ### 应用 kube-proxy 配置更改 @@ -302,6 +307,9 @@ The configuration is located under the `data.config.conf` key. kubectl edit cm -n kube-system kube-proxy ``` + 配置位于 `data.config.conf` 键下。 #### 反映 kube-proxy 的更改 @@ -325,12 +330,18 @@ New Pods that use the updated ConfigMap will be created. kubectl get po -n kube-system | grep kube-proxy ``` + 使用以下命令删除 Pod: ```shell kubectl delete po -n kube-system ``` + 将创建使用更新的 ConfigMap 的新 Pod。 {{< note >}} @@ -373,7 +384,6 @@ Once the CoreDNS changes are applied you can delete the CoreDNS Pods: Obtain the Pod names: -Delete a Pod with: --> #### 反映 CoreDNS 的更改 @@ -385,6 +395,9 @@ Delete a Pod with: kubectl get po -n kube-system | grep coredns ``` + 使用以下命令删除 Pod: ```shell @@ -400,6 +413,7 @@ New Pods with the updated CoreDNS configuration will be created. 
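To confirm that the replacement CoreDNS Pods came up with the new configuration, you can list them by label instead of filtering with `grep`; in kubeadm clusters the CoreDNS Pods are labeled `k8s-app=kube-dns`:

```shell
kubectl get pods -n kube-system -l k8s-app=kube-dns
```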
kubeadm 不允许在集群创建和升级期间配置 CoreDNS。 这意味着如果执行了 `kubeadm upgrade apply`,你对 diff --git a/content/zh-cn/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md b/content/zh-cn/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md index 8ffed02fa258f..a0f05a43cdebf 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md +++ b/content/zh-cn/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md @@ -2,13 +2,13 @@ title: 升级 Windows 节点 min-kubernetes-server-version: 1.17 content_type: task -weight: 40 +weight: 50 --- @@ -16,10 +16,9 @@ weight: 40 {{< feature-state for_k8s_version="v1.18" state="beta" >}} -本页解释如何升级[用 kubeadm 创建的](/zh-cn/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes) -Windows 节点。 +本页解释如何升级用 kubeadm 创建的 Windows 节点。 ## {{% heading "prerequisites" %}} @@ -150,7 +149,8 @@ upgrade the control plane nodes before upgrading your Windows nodes. {{< note >}} 如果你是在 Pod 内的 HostProcess 容器中运行 kube-proxy,而不是作为 Windows 服务, 你可以通过应用更新版本的 kube-proxy 清单文件来升级 kube-proxy。 diff --git a/content/zh-cn/docs/tasks/administer-cluster/kubelet-config-file.md b/content/zh-cn/docs/tasks/administer-cluster/kubelet-config-file.md index 4585eb7576915..a1ea20805c5c5 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/kubelet-config-file.md +++ b/content/zh-cn/docs/tasks/administer-cluster/kubelet-config-file.md @@ -34,7 +34,7 @@ is defined by the [`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/) struct. --> -## 创建配置文件 +## 创建配置文件 {#create-config-file} [`KubeletConfiguration`](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/) 结构体定义了可以通过文件配置的 Kubelet 配置子集, @@ -86,21 +86,22 @@ the threshold values respectively. - -## 启动通过配置文件配置的 Kubelet 进程 +## 启动通过配置文件配置的 Kubelet 进程 {#start-kubelet-via-config-file} {{< note >}} -如果你使用 kubeadm 初始化你的集群,在使用 `kubeadmin init` 创建你的集群的时候请使用 kubelet-config。 + +如果你使用 kubeadm 初始化你的集群,在使用 `kubeadm init` 创建你的集群的时候请使用 kubelet-config。 更多细节请阅读[使用 kubeadm 配置 kubelet](/zh-cn/docs/setup/production-environment/tools/kubeadm/kubelet-integration/) {{< /note >}} + 启动 Kubelet 需要将 `--config` 参数设置为 Kubelet 配置文件的路径。Kubelet 将从此文件加载其配置。 -- 参阅 [`KubeletConfiguration`](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/) +- 参阅 [`KubeletConfiguration`](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/) 进一步学习 kubelet 的配置。 diff --git a/content/zh-cn/docs/tasks/kubelet-credential-provider/kubelet-credential-provider.md b/content/zh-cn/docs/tasks/administer-cluster/kubelet-credential-provider.md similarity index 86% rename from content/zh-cn/docs/tasks/kubelet-credential-provider/kubelet-credential-provider.md rename to content/zh-cn/docs/tasks/administer-cluster/kubelet-credential-provider.md index 4ac37f4b40f7a..ec64d044e1bbb 100644 --- a/content/zh-cn/docs/tasks/kubelet-credential-provider/kubelet-credential-provider.md +++ b/content/zh-cn/docs/tasks/administer-cluster/kubelet-credential-provider.md @@ -2,6 +2,8 @@ title: 配置 kubelet 镜像凭据提供程序 description: 配置 kubelet 的镜像凭据提供程序插件 content_type: task +min-kubernetes-server-version: v1.26 +weight: 120 --- -{{< feature-state for_k8s_version="v1.24" state="beta" >}} +{{< feature-state for_k8s_version="v1.26" state="stable" >}} @@ -45,18 +49,23 @@ This guide demonstrates how to configure the kubelet's image credential provider * 凭据的到期时间很短,需要频繁请求新凭据。 * 将镜像库凭据存储在磁盘或者 imagePullSecret 是不可接受的。 +本指南演示如何配置 kubelet 的镜像凭证提供程序插件机制。 + ## {{% heading "prerequisites" %}} -* kubelet 镜像凭证提供程序在 v1.20 版本作为 Alpha 特性引入。 - 与其他 Alpha 功能一样,当前仅当在 kubelet 启用 
`KubeletCredentialProviders` - 特性门控时,该功能才能正常工作。 +* 你需要一个 Kubernetes 集群,其节点支持 kubelet 凭证提供程序插件。 + 这种支持在 Kubernetes {{< skew currentVersion >}} 中可用; + Kubernetes v1.24 和 v1.25 将此作为 Beta 特性包含在内,默认启用。 * 凭据提供程序 exec 插件的一种可用的实现。你可以构建自己的插件或使用云提供商提供的插件。 +{{< version-check >}} + @@ -219,7 +228,7 @@ Some example values of `matchImages` patterns are: * 两者都包含相同数量的域部分并且每个部分都匹配。 * 匹配图片的 URL 路径必须是目标图片 URL 路径的前缀。 -* 如果 imageMatch 包含端口,则该端口也必须在镜像中匹配。 +* 如果 matchImages 包含端口,则该端口也必须在镜像中匹配。 `matchImages` 模式的一些示例值: @@ -233,10 +242,10 @@ Some example values of `matchImages` patterns are: -* 阅读 [kubelet 配置 API (v1alpha1) 参考](/zh-cn/docs/reference/config-api/kubelet-config.v1alpha1/)中有关 +* 阅读 [kubelet 配置 API (v1) 参考](/docs/reference/config-api/kubelet-config.v1/)中有关 `CredentialProviderConfig` 的详细信息。 -* 阅读 [kubelet 凭据提供程序 API 参考 (v1alpha1)](/zh-cn/docs/reference/config-api/kubelet-credentialprovider.v1alpha1/)。 +* 阅读 [kubelet 凭据提供程序 API 参考 (v1)](/docs/reference/config-api/kubelet-credentialprovider.v1/)。 diff --git a/content/zh-cn/docs/tasks/administer-cluster/kubelet-in-userns.md b/content/zh-cn/docs/tasks/administer-cluster/kubelet-in-userns.md index bc6d34c7426ea..4489013a94a18 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/kubelet-in-userns.md +++ b/content/zh-cn/docs/tasks/administer-cluster/kubelet-in-userns.md @@ -2,12 +2,14 @@ title: 以非 root 用户身份运行 Kubernetes 节点组件 content_type: task min-kubernetes-server-version: 1.22 +weight: 300 --- @@ -21,7 +23,7 @@ without root privileges, by using a {{< glossary_tooltip text="user namespace" t This technique is also known as _rootless mode_. {{< note >}} -This document describes how to run Kubernetes Node components (and hence pods) a non-root user. +This document describes how to run Kubernetes Node components (and hence pods) as a non-root user. If you are just looking for how to run a pod as a non-root user, see [SecurityContext](/docs/tasks/configure-pod-container/security-context/). {{< /note >}} @@ -318,6 +320,7 @@ the host with an external port forwarder, such as RootlessKit, slirp4netns, or You can use the port forwarder from K3s. See [Running K3s in Rootless Mode](https://rancher.com/docs/k3s/latest/en/advanced/#known-issues-with-rootless-mode) for more details. +The implementation can be found in [the `pkg/rootlessports` package](https://github.com/k3s-io/k3s/blob/v1.22.3+k3s1/pkg/rootlessports/controller.go) of k3s. ### Configuring CRI @@ -343,6 +346,7 @@ Pod 的网络命名空间可以使用常规的 CNI 插件配置。对于多节 你可以使用 K3s 的端口转发器。更多细节请参阅 [在 Rootless 模式下运行 K3s](https://rancher.com/docs/k3s/latest/en/advanced/#known-issues-with-rootless-mode)。 +该实现可以在 k3s 的 [`pkg/rootlessports` 包](https://github.com/k3s-io/k3s/blob/v1.22.3+k3s1/pkg/rootlessports/controller.go)中找到。 ### 配置 CRI @@ -355,8 +359,7 @@ kubelet 依赖于容器运行时。你需要部署一个容器运行时(例如 Running CRI plugin of containerd in a user namespace is supported since containerd 1.4. -Running containerd within a user namespace requires the following configurations -in `/etc/containerd/containerd-config.toml`. +Running containerd within a user namespace requires the following configurations. ```toml version = 2 @@ -379,6 +382,9 @@ version = 2 SystemdCgroup = false ``` +The default path of the configuration file is `/etc/containerd/config.toml`. +The path can be specified with `containerd -c /path/to/containerd/config.toml`. + {{% /tab %}} {{% tab name="CRI-O" %}} @@ -387,7 +393,7 @@ Running CRI-O in a user namespace is supported since CRI-O 1.22. CRI-O requires an environment variable `_CRIO_ROOTLESS=1` to be set. 
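As a point of reference for the `matchImages` rules discussed earlier, a minimal `CredentialProviderConfig` might look like the sketch below; the provider name and image pattern are placeholders rather than a real plugin:

```yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  # "example-credential-provider" stands in for the plugin binary name
  - name: example-credential-provider
    apiVersion: credentialprovider.kubelet.k8s.io/v1
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"   # example pattern; a port, if present, must also match
    defaultCacheDuration: "12h"
```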
-The following configurations (in `/etc/crio/crio.conf`) are also recommended: +The following configurations are also recommended: ```toml [crio] @@ -401,6 +407,8 @@ The following configurations (in `/etc/crio/crio.conf`) are also recommended: cgroup_manager = "cgroupfs" ``` +The default path of the configuration file is `/etc/crio/crio.conf`. +The path can be specified with `crio --config /path/to/crio/crio.conf`. {{% /tab %}} {{< /tabs >}} --> @@ -410,7 +418,7 @@ The following configurations (in `/etc/crio/crio.conf`) are also recommended: containerd 1.4 开始支持在用户命名空间运行 containerd 的 CRI 插件。 -在用户命名空间运行 containerd 需要在 `/etc/containerd/containerd-config.toml` 文件包含以下配置: +在用户命名空间运行 containerd 必须进行如下配置: ```toml version = 2 @@ -432,7 +440,8 @@ version = 2 # (除非你在命名空间内运行了另一个 systemd) SystemdCgroup = false ``` - +配置文件的默认路径是 `/etc/containerd/config.toml`。 +可以用 `containerd -c /path/to/containerd/config.toml` 来指定该路径。 {{% /tab %}} {{% tab name="CRI-O" %}} @@ -441,7 +450,7 @@ CRI-O 1.22 开始支持在用户命名空间运行 CRI-O。 CRI-O 必须配置一个环境变量 `_CRIO_ROOTLESS=1`。 -也推荐使用 `/etc/crio/crio.conf` 文件内的以下配置: +也推荐使用以下配置: ```toml [crio] @@ -454,7 +463,8 @@ CRI-O 必须配置一个环境变量 `_CRIO_ROOTLESS=1`。 # (除非你在命名空间内运行了另一个 systemd) cgroup_manager = "cgroupfs" ``` - +配置文件的默认路径是 `/etc/containerd/config.toml`。 +可以用 `containerd -c /path/to/containerd/config.toml` 来指定该路径。 {{% /tab %}} {{< /tabs >}} diff --git a/content/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md b/content/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md index d0c84c6726b57..80df9a884bd6d 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md +++ b/content/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md @@ -211,12 +211,12 @@ kubectl delete pod constraints-cpu-demo --namespace=constraints-cpu-example ## 尝试创建一个超过最大 CPU 限制的 Pod -这里给出了包含一个容器的 Pod 的配置文件。容器声明了 500 millicpu 的 CPU +这里给出了包含一个容器的 Pod 清单。容器声明了 500 millicpu 的 CPU 请求和 1.5 CPU 的 CPU 限制。 {{< codenew file="admin/resource/cpu-constraints-pod-2.yaml" >}} @@ -273,7 +273,7 @@ enforced minimum: ``` Error from server (Forbidden): error when creating "examples/admin/resource/cpu-constraints-pod-3.yaml": -pods "constraints-cpu-demo-4" is forbidden: minimum cpu usage per Container is 200m, but request is 100m. +pods "constraints-cpu-demo-3" is forbidden: minimum cpu usage per Container is 200m, but request is 100m. 
``` @@ -424,8 +424,8 @@ kubectl delete namespace constraints-cpu-example ### 集群管理员参考: * [为命名空间配置默认内存请求和限制](/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) +* [为命名空间配置默认 CPU 请求和限制](/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/) * [为命名空间配置内存限制的最小值和最大值](/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) -* [为命名空间配置 CPU 限制的最小值和最大值](/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) * [为命名空间配置内存和 CPU 配额](/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/) * [为命名空间配置 Pod 配额](/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/) * [为 API 对象配置配额](/zh-cn/docs/tasks/administer-cluster/quota-api-object/) diff --git a/content/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md b/content/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md index fd50963ca109a..eb75914401c51 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md +++ b/content/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md @@ -256,19 +256,24 @@ kubectl delete namespace quota-mem-cpu-example ### 集群管理员参考 * [为命名空间配置默认内存请求和限制](/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) +* [为命名空间配置默认 CPU 请求和限制](/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/) * [为命名空间配置内存限制的最小值和最大值](/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) * [为命名空间配置 CPU 限制的最小值和最大值](/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) -* [为命名空间配置内存和 CPU 配额](/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/) * [为命名空间配置 Pod 配额](/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/) * [为 API 对象配置配额](/zh-cn/docs/tasks/administer-cluster/quota-api-object/) diff --git a/content/zh-cn/docs/tasks/administer-cluster/memory-manager.md b/content/zh-cn/docs/tasks/administer-cluster/memory-manager.md index e4185fa44fcc7..25bf618504756 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/memory-manager.md +++ b/content/zh-cn/docs/tasks/administer-cluster/memory-manager.md @@ -772,14 +772,17 @@ by using `--reserved-memory` flag. 
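For context, the namespace-level constraint behind the `minimum cpu usage per Container is 200m` error shown above is a LimitRange along the following lines; the values are illustrative:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-min-max-demo-lr
spec:
  limits:
    - type: Container
      min:
        cpu: "200m"   # requests below this are rejected
      max:
        cpu: "800m"   # limits above this are rejected
```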
### 设备插件资源 API {#device-plugin-resource-api} -通过使用此 [API](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/), +kubelet 提供了一个 `PodResourceLister` gRPC 服务来启用对资源和相关元数据的检测。 +通过使用它的 +[List gRPC 端点](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#grpc-endpoint-list), 可以获得每个容器的预留内存信息,该信息位于 protobuf 协议的 `ContainerMemory` 消息中。 只能针对 Guaranteed QoS 类中的 Pod 来检索此信息。 diff --git a/content/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/_index.md b/content/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/_index.md index d2ac31c881b56..d405ad22fc87c 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/_index.md +++ b/content/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/_index.md @@ -1,24 +1,25 @@ --- title: 从 dockershim 迁移 -weight: 10 -content_type: task +weight: 20 +content_type: task no_list: true --- - - 本节提供从 dockershim 迁移到其他容器运行时的必备知识。 - Dockershim 在 Kubernetes v1.24 版本已经被移除。 @@ -39,13 +40,12 @@ Dockershim 在 Kubernetes v1.24 版本已经被移除。 建议你迁移到其他容器运行时或使用其他方法以获得 Docker 引擎支持。 -建议从 dockershim 迁移到其他替代的容器运行时。 请参阅[容器运行时](/zh-cn/docs/setup/production-environment/container-runtimes/) 一节以了解可用的备选项。 当在迁移过程中遇到麻烦,请[上报问题](https://github.com/kubernetes/kubernetes/issues)。 @@ -57,7 +57,7 @@ configuration. These tasks will help you to migrate: -* [Check whether Dockershim deprecation affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/) +* [Check whether Dockershim removal affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/) * [Migrate Docker Engine nodes from dockershim to cri-dockerd](/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd/) * [Migrating telemetry and security agents from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/) --> @@ -65,7 +65,7 @@ These tasks will help you to migrate: 下面这些任务可以帮助你完成迁移: -* [检查弃用 Dockershim 是否影响到你](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/) +* [检查移除 Dockershim 是否影响到你](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/) * [将 Docker Engine 节点从 dockershim 迁移到 cri-dockerd](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd/) * [从 dockershim 迁移遥测和安全代理](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/) @@ -73,11 +73,11 @@ These tasks will help you to migrate: @@ -86,4 +86,3 @@ These tasks will help you to migrate: dockershim 的弃用和删除的讨论。 * 如果你发现与 dockershim 迁移相关的缺陷或其他技术问题, 可以在 Kubernetes 项目[报告问题](https://github.com/kubernetes/kubernetes/issues/new/choose)。 - diff --git a/content/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md b/content/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md index 09541e7033199..3a9b66d4963ea 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md +++ b/content/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md @@ -1,14 +1,14 @@ --- title: 检查移除 Dockershim 是否对你有影响 content_type: task -weight: 20 +weight: 50 --- @@ -131,7 +131,9 @@ You can read about it in [Kubernetes Containerd integration goes 
GA](/blog/2018/ 你可以阅读博文 [Kubernetes 正式支持集成 Containerd](/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/)。 - + ![Dockershim 和 Containerd CRI 的实现对比图](/images/blog/2018-05-24-kubernetes-containerd-integration-goes-ga/cri-containerd.png) @@ -171,7 +171,7 @@ nodes. 如果你将节点上的容器运行时从 Docker Engine 改变为 containerd,可在 diff --git a/content/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md b/content/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md index 6a5bf88d52df8..8ebb3bdc69542 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md +++ b/content/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md @@ -1,14 +1,14 @@ --- title: 从 dockershim 迁移遥测和安全代理 content_type: task -weight: 70 +weight: 60 --- @@ -16,7 +16,13 @@ weight: 70 {{% thirdparty-content %}} Kubernetes 对与 Docker Engine 直接集成的支持已被弃用且已经被删除。 大多数应用程序不直接依赖于托管容器的运行时。但是,仍然有大量的遥测和监控代理依赖 @@ -65,8 +71,8 @@ might run a command such as [`docker ps`](https://docs.docker.com/engine/reference/commandline/ps/) or [`docker top`](https://docs.docker.com/engine/reference/commandline/top/) to list containers and processes or [`docker logs`](https://docs.docker.com/engine/reference/commandline/logs/) -+to receive streamed logs. If nodes in your existing cluster use -+Docker Engine, and you switch to a different container runtime, +to receive streamed logs. If nodes in your existing cluster use +Docker Engine, and you switch to a different container runtime, these commands will not work any longer. --> 一些代理和 Docker 工具紧密绑定。比如代理会用到 @@ -164,6 +170,9 @@ Please contact the vendor to get up to date instructions for migrating from dock 提供了为各类遥测和安全代理供应商准备的持续更新的迁移指导。 请与供应商联系,获取从 dockershim 迁移的最新说明。 + ## 从 dockershim 迁移 {#migration-from-dockershim} ### [Aqua](https://www.aquasec.com) diff --git a/content/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors.md b/content/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors.md index d7b6e3174ddfc..f064d0bfc9db4 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors.md +++ b/content/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors.md @@ -235,7 +235,8 @@ cat << EOF | tee /etc/cni/net.d/10-containerd-net.conflist }, { "type": "portmap", - "capabilities": {"portMappings": true} + "capabilities": {"portMappings": true}, + "externalSetMarkChain": "KUBE-MARK-MASQ" } ] } diff --git a/content/zh-cn/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md b/content/zh-cn/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md index 6331be657247f..440f1a7f8829f 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md +++ b/content/zh-cn/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md @@ -1,7 +1,7 @@ --- title: 使用 Romana 提供 NetworkPolicy content_type: task -weight: 40 +weight: 50 --- @@ -22,7 +22,7 @@ This page shows how to use Romana for NetworkPolicy. 
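Agents that only list containers or read their logs can usually switch from the Docker CLI to the CRI-compatible `crictl`; for example (the container ID is a placeholder):

```shell
# Roughly equivalent to `docker ps` and `docker logs` on a CRI runtime
crictl ps
crictl logs <container-id>
```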
## {{% heading "prerequisites" %}} 完成 [kubeadm 入门指南](/zh-cn/docs/reference/setup-tools/kubeadm/)中的 1、2、3 步。 diff --git a/content/zh-cn/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md b/content/zh-cn/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md index d27f901ff1f34..283fe06305abc 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md +++ b/content/zh-cn/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md @@ -1,7 +1,7 @@ --- title: 使用 Weave Net 提供 NetworkPolicy content_type: task -weight: 50 +weight: 60 --- diff --git a/content/zh-cn/docs/tasks/administer-cluster/nodelocaldns.md b/content/zh-cn/docs/tasks/administer-cluster/nodelocaldns.md index 70a7f05ceaee5..7ee9264f35c73 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/nodelocaldns.md +++ b/content/zh-cn/docs/tasks/administer-cluster/nodelocaldns.md @@ -1,13 +1,16 @@ --- title: 在 Kubernetes 集群中使用 NodeLocal DNSCache content_type: task +weight: 390 --- @@ -185,7 +188,7 @@ This feature can be enabled using the following steps: * If kube-proxy is running in IPTABLES mode: ``` bash - sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/,__PILLAR__DNS__SERVER__//g; s/__PILLAR__CLUSTER__DNS__/$kubedns/g" nodelocaldns.yaml + sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml ``` `__PILLAR__CLUSTER__DNS__` and `__PILLAR__UPSTREAM__SERVERS__` will be populated by @@ -207,7 +210,7 @@ This feature can be enabled using the following steps: * If kube-proxy is running in IPVS mode: ``` bash - sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__//g; s/__PILLAR__CLUSTER__DNS__/$kubedns/g" nodelocaldns.yaml + sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/,__PILLAR__DNS__SERVER__//g; s/__PILLAR__CLUSTER__DNS__/$kubedns/g" nodelocaldns.yaml ``` In this mode, the `node-local-dns` pods listen only on ``. @@ -284,13 +287,12 @@ In those cases, the `kube-dns` ConfigMap can be updated. 
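Because the substitution differs between the iptables and IPVS modes, it helps to confirm which mode kube-proxy is actually running in. One way, assuming the default metrics bind address, is to query kube-proxy's status port on a node:

```shell
# Prints the active proxy mode, for example "iptables" or "ipvs"
curl http://localhost:10249/proxyMode
```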
`node-local-dns` Pod 使用内存来保存缓存项并处理查询。 -由于它们并不监视 Kubernetes 对象变化,集群规模或者 Service/Endpoints +由于它们并不监视 Kubernetes 对象变化,集群规模或者 Service/EndpointSlices 的数量都不会直接影响内存用量。内存用量会受到 DNS 查询模式的影响。 根据 [CoreDNS 文档](https://github.com/coredns/deployment/blob/master/kubernetes/Scaling_CoreDNS.md), diff --git a/content/zh-cn/docs/tasks/administer-cluster/safely-drain-node.md b/content/zh-cn/docs/tasks/administer-cluster/safely-drain-node.md index ebe8ddd925e20..13db6267c53d7 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/safely-drain-node.md +++ b/content/zh-cn/docs/tasks/administer-cluster/safely-drain-node.md @@ -2,6 +2,7 @@ title: 安全地清空一个节点 content_type: task min-kubernetes-server-version: 1.5 +weight: 310 --- @@ -116,9 +118,22 @@ Next, tell Kubernetes to drain the node: 接下来,告诉 Kubernetes 清空节点: ```shell -kubectl drain +kubectl drain --ignore-daemonsets ``` + +如果存在由 DaemonSet 管理的 Pod,你需要使用 `kubectl` 指定 `--ignore-daemonsets` 才能成功腾空节点。 +`kubectl drain` 子命令本身并不会实际腾空节点上的 DaemonSet Pod: +DaemonSet 控制器(控制平面的一部分)立即创建新的等效 Pod 替换丢失的 Pod。 +DaemonSet 控制器还会创建忽略不可调度污点的 Pod,这允许新的 Pod 启动到你正在腾空的节点上。 + 例如,如果你有一个三副本的 StatefulSet, 并设置了一个 `PodDisruptionBudget`,指定 `minAvailable: 2`。 -如果所有的三个 Pod 均就绪,并且你并行地发出多个 drain 命令, -那么 `kubectl drain` 只会从 StatefulSet 中逐出一个 Pod, +如果所有的三个 Pod 处于[健康(healthy)](/zh-cn/docs/tasks/run-application/configure-pdb/#healthiness-of-a-pod)状态, +并且你并行地发出多个 drain 命令,那么 `kubectl drain` 只会从 StatefulSet 中逐出一个 Pod, 因为 Kubernetes 会遵守 PodDisruptionBudget 并确保在任何时候只有一个 Pod 不可用 (最多不可用 Pod 个数的计算方法:`replicas - minAvailable`)。 -任何会导致就绪副本数量低于指定预算的清空操作都将被阻止。 +任何会导致处于[健康(healthy)](/zh-cn/docs/tasks/run-application/configure-pdb/#healthiness-of-a-pod) +状态的副本数量低于指定预算的清空操作都将被阻止。 @@ -491,11 +490,14 @@ and may grant an attacker significant visibility into the state of your cluster. your backups using a well reviewed backup and encryption solution, and consider using full disk encryption where possible. -Kubernetes supports [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/), a feature -introduced in 1.7, v1 beta since 1.13, and v2 alpha since 1.25. This will encrypt resources like `Secret` and `ConfigMap` in etcd, preventing -parties that gain access to your etcd backups from viewing the content of those secrets. While -this feature is currently beta, it offers an additional level of defense when backups -are not encrypted or an attacker gains read access to etcd. +Kubernetes supports optional [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/) for information in the Kubernetes API. +This lets you ensure that when Kubernetes stores data for objects (for example, `Secret` or +`ConfigMap` objects), the API server writes an encrypted representation of the object. +That encryption means that even someone who has access to etcd backup data is unable +to view the content of those objects. +In Kubernetes {{< skew currentVersion >}} you can also encrypt custom resources; +encryption-at-rest for extension APIs defined in CustomResourceDefinitions was added to +Kubernetes as part of the v1.26 release. --> ### 对 Secret 进行静态加密 @@ -504,11 +506,12 @@ are not encrypted or an attacker gains read access to etcd. 
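The three-replica StatefulSet scenario above corresponds to a PodDisruptionBudget roughly like the sketch below; the name and label selector are placeholders:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
spec:
  minAvailable: 2          # with 3 replicas, at most 1 Pod may be voluntarily disrupted
  selector:
    matchLabels:
      app: example-statefulset
```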
你要始终使用经过充分审查的备份和加密方案来加密备份数据, 并考虑在可能的情况下使用全盘加密。 -Kubernetes 支持[静态数据加密](/zh-cn/docs/tasks/administer-cluster/encrypt-data/)。 -该功能在 1.7 版引入,在 1.13 版成为 v1 Beta,在 1.25 版成为 v2 Alpha。 -它会加密 etcd 里面的 `Secret` 和 `ConfigMap` 资源,以防止某一方通过查看 etcd 的备份文件查看到这些 -Secret 的内容。虽然目前该功能还只是 Beta 阶段, -在备份未被加密或者攻击者获取到 etcd 的读访问权限时,它仍能提供额外的防御层级。 +对于 Kubernetes API 中的信息,Kubernetes 支持可选的[静态数据加密](/zh-cn/docs/tasks/administer-cluster/encrypt-data/)。 +这让你可以确保当 Kubernetes 存储对象(例如 `Secret` 或 `ConfigMap`)的数据时,API 服务器写入的是加密的对象。 +这种加密意味着即使有权访问 etcd 备份数据的某些人也无法查看这些对象的内容。 +在 Kubernetes {{< skew currentVersion >}} 中,你也可以加密自定义资源; +针对以 CustomResourceDefinition 形式定义的扩展 API,对其执行静态加密的能力作为 v1.26 +版本的一部分已添加到 Kubernetes。 +### 拓扑管理器策略选项 {#topology-manager-policy-options} + +对拓扑管理器策略选项的支持需要启用 `TopologyManagerPolicyOptions` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 + + +你可以使用以下特性门控根据成熟度级别打开和关闭这些选项组: +* `TopologyManagerPolicyBetaOptions` 默认禁用。启用以显示 Beta 级别选项。目前没有 Beta 级别选项。 +* `TopologyManagerPolicyAlphaOptions` 默认禁用。启用以显示 Alpha 级别选项。你仍然需要使用 + `TopologyManagerPolicyOptions` kubelet 选项来启用每个选项。 + + +存在以下策略选项: +* `prefer-closest-numa-nodes`(Alpha,默认不可见,`TopologyManagerPolicyOptions` 和 + `TopologyManagerPolicyAlphaOptions` 特性门控必须被启用)(1.26 或更高版本) + + +如果 `prefer-closest-numa-nodes` 策略选项被指定,则在做出准入决策时 `best-effort` 和 `restricted` +策略将偏向于彼此之间距离较短的一组 NUMA 节点。 +你可以通过将 `prefer-closest-numa-nodes=true` 添加到拓扑管理器策略选项来启用此选项。 +默认情况下,如果没有此选项,拓扑管理器会在单个 NUMA 节点或(在需要多个 NUMA 节点时)最小数量的 NUMA 节点上对齐资源。 +然而,`TopologyManager` 无法感知到 NUMA 距离且在做出准入决策时也没有考虑这些距离。 +这种限制出现在多插槽以及单插槽多 NUMA 系统中,如果拓扑管理器决定在非相邻 NUMA 节点上对齐资源, +可能导致对执行延迟敏感和高吞吐的应用程序出现明显的性能下降。 + + +## 验证二进制签名 {#verifying-binary-signatures} + +Kubernetes 发布过程使用 cosign 的无密钥签名对所有二进制工件(压缩包、SPDX 文件、 独立的二进制文件)签名。 +要验证一个特定的二进制文件,获取组件时要包含其签名和证书: + +```bash +URL=https://dl.k8s.io/release/v{{< skew currentVersion >}}.0/bin/linux/amd64 +BINARY=kubectl + +FILES=( + "$BINARY" + "$BINARY.sig" + "$BINARY.cert" +) + +for FILE in "${FILES[@]}"; do + curl -sSfL --retry 3 --retry-delay 3 "$URL/$FILE" -o "$FILE" +done +``` + + +然后使用 `cosign` 验证二进制文件: + +```shell +cosign verify-blob "$BINARY" --signature "$BINARY".sig --certificate "$BINARY".cert +``` + +{{< note >}} + +想要进一步了解无密钥签名,请参考 +[Keyless Signatures](https://github.com/sigstore/cosign/blob/main/KEYLESS.md#keyless-signatures)。 +{{< /note >}} + -1. 对凭证的取值作 base64 编码后保存到文件中: +1. 将凭据保存到文件: ```shell - echo -n 'admin' | base64 > ./username.txt - echo -n 'S!B\*d$zDsb=' | base64 > ./password.txt + echo -n 'admin' > ./username.txt + echo -n 'S!B\*d$zDsb=' > ./password.txt ``` -## 给节点添加标签 +## 给节点添加标签 {#add-a-label-to-a-node} 1. 列出你的集群中的{{< glossary_tooltip term_id="node" text="节点" >}}, 包括这些节点上的标签: ```shell - kubectl get nodes + kubectl get nodes --show-labels ``` -## 创建一个将被调度到你选择的节点的 Pod +## 创建一个将被调度到你选择的节点的 Pod {#create-a-pod-scheduled-to-chosen-node} 此 Pod 配置文件描述了一个拥有节点选择器 `disktype: ssd` 的 Pod。这表明该 Pod 将被调度到有 `disktype=ssd` 标签的节点。 @@ -136,7 +136,7 @@ a `disktype=ssd` label. You can also schedule a pod to one specific node via setting `nodeName`. 
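In sketch form, pinning a Pod to one specific node with `nodeName` looks like this; the node name is a placeholder and must match an existing node:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: my-node-01   # placeholder node name
  containers:
    - name: nginx
      image: nginx
```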
--> -## 创建一个会被调度到特定节点上的 Pod +## 创建一个会被调度到特定节点上的 Pod {#create-a-pod-scheduled-to-specific-node} 你也可以通过设置 `nodeName` 将某个 Pod 调度到特定的节点。 diff --git a/content/zh-cn/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md b/content/zh-cn/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md index 45e6082a2b90c..41a64482bbc74 100644 --- a/content/zh-cn/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md +++ b/content/zh-cn/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md @@ -33,7 +33,7 @@ Kubernetes 将发送一个 preStop 事件。容器可以为每个事件指定一 In this exercise, you create a Pod that has one Container. The Container has handlers for the postStart and preStop events. --> -## 定义 postStart 和 preStop 处理函数 +## 定义 postStart 和 preStop 处理函数 {#define-poststart-and-prestop-handlers} 在本练习中,你将创建一个包含一个容器的 Pod,该容器为 postStart 和 preStop 事件提供对应的处理函数。 @@ -75,7 +75,7 @@ Get a shell into the Container running in your Pod: --> 使用 shell 连接到你的 Pod 里的容器: -``` +```shell kubectl exec -it lifecycle-demo -- /bin/bash ``` @@ -91,7 +91,7 @@ root@lifecycle-demo:/# cat /usr/share/message -命令行输出的是 `postStart` 处理函数所写入的文本 +命令行输出的是 `postStart` 处理函数所写入的文本: ``` Hello from the postStart handler @@ -109,7 +109,7 @@ relative to the Container's code, but Kubernetes' management of the container blocks until the postStart handler completes. The Container's status is not set to RUNNING until the postStart handler completes. --> -## 讨论 +## 讨论 {#discussion} Kubernetes 在容器创建后立即发送 postStart 事件。 然而,postStart 处理函数的调用不保证早于容器的入口点(entrypoint) @@ -122,22 +122,22 @@ RUNNING。 Kubernetes sends the preStop event immediately before the Container is terminated. Kubernetes' management of the Container blocks until the preStop handler completes, unless the Pod's grace period expires. For more details, see -[Termination of Pods](/docs/user-guide/pods/#termination-of-pods). +[Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/). --> -Kubernetes 在容器结束前立即发送 preStop 事件。除非 Pod 宽限期限超时,Kubernetes 的容器管理逻辑 -会一直阻塞等待 preStop 处理函数执行完毕。更多的相关细节,可以参阅 -[Pods 的结束](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)。 +Kubernetes 在容器结束前立即发送 preStop 事件。除非 Pod 宽限期限超时, +Kubernetes 的容器管理逻辑会一直阻塞等待 preStop 处理函数执行完毕。 +更多细节请参阅 [Pod 的生命周期](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/)。 +{{< note >}} -{{< note >}} -Kubernetes 只有在 Pod *结束(Terminated)* 的时候才会发送 preStop 事件, -这意味着在 Pod *完成(Completed)* 时 -preStop 的事件处理逻辑不会被触发。这个限制在 -[issue #55087](https://github.com/kubernetes/kubernetes/issues/55807) 中被追踪。 +Kubernetes 只有在一个 Pod 或该 Pod 中的容器**结束(Terminated)** 的时候才会发送 preStop 事件, +这意味着在 Pod **完成(Completed)** 时 +preStop 的事件处理逻辑不会被触发。有关这个限制, +请参阅[容器回调](/zh-cn/docs/concepts/containers/container-lifecycle-hooks/#container-hooks)了解详情。 {{< /note >}} ## {{% heading "whatsnext" %}} @@ -147,7 +147,7 @@ preStop 的事件处理逻辑不会被触发。这个限制在 * Learn more about the [lifecycle of a Pod](/docs/concepts/workloads/pods/pod-lifecycle/). --> * 进一步了解[容器生命周期回调](/zh-cn/docs/concepts/containers/container-lifecycle-hooks/)。 -* 进一步了解[Pod 的生命周期](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/)。 +* 进一步了解 [Pod 的生命周期](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/)。 * `initialDelaySeconds`:容器启动后要等待多少秒后才启动启动、存活和就绪探针, 默认是 0 秒,最小值是 0。 @@ -618,9 +620,40 @@ Defaults to 3. Minimum value is 1. 
* `timeoutSeconds`:探测的超时后等待多少秒。默认值是 1 秒。最小值是 1。 * `successThreshold`:探针在失败后,被视为成功的最小连续成功数。默认值是 1。 存活和启动探测的这个值必须是 1。最小值是 1。 -* `failureThreshold`:当探测失败时,Kubernetes 的重试次数。 - 对存活探测而言,放弃就意味着重新启动容器。 - 对就绪探测而言,放弃意味着 Pod 会被打上未就绪的标签。默认值是 3。最小值是 1。 + +* `failureThreshold`:探针连续失败了 `failureThreshold` 次之后, + Kubernetes 认为总体上检查已失败:容器状态未就绪、不健康、不活跃。 + 对于启动探针或存活探针而言,如果至少有 `failureThreshold` 个探针已失败, + Kubernetes 会将容器视为不健康并为这个特定的容器触发重启操作。 + kubelet 会考虑该容器的 `terminationGracePeriodSeconds` 设置。 + 对于失败的就绪探针,kubelet 继续运行检查失败的容器,并继续运行更多探针; + 因为检查失败,kubelet 将 Pod 的 `Ready` + [状况](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions)设置为 `false`。 + +* `terminationGracePeriodSeconds`:为 kubelet + 配置从为失败的容器触发终止操作到强制容器运行时停止该容器之前等待的宽限时长。 + 默认值是继承 Pod 级别的 `terminationGracePeriodSeconds` 值(如果不设置则为 30 秒),最小值为 1。 + 更多细节请参见[探针级别 `terminationGracePeriodSeconds`](#probe-level-terminationgraceperiodseconds)。 {{< note >}} +上面的清单需要通过 `--admission-control-config-file` 指定给 kube-apiserver。 +{{< /note >}} + {{< note >}} ## 将卷权限和所有权更改委派给 CSI 驱动程序 -{{< feature-state for_k8s_version="v1.23" state="beta" >}} +{{< feature-state for_k8s_version="v1.26" state="stable" >}} -更多的信息请参考 [KEP](https://github.com/gnufied/enhancements/blob/master/keps/sig-storage/2317-fsgroup-on-mount/README.md) -和 [CSI 规范](https://github.com/container-storage-interface/spec/blob/master/spec.md#createvolume) -中的字段 `VolumeCapability.MountVolume.volume_mount_group` 的描述。 - 2. 选择一个目录,比如在 `/etc/kubernetes/manifests` 目录来保存 Web 服务 Pod 的定义文件,例如 `/etc/kubernetes/manifests/static-web.yaml`: @@ -230,7 +210,7 @@ JSON/YAML 格式的 Pod 定义文件。 -1. 创建一个 YAML 文件,并保存在 web 服务上,为 kubelet 生成一个 URL。 +1. 创建一个 YAML 文件,并保存在 Web 服务器上,这样你就可以将该文件的 URL 传递给 kubelet。 ```yaml apiVersion: v1 @@ -286,8 +266,6 @@ You can view running containers (including static Pods) by running (on the node) # Run this command on the node where the kubelet is running crictl ps ``` - -The output might be something like: --> ## 观察静态 Pod 的行为 {#behavior-of-static-pods} @@ -405,6 +383,28 @@ CONTAINER IMAGE CREATED STATE 89db4553e1eeb docker.io/library/nginx@sha256:... 19 seconds ago Running web 1 34533c6729106 ``` + +一旦你找到合适的容器,你就可以使用 `crictl` 获取该容器的日志。 + +```shell +# 在容器运行所在的节点上执行以下命令 +crictl logs +``` + +```console +10.240.0.48 - - [16/Nov/2022:12:45:49 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.47.0" "-" +10.240.0.48 - - [16/Nov/2022:12:45:50 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.47.0" "-" +10.240.0.48 - - [16/Nove/2022:12:45:51 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.47.0" "-" +``` + + +若要找到如何使用 `crictl` 进行调试的更多信息, +请访问[使用 crictl 对 Kubernetes 节点进行调试](/zh-cn/docs/tasks/debug/debug-cluster/crictl/)。 + ```shell # 这里假定你在用主机文件系统上的静态 Pod 配置文件 -# 在 kubelet 运行的节点上执行以下命令 +# 在容器运行所在的节点上执行以下命令 mv /etc/kubernetes/manifests/static-web.yaml /tmp sleep 20 crictl ps @@ -446,3 +446,19 @@ CONTAINER IMAGE CREATED STATE f427638871c35 docker.io/library/nginx@sha256:... 
19 seconds ago Running web 1 34533c6729106 ``` +## {{% heading "whatsnext" %}} + + +* [为控制面组件生成静态 Pod 清单](/zh-cn/docs/reference/setup-tools/kubeadm/implementation-details/#generate-static-pod-manifests-for-control-plane-components) +* [为本地 etcd 生成静态 Pod 清单](/zh-cn/docs/reference/setup-tools/kubeadm/implementation-details/#generate-static-pod-manifest-for-local-etcd) +* [使用 `crictl` 对 Kubernetes 节点进行调试](/docs/tasks/debug/debug-cluster/crictl/) +* 更多细节请参阅 [`crictl`](https://github.com/kubernetes-sigs/cri-tools) +* [从 `docker` CLI 命令映射到 `crictl`](/zh-cn/docs/reference/tools/map-crictl-dockercli/) +* [将 etcd 实例设置为由 kubelet 管理的静态 Pod](/zh-cn/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) diff --git a/content/zh-cn/docs/tasks/debug/_index.md b/content/zh-cn/docs/tasks/debug/_index.md index dd5f1761f02c1..64f7c06e149ab 100644 --- a/content/zh-cn/docs/tasks/debug/_index.md +++ b/content/zh-cn/docs/tasks/debug/_index.md @@ -83,23 +83,55 @@ and command-line interfaces (CLIs), such as [`kubectl`](/docs/reference/kubectl/ ## 求救!我的问题还没有解决!我现在需要帮助! +### Stack Exchange、Stack Overflow 或 Server Fault {#stack-exchange} + +若你对容器化应用有**软件开发**相关的疑问,你可以在 +[Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes) 上询问。 + +若你有**集群管理**或**配置**相关的疑问,你可以在 +[Server Fault](https://serverfault.com/questions/tagged/kubernetes) 上询问。 + + -### Stack Overflow {#stack-overflow} +还有几个更专业的 Stack Exchange 网站,很适合在这些地方询问有关 +[DevOps](https://devops.stackexchange.com/questions/tagged/kubernetes)、 +[软件工程](https://softwareengineering.stackexchange.com/questions/tagged/kubernetes)或[信息安全 (InfoSec)](https://security.stackexchange.com/questions/tagged/kubernetes) +领域中 Kubernetes 的问题。 社区中的其他人可能已经问过和你类似的问题,也可能能够帮助解决你的问题。 + + Kubernetes 团队还会监视[带有 Kubernetes 标签的帖子](https://stackoverflow.com/questions/tagged/kubernetes)。 -如果现有的问题对你没有帮助,在[问一个新问题](https://stackoverflow.com/questions/ask?tags=kubernetes) -之前,**请[确保你的问题是关于 Stack Overflow 的主题](https://stackoverflow.com/help/on-topic) -并且你需要阅读关于[如何提出新问题](https://stackoverflow.com/help/how-to-ask) -的指南。** +如果现有的问题对你没有帮助,**请在问一个新问题之前,确保你的问题切合 +[Stack Overflow](https://stackoverflow.com/help/on-topic)、 +[Server Fault](https://serverfault.com/help/on-topic) 或 Stack Exchange 的主题**, +并通读[如何提出新问题](https://stackoverflow.com/help/how-to-ask)的指导说明! 
-### Bugs 和功能请求 {#bugs-and-feature-requests} +### Bug 和功能请求 {#bugs-and-feature-requests} 如果你发现一个看起来像 Bug 的问题,或者你想提出一个功能请求,请使用 -[Github 问题跟踪系统](https://github.com/kubernetes/kubernetes/issues)。 +[GitHub 问题跟踪系统](https://github.com/kubernetes/kubernetes/issues)。 Kube-proxy 可以以若干模式之一运行。在上述日志中,`Using iptables Proxier` 行表示 kube-proxy 在 "iptables" 模式下运行。 -最常见的另一种模式是 "ipvs"。先前的 "userspace" 模式已经被这些所代替。 +最常见的另一种模式是 "ipvs"。 -#### Userspace 模式 {#userspace-mode} - -在极少数情况下,你可能会用到 "userspace" 模式。在你的节点上运行: - -```shell -iptables-save | grep hostnames -``` - -```none --A KUBE-PORTALS-CONTAINER -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j REDIRECT --to-ports 48577 --A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577 -``` - - -对于 Service (本例中只有一个)的每个端口,应当有 2 条规则: -一条 "KUBE-PORTALS-CONTAINER" 和一条 "KUBE-PORTALS-HOST" 规则。 - -几乎没有人应该再使用 "userspace" 模式,因此你在这里不会花更多的时间。 - -如果失败,并且你正在使用用户空间代理,则可以尝试直接访问代理。 -如果你使用的是 iptables 代理,请跳过本节。 - -回顾上面的 `iptables-save` 输出,并提取 `kube-proxy` 为你的 Service 所使用的端口号。 -在上面的例子中,端口号是 “48577”。现在试着连接它: - -```shell -curl localhost:48577 -``` - -```none -hostnames-632524106-tlaok -``` - @@ -996,7 +941,7 @@ Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9 If you don't see those, try restarting `kube-proxy` with the `-v` flag set to 4, and then look at the logs again. --> -如果你没有看到这些,请尝试将 `-V` 标志设置为 4 并重新启动 `kube-proxy`,然后再查看日志。 +如果你没有看到这些,请尝试将 `-v` 标志设置为 4 并重新启动 `kube-proxy`,然后再查看日志。 ### 边缘案例: Pod 无法通过 Service IP 连接到它本身 {#a-pod-fails-to-reach-itself-via-the-service-ip} diff --git a/content/zh-cn/docs/tasks/debug/debug-cluster/_index.md b/content/zh-cn/docs/tasks/debug/debug-cluster/_index.md index 604174f541ff1..5a779f612eeb1 100644 --- a/content/zh-cn/docs/tasks/debug/debug-cluster/_index.md +++ b/content/zh-cn/docs/tasks/debug/debug-cluster/_index.md @@ -60,7 +60,13 @@ kubectl cluster-info dump ### 示例:调试关闭/无法访问的节点 {#example-debugging-a-down-unreachable-node} @@ -279,11 +285,10 @@ of the relevant log files. On systemd-based systems, you may need to use `journ * `/var/log/kubelet.log` - logs from the kubelet, responsible for running containers on the node * `/var/log/kube-proxy.log` - logs from `kube-proxy`, which is responsible for directing traffic to Service endpoints --> - ### 工作节点 {#worker-nodes} -* `/var/log/kubelet.log` —— 来自 `kubelet` 的日志,负责在节点运行容器 -* `/var/log/kube-proxy.log` —— 来自 `kube-proxy` 的日志,负责将流量转发到服务端点 +* `/var/log/kubelet.log` —— 负责在节点运行容器的 `kubelet` 所产生的日志 +* `/var/log/kube-proxy.log` —— 负责将流量转发到服务端点的 `kube-proxy` 所产生的日志 - -{{< feature-state state="beta" >}} + -{{< note >}} [审计事件配置](/zh-cn/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Event) 的配置与 [Event](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#event-v1-core) API 对象不同。 @@ -118,7 +117,7 @@ _audit level_ of the event. 
The defined audit levels are: 审计策略定义了关于应记录哪些事件以及应包含哪些数据的规则。 审计策略对象结构定义在 [`audit.k8s.io` API 组](/zh-cn/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy) -处理事件时,将按顺序与规则列表进行比较。第一个匹配规则设置事件的 +。处理事件时,将按顺序与规则列表进行比较。第一个匹配规则设置事件的 **审计级别(Audit Level)**。已定义的审计级别有: - ## 审计后端 {#audit-backends} 审计后端实现将审计事件导出到外部存储。`Kube-apiserver` 默认提供两个后端: @@ -206,16 +204,16 @@ In all cases, audit events follow a structure defined by the Kubernetes API in t [`audit.k8s.io` API 组](/zh-cn/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Event) 中定义的结构。 +{{< note >}} -{{< note >}} 对于 patch 请求,请求的消息体需要是设定 patch 操作的 JSON 所构成的一个串, 而不是一个完整的 Kubernetes API 对象 JSON 串。 例如,以下的示例是一个合法的 patch 请求消息体,该请求对应 -`/apis/batch/v1/namespaces/some-namespace/jobs/some-job-name`。 +`/apis/batch/v1/namespaces/some-namespace/jobs/some-job-name`: ```json [ @@ -237,9 +235,6 @@ request to `/apis/batch/v1/namespaces/some-namespace/jobs/some-job-name`. The log backend writes audit events to a file in [JSONlines](https://jsonlines.org/) format. You can configure the log audit backend using the following `kube-apiserver` flags: - -Log backend writes audit events to a file in JSON format. You can configure -log audit backend using the following [kube-apiserver][kube-apiserver] flags: --> ### Log 后端 {#log-backend} @@ -256,7 +251,7 @@ Log 后端将审计事件写入 [JSONlines](https://jsonlines.org/) 格式的 - `--audit-log-path` 指定用来写入审计事件的日志文件路径。不指定此标志会禁用日志后端。`-` 意味着标准化 - `--audit-log-maxage` 定义保留旧审计日志文件的最大天数 - `--audit-log-maxbackup` 定义要保留的审计日志文件的最大数量 -- `--audit-log-maxsize` 定义审计日志文件的最大大小(兆字节) +- `--audit-log-maxsize` 定义审计日志文件轮转之前的最大大小(兆字节) 接下来挂载数据卷: ```yaml @@ -347,7 +345,7 @@ throttling is enabled in `webhook` and disabled in `log`. 同样,默认情况下,在 `webhook` 中启用带宽限制,在 `log` 中禁用带宽限制。 ### 日志条目截断 {#truncate} -日志后端和 Webhook 后端都支持限制所输出的事件的尺寸。 +日志后端和 Webhook 后端都支持限制所输出的事件大小。 例如,下面是可以为日志后端配置的标志列表: - `audit-log-truncate-enabled`:是否弃用事件和批次的截断处理。 -- `audit-log-truncate-max-batch-size`:向下层后端发送的各批次的最大尺寸字节数。 -- `audit-log-truncate-max-event-size`:向下层后端发送的审计事件的最大尺寸字节数。 +- `audit-log-truncate-max-batch-size`:向下层后端发送的各批次的最大字节数。 +- `audit-log-truncate-max-event-size`:向下层后端发送的审计事件的最大字节数。 - `crictl` 是 CRI 兼容的容器运行时命令行接口。 你可以使用它来检查和调试 Kubernetes 节点上的容器运行时和应用程序。 `crictl` 和它的源代码在 @@ -82,6 +81,14 @@ You can set the endpoint for `crictl` by doing one of the following: - 在配置文件 `--config=/etc/crictl.yaml` 中设置端点。 要设置不同的文件,可以在运行 `crictl` 时使用 `--config=PATH_TO_FILE` 标志。 +{{}} + +如果你不设置端点,`crictl` 将尝试连接到已知端点的列表,这可能会影响性能。 +{{}} + 输出类似于: -``` +```none POD ID CREATED STATE NAME NAMESPACE ATTEMPT 926f1b5a1d33a About a minute ago Ready sh-84d7dcf559-4r2gq default 0 4dccb216c4adb About a minute ago Ready nginx-65899c769f-wv2gp default 0 @@ -170,7 +177,7 @@ The output is similar to this: --> 输出类似于这样: -``` +```none POD ID CREATED STATE NAME NAMESPACE ATTEMPT 4dccb216c4adb 2 minutes ago Ready nginx-65899c769f-wv2gp default 0 ``` @@ -422,7 +429,7 @@ deleted by the Kubelet. 
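Rather than relying on the fallback list of known endpoints, you can pin `crictl` to your runtime's socket in `/etc/crictl.yaml`; for containerd the file might look like the sketch below (the socket path is the common default and may differ on your nodes):

```yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
```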
### 创建容器 {#create-a-container} 用 `crictl` 创建容器对容器运行时排错很有帮助。 -在运行的 Kubernetes 集群中,沙盒会随机的被 kubelet 停止和删除。 +在运行的 Kubernetes 集群中,沙盒会随机地被 kubelet 停止和删除。 输出类似于这样: - + ```none CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT 3e025dd50a72d busybox 32 seconds ago Created busybox 0 @@ -548,7 +555,7 @@ The output is similar to this: --> 输出类似于这样: -``` +```none CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT 3e025dd50a72d busybox About a minute ago Running busybox 0 ``` diff --git a/content/zh-cn/docs/tasks/debug/debug-cluster/local-debugging.md b/content/zh-cn/docs/tasks/debug/debug-cluster/local-debugging.md index 8c0823cf75a72..263a59d742f9c 100644 --- a/content/zh-cn/docs/tasks/debug/debug-cluster/local-debugging.md +++ b/content/zh-cn/docs/tasks/debug/debug-cluster/local-debugging.md @@ -2,7 +2,6 @@ title: 使用 telepresence 在本地开发和调试服务 content_type: task --- - - Kubernetes 应用程序通常由多个独立的服务组成,每个服务都在自己的容器中运行。 在远端的 Kubernetes 集群上开发和调试这些服务可能很麻烦, 需要[在运行的容器上打开 Shell](/zh-cn/docs/tasks/debug/debug-application/get-shell-running-container/), @@ -24,11 +22,9 @@ Kubernetes 应用程序通常由多个独立的服务组成,每个服务都在 - `telepresence` 是一个工具,用于简化本地开发和调试服务的过程,同时可以将服务代理到远程 Kubernetes 集群。 -`telepresence` 允许你使用自定义工具(例如:调试器 和 IDE)调试服务, -并提供对 Configmap、Secret 和远程集群上运行的服务的完全访问。 - +`telepresence` 允许你使用自定义工具(例如调试器和 IDE)调试本地服务, +并能够让此服务完全访问 ConfigMap、Secret 和远程集群上运行的服务。 - * Kubernetes 集群安装完毕 * 配置好 `kubectl` 与集群交互 * [Telepresence](https://www.telepresence.io/docs/latest/install/) 安装完毕 @@ -54,10 +49,10 @@ This document describes using `telepresence` to develop and debug services runni After installing `telepresence`, run `telepresence connect` to launch its Daemon and connect your local workstation to the cluster. --> +## 从本机连接到远程 Kubernetes 集群 {#connecting-your-local-machine-to-a-remote-cluster} -## 从本机连接到远程 Kubernetes 集群 - -安装 `telepresence` 后,运行 `telepresence connect` 来启动它的守护进程并将本地工作站连接到远程 Kubernetes 集群。 +安装 `telepresence` 后,运行 `telepresence connect` 来启动它的守护进程并将本地工作站连接到远程 +Kubernetes 集群。 ``` $ telepresence connect @@ -70,7 +65,6 @@ Connected to context default (https://) - 你可以通过 curl 使用 Kubernetes 语法访问服务,例如:`curl -ik https://kubernetes.default` -## 开发和调试现有的服务 +## 开发和调试现有的服务 {#developing-or-debugging-an-existing-service} 在 Kubernetes 上开发应用程序时,通常对单个服务进行编程或调试。 服务可能需要访问其他服务以进行测试和调试。 @@ -86,15 +80,15 @@ When developing an application on Kubernetes, you typically program or debug a s - -使用 `telepresence intercept $SERVICE_NAME --port $LOCAL_PORT:$REMOTE_PORT` 命令创建一个 "拦截器" 用于重新路由远程服务流量。 +使用 `telepresence intercept $SERVICE_NAME --port $LOCAL_PORT:$REMOTE_PORT` +命令创建一个 "拦截器" 用于重新路由远程服务流量。 环境变量: @@ -105,7 +99,6 @@ Where: - 运行此命令会告诉 Telepresence 将远程流量发送到本地服务,而不是远程 Kubernetes 集群中的服务中。 在本地编辑保存服务源代码,并在访问远程应用时查看相应变更会立即生效。 还可以使用调试器或任何其他本地开发工具运行本地服务。 @@ -115,10 +108,9 @@ Running this command tells Telepresence to send remote traffic to your local ser Telepresence installs a traffic-agent sidecar next to your existing application's container running in the remote cluster. It then captures all traffic requests going into the Pod, and instead of forwarding this to the application in the remote cluster, it routes all traffic (when you create a [global intercept](https://www.getambassador.io/docs/telepresence/latest/concepts/intercepts/#global-intercept)) or a subset of the traffic (when you create a [personal intercept](https://www.getambassador.io/docs/telepresence/latest/concepts/intercepts/#personal-intercept)) to your local development environment. --> +## Telepresence 是如何工作的? {#how-does-telepresence-work} -## Telepresence 是如何工作的? 
- -Telepresence 会在远程集群中运行的现有应用程序容器旁边安装流量代理 sidecar。 +Telepresence 会在远程集群中运行的现有应用程序容器旁边安装流量代理 Sidecar。 当它捕获进入 Pod 的所有流量请求时,不是将其转发到远程集群中的应用程序, 而是路由所有流量(当创建[全局拦截器](https://www.getambassador.io/docs/telepresence/latest/concepts/intercepts/#global-intercept)时) 或流量的一个子集(当创建[自定义拦截器](https://www.getambassador.io/docs/telepresence/latest/concepts/intercepts/#personal-intercept)时) @@ -129,10 +121,11 @@ Telepresence 会在远程集群中运行的现有应用程序容器旁边安装 -如果你对实践教程感兴趣,请查看[本教程](https://cloud.google.com/community/tutorials/developing-services-with-k8s),其中介绍了在 Google Kubernetes Engine 上本地开发 Guestbook 应用程序。 +如果你对实践教程感兴趣, +请查看[本教程](https://cloud.google.com/community/tutorials/developing-services-with-k8s), +其中介绍了如何在 Google Kubernetes Engine 上本地开发 Guestbook 应用程序。 - 如需进一步了解,请访问 [Telepresence 官方网站](https://www.telepresence.io)。 diff --git a/content/zh-cn/docs/tasks/debug/debug-cluster/monitor-node-health.md b/content/zh-cn/docs/tasks/debug/debug-cluster/monitor-node-health.md index 8e2778c7a94fb..c3c6b0890ecf2 100644 --- a/content/zh-cn/docs/tasks/debug/debug-cluster/monitor-node-health.md +++ b/content/zh-cn/docs/tasks/debug/debug-cluster/monitor-node-health.md @@ -13,12 +13,12 @@ weight: 20 --> - - ## 局限性 {#limitations} -* 节点问题检测器只支持基于文件类型的内核日志。 - 它不支持像 journald 这样的命令行日志工具。 * 节点问题检测器使用内核日志格式来报告内核问题。 要了解如何扩展内核日志格式,请参阅[添加对另一个日志格式的支持](#support-other-log-format)。 - ## 启用节点问题检测器 一些云供应商将节点问题检测器以{{< glossary_tooltip text="插件" term_id="addons" >}}形式启用。 -你还可以使用 `kubectl` 或创建插件 Pod 来启用节点问题探测器。 +你还可以使用 `kubectl` 或创建插件 DaemonSet 来启用节点问题探测器。 - -## 使用 kubectl 启用节点问题检测器 {#using-kubectl} +### 使用 kubectl 启用节点问题检测器 {#using-kubectl} `kubectl` 提供了节点问题探测器最灵活的管理。 你可以覆盖默认配置使其适合你的环境或检测自定义节点问题。例如: - -### 使用插件 pod 启用节点问题检测器 {#using-addon-pod} +### 使用插件 Pod 启用节点问题检测器 {#using-addon-pod} 如果你使用的是自定义集群引导解决方案,不需要覆盖默认配置, 可以利用插件 Pod 进一步自动化部署。 @@ -125,25 +120,25 @@ directory `/etc/kubernetes/addons/node-problem-detector` on a control plane node 创建 `node-strick-detector.yaml`,并在控制平面节点上保存配置到插件 Pod 的目录 `/etc/kubernetes/addons/node-problem-detector`。 - ## 覆盖配置文件 构建节点问题检测器的 docker 镜像时,会嵌入 -[默认配置](https://github.com/kubernetes/node-problem-detector/tree/v0.1/config)。 +[默认配置](https://github.com/kubernetes/node-problem-detector/tree/v0.8.12/config)。 - 不过,你可以像下面这样使用 [`ConfigMap`](/zh-cn/docs/tasks/configure-pod-container/configure-pod-configmap/) 将其覆盖: - 1. 更改 `config/` 中的配置文件 1. 创建 `ConfigMap` `node-strick-detector-config`: - + ```shell kubectl create configmap node-problem-detector-config --from-file=config/ ``` 1. 更改 `node-problem-detector.yaml` 以使用 ConfigMap: - + {{< codenew file="debug/node-problem-detector-configmap.yaml" >}} 1. 
使用新的配置文件重新创建节点问题检测器: - ```shell + ```shell # 如果你正在运行节点问题检测器,请先删除,然后再重新创建 kubectl delete -f https://k8s.io/examples/debug/node-problem-detector.yaml kubectl apply -f https://k8s.io/examples/debug/node-problem-detector-configmap.yaml ``` - -## 内核监视器 -*内核监视器(Kernel Monitor)* 是节点问题检测器中支持的系统日志监视器守护进程。 -内核监视器观察内核日志并根据预定义规则检测已知的内核问题。 +## 问题守护程序 - +- `SystemLogMonitor` 类型的守护程序根据预定义的规则监视系统日志并报告问题和指标。 + 你可以针对不同的日志源自定义配置如 +[filelog](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/kernel-monitor-filelog.json)、 +[kmsg](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/kernel-monitor.json)、 +[kernel](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/kernel-monitor-counter.json)、 +[abrt](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/abrt-adaptor.json) +和 [systemd](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/systemd-monitor-counter.json)。 + + -内核监视器根据 [`config/kernel-monitor.json`](https://github.com/kubernetes/node-problem-detector/blob/v0.1/config/kernel-monitor.json) -中的一组预定义规则列表匹配内核问题。 -规则列表是可扩展的,你始终可以通过覆盖配置来扩展它。 - -### 添加新的 NodeCondition -要支持新的 `NodeCondition`,请在 `config/kernel-monitor.json` 中的 -`conditions` 字段中创建一个条件定义: +- `CustomPluginMonitor` 类型的守护程序通过运行用户定义的脚本来调用和检查各种节点问题。 + 你可以使用不同的自定义插件监视器来监视不同的问题,并通过更新 + [配置文件](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/custom-plugin-monitor.json) + 来定制守护程序行为。 -```json -{ - "type": "NodeConditionType", - "reason": "CamelCaseDefaultNodeConditionReason", - "message": "arbitrary default node condition message" -} -``` + +- `HealthChecker` 类型的守护程序检查节点上的 kubelet 和容器运行时的健康状况。 - -### 检测新的问题 -你可以使用新的规则描述来扩展 `config/kernel-monitor.json` 中的 `rules` 字段以检测新问题: +### 增加对其他日志格式的支持 {#support-other-log-format} -```json -{ - "type": "temporary/permanent", - "condition": "NodeConditionOfPermanentIssue", - "reason": "CamelCaseShortReason", - "message": "regexp matching the issue in the kernel log" -} -``` +系统日志监视器目前支持基于文件的日志、journald 和 kmsg。 +可以通过实现一个新的 +[log watcher](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/pkg/systemlogmonitor/logwatchers/types/log_watcher.go) +来添加额外的日志源。 - -### 配置内核日志设备的路径 {#kernel-log-device-path} -检查你的操作系统(OS)发行版本中的内核日志路径位置。 -Linux 内核[日志设备](https://www.kernel.org/doc/documentation/abi/testing/dev-kmsg) -通常呈现为 `/dev/kmsg`。 -但是,日志路径位置因 OS 发行版本而异。 -`config/kernel-monitor.json` 中的 `log` 字段表示容器内的日志路径。 -你可以配置 `log` 字段以匹配节点问题检测器所示的设备路径。 +### 添加自定义插件监视器 - -### 添加对其它日志格式的支持 {#support-other-log-format} -内核监视器使用 -[`Translator`](https://github.com/kubernetes/node-problem-detector/blob/v0.1/pkg/kernelmonitor/translator.go) -插件转换内核日志的内部数据结构。 -你可以为新的日志格式实现新的转换器。 +## 导出器 + +导出器(Exporter)向特定后端报告节点问题和/或指标。 +支持下列导出器: + +- **Kubernetes exporter**: 此导出器向 Kubernetes API 服务器报告节点问题。 + 临时问题报告为事件,永久性问题报告为节点状况。 + +- **Prometheus exporter**: 此导出器在本地将节点问题和指标报告为 Prometheus(或 OpenMetrics)指标。 + 你可以使用命令行参数指定导出器的 IP 地址和端口。 + +- **Stackdriver exporter**: 此导出器向 Stackdriver Monitoring API 报告节点问题和指标。 + 可以使用[配置文件](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/exporter/stackdriver-exporter.json)自定义导出行为。 - ## 建议和限制 diff --git a/content/zh-cn/docs/tasks/debug/debug-cluster/resource-usage-monitoring.md b/content/zh-cn/docs/tasks/debug/debug-cluster/resource-usage-monitoring.md index c27f1339558cc..f79fb0304e03e 100644 --- a/content/zh-cn/docs/tasks/debug/debug-cluster/resource-usage-monitoring.md +++ b/content/zh-cn/docs/tasks/debug/debug-cluster/resource-usage-monitoring.md @@ -1,12 +1,14 @@ --- 
content_type: concept title: 资源监控工具 +weight: 15 --- @@ -24,9 +26,8 @@ where bottlenecks can be removed to improve overall performance. --> 要扩展应用程序并提供可靠的服务,你需要了解应用程序在部署时的行为。 你可以通过检测容器检查 Kubernetes 集群中的应用程序性能, -[Pod](/zh-cn/docs/concepts/workloads/pods), -[服务](/zh-cn/docs/concepts/services-networking/service/) -和整个集群的特征。 +[Pod](/zh-cn/docs/concepts/workloads/pods)、 +[服务](/zh-cn/docs/concepts/services-networking/service/)和整个集群的特征。 Kubernetes 在每个级别上提供有关应用程序资源使用情况的详细信息。 此信息使你可以评估应用程序的性能,以及在何处可以消除瓶颈以提高整体性能。 @@ -37,9 +38,8 @@ In Kubernetes, application monitoring does not depend on a single monitoring sol On new clusters, you can use [resource metrics](#resource-metrics-pipeline) or [full metrics](#full-metrics-pipeline) pipelines to collect monitoring statistics. --> -在 Kubernetes 中,应用程序监控不依赖单个监控解决方案。 -在新集群上,你可以使用[资源度量](#resource-metrics-pipeline)或 -[完整度量](#full-metrics-pipeline)管道来收集监视统计信息。 +在 Kubernetes 中,应用程序监控不依赖单个监控解决方案。在新集群上, +你可以使用[资源度量](#resource-metrics-pipeline)或[完整度量](#full-metrics-pipeline)管道来收集监视统计信息。 -## {{% heading "接下来" %}} - 了解其他调试工具,包括: * [日志记录](/zh-cn/docs/concepts/cluster-administration/logging/) diff --git a/content/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md b/content/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md index 0e5b7a0ed0ebf..ea2110b88322f 100644 --- a/content/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md +++ b/content/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md @@ -73,8 +73,7 @@ Adding a new version: 1. Pick a conversion strategy. Since custom resource objects need the ability to be served at both versions, that means they will sometimes be served in a - different version than the one stored. To make this possible, the custom - resource objects must sometimes be converted between the + different version than the one stored. To make this possible, the custom resource objects must sometimes be converted between the version they are stored at and the version they are served at. If the conversion involves schema changes and requires custom logic, a conversion webhook should be used. If there are no schema changes, the default `None` @@ -132,11 +131,11 @@ Removing an old version: 1. Set `served` to `false` for the old version in the `spec.versions` list. If any clients are still unexpectedly using the old version they may begin reporting errors attempting to access the custom resource objects at the old version. - If this occurs, switch back to using `served:true` on the old version, migrate the + If this occurs, switch back to using `served:true` on the old version, migrate the remaining clients to the new version and repeat this step. 1. Ensure the [upgrade of existing objects to the new stored version](#upgrade-existing-objects-to-a-new-stored-version) step has been completed. - 1. Verify that the `storage` is set to `true` for the new version in the `spec.versions` list in the CustomResourceDefinition. - 1. Verify that the old version is no longer listed in the CustomResourceDefinition `status.storedVersions`. + 1. Verify that the `storage` is set to `true` for the new version in the `spec.versions` list in the CustomResourceDefinition. + 1. Verify that the old version is no longer listed in the CustomResourceDefinition `status.storedVersions`. 1. Remove the old version from the CustomResourceDefinition `spec.versions` list. 1. 
Drop conversion support for the old version in conversion webhooks. --> @@ -499,11 +498,11 @@ spec: ### 版本删除 {#version-removal} -在为所有提供旧版本自定义资源的集群将现有数据迁移到新 API 版本,并且从 CustomResourceDefinition 的 +在为所有提供旧版本自定义资源的集群将现有存储数据迁移到新 API 版本,并且从 CustomResourceDefinition 的 `status.storedVersions` 中删除旧版本之前,无法从 CustomResourceDefinition 清单文件中删除旧 API 版本。 ```yaml @@ -532,9 +531,6 @@ spec: ## Webhook 转换 {#webhook-conversion} @@ -627,7 +623,7 @@ how to [authenticate API servers](/docs/reference/access-authn-authz/extensible- A conversion webhook must not mutate anything inside of `metadata` of the converted object other than `labels` and `annotations`. Attempted changes to `name`, `UID` and `namespace` are rejected and fail the request -which caused the conversion. All other changes are ignored. +which caused the conversion. All other changes are ignored. --> #### 被允许的变更 @@ -639,8 +635,10 @@ which caused the conversion. All other changes are ignored. ### 部署转换 Webhook 服务 {#deploy-the-conversion-webhook-service} @@ -842,7 +840,7 @@ API 服务器一旦确定请求应发送到转换 Webhook,它需要知道如 创建 apiextensions.k8s.io/v1beta1 定制资源定义时若未指定 @@ -1298,7 +1296,7 @@ If conversion fails, a webhook should return a `response` stanza containing the {{< warning >}} @@ -1352,29 +1350,50 @@ Example of a response from a webhook indicating a conversion request failed, wit ## 编写、读取和更新版本化的 CustomResourceDefinition 对象 {#write-read-and-update-versioned-crd-objects} -写入对象时,将使用写入时指定的存储版本来存储。如果存储版本发生变化, +写入对象时,将存储为写入时指定的存储版本。如果存储版本发生变化, 现有对象永远不会被自动转换。然而,新创建或被更新的对象将以新的存储版本写入。 对象写入的版本不再被支持是有可能的。 -当读取对象时,作为路径的一部分,你需要指定版本。 -如果所指定的版本与对象的持久版本不同,Kubernetes 会按所请求的版本将对象返回, -但是在满足服务请求时,被持久化的对象既不会在磁盘上更改, -也不会以任何方式进行转换(除了 `apiVersion` 字符串被更改之外)。 -你可以以当前提供的任何版本来请求对象。 +当读取对象时,你需要在路径中指定版本。 +你可以请求当前提供的任意版本的对象。 +如果所指定的版本与对象的存储版本不同,Kubernetes 会按所请求的版本将对象返回, +但磁盘上存储的对象不会更改。 + + +在为读取请求提供服务时正返回的对象会发生什么取决于 CRD 的 `spec.conversion` 中指定的内容: + + +- 如果所指定的 `strategy` 值是默认的 `None`,则针对对象的唯一修改是更改其 `apiVersion` 字符串, + 并且可能[修剪未知字段](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#field-pruning)(取决于配置)。 + 请注意,如果存储和请求版本之间的模式不同,这不太可能导致好的结果。 + 尤其是如果在相同的数据类不同版本中采用不同字段来表示时,不应使用此策略。 +- 如果指定了 [Webhook 转换](#webhook-conversion),则此机制将控制转换。 -1. 存储版本是 `v1beta1`。你创建一个对象。该对象以版本 `v1beta1` 存储。 -2. 你将为 CustomResourceDefinition 添加版本 `v1`,并将其指定为存储版本。 -3. 你使用版本 `v1beta1` 来读取你的对象,然后你再次用版本 `v1` 读取对象。 - 除了 apiVersion 字段之外,返回的两个对象是完全相同的。 -4. 你创建一个新对象。对象以版本 `v1` 保存在存储中。 - 你现在有两个对象,其中一个是 `v1beta1`,另一个是 `v1`。 -5. 你更新第一个对象。该对象现在以版本 `v1` 保存,因为 `v1` 是当前的存储版本。 +1. The storage version is `v1beta1`. You create an object. It is stored at version `v1beta1` +2. You add version `v1` to your CustomResourceDefinition and designate it as + the storage version. Here the schemas for `v1` and `v1beta1` are identical, + which is typically the case when promoting an API to stable in the + Kubernetes ecosystem. +3. You read your object at version `v1beta1`, then you read the object again at + version `v1`. Both returned objects are identical except for the apiVersion + field. +4. You create a new object. It is stored at version `v1`. You now + have two objects, one of which is at `v1beta1`, and the other of which is at + `v1`. +5. You update the first object. It is now stored at version `v1` since that + is the current storage version. +--> +1. 存储版本是 `v1beta1`。你创建一个对象。该对象以版本 `v1beta1` 存储。 +2. 你将为 CustomResourceDefinition 添加版本 `v1`,并将其指定为存储版本。 + 此处 `v1` 和 `v1beta1` 的模式是相同的,这通常是在 Kubernetes 生态系统中将 API 提升为稳定版时的情况。 +3. 你使用版本 `v1beta1` 来读取你的对象,然后你再次用版本 `v1` 读取对象。 + 除了 apiVersion 字段之外,返回的两个对象是完全相同的。 +4. 
你创建一个新对象。该对象存储为版本 `v1`。 + 你现在有两个对象,其中一个是 `v1beta1`,另一个是 `v1`。 +5. 你更新第一个对象。该对象现在以版本 `v1` 保存,因为 `v1` 是当前的存储版本。 @@ -1440,8 +1461,8 @@ procedure. **选项 1:** 使用存储版本迁移程序(Storage Version Migrator) @@ -1459,18 +1480,18 @@ The following is an example procedure to upgrade from `v1beta1` to `v1`. 以下是从 `v1beta1` 升级到 `v1` 的示例过程。 -1. 在 CustomResourceDefinition 文件中将 `v1` 设置为存储版本,并使用 kubectl 应用它。 - `storedVersions`现在是`v1beta1, v1`。 -2. 编写升级过程以列出所有现有对象并使用相同内容将其写回存储。 - 这会强制后端使用当前存储版本(即 `v1`)写入对象。 -3. 从 CustomResourceDefinition 的 `status.storedVersions` 字段中删除 `v1beta1`。 +1. 在 CustomResourceDefinition 文件中将 `v1` 设置为存储版本,并使用 kubectl 应用它。 + `storedVersions`现在是 `v1beta1, v1`。 +2. 编写升级过程以列出所有现有对象并使用相同内容将其写回存储。 + 这会强制后端使用当前存储版本(即 `v1`)写入对象。 +3. 从 CustomResourceDefinition 的 `status.storedVersions` 字段中删除 `v1beta1`。 {{< note >}} -### 设置默认值 {#efaulting} +### 设置默认值 {#defaulting} {{< note >}} -你可以利用 {{< glossary_tooltip text="CronJob" term_id="cronjob" >}} -执行基于时间调度的 {{< glossary_tooltip text="Job" term_id="job" >}}。 -这些自动化任务和 Linux 或者 Unix 系统的 [Cron](https://zh.wikipedia.org/wiki/Cron) 任务类似。 - -CronJob 在创建周期性以及重复性的任务时很有帮助,例如执行备份操作或者发送邮件。 -CronJob 也可以在特定时间调度单个任务,例如你想调度低活跃周期的任务。 - - -CronJob 有一些限制和特点。 -例如,在特定状况下,同一个 CronJob 可以创建多个任务。 -因此,任务应该是幂等的。 - -有关更多限制,请参考 [CronJob](/zh-cn/docs/concepts/workloads/controllers/cron-jobs)。 +本页演示如何使用 Kubernetes {{< glossary_tooltip text="CronJob" term_id="cronjob" >}} +对象运行自动化任务。 ## {{% heading "prerequisites" %}} @@ -49,12 +28,12 @@ CronJob 有一些限制和特点。 -## 创建 CronJob {#creating-a-cronjob} +## 创建 CronJob {#creating-a-cron-job} CronJob 需要一个配置文件。 以下是针对一个 CronJob 的清单,该 CronJob 每分钟运行一个简单的演示任务: @@ -201,182 +180,3 @@ You can read more about removing jobs in [garbage collection](/docs/concepts/arc --> 删除 CronJob 会清除它创建的所有任务和 Pod,并阻止它创建额外的任务。 你可以查阅[垃圾收集](/zh-cn/docs/concepts/architecture/garbage-collection/)。 - - -## 编写 CronJob 声明信息 {#writing-a-cronjob-spec} - -像 Kubernetes 的其他对象一样,CronJob 需要 `apiVersion`、`kind` 和 `metadata` 字段。 -有关 Kubernetes 对象及它们的{{< glossary_tooltip text="清单" term_id="manifest" >}}的更多信息, -请参考[资源管理](/zh-cn/docs/concepts/cluster-administration/manage-deployment/)和 -[使用 kubectl 管理资源](/zh-cn/docs/concepts/overview/working-with-objects/object-management/)文档。 - -CronJob 配置也需要包括 -[`.spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) 部分。 - -{{< note >}} - -如果你修改了一个 CronJob,你所做的修改将只被应用到将来所运行的任务上, -对当前 CronJob 内处于运行中的 Job 集合(和 Job 里面的 Pod)不会产生任何变化,它们将继续运行。 -也就是说,对 CronJob 的修改不更新现有的任务,即使这些任务处于运行状态。 -{{< /note >}} - - -### 排期表 {#schedule} - -`.spec.schedule` 是 `.spec` 中的必需字段。它接受 [Cron](https://zh.wikipedia.org/wiki/Cron) -格式串,例如 `0 * * * *` or `@hourly`,作为它的任务被创建和执行的调度时间。 - - -该格式也包含了扩展的 “Vixie cron” 步长值。 -[FreeBSD 手册](https://www.freebsd.org/cgi/man.cgi?crontab%285%29)中解释如下: - - - -> 步长可被用于范围组合。范围后面带有 `/<数字>` 可以声明范围内的步幅数值。 -> 例如,`0-23/2` 可被用在小时字段来声明命令在其他数值的小时数执行 -> (V7 标准中对应的方法是 `0,2,4,6,8,10,12,14,16,18,20,22`)。 -> 步长也可以放在通配符后面,因此如果你想表达 “每两小时”,就用 `*/2` 。 - - -{{< note >}} -调度中的问号 (`?`) 和星号 `*` 含义相同,它们用来表示给定字段的任何可用值。 -{{< /note >}} - - -### 任务模板 {#job-template} - -`.spec.jobTemplate` 是任务的模板,它是必需的。它和 -[Job](/zh-cn/docs/concepts/workloads/controllers/job/) 的语法完全一样, -只不过它是嵌套的,没有 `apiVersion` 和 `kind`。 -有关如何编写一个任务的 `.spec`, -请参考[编写 Job 规约](/zh-cn/docs/concepts/workloads/controllers/job/#writing-a-job-spec)。 - - -### 开始的最后期限 {#starting-deadline} - -`.spec.startingDeadlineSeconds` 字段是可选的。 -它表示任务如果由于某种原因错过了调度时间,开始该任务的截止时间的秒数。 -过了截止时间,CronJob 就不会开始任务。 -不满足这种最后期限的任务会被统计为失败任务。如果此字段未设置,那任务就没有最后期限。 - - -如果 `.spec.startingDeadlineSeconds` 
字段被设置(非空), -CronJob 控制器将会计算从预期创建 Job 到当前时间的时间差。 -如果时间差大于该限制,则跳过此次执行。 - -例如,如果将其设置为 `200`,则 Job 控制器允许在实际调度之后最多 200 秒内创建 Job。 - - -### 并发性规则 {#concurrency-policy} - -`.spec.concurrencyPolicy` 也是可选的。它声明了 CronJob 创建的任务执行时发生重叠如何处理。 -spec 仅能声明下列规则中的一种: - -* `Allow`(默认):CronJob 允许并发任务执行。 -* `Forbid`: CronJob 不允许并发任务执行;如果新任务的执行时间到了而老任务没有执行完,CronJob 会忽略新任务的执行。 -* `Replace`:如果新任务的执行时间到了而老任务没有执行完,CronJob 会用新任务替换当前正在运行的任务。 - -请注意,并发性规则仅适用于相同 CronJob 创建的任务。如果有多个 CronJob,它们相应的任务总是允许并发执行的。 - - -### 挂起 {#suspend} - -`.spec.suspend` 字段也是可选的。如果设置为 `true` ,后续发生的执行都会被挂起。 -这个设置对已经开始的执行不起作用。默认是 `false`。 - - -{{< caution >}} -在调度时间内挂起的执行都会被统计为错过的任务。当 `.spec.suspend` 从 `true` 改为 `false` 时, -且没有[开始的最后期限](#starting-deadline),错过的任务会被立即调度。 -{{< /caution >}} - - -### 任务历史限制 {#jobs-history-limits} - -`.spec.successfulJobsHistoryLimit` 和 `.spec.failedJobsHistoryLimit` 是可选的。 -这两个字段指定应保留多少已完成和失败的任务。 -默认设置分别为 3 和 1。设置为 `0` 代表相应类型的任务完成后不会保留。 diff --git a/content/zh-cn/docs/tasks/job/pod-failure-policy.md b/content/zh-cn/docs/tasks/job/pod-failure-policy.md index 9cf1b4e9d4a3e..ead751513176e 100644 --- a/content/zh-cn/docs/tasks/job/pod-failure-policy.md +++ b/content/zh-cn/docs/tasks/job/pod-failure-policy.md @@ -11,7 +11,7 @@ min-kubernetes-server-version: v1.25 weight: 60 --> -{{< feature-state for_k8s_version="v1.25" state="alpha" >}} +{{< feature-state for_k8s_version="v1.26" state="beta" >}} @@ -49,19 +49,6 @@ You should already be familiar with the basic use of [Job](/docs/concepts/worklo {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - -{{< note >}} - -因为这些特性还处于 Alpha 阶段,所以在准备 Kubernetes -集群时要启用两个[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/): -`JobPodFailurePolicy` 和 `PodDisruptionConditions`。 -{{< /note >}} - -{{< feature-state state="beta" for_k8s_version="v1.10" >}} +{{< feature-state state="stable" for_k8s_version="v1.26" >}} -Kubernetes 支持对若干节点上的 GPU(图形处理单元)进行管理,目前处于**实验**状态。 +Kubernetes 支持使用{{< glossary_tooltip text="设备插件" term_id="device-plugin" >}}来跨集群中的不同节点管理 +AMD 和 NVIDIA GPU(图形处理单元),目前处于**稳定**状态。 本页介绍用户如何使用 GPU 以及当前存在的一些限制。 @@ -31,23 +33,21 @@ Kubernetes 支持对若干节点上的 GPU(图形处理单元)进行管理 ## 使用设备插件 {#using-device-plugins} -Kubernetes 实现了{{< glossary_tooltip text="设备插件(Device Plugin)" term_id="device-plugin" >}} -以允许 Pod 访问类似 GPU 这类特殊的硬件功能特性。 +Kubernetes 实现了设备插件(Device Plugin),让 Pod 可以访问类似 GPU 这类特殊的硬件功能特性。 {{% thirdparty-content %}} 作为集群管理员,你要在节点上安装来自对应硬件厂商的 GPU 驱动程序,并运行来自 -GPU 厂商的对应设备插件。 +GPU 厂商的对应设备插件。以下是一些厂商说明的链接: * [AMD](https://github.com/RadeonOpenCompute/k8s-device-plugin#deployment) * [Intel](https://intel.github.io/intel-device-plugins-for-kubernetes/cmd/gpu_plugin/README.html) diff --git a/content/zh-cn/docs/tasks/manage-kubernetes-objects/declarative-config.md b/content/zh-cn/docs/tasks/manage-kubernetes-objects/declarative-config.md index 517442cfb2765..b0a3563c541a6 100644 --- a/content/zh-cn/docs/tasks/manage-kubernetes-objects/declarative-config.md +++ b/content/zh-cn/docs/tasks/manage-kubernetes-objects/declarative-config.md @@ -538,7 +538,7 @@ kubectl delete -f <文件名> ``` @@ -547,7 +547,7 @@ Only use this if you know what you are doing. 只有在充分理解此命令背后含义的情况下才建议这样操作。 {{< warning >}} @@ -556,12 +556,16 @@ changes might be introduced in subsequent releases. {{< /warning >}} {{< warning >}} + 在使用此命令时必须小心,这样才不会无意中删除不想删除的对象。 {{< /warning >}} 1. 将现时对象导出到本地配置文件: @@ -1467,23 +1484,23 @@ configuration involves several manual steps: kubectl get / -o yaml > _.yaml ``` -1. 手动移除配置文件中的 `status` 字段。 +2. 
手动移除配置文件中的 `status` 字段。 - {{< note >}} 这一步骤是可选的,因为 `kubectl apply` 并不会更新 status 字段,即便 配置文件中包含 status 字段。 {{< /note >}} -1. 设置对象上的 `kubectl.kubernetes.io/last-applied-configuration` 注解: +3. 设置对象上的 `kubectl.kubernetes.io/last-applied-configuration` 注解: ```shell kubectl replace --save-config -f _.yaml ``` -1. 更改过程,使用 `kubectl apply` 专门管理对象。 +4. 更改过程,使用 `kubectl apply` 专门管理对象。 + +{{< comment >}} +TODO(pwittrock): Why doesn't export remove the status field? Seems like it should. +{{< /comment >}} 推荐的方法是定义单个不变的 PodTemplate 标签,该标签仅由控制器选择器使用,而没有其他语义。 - + 标签示例: ```yaml @@ -270,7 +272,7 @@ template: * [使用命令式命令管理 Kubernetes 对象](/zh-cn/docs/tasks/manage-kubernetes-objects/imperative-command/) diff --git a/content/zh-cn/docs/tasks/manage-kubernetes-objects/kustomization.md b/content/zh-cn/docs/tasks/manage-kubernetes-objects/kustomization.md index d5fa3f428c606..1f58cb6dd5284 100644 --- a/content/zh-cn/docs/tasks/manage-kubernetes-objects/kustomization.md +++ b/content/zh-cn/docs/tasks/manage-kubernetes-objects/kustomization.md @@ -32,7 +32,7 @@ kubectl kustomize ``` 要应用这些资源,使用 `--kustomize` 或 `-k` 参数来执行 `kubectl apply`: @@ -130,8 +130,7 @@ metadata: ``` 要从 env 文件生成 ConfigMap,请在 `configMapGenerator` 中的 `envs` 列表中添加一个条目。 下面是一个用来自 `.env` 文件的数据生成 ConfigMap 的例子: @@ -1166,9 +1165,7 @@ Run the following command to apply the Deployment object `dev-my-nginx`: 执行下面的命令来应用 Deployment 对象 `dev-my-nginx`: ```shell -kubectl apply -k ./ -``` -``` +> kubectl apply -k ./ deployment.apps/dev-my-nginx created ``` @@ -1200,9 +1197,7 @@ Run the following command to delete the Deployment object `dev-my-nginx`: 执行下面的命令删除 Deployment 对象 `dev-my-nginx`: ```shell -kubectl delete -k ./ -``` -``` +> kubectl delete -k ./ deployment.apps "dev-my-nginx" deleted ``` diff --git a/content/zh-cn/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md b/content/zh-cn/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md index 93108bcc07cae..68ca773fce205 100644 --- a/content/zh-cn/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md +++ b/content/zh-cn/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md @@ -492,7 +492,7 @@ Patch your Deployment again with this new patch: 使用新的 patch 重新修补 Deployment: ```shell -kubectl patch deployment retainkeys-demo --type merge --patch-file patch-file-retainkeys.yaml +kubectl patch deployment retainkeys-demo --type strategic --patch-file patch-file-retainkeys.yaml ``` 当 DNS 配置以及其它选项不合理的时候,通过向 Pod 的 `/etc/hosts` 文件中添加条目, 可以在 Pod 级别覆盖对主机名的解析。你可以通过 PodSpec 的 HostAliases @@ -32,7 +32,7 @@ Modification not using HostAliases is not suggested because the file is managed @@ -89,19 +89,19 @@ By default, the `hosts` file only includes IPv4 and IPv6 boilerplates like 默认情况下,`hosts` 文件只包含 IPv4 和 IPv6 的样板内容,像 `localhost` 和主机名称。 ## 通过 HostAliases 增加额外条目 -除了默认的样板内容,我们可以向 `hosts` 文件添加额外的条目。 +除了默认的样板内容,你可以向 `hosts` 文件添加额外的条目。 例如,要将 `foo.local`、`bar.local` 解析为 `127.0.0.1`, -将 `foo.remote`、 `bar.remote` 解析为 `10.1.2.3`,我们可以在 +将 `foo.remote`、 `bar.remote` 解析为 `10.1.2.3`,你可以在 `.spec.hostAliases` 下为 Pod 配置 HostAliases。 {{< codenew file="service/networking/hostaliases-pod.yaml" >}} @@ -158,7 +158,7 @@ fe00::2 ip6-allrouters ``` 在最下面额外添加了一些条目。 diff --git a/content/zh-cn/docs/tasks/run-application/access-api-from-pod.md b/content/zh-cn/docs/tasks/run-application/access-api-from-pod.md index aa1809c672994..bf6f6fc2a0b59 100644 --- a/content/zh-cn/docs/tasks/run-application/access-api-from-pod.md +++ 
b/content/zh-cn/docs/tasks/run-application/access-api-from-pod.md @@ -80,16 +80,30 @@ securely with the API server. #### 直接访问 REST API {#directly-accessing-the-rest-api} -在运行在 Pod 中时,可以通过 `default` 命名空间中的名为 `kubernetes` 的服务访问 -Kubernetes API 服务器。也就是说,Pod 可以使用 `kubernetes.default.svc` 主机名 -来查询 API 服务器。官方客户端库自动完成这个工作。 +在运行在 Pod 中时,你的容器可以通过获取 `KUBERNETES_SERVICE_HOST` 和 +`KUBERNETES_SERVICE_PORT_HTTPS` 环境变量为 Kubernetes API +服务器生成一个 HTTPS URL。 +API 服务器的集群内地址也发布到 `default` 命名空间中名为 `kubernetes` 的 Service 中, +从而 Pod 可以引用 `kubernetes.default.svc` 作为本地 API 服务器的 DNS 名称。 + +{{< note >}} + +Kubernetes 不保证 API 服务器具有主机名 `kubernetes.default.svc` 的有效证书; +但是,控制平面应该为 `$KUBERNETES_SERVICE_HOST` 代表的主机名或 IP 地址提供有效证书。 +{{< /note >}} 示例 1:设置 `minAvailable` 值为 5 的情况下,驱逐时需保证 PodDisruptionBudget 的 `selector` -选中的 Pod 中 5 个或 5 个以上处于健康状态。 +选中的 Pod 中 5 个或 5 个以上处于[健康](#healthiness-of-a-pod)状态。 +### Pod 的健康 {#healthiness-of-a-pod} + +如果 Pod 的 `.status.conditions` 中包含 `type="Ready"` 和 `status="True"` 的项, +则当前实现将其视为健康的 Pod。这些 Pod 通过 PDB 状态中的 `.status.currentHealthy` 字段被跟踪。 + + +## 不健康的 Pod 驱逐策略 {#unhealthy-pod-eviction-policy} + +{{< feature-state for_k8s_version="v1.26" state="alpha" >}} + +{{< note >}} + +为了使用此行为,你必须在 +[API 服务器](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/)上启用 +`PDBUnhealthyPodEvictionPolicy` +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。 +{{< /note >}} + + +守护应用程序的 PodDisruptionBudget 通过不允许驱逐健康的 Pod 来确保 `.status.currentHealthy` 的 Pod +数量不低于 `.status.desiredHealthy` 中指定的数量。通过使用 `.spec.unhealthyPodEvictionPolicy`, +你还可以定义条件来判定何时应考虑驱逐不健康的 Pod。未指定策略时的默认行为对应于 `IfHealthyBudget` 策略。 + + +策略包含: + + +`IfHealthyBudget` +: 对于运行中但还不健康的 Pod(`.status.phase="Running"`),只有所守护的应用程序不受干扰 + (`.status.currentHealthy` 至少等于 `.status.desiredHealthy`)时才能被驱逐。 + +: 此策略确保已受干扰的应用程序所运行的 Pod 会尽可能成为健康。 + 这对排空节点有负面影响,可能会因 PDB 守护的应用程序行为错误而阻止排空。 + 更具体地说,这些应用程序的 Pod 处于 `CrashLoopBackOff` 状态 + (由于漏洞或错误配置)或其 Pod 只是未能报告 `Ready` 状况。 + + +`AlwaysAllow` +: 运行中但还不健康的 Pod(`.status.phase="Running"`)将被视为已受干扰且可以被驱逐, + 与是否满足 PDB 中的判决条件无关。 + +: 这意味着受干扰的应用程序所运行的 Pod 可能没有机会恢复健康。 + 通过使用此策略,集群管理器可以轻松驱逐由 PDB 所守护的行为错误的应用程序。 + 更具体地说,这些应用程序的 Pod 处于 `CrashLoopBackOff` 状态 + (由于漏洞或错误配置)或其 Pod 只是未能报告 `Ready` 状况。 + +{{< note >}} + +处于`Pending`、`Succeeded` 或 `Failed` 阶段的 Pod 总是被考虑驱逐。 +{{< /note >}} + -本任务展示如何删除 StatefulSet。 +本任务展示如何删除 {{< glossary_tooltip text="StatefulSet" term_id="StatefulSet" >}}。 ## {{% heading "prerequisites" %}} @@ -92,13 +92,12 @@ kubectl delete pods -l app.kubernetes.io/name=MyApp ### 持久卷 {#persistent-volumes} 删除 StatefulSet 管理的 Pod 并不会删除关联的卷。这是为了确保你有机会在删除卷之前从卷中复制数据。 -在 Pod 离开[终止状态](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) -后删除 PVC 可能会触发删除背后的 PV 持久卷,具体取决于存储类和回收策略。 +在 Pod 已经终止后删除 PVC 可能会触发删除背后的 PV 持久卷,具体取决于存储类和回收策略。 永远不要假定在 PVC 删除后仍然能够访问卷。 ### 完全删除 StatefulSet {#complete-deletion-of-a-statefulset} -要删除 StatefulSet 中的所有内容,包括关联的 pods,你可以运行 +要删除 StatefulSet 中的所有内容,包括关联的 Pod,你可以运行 一系列如下所示的命令: ```shell @@ -133,20 +132,20 @@ In the example above, the Pods have the label `app.kubernetes.io/name=MyApp`; su ### 强制删除 StatefulSet 的 Pod 如果你发现 StatefulSet 的某些 Pod 长时间处于 'Terminating' 或者 'Unknown' 状态, 则可能需要手动干预以强制从 API 服务器中删除这些 Pod。 这是一项有点危险的任务。详细信息请阅读 -[删除 StatefulSet 类型的 Pods](/zh-cn/docs/tasks/run-application/force-delete-stateful-set-pod/)。 +[强制删除 StatefulSet 的 Pod](/zh-cn/docs/tasks/run-application/force-delete-stateful-set-pod/)。 ## {{% heading "whatsnext" %}} -进一步了解[强制删除 StatefulSet 的 Pods](/zh-cn/docs/tasks/run-application/force-delete-stateful-set-pod/)。 +进一步了解[强制删除 
StatefulSet 的 Pod](/zh-cn/docs/tasks/run-application/force-delete-stateful-set-pod/)。 diff --git a/content/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index c0a96f51d65f4..1b04454774b2f 100644 --- a/content/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/content/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -14,6 +14,7 @@ reviewers: title: Horizontal Pod Autoscaler Walkthrough content_type: task weight: 100 +min-kubernetes-server-version: 1.23 --> @@ -61,8 +62,7 @@ HorizontalPodAutoscaler 会指示工作负载资源(Deployment、StatefulSet {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} 如果你运行的是旧版本的 Kubernetes,请参阅该版本的文档版本 @@ -159,7 +159,9 @@ Deployment 然后更新 ReplicaSet —— 这是所有 Deployment 在 Kubernetes 请参阅[算法详细信息](/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details)。 - + 创建 HorizontalPodAutoscaler: ```shell @@ -181,7 +183,9 @@ You can check the current status of the newly-made HorizontalPodAutoscaler, by r kubectl get hpa ``` - + 输出类似于: ``` @@ -258,7 +262,7 @@ php-apache Deployment/php-apache/scale 305% / 50% 1 10 7 这时,由于请求增多,CPU 利用率已经升至请求值的 305%。 可以看到,Deployment 的副本数量已经增长到了 7: @@ -319,7 +323,9 @@ NAME REFERENCE TARGET MINPODS MAXPODS REPL php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 11m ``` - + Deployment 也显示它已经缩小了: ```shell diff --git a/content/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale.md index b0aa334b91a44..2f2695404f80b 100644 --- a/content/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -142,8 +142,9 @@ or the custom metrics API (for all other metrics). * For per-pod resource metrics (like CPU), the controller fetches the metrics from the resource metrics API for each Pod targeted by the HorizontalPodAutoscaler. Then, if a target utilization value is set, the controller calculates the utilization - value as a percentage of the equivalent [resource request](/docs/concepts/configuration/manage-resources-containers/#requests-and-limits) on the containers in - each Pod. If a target raw value is set, the raw metric values are used directly. + value as a percentage of the equivalent + [resource request](/docs/concepts/configuration/manage-resources-containers/#requests-and-limits) + on the containers in each Pod. If a target raw value is set, the raw metric values are used directly. The controller then takes the mean of the utilization or the raw value (depending on the type of target specified) across all targeted Pods, and produces a ratio used to scale the number of desired replicas. @@ -157,9 +158,8 @@ or the custom metrics API (for all other metrics). 需要注意的是,如果 Pod 某些容器不支持资源采集,那么控制器将不会使用该 Pod 的 CPU 使用率。 下面的[算法细节](#algorithm-details)章节将会介绍详细的算法。 @@ -173,13 +173,13 @@ or the custom metrics API (for all other metrics). * 如果 Pod 使用对象指标和外部指标(每个指标描述一个对象信息)。 这个指标将直接根据目标设定值相比较,并生成一个上面提到的扩缩比例。 - 在 `autoscaling/v2beta2` 版本 API 中,这个指标也可以根据 Pod 数量平分后再计算。 + 在 `autoscaling/v2` 版本 API 中,这个指标也可以根据 Pod 数量平分后再计算。 @@ -274,8 +274,8 @@ with missing metrics will be used to adjust the final scaling amount. 当使用 CPU 指标来扩缩时,任何还未就绪(还在初始化,或者可能是不健康的)状态的 Pod **或** 最近的指标度量值采集于就绪状态前的 Pod,该 Pod 也会被搁置。 @@ -489,7 +489,7 @@ pod usage is still within acceptable limits. 
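As a concrete illustration of the utilization-based calculation described in this section, here is a minimal sketch of creating and inspecting a CPU-based HorizontalPodAutoscaler. It reuses the `php-apache` Deployment name from the walkthrough above purely as an example; any Deployment whose containers set CPU requests would behave the same way.

```shell
# Create an HPA that targets 50% average CPU utilization of the php-apache Pods,
# scaling the Deployment between 1 and 10 replicas.
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

# Inspect the observed vs. target metric value and the current replica count.
kubectl get hpa php-apache
```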
{{< /note >}} ### 容器资源指标 {#container-resource-metrics} @@ -564,6 +564,8 @@ the old container name from the HPA specification. 关于指标来源以及其区别的更多信息,请参阅相关的设计文档, -[HPA V2](https://github.com/kubernetes/design-proposals-archive/blob/main/autoscaling/hpa-v2.md), -[custom.metrics.k8s.io](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/custom-metrics-api.md) 和 -[external.metrics.k8s.io](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/external-metrics-api.md)。 +[HPA V2](https://git.k8s.io/design-proposals-archive/autoscaling/hpa-v2.md), +[custom.metrics.k8s.io](https://git.k8s.io/design-proposals-archive/instrumentation/custom-metrics-api.md) 和 +[external.metrics.k8s.io](https://git.k8s.io/design-proposals-archive/instrumentation/external-metrics-api.md)。 {{< note >}} -**这不是生产环境下配置**。 -尤其注意,MySQL 设置都使用的是不安全的默认值,这是因为我们想把重点放在 Kubernetes +**这一配置不适合生产环境。** +MySQL 设置都使用的是不安全的默认值,这是因为我们想把重点放在 Kubernetes 中运行有状态应用程序的一般模式上。 {{< /note >}} @@ -110,7 +109,7 @@ kubectl apply -f https://k8s.io/examples/application/mysql/mysql-configmap.yaml This ConfigMap provides `my.cnf` overrides that let you independently control configuration on the primary MySQL server and its replicas. In this case, you want the primary server to be able to serve replication logs to replicas -and you want repicas to reject any writes that don't come via replication. +and you want replicas to reject any writes that don't come via replication. --> 这个 ConfigMap 提供 `my.cnf` 覆盖设置,使你可以独立控制 MySQL 主服务器和副本服务器的配置。 在这里,你希望主服务器能够将复制日志提供给副本服务器, @@ -217,7 +216,7 @@ Press **Ctrl+C** to cancel the watch. {{< note >}} 如果你看不到任何进度,确保已启用[前提条件](#准备开始) 中提到的动态 PersistentVolume 制备程序。 @@ -359,7 +358,7 @@ MySQL 本身不提供执行此操作的机制,因此本示例使用了一种 After the init containers complete successfully, the regular containers run. The MySQL Pods consist of a `mysql` container that runs the actual `mysqld` server, and an `xtrabackup` container that acts as a -[sidecar](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns). +[sidecar](/blog/2015/06/the-distributed-system-toolkit-patterns). 
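If you want to see what the `mysql` and `xtrabackup` containers described above are doing on a particular replica, a quick check is to read each container's logs. This sketch assumes the StatefulSet from this tutorial (named `mysql`), whose Pods follow the `<statefulset-name>-<ordinal>` naming pattern:

```shell
# Logs of the xtrabackup sidecar on the second replica (Pod mysql-1).
kubectl logs mysql-1 -c xtrabackup

# Logs of the actual MySQL server running in the same Pod.
kubectl logs mysql-1 -c mysql
```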
--> ### 开始复制 @@ -701,13 +700,14 @@ kubectl uncordon <节点名称> ## 扩展副本节点数量 使用 MySQL 复制时,你可以通过添加副本节点来扩展读取查询的能力。 -使用 StatefulSet,你可以使用单个命令执行此操作: +对于 StatefulSet,你可以使用单个命令实现此目的: ```shell kubectl scale statefulset mysql --replicas=5 diff --git a/content/zh-cn/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md b/content/zh-cn/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md index ab358943b01f3..dd1a25823a78d 100644 --- a/content/zh-cn/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md +++ b/content/zh-cn/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md @@ -2,12 +2,20 @@ title: "macOS 系统上的 bash 自动补全" description: "在 macOS 上实现 Bash 自动补全的一些可选配置。" headless: true +_build: + list: never + render: never + publishResources: false --- +_build: + list: never + render: never + publishResources: false +--> kubectl 的 Bash 补全脚本可以通过 `kubectl completion bash` 命令生成。 -在你的 shell 中导入(Sourcing)这个脚本即可启用补全功能。 +在你的 Shell 中导入(Sourcing)这个脚本即可启用补全功能。 此外,kubectl 补全脚本依赖于工具 [**bash-completion**](https://github.com/scop/bash-completion), 所以你必须先安装它。 @@ -29,9 +37,9 @@ kubectl 的 Bash 补全脚本可以通过 `kubectl completion bash` 命令生成 -bash-completion 有两个版本:v1 和 v2。v1 对应 Bash3.2(也是 macOS 的默认安装版本),v2 对应 Bash 4.1+。 +bash-completion 有两个版本:v1 和 v2。v1 对应 Bash 3.2(也是 macOS 的默认安装版本),v2 对应 Bash 4.1+。 kubectl 的补全脚本**无法适配** bash-completion v1 和 Bash 3.2。 -必须为它配备 **bash-completion v2** 和 **Bash 4.1+**。 +必须为它配备 **bash-completion v2** 和 **Bash 4.1+**。 有鉴于此,为了在 macOS 上使用 kubectl 补全功能,你必须要安装和使用 Bash 4.1+ ([**说明**](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba))。 后续说明假定你用的是 Bash 4.1+(也就是 Bash 4.1 或更新的版本)。 @@ -63,7 +71,7 @@ brew install bash -重新加载 shell,并验证所需的版本已经生效: +重新加载 Shell,并验证所需的版本已经生效: ```bash echo $BASH_VERSION $SHELL @@ -79,7 +87,6 @@ Homebrew 通常把它安装为 `/usr/local/bin/bash`。 --> ### 安装 bash-completion {#install-bash-completion} - {{< note >}} -重新加载 shell,并用命令 `type _init_completion` 验证 bash-completion v2 已经恰当的安装。 +重新加载 Shell,并用命令 `type _init_completion` 验证 bash-completion v2 已经恰当的安装。 -你现在需要确保在所有的 shell 环境中均已导入(sourced) kubectl 的补全脚本, +你现在需要确保在所有的 Shell 环境中均已导入(sourced)kubectl 的补全脚本, 有若干种方法可以实现这一点: - 在文件 `~/.bash_profile` 中导入(Source)补全脚本: @@ -144,7 +150,7 @@ You now have to ensure that the kubectl completion script gets sourced in all yo -- 如果你为 kubectl 定义了别名,则可以扩展 shell 补全来兼容该别名: +- 如果你为 kubectl 定义了别名,则可以扩展 Shell 补全来兼容该别名: ```bash echo 'alias k=kubectl' >>~/.bash_profile @@ -154,8 +160,8 @@ You now have to ensure that the kubectl completion script gets sourced in all yo -- 如果你是用 Homebrew 安装的 kubectl(如 - [此页面](/zh-cn/docs/tasks/tools/install-kubectl-macos/#install-with-homebrew-on-macos)所描述), +- 如果你是用 Homebrew 安装的 kubectl + (如[此页面](/zh-cn/docs/tasks/tools/install-kubectl-macos/#install-with-homebrew-on-macos)所描述), 则 kubectl 补全脚本应该已经安装到目录 `/usr/local/etc/bash_completion.d/kubectl` 中了。这种情况下,你什么都不需要做。 @@ -163,12 +169,12 @@ You now have to ensure that the kubectl completion script gets sourced in all yo - 用 Hommbrew 安装的 bash-completion v2 会初始化 目录 `BASH_COMPLETION_COMPAT_DIR` + 用 Hommbrew 安装的 bash-completion v2 会初始化目录 `BASH_COMPLETION_COMPAT_DIR` 中的所有文件,这就是后两种方法能正常工作的原因。 {{< /note >}} -总之,重新加载 shell 之后,kubectl 补全功能将立即生效。 +总之,重新加载 Shell 之后,kubectl 补全功能将立即生效。 diff --git a/content/zh-cn/docs/tasks/tools/included/optional-kubectl-configs-fish.md b/content/zh-cn/docs/tasks/tools/included/optional-kubectl-configs-fish.md index ecf4ab04ecd3e..868b32d5b0eea 100644 --- a/content/zh-cn/docs/tasks/tools/included/optional-kubectl-configs-fish.md +++ 
b/content/zh-cn/docs/tasks/tools/included/optional-kubectl-configs-fish.md @@ -2,14 +2,27 @@ title: "fish 自动补全" description: "启用 fish 自动补全的可选配置。" headless: true +_build: + list: never + render: never + publishResources: false --- + +{{< note >}} + +自动补全 Fish 需要 kubectl 1.23 或更高版本。 +{{< /note >}} +_build: + list: never + render: never + publishResources: false +--> +--> 为了让 kubectl 能发现并访问 Kubernetes 集群,你需要一个 [kubeconfig 文件](/zh-cn/docs/concepts/configuration/organize-cluster-access-kubeconfig/), 该文件在 @@ -28,7 +34,7 @@ Check that kubectl is properly configured by getting the cluster state: 创建集群时,或成功部署一个 Miniube 集群时,均会自动生成。 通常,kubectl 的配置信息存放于文件 `~/.kube/config` 中。 -通过获取集群状态的方法,检查是否已恰当的配置了 kubectl: +通过获取集群状态的方法,检查是否已恰当地配置了 kubectl: ```shell kubectl cluster-info @@ -38,8 +44,8 @@ kubectl cluster-info If you see a URL response, kubectl is correctly configured to access your cluster. If you see a message similar to the following, kubectl is not configured correctly or is not able to connect to a Kubernetes cluster. - --> -如果返回一个 URL,则意味着 kubectl 成功的访问到了你的集群。 +--> +如果返回一个 URL,则意味着 kubectl 成功地访问到了你的集群。 如果你看到如下所示的消息,则代表 kubectl 配置出了问题,或无法连接到 Kubernetes 集群。 @@ -52,11 +58,12 @@ The connection to the server was refused - did you specify th For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above. If kubectl cluster-info returns the url response but you can't access your cluster, to check whether it is configured properly, use: - --> -例如,如果你想在自己的笔记本上(本地)运行 Kubernetes 集群,你需要先安装一个 Minikube 这样的工具,然后再重新运行上面的命令。 +--> +例如,如果你想在自己的笔记本上(本地)运行 Kubernetes 集群,你需要先安装一个 Minikube +这样的工具,然后再重新运行上面的命令。 -如果命令 `kubectl cluster-info` 返回了 url,但你还不能访问集群,那可以用以下命令来检查配置是否妥当: +如果命令 `kubectl cluster-info` 返回了 URL,但你还不能访问集群,那可以用以下命令来检查配置是否妥当: ```shell kubectl cluster-info dump -``` \ No newline at end of file +``` diff --git a/content/zh-cn/docs/tasks/tools/install-kubectl-windows.md b/content/zh-cn/docs/tasks/tools/install-kubectl-windows.md index 8f1ec698540e2..3fa6f323b9046 100644 --- a/content/zh-cn/docs/tasks/tools/install-kubectl-windows.md +++ b/content/zh-cn/docs/tasks/tools/install-kubectl-windows.md @@ -41,10 +41,10 @@ The following methods exist for installing kubectl on Windows: +- [Install on Windows using Chocolatey, Scoop, or winget](#install-nonstandard-package-tools) +--> - [用 curl 在 Windows 上安装 kubectl](#install-kubectl-binary-with-curl-on-windows) -- [在 Windows 上用 Chocolatey、Scoop 或 Winget 安装](#install-nonstandard-package-tools) +- [在 Windows 上用 Chocolatey、Scoop 或 winget 安装](#install-nonstandard-package-tools) -1. 下载 [最新发行版 {{< param "fullversion" >}}](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe)。 +1. 下载[最新发行版 {{< param "fullversion" >}}](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe)。 或者使用下面命令来查看版本的详细信息: ```cmd @@ -134,22 +137,22 @@ The following methods exist for installing kubectl on Windows: [Docker Desktop for Windows](https://docs.docker.com/docker-for-windows/#kubernetes) adds its own version of `kubectl` to `PATH`. If you have installed Docker Desktop before, you may need to place your `PATH` entry before the one added by the Docker Desktop installer or remove the Docker Desktop's `kubectl`. 
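To see which `kubectl` binary actually wins on your `PATH` after installing Docker Desktop, you can list every match in resolution order; `where.exe` (shipped with Windows) prints them first-match-first:

```cmd
rem List all kubectl.exe binaries found on PATH, in the order Windows resolves them.
where.exe kubectl

rem Confirm the version of the copy that is picked up first.
kubectl version --client
```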
--> -[Windows 版的 Docker Desktop](https://docs.docker.com/docker-for-windows/#kubernetes) +[Windows 版的 Docker Desktop](https://docs.docker.com/docker-for-windows/#kubernetes) 将其自带版本的 `kubectl` 添加到 `PATH`。 如果你之前安装过 Docker Desktop,可能需要把此 `PATH` 条目置于 Docker Desktop 安装的条目之前, 或者直接删掉 Docker Desktop 的 `kubectl`。 {{< /note >}} -### 在 Windows 上用 Chocolatey、Scoop 或 Winget 安装 {#install-nonstandard-package-tools} +### 在 Windows 上用 Chocolatey、Scoop 或 winget 安装 {#install-nonstandard-package-tools} 1. 要在 Windows 上安装 kubectl,你可以使用包管理器 [Chocolatey](https://chocolatey.org)、 - 命令行安装器 [Scoop](https://scoop.sh) 或包管理器 [Winget](https://winget.run/)。 + 命令行安装器 [Scoop](https://scoop.sh) 或包管理器 [winget](https://learn.microsoft.com/zh-cn/windows/package-manager/winget/)。 {{< tabs name="kubectl_win_install" >}} {{% tab name="choco" %}} @@ -180,7 +183,7 @@ If you have installed Docker Desktop before, you may need to place your `PATH` e +--> 3. 导航到你的 home 目录: ```powershell @@ -190,7 +193,7 @@ If you have installed Docker Desktop before, you may need to place your `PATH` e +--> 4. 创建目录 `.kube`: ```powershell @@ -266,9 +269,9 @@ kubectl 为 Bash、Zsh、Fish 和 PowerShell 提供自动补全功能,可以 ``` -2. 验证该可执行文件(可选步骤) +2. 验证该可执行文件(可选步骤)。 3. 将 `kubectl-convert` 二进制文件夹附加或添加到你的 `PATH` 环境变量中。 diff --git a/content/zh-cn/docs/tutorials/security/apparmor.md b/content/zh-cn/docs/tutorials/security/apparmor.md index 5afbd85cc7a8e..6ef716b8f8ae6 100644 --- a/content/zh-cn/docs/tutorials/security/apparmor.md +++ b/content/zh-cn/docs/tutorials/security/apparmor.md @@ -1,14 +1,14 @@ --- title: 使用 AppArmor 限制容器对资源的访问 content_type: tutorial -weight: 10 +weight: 30 --- @@ -481,7 +481,7 @@ Note the pod status is Pending, with a helpful error message: `Pod Cannot enforc Kubernetes 目前不提供任何本地机制来将 AppArmor 配置文件加载到节点上。 有很多方法可以设置配置文件,例如: diff --git a/content/zh-cn/docs/tutorials/security/cluster-level-pss.md b/content/zh-cn/docs/tutorials/security/cluster-level-pss.md index afbb8459ff680..9f54156ed66fa 100644 --- a/content/zh-cn/docs/tutorials/security/cluster-level-pss.md +++ b/content/zh-cn/docs/tutorials/security/cluster-level-pss.md @@ -10,7 +10,9 @@ weight: 10 --> {{% alert title="Note" %}} - + 本教程仅适用于新集群。 {{% /alert %}} @@ -48,7 +50,7 @@ Pod 安全准入是在创建 Pod 时应用 Install the following on your workstation: - [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) -- [kubectl](https://kubernetes.io/docs/tasks/tools/) +- [kubectl](/docs/tasks/tools/) --> 在你的工作站中安装以下内容: @@ -76,7 +78,7 @@ that are most appropriate for your configuration, do the following: +--> 1. 创建一个没有应用 Pod 安全标准的集群: ```shell @@ -98,7 +100,6 @@ that are most appropriate for your configuration, do the following: kubectl cluster-info --context kind-psa-wo-cluster-pss Thanks for using kind! 😊 - ``` 输出类似于: - ``` - Kubernetes control plane is running at https://127.0.0.1:61350 + Kubernetes control plane is running at https://127.0.0.1:61350 + CoreDNS is running at https://127.0.0.1:61350/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy - + To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. ``` @@ -141,7 +142,7 @@ that are most appropriate for your configuration, do the following: +--> 4. 使用 `--dry-run=server` 来了解应用不同的 Pod 安全标准时会发生什么: 1. Privileged @@ -159,7 +160,7 @@ that are most appropriate for your configuration, do the following: namespace/local-path-storage labeled ``` 2. 
Baseline - ```shell + ```shell kubectl label --dry-run=server --overwrite ns --all \ pod-security.kubernetes.io/enforce=baseline ``` @@ -280,16 +281,17 @@ following: namespaces: [kube-system] EOF ``` - {{< note >}} - + + {{< note >}} + `pod-security.admission.config.k8s.io/v1` 配置需要 v1.25+。 - 对于 v1.23 和 v1.24,使用 [v1beta1](https://v1-24.docs.kubernetes.io/zh-cn/docs/tasks/configure-pod-container/enforce-standards-admission-controller/)。 - 对于 v1.22,使用 [v1alpha1](https://v1-22.docs.kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/)。 - {{< /note >}} + 对于 v1.23 和 v1.24,使用 [v1beta1](https://v1-24.docs.kubernetes.io/zh-cn/docs/tasks/configure-pod-container/enforce-standards-admission-controller/)。 + 对于 v1.22,使用 [v1alpha1](https://v1-22.docs.kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/)。 + {{< /note >}} ## 清理 {#clean-up} -运行 `kind delete cluster --name psa-with-cluster-pss` 和 -`kind delete cluster --name psa-wo-cluster-pss` 来删除你创建的集群。 +现在通过运行以下命令删除你上面创建的集群: + +```shell +kind delete cluster --name psa-with-cluster-pss +``` +```shell +kind delete cluster --name psa-wo-cluster-pss +``` ## {{% heading "whatsnext" %}} @@ -439,7 +445,7 @@ created. [shell script](/examples/security/kind-with-cluster-level-baseline-pod-security.sh) to perform all the preceding steps at once: 1. Create a Pod Security Standards based cluster level Configuration - 2. Create a file to let API server consumes this configuration + 2. Create a file to let API server consume this configuration 3. Create a cluster that creates an API server with this configuration 4. Set kubectl context to this new cluster 5. Create a minimal pod yaml file diff --git a/content/zh-cn/docs/tutorials/security/ns-level-pss.md b/content/zh-cn/docs/tutorials/security/ns-level-pss.md index 1487e87fb1093..faf20f35993bf 100644 --- a/content/zh-cn/docs/tutorials/security/ns-level-pss.md +++ b/content/zh-cn/docs/tutorials/security/ns-level-pss.md @@ -1,13 +1,13 @@ --- title: 在名字空间级别应用 Pod 安全标准 content_type: tutorial -weight: 10 +weight: 20 --- {{% alert title="Note" %}} @@ -224,11 +224,15 @@ with no warnings. 
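For reference alongside the namespace-level tutorial touched here, enforcing a Pod Security Standard on a single namespace comes down to setting labels on that namespace. A minimal sketch (the namespace name `example` is only illustrative):

```shell
# Enforce the baseline Pod Security Standard on one namespace (name is illustrative).
kubectl label --overwrite ns example \
  pod-security.kubernetes.io/enforce=baseline

# Optionally also warn when a Pod would violate the stricter "restricted" standard.
kubectl label --overwrite ns example \
  pod-security.kubernetes.io/warn=restricted

# Review the labels now set on the namespace.
kubectl get ns example --show-labels
```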
## 清理 {#clean-up} -运行 `kind delete cluster --name psa-ns-level` 删除创建的集群。 +现在通过运行以下命令删除你上面创建的集群: + +```shell +kind delete cluster --name psa-ns-level +``` ## {{% heading "whatsnext" %}} diff --git a/content/zh-cn/docs/tutorials/security/seccomp.md b/content/zh-cn/docs/tutorials/security/seccomp.md index 5db921b96cff1..6d4694378537b 100644 --- a/content/zh-cn/docs/tutorials/security/seccomp.md +++ b/content/zh-cn/docs/tutorials/security/seccomp.md @@ -1,7 +1,7 @@ --- title: 使用 seccomp 限制容器的系统调用 content_type: tutorial -weight: 20 +weight: 40 min-kubernetes-server-version: v1.22 --- @@ -424,6 +424,70 @@ docker exec -it kind-worker bash -c \ } ``` + +## 创建使用容器运行时默认 seccomp 配置文件的 Pod {#create-pod-that-uses-the-container-runtime-default-seccomp-profile} + +大多数容器运行时都提供了一组合理的、默认被允许或默认被禁止的系统调用。 +你可以通过将 Pod 或容器的安全上下文中的 seccomp 类型设置为 `RuntimeDefault` +来为你的工作负载采用这些默认值。 + +{{< note >}} + +如果你已经启用了 `SeccompDefault` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), +只要没有指定其他 seccomp 配置文件,那么 Pod 就会使用 `RuntimeDefault` seccomp 配置文件。 +否则,默认值为 `Unconfined`。 +{{< /note >}} + + +这是一个 Pod 的清单,它要求其所有容器使用 `RuntimeDefault` seccomp 配置文件: + +{{< codenew file="pods/security/seccomp/ga/default-pod.yaml" >}} + + +创建此 Pod: + +```shell +kubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/default-pod.yaml +``` + +```shell +kubectl get pod default-pod +``` + + +此 Pod 应该显示为已成功启动: + +``` +NAME READY STATUS RESTARTS AGE +default-pod 1/1 Running 0 20s +``` + + +最后,你看到一切正常之后,请清理: + +```shell +kubectl delete pod default-pod --wait --now +``` + -## 创建使用容器运行时默认 seccomp 配置文件的 Pod {#create-pod-that-uses-the-container-runtime-default-seccomp-profile} - -大多数容器运行时都提供了一组合理的默认系统调用,以及是否允许执行这些系统调用。 -你可以通过将 Pod 或容器的安全上下文中的 seccomp 类型设置为 `RuntimeDefault` -来为你的工作负载采用这些默认值。 - -{{< note >}} - -如果你已经启用了 `SeccompDefault` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), -只要没有指定其他 seccomp 配置文件,那么 Pod 就会使用 `SeccompDefault` 的 seccomp 配置文件。 -否则,默认值为 `Unconfined`。 -{{< /note >}} - - -这是一个 Pod 的清单,它要求其所有容器使用 `RuntimeDefault` seccomp 配置文件: - -{{< codenew file="pods/security/seccomp/ga/default-pod.yaml" >}} - - -创建此 Pod: - -```shell -kubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/default-pod.yaml -``` - -```shell -kubectl get pod default-pod -``` - - -此 Pod 应该显示为成功启动: - -``` -NAME READY STATUS RESTARTS AGE -default-pod 1/1 Running 0 20s -``` - - -最后,你看到一切正常之后,请清理: - -```shell -kubectl delete pod default-pod --wait --now -``` - ## {{% heading "whatsnext" %}} +要配置分配给 StatefulSet 中每个 Pod 的整数序号, +请参阅[起始序号](/zh-cn/docs/concepts/workloads/controllers/statefulset/#start-ordinal)。 +{{< /note >}} + @@ -1798,56 +1807,55 @@ Service: ```shell kubectl delete svc nginx ``` + 删除本教程中用到的 PersistentVolume 卷的持久化存储介质。 +```shell +kubectl get pvc +``` +``` +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +www-web-0 Bound pvc-2bf00408-d366-4a12-bad0-1869c65d0bee 1Gi RWO standard 25m +www-web-1 Bound pvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4 1Gi RWO standard 24m +www-web-2 Bound pvc-cba6cfa6-3a47-486b-a138-db5930207eaf 1Gi RWO standard 15m +www-web-3 Bound pvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752 1Gi RWO standard 15m +www-web-4 Bound pvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e 1Gi RWO standard 14m +``` -+```shell -+kubectl get pvc -+``` -+``` -+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE -+www-web-0 Bound pvc-2bf00408-d366-4a12-bad0-1869c65d0bee 1Gi RWO standard 25m -+www-web-1 Bound pvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4 1Gi RWO standard 24m -+www-web-2 Bound 
pvc-cba6cfa6-3a47-486b-a138-db5930207eaf 1Gi RWO standard 15m -+www-web-3 Bound pvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752 1Gi RWO standard 15m -+www-web-4 Bound pvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e 1Gi RWO standard 14m -+``` -+ -+```shell -+kubectl get pv -+``` -+``` -+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE -+pvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752 1Gi RWO Delete Bound default/www-web-3 standard 15m -+pvc-2bf00408-d366-4a12-bad0-1869c65d0bee 1Gi RWO Delete Bound default/www-web-0 standard 25m -+pvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e 1Gi RWO Delete Bound default/www-web-4 standard 14m -+pvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4 1Gi RWO Delete Bound default/www-web-1 standard 24m -+pvc-cba6cfa6-3a47-486b-a138-db5930207eaf 1Gi RWO Delete Bound default/www-web-2 standard 15m -+``` -+ -+```shell -+kubectl delete pvc www-web-0 www-web-1 www-web-2 www-web-3 www-web-4 -+``` -+ -+``` -+persistentvolumeclaim "www-web-0" deleted -+persistentvolumeclaim "www-web-1" deleted -+persistentvolumeclaim "www-web-2" deleted -+persistentvolumeclaim "www-web-3" deleted -+persistentvolumeclaim "www-web-4" deleted -+``` -+ -+```shell -+kubectl get pvc -+``` -+ -+``` -+No resources found in default namespace. -+``` +```shell +kubectl get pv +``` +``` +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752 1Gi RWO Delete Bound default/www-web-3 standard 15m +pvc-2bf00408-d366-4a12-bad0-1869c65d0bee 1Gi RWO Delete Bound default/www-web-0 standard 25m +pvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e 1Gi RWO Delete Bound default/www-web-4 standard 14m +pvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4 1Gi RWO Delete Bound default/www-web-1 standard 24m +pvc-cba6cfa6-3a47-486b-a138-db5930207eaf 1Gi RWO Delete Bound default/www-web-2 standard 15m +``` + +```shell +kubectl delete pvc www-web-0 www-web-1 www-web-2 www-web-3 www-web-4 +``` + +``` +persistentvolumeclaim "www-web-0" deleted +persistentvolumeclaim "www-web-1" deleted +persistentvolumeclaim "www-web-2" deleted +persistentvolumeclaim "www-web-3" deleted +persistentvolumeclaim "www-web-4" deleted +``` + +```shell +kubectl get pvc +``` +``` +No resources found in default namespace. +``` {{< note >}} -## 容器镜像 - -所有 Kubernetes 容器镜像都部署到 -`registry.k8s.io` 容器镜像仓库。 +## 容器镜像 {#container-images} +所有 Kubernetes 容器镜像都被部署到 `registry.k8s.io` 容器镜像仓库。 {{< feature-state for_k8s_version="v1.24" state="alpha" >}} @@ -114,11 +112,12 @@ you can verify integrity for is a container image, using the experimental signing support. To manually verify signed container images of Kubernetes core components, refer to -[Verify Signed Container Images](/docs/tasks/administer-cluster/verify-signed-images). +[Verify Signed Container Images](/docs/tasks/administer-cluster/verify-signed-artifacts). --> 对于 Kubernetes v{{< skew currentVersion >}},唯一可以验证完整性的代码工件就是容器镜像,它使用实验性签名支持。 -如需手动验证 Kubernetes 核心组件的签名容器镜像,请参考[验证签名容器镜像](/zh-cn/docs/tasks/administer-cluster/verify-signed-images)。 +如需手动验证 Kubernetes 核心组件的签名容器镜像, +请参考[验证签名容器镜像](/zh-cn/docs/tasks/administer-cluster/verify-signed-artifacts)。 -## 二进制 +## 二进制 {#binaries} 在 [CHANGELOG](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG) 文件中找到下载 Kubernetes 组件(及其校验和)的链接。 diff --git a/content/zh-cn/releases/patch-releases.md b/content/zh-cn/releases/patch-releases.md index 479a4cb89e947..ff755a5148996 100644 --- a/content/zh-cn/releases/patch-releases.md +++ b/content/zh-cn/releases/patch-releases.md @@ -72,7 +72,7 @@ of the actual release. 
Cherry pick PRs which miss merge criteria will be carried over and tracked for the next patch release. --> -## Cherry Picks +## Cherry Pick 请遵循 [Cherry Pick 流程](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-release/cherry-picks.md)。 @@ -143,22 +143,22 @@ releases may also occur in between these. --> ## 未来发布的月度版本 {#upcoming-monthly-releases} -时间表可能会因错误修复的严重程度而有所不同,但为了便于规划,我们将针对以下每月发布点。 -计划外的关键版本也可能发生在这些版本之间。 +时间表可能会因错误修复的严重程度而有所不同,但为了便于规划,我们每月将按照以下时间点进行发布。 +中间可能会发布一些计划外的关键版本。 | 月度补丁发布 | Cherry Pick 截止日期 | 目标日期 | | -------------- | -------------------- | ----------- | -| 2022 年 12 月 | 2022-12-02 | 2022-12-07 | -| 2023 年 1 月 | 2023-01-13 | 2023-01-18 | | 2023 年 2 月 | 2023-02-10 | 2023-02-15 | +| 2023 年 3 月 | 2023-03-10 | 2023-03-15 | +| 2023 年 4 月 | 2023-04-07 | 2023-04-12 | - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Backend Pod 1 - labels: app=MyApp - port: 9376 - - - - - - Backend Pod 2 - labels: app=MyApp - port: 9376 - - - - - - Backend Pod 3 - labels: app=MyApp - port: 9376 - - - - - - - - - - - - Client - - - - - - kube-proxy - - - - - - - apiserver - - - - - - clusterIP - (iptables) - - Node - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/static/images/docs/services-ipvs-overview.svg b/static/images/docs/services-ipvs-overview.svg index de745a764066e..d2c2f702d4611 100644 --- a/static/images/docs/services-ipvs-overview.svg +++ b/static/images/docs/services-ipvs-overview.svg @@ -1,121 +1,592 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Backend Pod 1 - - - - - - Backend Pod 2 - - - - - - Backend Pod 3 - - - - - - - - - - - - Client - - - - - - kube-proxy - - - - - - - apiserver - - - - - - clusterIP - (Virtual Server) - - Node - (Real Server) - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +