
kubelet: Keep trying fast status update at startup until node is ready #112618

Merged 2 commits into kubernetes:master on Nov 9, 2022

Conversation

@jingyuanliang jingyuanliang commented Sep 21, 2022

What type of PR is this?

/kind bug

What this PR does / why we need it:

This reduces the delay between the container runtime (and the container runtime network) becoming ready and the node being reported ready.

Which issue(s) this PR fixes:

fastStatusUpdateOnce was introduced (7ffa4e1, #67031) to reduce the latency to node ready, but in some situations having a CIDR assigned doesn't immediately make the node ready; readiness is still blocked by the container runtime. In this case, fastStatusUpdateOnce returns early when the CIDR is assigned but the container runtime (and network) is not yet ready, leaving node readiness reporting to the regular container runtime check and heartbeat loop. Per our measurements, that can add a delay of up to 12 seconds (median delay of about 5 seconds when fastStatusUpdateOnce returns early).

This change keeps fastStatusUpdateOnce running until the node is actually ready. It reduces the delay to less than 150ms in our experiment.
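
For illustration, a minimal standalone sketch of the resulting loop shape (cidrAssigned, runtimeReady, and syncStatus are hypothetical stand-ins for kubelet internals; this is not the PR's actual code, just the polling pattern it moves to, using the 100ms fast interval discussed below):

    package main

    import (
        "fmt"
        "time"
    )

    var start = time.Now()

    // Hypothetical stand-ins for the kubelet's CIDR bookkeeping and
    // kl.updateRuntimeUp; the real code checks node configuration and CRI status.
    func cidrAssigned() bool { return time.Since(start) > 50*time.Millisecond }
    func runtimeReady() bool { return time.Since(start) > 400*time.Millisecond }

    // syncStatus stands in for the final syncNodeStatus call that patches the node.
    func syncStatus() { fmt.Println("patching node status: Ready") }

    func main() {
        ticker := time.NewTicker(100 * time.Millisecond) // the fast interval
        defer ticker.Stop()
        for range ticker.C {
            // Previously the fast path exited as soon as a pod CIDR was assigned,
            // even if the container runtime (and its network) was still unready.
            // This change keeps polling until the node is actually ready.
            if cidrAssigned() && runtimeReady() {
                syncStatus() // the only apiserver patch this loop makes
                return
            }
        }
    }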

Special notes for your reviewer:

On performance: this loop stops when the node is ready, and before the node is ready there shouldn't be (much) workload running, so using a bit more resources during this window shouldn't matter much. As for apiserver traffic: a quick check shows that syncNodeStatus doesn't actually push a status update when nothing has changed, but I'm not sure whether reading the node status is locally cached in client-go or hits the apiserver every 100ms. Given the sophisticated design of client-go I think it's likely cached, but I'm bringing it up here to get confirmation.

FOLLOWUP: It appears there was a small performance impact according to a measurement. I'm trying a change that doubles the delay to see how it goes.

FOLLOWUP 2: Doubling the delay indeed improves performance. Will go with 200ms, at least while syncNodeStatus runs. Tried other values (300/400/500ms) and none of them is consistently better than 200ms, but 200ms is consistently better than the original 100ms.

FOLLOWUP 3: It's actually hitting the apiserver from syncNodeStatus(). #113466 tries to provide a way to sync without a call to the apiserver when no update is needed. With that resolved, we're going back to the original 100ms.

Known issue: kl.updateRuntimeUp() floods the system log with errors like "Container runtime network not ready" and/or "Container runtime not ready" before the container runtime (and network) is ready. We've already received user complaints about the confusion this message causes, and it would become much more frequent (multiple times per second, up from once every few seconds). If we're willing to make this change, the logging issue can be addressed at the same time.

FOLLOWUP: This has been addressed by logging at verbosity 4 before fastStatusUpdateOnce exits.
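
For reference, a sketch of what such a verbosity guard looks like with k8s.io/klog/v2 (reportRuntimeUnready is an illustrative helper, not the kubelet's function; only the log messages come from the discussion above):

    package main

    import (
        "errors"
        "flag"

        "k8s.io/klog/v2"
    )

    // reportRuntimeUnready guards the noisy message behind verbosity 4 while
    // the fast status loop is still running, so default log levels stay quiet
    // during normal startup.
    func reportRuntimeUnready(fastLoopRunning bool, err error) {
        if fastLoopRunning {
            klog.V(4).InfoS("Container runtime not ready", "error", err)
            return
        }
        // Once the fast loop has exited (e.g. after a grace period), log unguarded.
        klog.ErrorS(err, "Container runtime not ready")
    }

    func main() {
        klog.InitFlags(nil)
        flag.Parse() // run with -v=4 to see the guarded message
        reportRuntimeUnready(true, errors.New("network plugin not initialized"))
        klog.Flush()
    }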

Does this PR introduce a user-facing change?

NONE

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. kind/bug Categorizes issue or PR as related to a bug. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Sep 21, 2022
@k8s-ci-robot (Contributor)

Welcome @jingyuanliang!

It looks like this is your first PR to kubernetes/kubernetes 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/kubernetes has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Sep 21, 2022
@k8s-ci-robot (Contributor)

Hi @jingyuanliang. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added area/kubelet sig/node Categorizes an issue or PR as relevant to SIG Node. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Sep 21, 2022
@jingyuanliang (Contributor, Author)

/hold per "Special notes for your reviewer" above.
/cc @MrHohn
/cc @krzysztof-jastrzebski (person who initially added fastStatusUpdateOnce)

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Sep 21, 2022
@jingyuanliang (Contributor, Author)

@Random-Liu Re: log flooding - it turns out you're the person who initially added these error logs in 4bd9dbf and 772bf8e. Would you consider reducing the level to a warning, suppressing repeated logging, or something else?

@pacoxu (Member) commented Sep 21, 2022

/ok-to-test
/cc

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Sep 21, 2022
@MrHohn (Member) commented Sep 22, 2022

Regarding the known issue of log flooding - thanks for pointing this out. I'd be grateful if you could think of a way to reduce the log frequency (rate-limiting) and hint that the node is still in the process of turning up. This log line has already caused enough customer confusion :)

Other than that, +1 to a faster node readiness status update on node startup, with the same motivation fastStatusUpdateOnce had (we now know that waiting until the PodCIDR is assigned might not be enough).

@SergeyKanzhelev (Member)

> To do: update function comment to describe the new behavior.

Is this still a todo?

+1 on the log confusion.

@SergeyKanzhelev (Member)

/priority important-longterm
/triage accepted

@k8s-ci-robot k8s-ci-robot added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Sep 27, 2022
@dchen1107 (Member)

Spent some time on the changes since they include some refactoring. Logging my understanding here as a record for others:

The fundamental issue this PR addresses is that, during node bootstrap, the fast-node-status-update path shouldn't return prematurely based only on CIDR status; the container runtime, for example, might not be ready yet. Returning too early makes the path fall back to the regular node status update loop, which can delay node bootstrap.

I asked @jingyuanliang why the PR includes so much refactoring instead of an additional check to continue the loop. According to @jingyuanliang: "Since the original fast path hits the apiserver, I can't just keep running it for so long, then had to refactor some code to build a path without hitting the apiserver".

/lgtm
/approve

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dchen1107, jingyuanliang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 9, 2022
@jingyuanliang (Contributor, Author)

Adding more notes - at the time of the original code (7ffa4e1), podCIDR was probably the bottleneck before the node became ready, but that's no longer the case.

Now it's blocked on the CRI reporting ready. There's no way for us to actively monitor the CRI's readiness (there is no push signal), and we shouldn't try to guess the CRI's behavior, so for now the only way is to poll the CRI's status.
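
For context, "polling the CRI's status" means calling the CRI Status RPC and inspecting the RuntimeReady/NetworkReady conditions. A minimal standalone sketch, not the kubelet's code path (the containerd socket path is an assumption; other runtimes use different sockets):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        // Status is a pull-only RPC; there is no watch variant, which is why
        // the kubelet has to poll.
        resp, err := client.Status(ctx, &runtimeapi.StatusRequest{})
        if err != nil {
            panic(err)
        }
        for _, cond := range resp.GetStatus().GetConditions() {
            // RuntimeReady and NetworkReady are the conditions that gate node readiness.
            fmt.Printf("%s=%v\n", cond.GetType(), cond.GetStatus())
        }
    }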

// It holds the same lock as syncNodeStatus and is thread-safe when called concurrently with
// syncNodeStatus. Its return value indicates whether the loop running it should exit
// (final run), and it also sets kl.containerRuntimeReadyExpected.
func (kl *Kubelet) fastNodeStatusUpdate(ctx context.Context, timeout bool) (completed bool) {
@Random-Liu (Member) commented Nov 9, 2022

I feel like this code is unnecessarily complicated.

// fastNodeStatusUpdate is a "lightweight" version of syncNodeStatus which doesn't hit the
// apiserver except for the final run

If we want to achieve this, can we just do:

  1. More frequent readiness check in an inner loop;
  2. Less frequent node status patch in an outer loop?

I guess we just need to separate node readiness population from the node status update, so that we can populate and check readiness internally but update the apiserver less frequently?

Or in the worst case, pass a true or false to syncNodeStatus to specify whether it is a "dry-run" sync or not, so that we get the new node readiness but don't necessarily hit the apiserver until ready?
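
For illustration, a rough sketch of the suggested split, with hypothetical populateReadiness/patchStatus closures standing in for kubelet internals (not a proposed patch):

    package main

    import (
        "fmt"
        "time"
    )

    // waitForReady refreshes node readiness cheaply and often in an inner loop,
    // while patching the apiserver only on a slower outer cadence, with an
    // immediate final patch once the node turns Ready.
    func waitForReady(populateReadiness func() bool, patchStatus func()) {
        inner := time.NewTicker(100 * time.Millisecond) // frequent local readiness check
        outer := time.NewTicker(5 * time.Second)        // infrequent apiserver heartbeat
        defer inner.Stop()
        defer outer.Stop()
        for {
            select {
            case <-inner.C:
                if populateReadiness() {
                    patchStatus() // final patch, then stop looping
                    return
                }
            case <-outer.C:
                patchStatus() // periodic heartbeat while still unready
            }
        }
    }

    func main() {
        start := time.Now()
        waitForReady(
            func() bool { return time.Since(start) > 300*time.Millisecond },
            func() { fmt.Println("patch node status") },
        )
    }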

@jingyuanliang (Contributor, Author)

With the current code, it doesn't update the apiserver until the node is ready.

https://github.com/kubernetes/kubernetes/blob/bb651243a375c326d17326b23f87733d58d02180/pkg/kubelet/kubelet_node_status.go#L478-L491

and once it updates the apiserver, the loop exits.

https://github.com/kubernetes/kubernetes/blob/bb651243a375c326d17326b23f87733d58d02180/pkg/kubelet/kubelet_node_status.go#L508

Basically there will be only a single node status patch apiserver call (unless retried up to 5 times by syncNodeStatus).

@jingyuanliang (Contributor, Author)

> Or in the worst case, pass a true or false to syncNodeStatus to specify whether it is a "dry-run" sync or not, so that we get the new node readiness but don't necessarily hit the apiserver until ready?

A call to syncNodeStatus always reads from the apiserver even if we don't patch it. I tried to change it so that it reads from the lister instead, but that resulted in messier code (#113188), so I went with the current approach.
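
For context on the lister alternative, a sketch of the difference between the two read paths, assuming a standard shared informer setup (this is not the code from either PR):

    package example

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
    )

    // getNodeTwoWays contrasts the two read paths: the clientset call is a real
    // GET against the apiserver on every invocation, while the lister read is
    // served from the informer's in-memory cache after the initial sync.
    func getNodeTwoWays(ctx context.Context, cs kubernetes.Interface, name string) error {
        // Path 1: hits the apiserver every time it runs.
        if _, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{}); err != nil {
            return err
        }

        // Path 2: reads from the local informer cache, no apiserver round trip.
        factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
        lister := factory.Core().V1().Nodes().Lister() // registers the node informer
        factory.Start(ctx.Done())
        factory.WaitForCacheSync(ctx.Done())
        _, err := lister.Get(name)
        return err
    }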

@Random-Liu (Member)

Thanks for the explanation!

The motivation makes sense to me, but I'm not sure the resulting code is cleaner than #113188. :P It's mainly a matter of those locks, which are quite error-prone.

Please make sure to find a way to clean them up, if we decide to do that in a follow up PR.

@jingyuanliang (Contributor, Author)

Yeah, in this PR I'm more concerned about making things safe. Some locks are unnecessary, and others are shared in ways that might cause confusion. I've tried to clean them up, but decided to leave that to a follow-up PR to make review easier.

@jingyuanliang (Contributor, Author)

With #113188 we'd have to change all syncNodeStatus callers, including the tests. That makes it harder to prove the correctness of the change, unlike #113466, where all existing tests stay intact and still pass.

@jingyuanliang (Contributor, Author)

@Random-Liu unhold if this resolves your concerns?

@Random-Liu (Member)

@jingyuanliang you've convinced me, even though the current refactoring is still a little hard to understand. With the amount of newly introduced tests and comments, my concerns are eased.

@k8s-ci-robot (Contributor)

@jingyuanliang: You must be a member of the kubernetes/milestone-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact your Milestone Maintainers Team and have them propose you as an additional delegate for this responsibility.

In response to this:

Need a milestone tag?
/milestone v1.26

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@aojea (Member) commented Nov 9, 2022

/milestone v1.26

The PR was approved before code freeze

#112618 (comment)

/hold

holding for resolution on @Random-Liu question

#112618 (comment)

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 9, 2022
@k8s-ci-robot k8s-ci-robot added this to the v1.26 milestone Nov 9, 2022
@leonardpahlke leonardpahlke removed this from the v1.26 milestone Nov 9, 2022
@leonardpahlke (Member)

Since this was approved before code freeze I will add the milestone back! (I cleared all PRs from the milestone a little late, including this one)

/milestone v1.26

@k8s-ci-robot k8s-ci-robot added this to the v1.26 milestone Nov 9, 2022
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 9, 2022
@k8s-ci-robot k8s-ci-robot removed lgtm "Looks good to me", indicates that a PR is ready to be merged. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Nov 9, 2022
@jingyuanliang (Contributor, Author)

Rebased to resolve a merge conflict:

    <<<<<<< HEAD
            // DefaultContainerLogsDir is the location of container logs.
            DefaultContainerLogsDir = "/var/log/containers"
    =======
            // nodeReadyGracePeriod is the period to allow for before fast status update is
            // terminated and container runtime not being ready is logged without verbosity guard.
            nodeReadyGracePeriod = 120 * time.Second

            // ContainerLogsDir is the location of container logs.
            ContainerLogsDir = "/var/log/containers"
    >>>>>>> bb651243a37 (kubelet: Keep trying fast status update at startup until node is ready)

@dchen1107 (Member)

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 9, 2022
@aojea (Member) commented Nov 9, 2022

> /hold
>
> holding for resolution on @Random-Liu question

/hold cancel

it seems the discussion was resolved #112618 (comment)

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 9, 2022
@k8s-ci-robot k8s-ci-robot merged commit 2c1b7f5 into kubernetes:master Nov 9, 2022
SIG Node PR Triage automation moved this from Needs Reviewer to Done Nov 9, 2022
@wojtek-t (Member)

Thanks for resolving the initial performance concern - the current version LGTM too.

Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files.
area/kubelet
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA.
kind/bug Categorizes issue or PR as related to a bug.
lgtm "Looks good to me", indicates that a PR is ready to be merged.
ok-to-test Indicates a non-member PR verified by an org member that is safe to test.
priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete.
release-note-none Denotes a PR that doesn't merit a release note.
sig/node Categorizes an issue or PR as relevant to SIG Node.
size/L Denotes a PR that changes 100-499 lines, ignoring generated files.
triage/accepted Indicates an issue or PR is ready to be actively worked on.