Add HNS Load Balancer Health Checks for ExternalTrafficPolicy: Local #96998
Conversation
@jeremyje: GitHub didn't allow me to request PR reviews from the following users: daschott. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi @jeremyje. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: jeremyje The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/sig windows
/ok-to-test
/priority important-soon
/triage accepted
/retest
@@ -909,9 +909,10 @@ var _ = SIGDescribe("Services", func() {
 		framework.ExpectNoError(err, "failed to validate endpoints for service %s in namespace: %s", serviceName, ns)
 	})

-	ginkgo.It("should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]", func() {
+	ginkgo.It("should preserve source pod IP for traffic thru service cluster IP", func() {
This test does not pass locally. I'm not sure why yet.
I don't think this test is related. It creates a ClusterIP service and checks whether the source IP is preserved; this PR's case is a LoadBalancer service.
@@ -765,6 +769,11 @@ func TestCreateDsrLoadBalancer(t *testing.T) {
 	if svcInfo.localTrafficDSR != true {
 		t.Errorf("Failed to create DSR loadbalancer with local traffic policy")
 	}
+	if len(svcInfo.loadBalancerIngressIPs) == 0 {
This test condition fails. Looking at the structs it's not clear what should actually be compared here or what the expectation is.
The fake needs to be changed a little.
@@ -926,6 +929,7 @@ func (proxier *Proxier) syncProxyRules() {

 	hnsNetworkName := proxier.network.name
 	hns := proxier.hns
+	cbr0HnsEndpoint, _ := hns.getEndpointByName("cbr0")
cbr0 is GCP specific (https://kubernetes.io/docs/concepts/cluster-administration/networking/#google-compute-engine-gce), can this be generalized?
It also looks like this can collide because there could be multiple endpoints with the same name.
Yes, I don't think we should be putting GCP-specific logic here. Where is the current cbr0 endpoint being created?
We do not usually apply policies to endpoints other than the one that is part of the service.
What is the actual scenario/data path you are trying to enable with this fix?
Can you please describe? That would help us figure out what is the correct solution here.
It is trying to let the GCE load balancer health-check the nodes. The GCE load balancer does not do DNAT: instead of receiving <node-ip>:<health-check-node-port>, the node receives <service-vip>:<health-check-node-port>, so we need this to forward the health check packet to kube-proxy's internal health check server.
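For context, the node-level health check that kube-proxy exposes for `externalTrafficPolicy: Local` services behaves roughly like this sketch: the node answers 200 only when it hosts a local endpoint for the service, so the cloud load balancer learns which nodes can serve traffic. This is a simplified illustration, not the code from this PR; names like `probe` and the `localEndpoints` counter are hypothetical.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// healthHandler models kube-proxy's per-service health check for
// externalTrafficPolicy: Local. localEndpoints is a hypothetical count
// of serving pods on this node, not a real kube-proxy field.
func healthHandler(localEndpoints int) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if localEndpoints > 0 {
			w.WriteHeader(http.StatusOK)
		} else {
			w.WriteHeader(http.StatusServiceUnavailable)
		}
	}
}

// probe spins up the handler on a test server and returns the status
// code a load balancer health check would observe.
func probe(localEndpoints int) int {
	srv := httptest.NewServer(healthHandler(localEndpoints))
	defer srv.Close()
	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	return resp.StatusCode
}

func main() {
	fmt.Println(probe(2)) // node with local endpoints -> 200
	fmt.Println(probe(0)) // node without local endpoints -> 503
}
```

The point of this PR is that on Windows the probe arrives addressed to the service VIP rather than the node IP, so an HNS load balancer rule is needed to deliver it to this listener at all.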
I have another PR open for further review: #99287. Let's move discussion there.
@madhanrm PTAL
/cc @anfernee
@jeremyje: The following tests failed, say /retest to rerun all failed tests.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@jeremyje: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What type of PR is this?
/kind bug
What this PR does / why we need it:
In GCE, the current externalTrafficPolicy: Local logic does not work because the nodes that run the pods do not set up the load balancer health check ports. This means the GCLB cannot tell which nodes are serving pods that can accept traffic; since all nodes report unhealthy, it directs traffic to any node. This PR configures the health check ports so that GCLB knows which nodes can handle the traffic. See #62046 (comment) for details.
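For reference, this is the shape of Service the fix applies to; the name and selector below are illustrative, not taken from this PR. When externalTrafficPolicy is Local on a LoadBalancer Service, Kubernetes allocates spec.healthCheckNodePort automatically, and the cloud load balancer probes <node-ip>:<healthCheckNodePort> to discover which nodes have local endpoints.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-lb        # hypothetical name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # keep client source IP; only nodes with local pods pass health checks
  selector:
    app: example                 # hypothetical selector
  ports:
  - port: 80
    targetPort: 8080
  # spec.healthCheckNodePort is allocated by the API server; it is the
  # port this PR wires up on Windows nodes via HNS.
```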
Which issue(s) this PR fixes:
Fixes #62046
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
/cc @elweb9858 @daschott