# Add HNS Load Balancer Health Checks for ExternalTrafficPolicy: Local (#96998)
```diff
@@ -20,18 +20,19 @@ package winkernel
 import (
 	"fmt"
-	"k8s.io/api/core/v1"
-	"net"
-	"strings"
-	"testing"
-	"time"
+	v1 "k8s.io/api/core/v1"
+	discovery "k8s.io/api/discovery/v1beta1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/types"
+	"k8s.io/apimachinery/pkg/util/intstr"
+	"k8s.io/kubernetes/pkg/proxy"
+	"k8s.io/kubernetes/pkg/proxy/healthcheck"
+	utilpointer "k8s.io/utils/pointer"
+	"net"
+	"strings"
+	"testing"
+	"time"
 )

 const (
```
```diff
@@ -70,6 +71,10 @@ func (hns fakeHNS) getEndpointByID(id string) (*endpointsInfo, error) {
 	return nil, nil
 }

+func (hns fakeHNS) getEndpointByName(id string) (*endpointsInfo, error) {
+	return nil, nil
+}
+
 func (hns fakeHNS) getEndpointByIpAddress(ip string, networkName string) (*endpointsInfo, error) {
 	_, ipNet, _ := net.ParseCIDR(destinationPrefix)
```
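As context for why `getEndpointByName` had to be added to the fake: the test double must satisfy the full HNS accessor interface that the proxier code uses, or the fake no longer compiles against it. A standalone sketch of that pattern (the interface, struct fields, and return values below are simplified assumptions for illustration, not the real winkernel types):

```go
package main

import "fmt"

// endpointsInfo stands in for the real winkernel endpoint type
// (simplified here; the actual struct carries HNS endpoint details).
type endpointsInfo struct {
	ip string
}

// hnsInterface mirrors the shape of the HNS accessor interface the fake
// in the hunk above satisfies (method names taken from the diff).
type hnsInterface interface {
	getEndpointByID(id string) (*endpointsInfo, error)
	getEndpointByName(name string) (*endpointsInfo, error)
}

// fakeHNS returns canned data so proxier logic can be unit-tested
// without a real Windows HNS network.
type fakeHNS struct{}

func (hns fakeHNS) getEndpointByID(id string) (*endpointsInfo, error) {
	return nil, nil
}

func (hns fakeHNS) getEndpointByName(name string) (*endpointsInfo, error) {
	return &endpointsInfo{ip: "192.168.1.2"}, nil
}

func main() {
	// Compile-time check that the fake satisfies the interface; adding a
	// method to the interface without updating the fake breaks this line.
	var h hnsInterface = fakeHNS{}
	ep, err := h.getEndpointByName("ep-1")
	fmt.Println(ep.ip, err) // 192.168.1.2 <nil>
}
```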
```diff
@@ -703,7 +708,6 @@ func TestCreateLoadBalancer(t *testing.T) {
 		t.Errorf("%v does not match %v", svcInfo.hnsID, guid)
 	}
 }
-
 }

 func TestCreateDsrLoadBalancer(t *testing.T) {
```
```diff
@@ -765,6 +769,11 @@ func TestCreateDsrLoadBalancer(t *testing.T) {
 	if svcInfo.localTrafficDSR != true {
 		t.Errorf("Failed to create DSR loadbalancer with local traffic policy")
 	}
+	if len(svcInfo.loadBalancerIngressIPs) == 0 {
+		t.Errorf("svcInfo does not have any loadBalancerIngressIPs, %+v", svcInfo)
+	} else if svcInfo.loadBalancerIngressIPs[0].healthCheckHnsID != guid {
+		t.Errorf("The Hns Loadbalancer HealthCheck Id %v does not match %v. ServicePortName %q", svcInfo.loadBalancerIngressIPs[0].healthCheckHnsID, guid, svcPortName.String())
+	}
 }
```

> **Review comment** (on the added check): This test condition fails. Looking at the structs, it's not clear what should actually be compared here or what the expectation is.

> **Review comment:** The fake needs to be changed a little.
```diff
@@ -894,9 +894,10 @@ var _ = SIGDescribe("Services", func() {
 		validateEndpointsPortsOrFail(cs, ns, serviceName, portsByPodName{})
 	})

-	ginkgo.It("should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]", func() {
+	ginkgo.It("should preserve source pod IP for traffic thru service cluster IP", func() {
+		// TODO(jeremyje): Determine which parts of this test work.
 		// this test is creating a pod with HostNetwork=true, which is not supported on Windows.
-		e2eskipper.SkipIfNodeOSDistroIs("windows")
+		//e2eskipper.SkipIfNodeOSDistroIs("windows")

 		// This behavior is not supported if Kube-proxy is in "userspace" mode.
 		// So we check the kube-proxy mode and skip this test if that's the case.
```

> **Review comment** (on un-skipping the test): This test does not pass locally. I'm not sure why yet.

> **Review comment:** I don't think this test is related. It creates a ClusterIP service and checks whether the source IP is preserved; this PR's case is for LoadBalancer services.
> **Review comment:** cbr0 is GCP specific (https://kubernetes.io/docs/concepts/cluster-administration/networking/#google-compute-engine-gce), can this be generalized?

> **Review comment:** It also looks like this can collide, because there could be multiple endpoints with the same name.

> **Review comment:** Yes, I don't think we should be putting GCP-specific logic here. Where is the current cbr0 endpoint being created?

> **Review comment:** We do not usually apply policies to endpoints other than the one that is part of the service. What is the actual scenario/data path you are trying to enable with this fix? Can you please describe it? That would help us figure out the correct solution here.

> **Review comment:** It is trying to let the GCE load balancer health check the nodes. The way the GCE load balancer works, it doesn't do DNAT: instead of getting `<node-ip>:<health-check-node-port>`, the node gets `<service-vip>:<health-check-node-port>`, so we need to do this to forward the health check packet to kube-proxy's internal health check app. I have another PR open for further review: #99287. Let's move discussion there.