Fix a small regression in Service updates #104601
Conversation
@thockin: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This PR may require API review. If so, when the changes are ready, complete the pre-review checklist and request an API review. Status of requested reviews is tracked in the API Review project.
// Build a set of all the ports in oldSvc that are also in newSvc. We know
// we can't patch these values.
used := nodePortsUsed(oldSvc).Intersection(nodePortsUsed(newSvc))
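The intersection idea in the hunk above can be sketched outside the k8s codebase. This is a rough illustration only: the `service` struct, the plain `map[int]bool` set, and this version of `nodePortsUsed` are toy stand-ins for the real `api.Service` and the apimachinery sets package. Note it also folds `healthCheckNodePort` into the used set, which is the point Jordan's example raises.

```go
package main

import "fmt"

// service is a toy stand-in for api.Service; only the fields needed here.
type service struct {
	nodePorts           []int
	healthCheckNodePort int
}

// nodePortsUsed gathers every node port a service holds, including the
// healthCheckNodePort (a simplified sketch of the real helper).
func nodePortsUsed(s service) map[int]bool {
	used := map[int]bool{}
	for _, p := range s.nodePorts {
		if p != 0 {
			used[p] = true
		}
	}
	if s.healthCheckNodePort != 0 {
		used[s.healthCheckNodePort] = true
	}
	return used
}

// intersect returns the ports present in both sets; these are held by both
// the old and new Service, so the update path must not release or reassign
// them.
func intersect(a, b map[int]bool) map[int]bool {
	out := map[int]bool{}
	for p := range a {
		if b[p] {
			out[p] = true
		}
	}
	return out
}

func main() {
	oldSvc := service{nodePorts: []int{30872, 31310}}
	newSvc := service{nodePorts: []int{31310, 0}} // second port left for allocation
	fmt.Println(intersect(nodePortsUsed(oldSvc), nodePortsUsed(newSvc)))
}
```

Only 31310 survives the intersection here, so only that value is pinned; the unspecified port is free to be allocated.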
Jordan's example considers healthCheckNodePort too https://github.com/kubernetes/kubernetes/pull/103532/files#r694831725
If the user sets a new NodePort value that matches the currently allocated HealthCheckNodePort, should we change the HealthCheckNodePort?
What about the `*swap-port-with-hcnp` tests? Do you consider that those should fail?
Yes, that has always failed, since the node port allocation step happens before the HCNP deallocation. I am fine with that continuing to fail, and honestly, letting nodeports fail isn't SO bad, but it is a breakage in 1.22 vs 1.21.
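The ordering problem described here can be illustrated with a toy allocator: because the node-port allocation step runs before the HCNP deallocation step, swapping a nodePort onto the old HCNP value conflicts with itself. Nothing below is the real apiserver code; the allocator type and port values are hypothetical.

```go
package main

import "fmt"

// portAllocator is a toy stand-in for the apiserver's node-port allocator.
type portAllocator map[int]bool

// allocate claims a port, failing if it is already held.
func (a portAllocator) allocate(p int) error {
	if a[p] {
		return fmt.Errorf("port %d is already allocated", p)
	}
	a[p] = true
	return nil
}

// release frees a previously claimed port.
func (a portAllocator) release(p int) { delete(a, p) }

func main() {
	alloc := portAllocator{}
	alloc.allocate(30000) // existing nodePort
	alloc.allocate(31000) // existing healthCheckNodePort

	// The update tries to move the nodePort onto the old HCNP value. The
	// allocation step runs first, so the value still looks taken and the
	// update is rejected; the HCNP would only be released afterwards.
	if err := alloc.allocate(31000); err != nil {
		fmt.Println("update rejected:", err)
	}
	alloc.release(31000) // too late: deallocation happens after the check
}
```

Reversing the two steps (release first, then allocate) would make the swap succeed, which is why this is an ordering question rather than an allocator bug.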
I have doubts this is a regression (https://github.com/kubernetes/kubernetes/pull/103532/files#r696807794); technically for nodeport it is, but for the user this update never worked. Quoting the comment there: a reproducer as an integration test is at https://github.com/kubernetes/kubernetes/compare/master...aojea:svc_carry_node_ports_test?expand=1, and it fails both in 1.22 and in 1.21.2.

If we can argue that there is no regression for the user (it fails one way or the other), can we go with the port-conflict approach and protect users from doing weird things (and remove more complex logic in the API)? 😅
The failure based on the missing cluster IP with my example is orthogonal to a user successfully making use of this behavior. I agree this is a small issue (the user steps to hit it are unlikely), but I am constantly amazed by the... creativity of API use people show to make things work, and if we can avoid introducing a new failure, I think we should do so.
You are right, the patch was already broken, but the replace worked as you are describing.

LGTM, just the test questions.
Force-pushed from f1099a0 to f8efea2 (Compare)
One comment. One question. But can merge as is.
@@ -622,48 +622,210 @@ func TestServiceRegistryUpdate(t *testing.T) {
	}

func TestServiceRegistryUpdateUnspecifiedAllocations(t *testing.T) {
	type proof func(t *testing.T, s *api.Service)
	prove := func(proofs ...proof) []proof {
		return proofs
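The `proof`/`prove` pattern in this test can be sketched as a runnable toy. Everything here is illustrative, not the real test code: the real proofs take `*testing.T` and `*api.Service`, while this sketch uses a tiny `service` struct and a hypothetical `hasNodePort` helper (including a bounds check on the index).

```go
package main

import "fmt"

// service is a toy stand-in for api.Service.
type service struct{ nodePorts []int }

// proof is a named assertion run against the Service a test case produced.
type proof func(s service) error

// prove just bundles proofs so a table-driven test entry reads naturally.
func prove(proofs ...proof) []proof { return proofs }

// hasNodePort builds a proof that port idx exists and carries the expected
// value; note the explicit bounds check before indexing.
func hasNodePort(idx, value int) proof {
	return func(s service) error {
		if idx >= len(s.nodePorts) {
			return fmt.Errorf("no port at index %d", idx)
		}
		if s.nodePorts[idx] != value {
			return fmt.Errorf("port %d: got %d, want %d", idx, s.nodePorts[idx], value)
		}
		return nil
	}
}

func main() {
	svc := service{nodePorts: []int{30872, 31310}}
	for _, p := range prove(hasNodePort(0, 30872), hasNodePort(1, 31310)) {
		if err := p(svc); err != nil {
			fmt.Println("FAIL:", err)
			return
		}
	}
	fmt.Println("all proofs passed")
}
```

Each table entry carries its own list of proofs, so a test case reads as "do the update, then prove these properties hold".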
Low pri, on all the proves (I don't think we should fix this, but we should ack it): `idx` is never tested against `len(<the thing we want to test>)`. This could yield an undesirable error, especially since I think these should be part of make.go.
The good news is that they will panic if wrong.
I agree finding a way to re-use these will be nice - as a followup to the mega PR :)
Prior to 1.22 a user could change NodePort values within a service during an update, and the apiserver would allocate values for any that were not specified.

Consider a YAML like:

```
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  type: NodePort
  ports:
  - name: p
    port: 80
  - name: q
    port: 81
  selector:
    app: foo
```

When this is created, nodeport values will be allocated for each port. Something like:

```
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  clusterIP: 10.0.149.11
  type: NodePort
  ports:
  - name: p
    nodePort: 30872
    port: 80
    protocol: TCP
    targetPort: 9376
  - name: q
    nodePort: 31310
    port: 81
    protocol: TCP
    targetPort: 81
  selector:
    app: foo
```

If the user PUTs (kubectl replace) the original YAML, we would see that `.nodePort = 0`, and allocate new ports. This was ugly at best.

In 1.22 we fixed this to not allocate new values if we still had the old values, but instead re-assign them. Net new ports would still be seen as `.nodePort = 0` and so new allocations would be made.

This broke a corner case as follows. Prior to 1.22, the user could PUT this YAML:

```
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  type: NodePort
  ports:
  - name: p
    nodePort: 31310  # note this is the `q` value
    port: 80
  - name: q          # note this nodePort is not specified
    port: 81
  selector:
    app: foo
```

The `p` port would take the `q` port's value. The `q` port would be seen as `.nodePort = 0` and a new value allocated. In 1.22 this results in an error (duplicate value in `p` and `q`).

This is VERY minor but it is an API regression, which we try to avoid, and the fix is not too horrible.

This commit adds more robust testing of this logic.
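The corner case above can be sketched as a small carry-forward routine: keep any value the user specified, re-assign the old allocation for unspecified ports, but skip a carried-forward value the user already reused on another port, so it can be freshly allocated instead of causing a duplicate. This is a simplification with hypothetical names, not the actual Kubernetes update path.

```go
package main

import "fmt"

// port is a toy stand-in for api.ServicePort.
type port struct {
	name     string
	nodePort int
}

// patchNodePorts carries old nodePort allocations forward onto unspecified
// ports in the updated spec, unless the user already reused that value on a
// different port (in which case the port is left at 0 for fresh allocation).
func patchNodePorts(oldPorts, newPorts []port) {
	oldByName := map[string]int{}
	for _, p := range oldPorts {
		oldByName[p.name] = p.nodePort
	}
	specified := map[int]bool{}
	for _, p := range newPorts {
		if p.nodePort != 0 {
			specified[p.nodePort] = true
		}
	}
	for i := range newPorts {
		if newPorts[i].nodePort != 0 {
			continue // user chose a value; keep it
		}
		if old, ok := oldByName[newPorts[i].name]; ok && !specified[old] {
			newPorts[i].nodePort = old // safe to re-assign the old value
		}
		// otherwise leave 0 so the allocator picks a fresh port
	}
}

func main() {
	oldPorts := []port{{"p", 30872}, {"q", 31310}}
	newPorts := []port{{"p", 31310}, {"q", 0}} // the corner case from above
	patchNodePorts(oldPorts, newPorts)
	// q's old value (31310) was reused on p, so q stays 0 and gets a new
	// allocation rather than producing a duplicate-value error.
	fmt.Println(newPorts)
}
```

In the plain replace case (all nodePorts unspecified), the same routine re-assigns both old values, which is the 1.22 behavior the first half of the message describes.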
Force-pushed from f8efea2 to 73503a4 (Compare)
Small comment cleanup and rebase.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: khenidak, thockin

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
/retest
Thanks to @liggitt for spotting this one ex post facto.
/kind bug
/kind api-change
/kind regression
xref https://github.com/kubernetes/kubernetes/pull/103532/files#r694831725