
createOrReplace methods should behave like kubectl #2454

Closed
manusa opened this issue Sep 1, 2020 · 15 comments

Comments

@manusa
Member

manusa commented Sep 1, 2020

Description

kubectl only replaces server resources when the local resource has changed and differs from the one deployed on the cluster.
If the local resource is unchanged, no REST API is hit.

In the scope of #2372 we tried to mimic the behavior of kubectl, but the methods implemented in ResourceComparison are incomplete and led to #2445.

We need to check how kubectl determines whether the local resource differs from, or is unchanged compared to, the one in the cluster.

Challenges:

  • Greedy vs. conservative behavior
  • Unknown structure of resources: the metadata and spec fields alone are clearly not enough
  • Order of fields, arrays, etc.
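The field-order challenge can be illustrated with a small stdlib sketch: comparing serialized text is order-sensitive, while comparing parsed map structures is not (arrays, however, stay order-sensitive either way):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FieldOrderDemo {
    public static void main(String[] args) {
        // Same resource fields, inserted in a different order
        Map<String, Object> local = new LinkedHashMap<>();
        local.put("replicas", 2);
        local.put("selector", Map.of("app", "hello-dep"));

        Map<String, Object> remote = new LinkedHashMap<>();
        remote.put("selector", Map.of("app", "hello-dep"));
        remote.put("replicas", 2);

        // Serialized forms differ because the key order differs
        System.out.println(local.toString().equals(remote.toString())); // false
        // Map equality ignores key order
        System.out.println(local.equals(remote)); // true
    }
}
```

This is why a comparison based on parsed structures is more robust than comparing raw YAML/JSON text, but it still does not solve ordering inside arrays or server-defaulted fields.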
@rohanKanojia
Member

I think kubectl adds the last applied object as an annotation and compares the new request against it when replacing.

Here is an example for doing createOrReplace on a Deployment:

k8-resource-yamls : $ kubectl apply -f ~/work/k8-resource-yamls/hello-deployment.yaml 
deployment.apps/hello-dep created
k8-resource-yamls : $ kubectl get deploy
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
hello-dep          0/2     2            0           5s
hello-kubernetes   1/1     1            1           5d1h
random-generator   0/1     1            0           21h
k8-resource-yamls : $ kubectl apply -f ~/work/k8-resource-yamls/hello-deployment.yaml 
deployment.apps/hello-dep unchanged

When I check hello-dep, I'm able to see kubectl.kubernetes.io/last-applied-configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"hello-dep","namespace":"default"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"hello-dep"}},"template":{"metadata":{"labels":{"app":"hello-dep"}},"spec":{"containers":[{"image":"gcr.io/google-samples/hello-app:1.0","imagePullPolicy":"Always","name":"hello-dep","ports":[{"containerPort":8080}]}]}}}}
  creationTimestamp: "2020-09-09T11:17:24Z"
  generation: 1
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        f:progressDeadlineSeconds: {}

...

Here is an example for createOrReplace with Service:

k8-resource-yamls : $ kubectl create -f test-service.yml  
service/my-service created
k8-resource-yamls : $ kubectl apply -f test-service.yml 
service/my-service unchanged
k8-resource-yamls : $ kubectl get svc my-service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
my-service   ClusterIP   10.98.201.134   <none>        80/TCP    18m
k8-resource-yamls : $ kubectl get svc my-service -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-service","namespace":"default"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":9376}],"selector":{"app":"MyApp"}}}
  creationTimestamp: "2020-09-09T10:57:58Z"
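The comparison this annotation enables can be sketched as follows. Note this is a simplification: real kubectl apply performs a three-way merge between the last-applied snapshot, the live object, and the new configuration, not a plain equality check. The annotation key is real; the helper names are illustrative:

```java
import java.util.Map;

public class LastAppliedCheck {
    static final String LAST_APPLIED = "kubectl.kubernetes.io/last-applied-configuration";

    // Returns true when the locally-serialized resource matches the snapshot
    // stored on the server object. Illustrative only: kubectl actually computes
    // a three-way merge patch rather than a simple equality check.
    static boolean unchanged(Map<String, String> serverAnnotations, String localJson) {
        return localJson.equals(serverAnnotations.get(LAST_APPLIED));
    }

    public static void main(String[] args) {
        String snapshot = "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"spec\":{\"ports\":[{\"port\":80}]}}";
        Map<String, String> annotations = Map.of(LAST_APPLIED, snapshot);
        System.out.println(unchanged(annotations, snapshot));                     // true
        System.out.println(unchanged(annotations, snapshot.replace("80", "81"))); // false
    }
}
```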

@manusa
Member Author

manusa commented Sep 9, 2020

So basically kubectl adds a JSON snapshot of the provided YAML whenever a kubectl apply -f ... operation is performed, right?

I'm not sure this is a behavior we want to implement (at least as a default behavior).

If we add such an annotation, it will pollute and clutter the annotations of any resource created with the Kubernetes Client.
It will also be somewhat "unstable": a createOrReplace on a resource that was created or modified externally (not through the Kubernetes Client) will behave inconsistently.

I really don't see any advantage of going further with this.

@rohanKanojia
Member

What shall we do about users complaining that the replace is not being skipped, as in issue #2439?

@manusa
Member Author

manusa commented Sep 9, 2020

I think that issue is based on wrong expectations (IMHO).

Issue:

  1. List of resources (service + deployment) is loaded from YAML file
  2. Deserialized resource list is applied to the cluster using Kubernetes-Client
  3. YAML deployment entry is modified externally
  4. YAML is re-deserialized into a new resource list
  5. The complete list of resources is applied to the cluster using Kubernetes-Client
  6. Since a PUT request is sent for the service, the cluster updates the NodePort, as it has not been statically provided in the configuration

I see many flaws here:

  • Why is the YAML file updated externally?
    • Depending on the answer to that, maybe resource versioning should be implemented by the consuming project.
  • In the previous scenario, what happens if the service already existed with the exact same properties (kubectl create service ...)?
  • Following the previous expectation, if we implement this, how should our Kubernetes Client behave when an existing resource is already present in the cluster with the same properties but doesn't contain the snapshot annotation? Won't this be confusing for users?
  • ...

For me, the use case described in #2439 needs some changes. IMO, providing an implementation to fix this will bring far more problems (see #2445 plus all the other resources that failed too) and will only help solve an issue that can probably be fixed with a much simpler approach.

@saparaj

saparaj commented Sep 21, 2020

@manusa Agree with you on the annotations part: it would pollute and clutter the annotations of any resource created with the Kubernetes Client. Regarding the use case from #2439, can you please elaborate on how it can be solved with a simpler approach?

@manusa
Member Author

manusa commented Sep 28, 2020

Hi @saparaj

The easiest way would be to provide a comparison method yourself, since you know which fields are bound to be changed (in case these are more or less always the same).

I really don't understand the purpose that lies beneath loading resources from a YAML file and then applying them several times. Could you elaborate a little bit more on what you are doing? I really think there must be something simpler that can be done.

If this procedure is something you really need, and you need it in an abstract way, an approach would be to load the local list, retrieve each individual resource from the server, and finally merge the server resources with those loaded from your file. There are dozens of libraries that will do this for you; you can even use Jackson's ObjectMapper (deserialize the object to be merged into the server object). You would then send updates for each of the existing objects, and creates for those that don't exist.
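A minimal stdlib sketch of such a merge, assuming resources have already been deserialized into generic maps (the class and method names are hypothetical, not fabric8 API):

```java
import java.util.HashMap;
import java.util.Map;

public class ResourceMerge {
    // Recursively overlays locally-loaded fields onto the server state, so
    // server-assigned values (e.g. a randomly allocated nodePort) survive
    // unless the local file sets them explicitly.
    @SuppressWarnings("unchecked")
    static Map<String, Object> merge(Map<String, Object> server, Map<String, Object> local) {
        Map<String, Object> out = new HashMap<>(server);
        local.forEach((key, value) -> {
            Object existing = out.get(key);
            if (value instanceof Map && existing instanceof Map) {
                out.put(key, merge((Map<String, Object>) existing, (Map<String, Object>) value));
            } else {
                out.put(key, value);
            }
        });
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> server = Map.of("spec", Map.of("type", "NodePort", "nodePort", 30123));
        Map<String, Object> local  = Map.of("spec", Map.of("type", "NodePort"));
        // The server-assigned nodePort is preserved because the local file omits it
        System.out.println(merge(server, local));
    }
}
```

In practice you would let Jackson do the heavy lifting (e.g. ObjectMapper.readerForUpdating(serverObject).readValue(localJson)), which applies the same overlay semantics onto a typed object.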

Again, I would insist on doing something much simpler. In your case the only problem is the port value changing in your service (since it's randomly selected by Kubernetes). I would simply try to get the service from the server and update the port value before reapplying the list.

@saparaj

saparaj commented Oct 13, 2020

Hi @manusa,

What is the current logic of the createOrReplace method, then? I was assuming it would be similar to kubectl apply. Are there any low-level APIs in the fabric8 client that we can leverage to write the compare method ourselves?

Thanks,
Swarda

@manusa
Member Author

manusa commented Oct 14, 2020

You can see the current behavior in the following lines:

try {
  // Create
  KubernetesResourceUtil.setResourceVersion(itemToCreateOrReplace, null);
  return create(itemToCreateOrReplace);
} catch (KubernetesClientException exception) {
  if (exception.getCode() != HttpURLConnection.HTTP_CONFLICT) {
    throw exception;
  }
  // Conflict; Do Replace
  final T itemFromServer = fromServer().get();
  KubernetesResourceUtil.setResourceVersion(itemToCreateOrReplace,
      KubernetesResourceUtil.getResourceVersion(itemFromServer));
  return replace(itemToCreateOrReplace);
}

1. The client tries to create the resource for the provided name and namespace.
2.A. If the resource doesn't exist in the cluster, the response is OK and the new resource is returned.
2.B. Otherwise, an exception is thrown while creating the resource.
3.A. If the exception is not related to the resource already existing (CONFLICT), the exception is rethrown.
3.B. If the exception indicates an already existing resource (CONFLICT), the existing object is retrieved from the server.
4. The resource version of the object to be created/replaced is updated to the one from the server (to avoid a conflict).
5. The object in the server is replaced.

In a nutshell, as the method name suggests, the object is either created or replaced (always). kubectl apply would probably fit a method named createOrReplaceIfNotEqual.

The way apply works in kubectl is by serializing the object into an annotation of the same object prior to persisting it in the cluster. Then, when upserting the object, it compares the local version with the serialized annotation in the server.

You could mimic the kubectl apply behavior yourself by using Serialization.asJson(object) and adding the annotation to the object before it's created, then comparing this annotation in subsequent replace operations.
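A minimal sketch of that createOrReplaceIfNotEqual idea; the annotation key and helper names here are illustrative, not fabric8 API, and the actual replace call is stubbed out as a function parameter:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

public class CreateOrReplaceIfNotEqual {
    // Hypothetical annotation key for the serialized snapshot
    static final String SNAPSHOT = "kubernetes-client/last-applied";

    // Skips the replace when the stored snapshot matches the newly-serialized
    // resource; otherwise stores the new snapshot and performs the replace.
    static String upsert(Map<String, String> serverAnnotations, String localJson,
                         UnaryOperator<String> replaceOnServer) {
        if (localJson.equals(serverAnnotations.get(SNAPSHOT))) {
            return "unchanged";
        }
        serverAnnotations.put(SNAPSHOT, localJson);
        return replaceOnServer.apply(localJson);
    }

    public static void main(String[] args) {
        Map<String, String> annotations = new HashMap<>();
        System.out.println(upsert(annotations, "{\"spec\":1}", json -> "replaced")); // replaced
        System.out.println(upsert(annotations, "{\"spec\":1}", json -> "replaced")); // unchanged
    }
}
```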

I think all of this came up from #2439 (comment). Maybe you could provide a repo link or some code where this is being used so we can provide better and more suited alternatives for your use-case.

@saparaj

saparaj commented Oct 23, 2020

@manusa Thank you for the detailed explanation. The use case I am trying to achieve is using the fabric8io client to deploy a manifest to a cluster, which can happen n number of times, and the YAML manifest contains multiple resources. I hit issue #2439 when I executed createOrReplace() twice for the same YAML in which only the deployment's Docker path was updated. I found this old issue related to the fabric8io plugin (fabric8io/fabric8-maven-plugin#894), so I thought it might be similar in the client as well.

@manusa
Member Author

manusa commented Jan 8, 2021

From Gitter

@saparaj: is there a way to invoke server-side apply in fabric8io client? https://kubernetes.io/blog/2020/04/01/kubernetes-1.18-feature-server-side-apply-beta-2/

@manusa: Hi, this definitely seems to be the way to go for the createOrReplace issue.
What kubectl does is a PATCH request,
e.g. for: https://raw.githubusercontent.com/kubernetes/website/master/content/es/examples/controllers/nginx-deployment.yaml
This is the transaction:

I0108 07:08:58.326421 1889240 round_trippers.go:425] curl -k -v -XPATCH  -H "Accept: application/json" -H "Content-Type: application/apply-patch+yaml" -H "User-Agent: kubectl/v1.20.1 (linux/amd64) kubernetes/c4d7527" 'https://192.168.49.2:8443/apis/apps/v1/namespaces/default/deployments/nginx-deployment?fieldManager=kubectl&force=false'
I0108 07:08:58.334878 1889240 round_trippers.go:445] PATCH https://192.168.49.2:8443/apis/apps/v1/namespaces/default/deployments/nginx-deployment?fieldManager=kubectl&force=false 201 Created in 8 milliseconds
I0108 07:08:58.334896 1889240 round_trippers.go:451] Response Headers:
I0108 07:08:58.334905 1889240 round_trippers.go:454]     Content-Type: application/json
I0108 07:08:58.334929 1889240 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 3a01a150-e4f9-4778-8b6d-7d8c0601e80a
I0108 07:08:58.334936 1889240 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 951d22f7-a734-4eb1-8213-01a4cc56ea13
I0108 07:08:58.334943 1889240 round_trippers.go:454]     Content-Length: 1412
I0108 07:08:58.334949 1889240 round_trippers.go:454]     Date: Fri, 08 Jan 2021 06:08:58 GMT
I0108 07:08:58.334955 1889240 round_trippers.go:454]     Cache-Control: no-cache, private
I0108 07:08:58.334989 1889240 request.go:1107] Response Body: {"kind":"Deployment","apiVersion":"apps/v1","metadata":{"name":"nginx-deployment","namespace":"default","uid":"119ab268-eda2-46ea-9b95-be0d628d1c86","resourceVersion":"799375","generation":1,"creationTimestamp":"2021-01-08T06:08:58Z","labels":{"app":"nginx"},"managedFields":[{"manager":"kubectl","operation":"Apply","apiVersion":"apps/v1","time":"2021-01-08T06:08:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{"f:app":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{"f:app":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"nginx\"}":{".":{},"f:image":{},"f:name":{},"f:ports":{"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{}}}}}}}}}}]},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"nginx"}},"spec":{"containers":[{"name":"nginx","image":"nginx:1.7.9","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","securityContext":{},"schedulerName":"default-scheduler"}},"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxUnavailable":"25%","maxSurge":"25%"}},"revisionHistoryLimit":10,"progressDeadlineSeconds":600},"status":{}}
deployment.apps/nginx-deployment serverside-applied
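The transaction above boils down to a plain HTTP PATCH with the apply-patch+yaml content type and a fieldManager query parameter; a sketch using java.net.http (Java 11+). The cluster URL is taken from the log above; the request is only built here, not sent:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ServerSideApplyRequest {
    public static void main(String[] args) {
        String yaml = "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n";
        // Mirrors the kubectl round-trip logged above: PATCH with the
        // apply-patch+yaml content type plus fieldManager/force parameters.
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://192.168.49.2:8443/apis/apps/v1/namespaces/default"
                + "/deployments/nginx-deployment?fieldManager=kubectl&force=false"))
            .header("Content-Type", "application/apply-patch+yaml")
            .method("PATCH", HttpRequest.BodyPublishers.ofString(yaml))
            .build();
        System.out.println(request.method());                                   // PATCH
        System.out.println(request.headers().firstValue("Content-Type").get()); // application/apply-patch+yaml
    }
}
```

A real call would also need the cluster's TLS configuration and bearer token, which the fabric8 client already manages internally.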

@saparaj

saparaj commented Jan 11, 2021

@manusa As per the documentation, server-side apply is supported in API server version 1.16.0 or greater. Can we have this added to the fabric8io client instead of making createOrReplace behave like kubectl apply?

@rohanKanojia
Member

How should we proceed on this? It seems Server-Side Apply is just a plain PATCH request with "Content-Type: application/apply-patch+yaml". Shall we change the behavior of createOrReplace() to do a patch with the provided content type instead of a replace? Or shall we introduce a new DSL method, serverSideApply(...)?

@stale

stale bot commented Jul 8, 2021

This issue has been automatically marked as stale because it has not had any activity for 90 days. It will be closed if no further activity occurs within 7 days. Thank you for your contributions!

@stale stale bot added the status/stale label Jul 8, 2021
@stale stale bot closed this as completed Jul 15, 2021
@jorsol
Contributor

jorsol commented Jul 20, 2021

Server-Side Apply needs more than just a plain PATCH. I'm not fully clear on all the details, but metadata.managedFields must be set to null or you will get:

{
  "kind":"Status",
  "apiVersion":"v1",
  "metadata":{
    
  },
  "status":"Failure",
  "message":"metadata.managedFields must be nil",
  "reason":"BadRequest",
  "code":400
}

Actually, both the apply and update operations must be considered; it is the apply operation that must set managedFields to null:

For instance, only the apply operation fails on conflicts while update does not. Also, apply operations are required to identify themselves by providing a fieldManager query parameter, while the query parameter is optional for update operations. Finally, when using the apply operation you cannot have managedFields in the object that is being applied.
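Per the quoted docs, an apply payload must not carry metadata.managedFields, or the API server rejects it with the 400 shown above. A minimal sketch of stripping the field before sending, operating on a resource deserialized into a generic map (the helper name is hypothetical):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StripManagedFields {
    // Removes metadata.managedFields in place so the resource can be used
    // as the body of an apply-patch+yaml request.
    @SuppressWarnings("unchecked")
    static void stripManagedFields(Map<String, Object> resource) {
        Object metadata = resource.get("metadata");
        if (metadata instanceof Map) {
            ((Map<String, Object>) metadata).remove("managedFields");
        }
    }

    public static void main(String[] args) {
        Map<String, Object> metadata = new HashMap<>();
        metadata.put("name", "nginx-deployment");
        metadata.put("managedFields", List.of());
        Map<String, Object> resource = new HashMap<>();
        resource.put("metadata", metadata);
        stripManagedFields(resource);
        System.out.println(metadata.containsKey("managedFields")); // false
    }
}
```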

Not quite sure how it would fit in the DSL but something like this probably needs to be added:

...
.serverSideApply(String fieldManager)
.createOrReplace(T resource)

@uklance

uklance commented Mar 31, 2023
