
feature request: preserve property order #88

Open
bluebrown opened this issue Feb 2, 2023 · 8 comments
bluebrown commented Feb 2, 2023

Hi, I know why the order is not preserved: it's due to the underlying implementation of the Go package. However, given that many Kubernetes tools rely on this yaml package, and its output is typically read by humans, I think it would be nice to add an option to preserve the order.

In this issue are some ideas and even concrete implementations on how to achieve this. Something could be done based on those ideas.

As an example use case, when using kustomize, the yaml manifests get all shuffled around, which is fine when applying them directly to the cluster, but not when the output is source-controlled for GitOps purposes. It becomes almost unreadable and creates huge diffs.

I tried creating a custom formatter that could be run on top of anything produced by this package, but it's very hard to get right, because this yaml package has so much magic handling of Kubernetes objects that cannot easily be reproduced with other standard packages. For example, decoding a pod into a pod struct and then using the standard yaml package to get conventional ordering does not produce something usable, since the standard package does not implement the special handling for Kubernetes objects.

Ultimately, I think the right place to preserve ordering, or to produce conventional ordering, is this yaml package itself.

@bluebrown
Author

I see there are some discussions about using a different json implementation (#17). Perhaps this feature could be added to https://github.com/kubernetes-sigs/json, so that when this yaml package switches to it, it gains the feature as well.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 3, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 2, 2023
@bluebrown
Author

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 10, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 22, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 21, 2024
@sergeyshevch

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Mar 21, 2024

mattwelke commented Mar 22, 2024

Glad to see this remains open. I'm affected by this right now.

My use case involves putting k8s objects into a custom struct alongside some properties of my own and then marshaling that struct to YAML with property casing (camelCase) and property order preserved. I want property order preserved because I want the marshaled data to be human-readable by k8s users. I want to choose a property order I consider idiomatic for k8s users.

I find that I'm faced with a trade-off.

  • If I use gopkg.in/yaml.v3 to marshal my struct to YAML, my own field names are preserved (as long as I use yaml struct tags to indicate the property names I want). But the names of the properties of the nested k8s objects aren't preserved. They lose camelCase and become all lowercase.
  • If I use this library to marshal my struct to YAML (using json struct tags instead of yaml struct tags), the properties of the nested k8s objects are preserved, being properly camelCase. Also, the names of my own properties are preserved. But the property order of the struct becomes alphabetical.

Example:

```go
package main

import (
	"fmt"

	gopkg_yaml "gopkg.in/yaml.v3"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	sigs_yaml "sigs.k8s.io/yaml"
)

func gopkgYAML() {
	type myStruct struct {
		Z        string            `yaml:"z"`
		Metadata metav1.ObjectMeta `yaml:"metadata"`
	}

	yaml, err := gopkg_yaml.Marshal(myStruct{
		Z: "z",
		Metadata: metav1.ObjectMeta{
			Name:      "TestName",
			Namespace: "TestNamespace",
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(yaml))
}

func sigsYAML() {
	type myStruct struct {
		Z        string            `json:"z"`
		Metadata metav1.ObjectMeta `json:"metadata"`
	}

	yaml, err := sigs_yaml.Marshal(myStruct{
		Z: "z",
		Metadata: metav1.ObjectMeta{
			Name:      "TestName",
			Namespace: "TestNamespace",
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(yaml))
}

func main() {
	fmt.Println("gopkg.in/yaml.v3\n")
	gopkgYAML()
	fmt.Println("sigs.k8s.io/yaml\n")
	sigsYAML()
}
```
Output:

```
gopkg.in/yaml.v3

z: z
metadata:
    name: TestName
    generatename: ""
    namespace: TestNamespace
    selflink: ""
    uid: ""
    resourceversion: ""
    generation: 0
    creationtimestamp: "0001-01-01T00:00:00Z"
    deletiontimestamp: null
    deletiongraceperiodseconds: null
    labels: {}
    annotations: {}
    ownerreferences: []
    finalizers: []
    managedfields: []

sigs.k8s.io/yaml

metadata:
  creationTimestamp: null
  name: TestName
  namespace: TestNamespace
z: z
```

Note how my custom property z is displayed at the bottom when I use sigs.k8s.io/yaml to marshal the struct.

Zero-value properties are also marshaled differently, but that's not what I'm demonstrating here. I'm just including the exact output of my test code for the sake of accuracy.

It sounds like people have a pretty good idea of what needs to change (perhaps a change within this library to preserve property order), but in the meantime, I'm wondering what workarounds folks might know of.
