
Documented wildcard matching for replacement is broken #5375

Closed
macetw opened this issue Oct 11, 2023 · 6 comments
Labels: kind/bug, lifecycle/rotten, needs-triage

Comments


macetw commented Oct 11, 2023

What happened?

I am using a glob pattern on my ServiceMonitor endpoints (a prometheus-operator construct) and I cannot set port 8080 on all of them -- and I have tons of them. I want to apply the port number (and relabelings, actually) to every endpoint, but the replacements construct, as documented, is not working.
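
For context, the ServiceMonitor case looks roughly like the sketch below; the ConfigMap name, key, and ServiceMonitor name are hypothetical and only illustrate the shape of the replacement:

replacements:
  - source:
      kind: ConfigMap
      name: monitoring-config         # hypothetical ConfigMap holding the shared port value
      fieldPath: data.metricsPort
    targets:
      - select:
          kind: ServiceMonitor
          name: my-servicemonitor     # hypothetical ServiceMonitor with many endpoints
        fieldPaths:
          - spec.endpoints.*.port     # documented wildcard: should apply to every endpoint
        options:
          create: true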

Here's my example, not using a ServiceMonitor, but just a plain-old Pod with environment variables.

kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: mynamespace
resources:
  - podexample.yaml
configMapGenerator:
  - name: example
    literals:
      - username=macetw
replacements:
  - source:
      kind: ConfigMap
      name: example
      fieldPath: data.username
    targets:
      - select:
          kind: Pod
          name: mypod
        fieldPaths:
          - spec.containers.[name=mycontainer].env.*.value
        options:
          create: true

podexample.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: mynamespace
spec:
  containers:
    - name: mycontainer
      image: docker.io/alpine:latest
      command:
        - sleep
        - inf
      env:
        - name: VARIABLE1
        - name: VARIABLE2

And my result:

$ kubectl apply -k . --dry-run -o yaml
W1011 14:14:54.782699  640700 helpers.go:660] --dry-run is deprecated and can be replaced with --dry-run=client.
error: wrong Node Kind for spec.containers.env expected: MappingNode was SequenceNode: value: {- name: USER_NAME

I am using version 4.5.4. This feature is included in that release.

#4424
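
As a side note, kubectl bundles its own copy of kustomize, so the version that matters for kubectl apply -k is the embedded one. A quick way to check it (recent kubectl releases print a "Kustomize Version" line; the standalone binary only matters if you invoke it directly):

# Embedded version used by `kubectl apply -k` / `kubectl kustomize`
kubectl version --client
# Standalone binary, if installed separately
kustomize version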

This is from the docs, here:
https://github.com/kubernetes-sigs/cli-experimental/blob/master/site/content/en/references/kustomize/kustomization/replacements/_index.md#index

Docs say:

This will target every element in the list.

... but it does not work this way.
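
As a possible workaround on the affected version (not verified here), the list indices can be enumerated explicitly instead of using the wildcard, at the cost of keeping the fieldPaths in sync with the length of the env list:

        fieldPaths:
          - spec.containers.[name=mycontainer].env.0.value   # explicit index instead of *
          - spec.containers.[name=mycontainer].env.1.value   # one entry per env var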

What did you expect to happen?

I want the value field rendered in every entry of the env list, set to the replacement value from the source. In this case, with options: {create: true}, I want those fields created even though they do not exist in the input.

How can we reproduce it (as minimally and precisely as possible)?

kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - podexample.yaml
configMapGenerator:
  - name: example
    literals:
      - username=macetw
replacements:
  - source:
      kind: ConfigMap
      name: example
      fieldPath: data.username
    targets:
      - select:
          kind: Pod
          name: mypod
        fieldPaths:
            - spec.containers.[name=mycontainer].env.*.value
        options:
          create: true
podexample.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mycontainer
      image: docker.io/alpine:latest
      command:
        - sleep
        - inf
      env:
        - name: VARIABLE1
        - name: VARIABLE2
Then run:

kubectl apply -k . --dry-run -o yaml

Expected output

- apiVersion: v1
  kind: Pod
  metadata:
    name: mypod
  spec:
    containers:
    - command:
      - sleep
      - inf
      env:
      - name: VARIABLE1
        value: macetw
      - name: VARIABLE2
        value: macetw
      image: docker.io/alpine:latest
      name: mycontainer

Actual output

$ kubectl apply -k . --dry-run -o yaml
W1011 14:37:46.838210  644735 helpers.go:660] --dry-run is deprecated and can be replaced with --dry-run=client.
error: wrong Node Kind for spec.containers.env expected: MappingNode was SequenceNode: value: {- name: VARIABLE1
- name: VARIABLE2}
$ echo $?
0
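
The same failure can be reproduced without kubectl by rendering the kustomization with a standalone kustomize binary, which also makes the exit status easier to check (sketch, assuming a 4.5.x binary is on PATH):

kustomize build .
echo $?   # expected to be non-zero when the replacement fails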

Kustomize version

4.5.4

Operating system

Linux

macetw added the kind/bug label on Oct 11, 2023
k8s-ci-robot added the needs-triage label on Oct 11, 2023
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@Mlundm

Mlundm commented Nov 14, 2023

Seems to be fixed in newer versions (5.0.4 for example).
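
A quick way to confirm, assuming a standalone kustomize 5.x binary is installed, is to rerun the minimal example above against it:

kustomize version   # should report v5.x
kustomize build .   # each env entry should now come back with value: macetw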

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Feb 12, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 13, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot closed this as not planned on Apr 12, 2024