
namespace Transformer Does Not Update Namespace in nginx.ingress.kubernetes.io/auth-tls-secret #4365

Closed
GuyPaddock opened this issue Jan 4, 2022 · 11 comments
Labels

  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

@GuyPaddock

Describe the bug
#1302 added support for Kustomize to update secret names referenced by nginx.ingress.kubernetes.io/auth-tls-secret annotations when those names are affected by prefix/suffix transformers, but the namespace transformer does not appear to update the namespace portion of these annotations.

Files that can reproduce the issue

ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: some-ingress
  namespace: default
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/some-ca-cert"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "false"
    nginx.ingress.kubernetes.io/rewrite-target: "/"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "my.host"
      secretName: my-tls-certificate
  rules:
    - host: "my.host"
      http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: some-internal-service
                port:
                  number: 8080

secrets.yaml

apiVersion: v1
kind: Secret
metadata:
  name: "some-ca-cert"
  namespace: "default"
type: Opaque
data:
  ca.crt: "--Some ca.crt--"

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ingress.yaml
  - secrets.yaml

namespace: bug-repro
namePrefix: some-prefix-

Expected output

apiVersion: v1
data:
  ca.crt: --Some ca.crt--
kind: Secret
metadata:
  name: some-prefix-some-ca-cert
  namespace: bug-repro
type: Opaque
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "false"
    nginx.ingress.kubernetes.io/auth-tls-secret: bug-repro/some-prefix-some-ca-cert
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: some-prefix-some-ingress
  namespace: bug-repro
spec:
  ingressClassName: nginx
  rules:
  - host: my.host
    http:
      paths:
      - backend:
          service:
            name: some-internal-service
            port:
              number: 8080
        path: /
        pathType: Exact
  tls:
  - hosts:
    - my.host
    secretName: my-tls-certificate

Actual output

apiVersion: v1
data:
  ca.crt: --Some ca.crt--
kind: Secret
metadata:
  name: some-prefix-some-ca-cert
  namespace: bug-repro
type: Opaque
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "false"
    nginx.ingress.kubernetes.io/auth-tls-secret: default/some-ca-cert
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: some-prefix-some-ingress
  namespace: bug-repro
spec:
  ingressClassName: nginx
  rules:
  - host: my.host
    http:
      paths:
      - backend:
          service:
            name: some-internal-service
            port:
              number: 8080
        path: /
        pathType: Exact
  tls:
  - hosts:
    - my.host
    secretName: my-tls-certificate

Kustomize version

{Version:kustomize/v4.4.1 GitCommit:b2d65ddc98e09187a8e38adc27c30bab078c1dbf BuildDate:2021-11-11T23:36:27Z GoOs:linux GoArch:amd64}

Platform

  • WSL 2
  • Ubuntu 20.04.3 LTS
  • 5.4.91-microsoft-standard-WSL2 #1 SMP Mon Jan 25 18:39:31 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Additional context
If ingress.yaml is changed to the following:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: some-ingress
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "some-ca-cert"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "false"
    nginx.ingress.kubernetes.io/rewrite-target: "/"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "my.host"
      secretName: my-tls-certificate
  rules:
    - host: "my.host"
      http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: some-internal-service
                port:
                  number: 8080

Then the output becomes the following:

apiVersion: v1
data:
  ca.crt: --Some ca.crt--
kind: Secret
metadata:
  name: some-prefix-some-ca-cert
  namespace: bug-repro
type: Opaque
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "false"
    nginx.ingress.kubernetes.io/auth-tls-secret: some-prefix-some-ca-cert
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: some-prefix-some-ingress
  namespace: bug-repro
spec:
  ingressClassName: nginx
  rules:
  - host: my.host
    http:
      paths:
      - backend:
          service:
            name: some-internal-service
            port:
              number: 8080
        path: /
        pathType: Exact
  tls:
  - hosts:
    - my.host
    secretName: my-tls-certificate
@GuyPaddock GuyPaddock added the kind/bug label Jan 4, 2022
@k8s-ci-robot k8s-ci-robot added the needs-triage label Jan 4, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Apr 4, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten and removed lifecycle/stale labels May 4, 2022
@GuyPaddock
Author

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten label May 24, 2022
@natasha41575
Contributor

/triage accepted

It seems like the fix would be a simple update to the Namespace transformer fieldspecs. We would be happy to review a PR to resolve this issue.
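
(For reference, the suggestion above would amount to adding a fieldspec for the annotation to the namespace transformer's configuration, roughly like the sketch below. This is a hypothetical illustration only: the path escaping and the create flag are assumptions, and, as the later comments explain, fieldSpec-driven transformers replace the entire field value, so an entry like this would clobber the whole "namespace/name" annotation string rather than just its namespace segment.)

# Hypothetical fieldspec sketch, e.g. supplied through the
# `configurations` field of kustomization.yaml. NOTE: a fieldSpec
# overwrites the WHOLE field value, so this would replace the
# entire "namespace/name" string, not only the namespace part.
namespace:
  - kind: Ingress
    path: metadata/annotations/nginx.ingress.kubernetes.io\/auth-tls-secret
    create: false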

@k8s-ci-robot k8s-ci-robot added triage/accepted and removed needs-triage labels Jul 6, 2022
@cailynse
Contributor

cailynse commented Jul 6, 2022

/assign

@yuwenma
Contributor

yuwenma commented Jul 7, 2022

@natasha41575 The auth-tls-secret annotation requires a partial (substring) namespace update; I'm afraid that's not something the namespace transformer can support? Related to #4457

@KnVerey
Contributor

KnVerey commented Jul 14, 2022

@yuwenma is right. When we triaged this, we missed the fact that the request is to update a substring within an annotation. That is indeed something fieldSpec-driven transformers do not support, so this is not easy to address. The transformer in question here is NameReferenceTransformer. #4457 was closed because it implied unstructured edit support, which is not acceptable in Kustomize. This particular substring can be targeted structurally, but the only transformer we currently have that can do so is Replacements, specifically via its delimiter and index options. Related: #4512 (comment)

Now that I look at this again, I'm surprised we added support for this annotation in the builtin field specs in the first place, since it is related to an out-of-tree controller.
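
To illustrate the Replacements approach mentioned above: applied to the repro files from this issue, a kustomization along the following lines should rewrite just the namespace segment of the annotation. This is a sketch, not a tested fix; the bracketed annotation path and the assumption that replacements run after the namespace and prefix transformers are worth verifying.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ingress.yaml
  - secrets.yaml

namespace: bug-repro
namePrefix: some-prefix-

replacements:
  # Copy the Secret's (already transformed) metadata.namespace into the
  # first "/"-delimited segment of the auth-tls-secret annotation.
  - source:
      kind: Secret
      fieldPath: metadata.namespace
    targets:
      - select:
          kind: Ingress
        fieldPaths:
          - metadata.annotations.[nginx.ingress.kubernetes.io/auth-tls-secret]
        options:
          delimiter: "/"
          index: 0

Because replacements are applied near the end of a kustomization's transformer pipeline, the Secret's metadata.namespace should already read bug-repro by the time it is copied into the annotation.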

@cailynse cailynse removed their assignment Jul 18, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Oct 16, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten and removed lifecycle/stale labels Nov 15, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Dec 15, 2022