
Kustomize doesn't pass namespace to helmCharts, resulting in inconsistent output #5566

Open
Skaronator opened this issue Mar 9, 2024 · 7 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. triage/out-of-scope Indicates an issue or PR is not a fit for Kustomize's scope and/or principles



Skaronator commented Mar 9, 2024

What happened?

When using Kustomize with Helm charts, the namespace defined at the top level of the kustomization.yaml file is not propagated to the helmCharts releases. As a result, the generated YAML contains inconsistent namespace values.

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmGlobals:
  chartHome: ./charts

namespace: the-actual-namespace   # <- defines namespace for all resources

helmCharts:
- name: service
  releaseName: service
  valuesFile: values.yaml

Helm Chart Template

apiVersion: v1
kind: Service
metadata:
  name: the-bug
  namespace: {{ .Release.Namespace }}
  annotations:
    this-service-is-deployed-in-namespace: {{ .Release.Namespace }}

Result

$ kustomize build . --enable-helm
apiVersion: v1
kind: Service
metadata:
  annotations:
    this-service-is-deployed-in-namespace: default # <- {{ .Release.Namespace }} still points to "default"
  name: the-bug
  namespace: the-actual-namespace                  # <- Namespace overwritten by Kustomization to the correct value

This inconsistency can cause issues when deploying charts such as the Bitnami MinIO chart, which relies on {{ .Release.Namespace }} to dynamically build its service URL.

https://github.com/bitnami/charts/blob/e8999fedbd8e35c96099aa44d3ca4ea947bfbe22/bitnami/minio/templates/distributed/statefulset.yaml#L150
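
For illustration, the pattern looks something like this (a simplified sketch, not the exact MinIO template; the env var and values are made up):

env:
  - name: MINIO_SERVER_URL   # sketch only, the real template differs
    value: http://minio.{{ .Release.Namespace }}.svc.cluster.local:9000

When Kustomize inflates the chart without passing the namespace to Helm, this renders with "default" baked into the URL, even though the namespace transformer later moves the resources to the-actual-namespace.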

What did you expect to happen?

I expected the namespace defined at the top level of the kustomization.yaml file to be passed down to all helmCharts releases.

When a Helm chart entry already specifies a namespace, that namespace should take precedence.
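
For example (a sketch of the expected precedence, not of current behavior; the second chart entry is hypothetical):

namespace: the-actual-namespace

helmCharts:
- name: service
  releaseName: service
  valuesFile: values.yaml            # no namespace set -> should inherit the-actual-namespace
- name: other-service                # hypothetical entry for illustration
  releaseName: other-service
  namespace: some-other-namespace    # explicitly set -> should take precedence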

How can we reproduce it (as minimally and precisely as possible)?

Repository to reproduce it is available here: https://github.com/Skaronator/kustomize-namespace-bug

$ tree
.
├── charts
│   └── service
│       ├── Chart.yaml
│       ├── templates
│       │   └── service.yaml
│       └── values.yaml
├── kustomization.yaml
├── README.md
└── values.yaml

4 directories, 6 files
# ./kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmGlobals:
  chartHome: ./charts

namespace: the-actual-namespace   # <- defines namespace for all resources

helmCharts:
- name: service
  releaseName: service
  valuesFile: values.yaml
# ./charts/service/Chart.yaml
apiVersion: v2
name: service
description: A Helm chart for Kubernetes
type: application
version: 1.0.0
appVersion: "1.0.0"
# ./charts/service/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: the-bug
  namespace: {{ .Release.Namespace }}
  annotations:
    this-service-is-deployed-in-namespace: {{ .Release.Namespace }}

Expected output

$ kustomize build . --enable-helm
apiVersion: v1
kind: Service
metadata:
  annotations:
    this-service-is-deployed-in-namespace: the-actual-namespace # <- {{ .Release.Namespace }} should be correctly set
  name: the-bug
  namespace: the-actual-namespace                               # <- Namespace overwritten by Kustomization to the correct value

Actual output

$ kustomize build . --enable-helm
apiVersion: v1
kind: Service
metadata:
  annotations:
    this-service-is-deployed-in-namespace: default # <- {{ .Release.Namespace }} still points to "default"
  name: the-bug
  namespace: the-actual-namespace                  # <- Namespace overwritten by Kustomization to the correct value

Kustomize version

v5.3.0

Operating system

Linux

@Skaronator Skaronator added the kind/bug Categorizes issue or PR as related to a bug. label Mar 9, 2024
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Mar 9, 2024
koba1t (Member) commented Mar 13, 2024

@Skaronator
Thanks for reporting an issue.

The namespace transformer only changes the namespace field in the k8s resources, and the helmCharts generator produces plain k8s YAML before that transformer runs.
This means your Helm template has already been executed, and only the generated k8s YAML exists by the time the namespace transformer is called.
I think {{ .Release.Namespace }} is only resolved inside Helm, so Kustomize can't tell where that namespace value ends up in the output.

So please consider using the helmCharts.namespace field.

helmCharts:
- name: service
  releaseName: service
  valuesFile: values.yaml
  namespace: the-actual-namespace 
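
With the namespace set on the chart entry, the inflation step hands it to Helm, so {{ .Release.Namespace }} is resolved before any Kustomize transformers run. Roughly equivalent to running (a sketch; the exact flags Kustomize uses may differ):

$ helm template service ./charts/service --namespace the-actual-namespace -f values.yaml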

/triage out-of-scope

@k8s-ci-robot k8s-ci-robot added triage/out-of-scope Indicates an issue or PR is not a fit for Kustomize's scope and/or principles and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Mar 13, 2024
@Skaronator (Author)

Hi @koba1t, thanks for taking the time.

The current implementation produces inconsistent YAML. Defining the namespace within helmCharts is often not possible, for example when Argo CD dynamically sets the namespace on the Kustomization.

I'd expect that if helmCharts.namespace is not defined, Kustomize would fall back to the top-level namespace.
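
In other words (a sketch of the proposed fallback, not what v5.3.0 does today), this:

namespace: the-actual-namespace   # top-level namespace

helmCharts:
- name: service
  releaseName: service
  valuesFile: values.yaml         # no namespace on the entry

would behave as if namespace: the-actual-namespace had also been set on the chart entry itself.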

@allandegnan

This really seems closer to a bug, @koba1t.

Setting the namespace at the Kustomize level inflates the Helm chart and rewrites all the objects into that namespace, but misses things like connection strings, thus deploying the chart in a broken state.

For example, the Airflow Helm chart here:

https://github.com/apache/airflow/blob/main/chart/templates/secrets/metadata-connection-secret.yaml#L24

You can see it relies on {{ .Release.Namespace }} as part of the default connection string to its database, so if you don't match the namespaces between Kustomize and the helmCharts entry, you end up with something that looks functional but isn't.
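
The pattern looks roughly like this (a simplified sketch, not the exact Airflow template; the names and credentials are made up for illustration):

apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-metadata-connection   # sketch only, the real name differs
stringData:
  connection: postgresql://user:pass@{{ .Release.Name }}-postgresql.{{ .Release.Namespace }}:5432/airflow

Rendered without the right namespace, the database host points at the default namespace, so the Secret looks fine but the connection breaks once everything is moved elsewhere.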


panteparak commented Mar 25, 2024

I too agree that this looks like a bug on Kustomize's side. I believe the correct strategy would be:

If a namespace is defined in kustomization.yaml like below,

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: the-actual-namespace

helmCharts:
- name: service
  releaseName: service
  valuesFile: values.yaml

Kustomize should not transform the namespace itself, but instead let Helm do the transformation.

This would ensure that built-in variables such as {{ .Release.Namespace }} are rendered correctly.
It might also be worth supporting a namespace at the individual chart level, so that the namespace can be overridden for a specific chart, like so:

helmCharts:
- name: service
  releaseName: service
  valuesFile: values.yaml
  namespace: the-actual-namespace-2

This looks to be partially related to #4593

@aviadhaham

Hi, I found a workaround, at least to avoid repeating yourself in the patch resources.

Instead of adding the namespace to the patch, I only added the name of the Deployment and the fields I want Kustomize to patch, and it worked.

Ref kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: amp-airflow-prod
helmCharts:
  - name: airflow
    repo: https://charts.bitnami.com/bitnami
    releaseName: amp-airflow
    namespace: amp-airflow-prod
    version: 17.2.4
    valuesFile: values.yml
resources:
  - secrets.yml

patches:
  - path: amp-airflow-exporter-patch.yml
    target:
      kind: Deployment
      name: amp-airflow-exporter
      # namespace: amp-airflow-prod

The patch:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: amp-airflow-exporter
  # namespace: amp-airflow-prod
spec:
  template:
    spec:
      containers:
        - name: airflow-exporter
          env:
            - name: TEST
              value: "test"

The patch was applied to the right resource, and the env var is there :)

@ThomasSteinbach

I have the exact same behavior with this namespace...
https://github.com/ansible/awx-operator/blob/devel/.helm/starter/templates/awx-deploy.yaml#L7

... when deploying it as a Kustomization with Argo CD.

@QuadmanSWE

I get this problem when deploying the VerneMQ Helm chart with Kustomize through Argo CD. All of its resources except a ServiceAccount, a Role, and a RoleBinding end up in the right place.

https://vernemq.github.io/docker-vernemq
