
helm template inconsistently respects the --namespace option in different environments #12124

Closed
alex-hunt-materialize opened this issue Jun 7, 2023 · 16 comments · Fixed by #12979
Labels
bug Categorizes issue or PR as related to a bug. good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.

Comments

@alex-hunt-materialize

Expected behavior:

When running helm template with the --namespace argument, it should set the .Release.Namespace variable, and any templates referencing that variable should render with the value of the --namespace argument.

Actual behavior:

Helm is inconsistent about setting the .Release.Namespace variable. On my laptop and my coworker's, it works as expected. In our CI, it does not use the value of the --namespace argument and instead falls back to default.

It works on our laptops with or without Docker, but in CI it doesn't. The same versions of helm and kubectl are used on my laptop and in CI. The inconsistency persists even when running locally in the same Docker image used by CI (so the Python versions are also the same).

We've looked very hard for differences between our environments and can find none, which leads me to suspect helm itself is inconsistent.

Additional context:

Specifically, we're attempting to template the aws-ebs-csi-driver chart, version 2.19.0, from https://kubernetes-sigs.github.io/aws-ebs-csi-driver .

We're calling helm template from a Python script, pointing it at a locally vendored copy of the chart. Values are passed on stdin.

import subprocess

import yaml

values = {
    "controller": {
        "serviceAccount": {
            "create": True,
        },
        "nodeSelector": {
            "workload": "mz-system",
        },
    },
    "node": {
        "affinity": {
            "nodeAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": {
                    "nodeSelectorTerms": [
                        {
                            "matchExpressions": [
                                {
                                    "key": "eks.amazonaws.com/compute-type",
                                    "operator": "NotIn",
                                    "values": ["fargate"],
                                },
                                {
                                    "key": "workload",
                                    "operator": "NotIn",
                                    "values": ["materialize-egress"],
                                },
                            ],
                        },
                    ],
                },
            },
        },
    },
}

args = [
    "helm",
    "template",
    "aws-ebs-csi-driver",
    "infra/helm/vendor/charts/aws-ebs-csi-driver/2.19.0",
    "-f",
    "-",
    "--namespace",
    "kube-system",
]

rendered_text = subprocess.run(
    args,
    input=yaml.safe_dump(values),
    check=True,
    text=True,
    capture_output=True,
).stdout
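
To see which namespace actually made it into the output, a quick check over the rendered manifests can be added after the call above (a minimal sketch for illustration; it is not part of our actual script):

# Print the kind, name, and namespace of every rendered manifest. Locally the
# ServiceAccount shows namespace kube-system; in CI it shows default.
for doc in yaml.safe_load_all(rendered_text):
    if not doc:
        continue
    metadata = doc.get("metadata", {})
    print(doc.get("kind"), metadata.get("name"), metadata.get("namespace"))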

An example template from the chart that renders differently in CI vs. locally (the namespace is kube-system locally, but renders as default in CI):

{{- if .Values.controller.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Values.controller.serviceAccount.name }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "aws-ebs-csi-driver.labels" . | nindent 4 }}
  {{- with .Values.controller.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- if eq .Release.Name "kustomize" }}
  #Enable if EKS IAM roles for service accounts (IRSA) is used. See https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html for details.
  #annotations:
  #  eks.amazonaws.com/role-arn: arn:<partition>:iam::<account>:role/ebs-csi-role
  {{- end }}
{{- end -}}

Output of helm version:

version.BuildInfo{Version:"v3.12.0", GitCommit:"c9f554d75773799f72ceef38c51210f1842a1dea", GitTreeState:"clean", GoVersion:"go1.20.3"}

Output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.9", GitCommit:"a1a87a0a2bcd605820920c6b0e618a8ab7d117d4", GitTreeState:"clean", BuildDate:"2023-04-12T12:16:51Z", GoVersion:"go1.19.8", Compiler:"gc", Platform:"linux/amd64"}

Server version is not relevant to helm template.

Cloud Provider/Platform (AKS, GKE, Minikube etc.):

N/A, but our CI runs in GitHub Actions and we run helm template in a Docker container.

@alex-hunt-materialize
Author

alex-hunt-materialize commented Jun 8, 2023

I've tracked the issue down to the lack of a kubeconfig in CI. This is definitely a bug in helm, as it should not require a kubeconfig or any contact with a Kubernetes cluster in order to render a template.

The following patch makes it work for our purposes:

diff --git a/pkg/cli/environment.go b/pkg/cli/environment.go
index dac2a4bc..d8a25c18 100644
--- a/pkg/cli/environment.go
+++ b/pkg/cli/environment.go
@@ -223,6 +223,9 @@ func (s *EnvSettings) Namespace() string {
        if ns, _, err := s.config.ToRawKubeConfigLoader().Namespace(); err == nil {
                return ns
        }
+       if s.namespace != "" {
+               return s.namespace
+       }
        return "default"
 }

@gjenkins8
Contributor

If I summarize my understanding of the above: helm template ignores the --namespace flag?

@alex-hunt-materialize
Author

Yes, it ignores the --namespace flag, but only if you don't have a working kubeconfig.

@gjenkins8 gjenkins8 added the bug Categorizes issue or PR as related to a bug. label Aug 4, 2023
@gjenkins8
Contributor

I think you're right, and this is a bug. A PR would be appreciated please.

@gjenkins8 gjenkins8 added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. labels Aug 4, 2023
@alex-hunt-materialize
Author

@gjenkins8 There already is one: #12126

@joejulian
Contributor

I couldn't duplicate this behavior. I unset KUBECONFIG, set HOME=/tmp, and ran helm template --namespace foo . with the attached chart, and it displayed the namespace correctly.

namespacetest.zip
(for some reason GitHub wouldn't agree that my tgz file was a tgz)

@alex-hunt-materialize
Author

@joejulian When I was testing this change, I just moved my ~/.kube directory. Perhaps something is not respecting the KUBECONFIG or HOME vars.

@joejulian
Contributor

joejulian commented Aug 9, 2023

@alex-hunt-materialize

Nope, still no failure.

$ unset KUBECONFIG
$ mv ~/.kube ~/.kubex
$ helm template --namespace foo .
---
# Source: namespacetest/templates/test.yaml
namespace: foo

$ set | grep KUBECONFIG

I'll need a way to reproduce this problem with that chart before I can approve this PR.

@alex-hunt-materialize
Author

alex-hunt-materialize commented Aug 23, 2023

I've also been unable to reproduce it now, but I think my coworker just hit this today. I've asked them to see if they can isolate the reproduction.

@MichaelMorrisEst
Contributor

Looking at the code, "default" will be used instead of the specified value if an error is encountered in s.config.ToRawKubeConfigLoader().Namespace() in the code snippet you pointed out.
I would suggest adding a log statement to that code to help identify the error you are hitting.
I can get the same behaviour by setting --kubeconfig to a non-existent file, which I guess is not what you are doing, but it confirms that the default namespace is used when an error is encountered here. It's just a matter of identifying the error you are getting in your scenario, which an extra log statement should help with:

diff --git a/pkg/cli/environment.go b/pkg/cli/environment.go
index dac2a4bc..105c7202 100644
--- a/pkg/cli/environment.go
+++ b/pkg/cli/environment.go
@@ -25,6 +25,7 @@ package cli
 
 import (
        "fmt"
+       "log"
        "net/http"
        "os"
        "strconv"
@@ -220,9 +221,11 @@ func (s *EnvSettings) EnvVars() map[string]string {
 
 // Namespace gets the namespace from the configuration
 func (s *EnvSettings) Namespace() string {
-       if ns, _, err := s.config.ToRawKubeConfigLoader().Namespace(); err == nil {
+       ns, _, err := s.config.ToRawKubeConfigLoader().Namespace()
+       if err == nil {
                return ns
        }
+       log.Printf("Error while fetching namespace %s", err)
        return "default"
 }

@alex-hunt-materialize
Author

@MichaelMorrisEst Thank you for coming up with a reproduction! I am also able to reproduce it that way, or by putting invalid YAML into ~/.kube/config.
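
For reference, a minimal reproduction sketch along those lines, using the same chart and subprocess approach as the original report (the /nonexistent/kubeconfig path is just a placeholder, and the final check is illustrative rather than part of our actual script):

import subprocess

# Point --kubeconfig at a file that does not exist; on affected versions the error
# from ToRawKubeConfigLoader().Namespace() makes helm fall back to "default" even
# though --namespace is set explicitly.
rendered = subprocess.run(
    [
        "helm", "template", "aws-ebs-csi-driver",
        "infra/helm/vendor/charts/aws-ebs-csi-driver/2.19.0",
        "--namespace", "kube-system",
        "--kubeconfig", "/nonexistent/kubeconfig",
    ],
    check=True,
    text=True,
    capture_output=True,
).stdout

# Expected: every namespaced resource carries "namespace: kube-system"; on affected
# versions they come out with "namespace: default" instead.
print("namespace: kube-system" in rendered)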

I'm surprised helm template needs to interact with Kubernetes or its configs at all, though. The output of helm template has nothing cluster-specific in it, and a primary use case of helm template is to generate configs that can be applied to multiple clusters or parsed by other tools that make their own modifications. In this case, we're explicitly providing the namespace, so why would it ever look in the config?

I even think my original fix is sub-optimal, as the check should probably come before the line that reads the config.

diff --git a/pkg/cli/environment.go b/pkg/cli/environment.go
index dac2a4bc..fbd82b80 100644
--- a/pkg/cli/environment.go
+++ b/pkg/cli/environment.go
@@ -220,6 +220,9 @@ func (s *EnvSettings) EnvVars() map[string]string {
 
 // Namespace gets the namespace from the configuration
 func (s *EnvSettings) Namespace() string {
+       if s.namespace != "" {
+               return s.namespace
+       }
        if ns, _, err := s.config.ToRawKubeConfigLoader().Namespace(); err == nil {
                return ns
        }


github-actions bot commented Dec 7, 2023

This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.

@github-actions github-actions bot added the Stale label Dec 7, 2023
@alex-hunt-materialize
Author

Any update on this? Helm really shouldn't need to interact with Kubernetes to template YAML files.

@github-actions github-actions bot removed the Stale label Dec 8, 2023

github-actions bot commented Mar 8, 2024

This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.

@github-actions github-actions bot added the Stale label Mar 8, 2024
@edude03

edude03 commented Mar 8, 2024

Definitely still a bug

@alex-hunt-materialize
Author

Still a bug, and my PR is still a mergeable fix waiting for review: #12126
