
Go SDK upgrade performing actions in the wrong namespace #8685

Closed
imilchev opened this issue Sep 2, 2020 · 12 comments · May be fixed by #8785 or #12940

Comments

@imilchev

imilchev commented Sep 2, 2020

Output of helm version:

version.BuildInfo{Version:"v3.3.1", GitCommit:"249e5215cde0c3fa72e27eb7a30e8d55c9696144", GitTreeState:"clean", GoVersion:"go1.14.7"}

Output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.): Rancher on bare-metal

I am writing an application using the Helm Go SDK that is supposed to upgrade an already deployed Helm chart on my cluster. The chart is bitnami/kafka and is deployed in the kafka namespace. Everything works except that, for some reason, when I actually perform the upgrade, Helm starts creating StatefulSets and ConfigMaps in my default namespace and deletes things from the kafka namespace. I have already spent a day investigating this issue and I am getting nowhere, so some help would be appreciated.

package main

import (
	"log"
	"os"

	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/chart/loader"
	"helm.sh/helm/v3/pkg/cli"
)

func main() {
	settings := cli.New()

	actionConfig := new(action.Configuration)
	// Initialize the action configuration against the "kafka" namespace;
	// the storage driver is taken from the HELM_DRIVER environment variable.
	if err := actionConfig.Init(settings.RESTClientGetter(), "kafka", os.Getenv("HELM_DRIVER"), log.Printf); err != nil {
		log.Printf("%+v", err)
		os.Exit(1)
	}

	// Get current state
	getClient := action.NewGetValues(actionConfig)
	release, err := getClient.Run("kafka")
	if err != nil {
		log.Printf("%+v", err)
		os.Exit(1)
	}

	currentReplicaCount, ok := release["replicaCount"].(float64)
	if !ok {
		log.Printf("The current replica count %s is not a number.", release["replicaCount"])
		os.Exit(1)
	}
	log.Printf("Replica count: %+v", currentReplicaCount)

	// Upgrade
	upgradeClient := action.NewUpgrade(actionConfig)

	upgradeClient.Namespace = "kafka"
	upgradeClient.Atomic = true
	upgradeClient.ReuseValues = true

	//upgradeClient.DryRun = true
	updatedValues := map[string]interface{}{
		"replicaCount": currentReplicaCount + 1,
	}

	chartPath, err := upgradeClient.ChartPathOptions.LocateChart("bitnami/kafka", settings)
	if err != nil {
		log.Printf("%+v", err)
		os.Exit(1)
	}

	ch, err := loader.Load(chartPath)
	if err != nil {
		log.Printf("%+v", err)
		os.Exit(1)
	}

	upgrade, err := upgradeClient.Run("kafka", ch, updatedValues)
	if err != nil {
		log.Printf("%+v", err)
		os.Exit(1)
	}

	log.Printf("%+v", upgrade)
}
  • Retrieving the current config works fine
  • Dry run of the upgrade seems to return correct values (all namespaces are kafka)
  • Actual execution output:
2020/09/02 09:18:52 &{cfg:0xc000057d80 ChartPathOptions:{CaFile: CertFile: KeyFile: InsecureSkipTLSverify:false Keyring: Password: RepoURL: Username: Verify:false Version:} Install:false Devel:false Namespace:kafka SkipCRDs:false Timeout:0s Wait:false DisableHooks:false DryRun:false Force:false ResetValues:false ReuseValues:true Recreate:false MaxHistory:0 Atomic:true CleanupOnFail:false SubNotes:false Description: PostRenderer:<nil> DisableOpenAPIValidation:false}
2020/09/02 09:18:53 preparing upgrade for kafka
2020/09/02 09:18:53 reusing the old release's values
2020/09/02 09:18:54 performing update for kafka
2020/09/02 09:18:54 creating upgraded release for kafka
2020/09/02 09:18:54 checking 10 resources for changes
2020/09/02 09:18:54 Looks like there are no changes for PodDisruptionBudget "kafka-zookeeper"
2020/09/02 09:18:54 Looks like there are no changes for PodDisruptionBudget "kafka"
2020/09/02 09:18:54 Looks like there are no changes for ServiceAccount "kafka"
2020/09/02 09:18:54 Created a new ConfigMap called "kafka-scripts" in default
2020/09/02 09:18:54 Looks like there are no changes for Service "kafka-zookeeper-headless"
2020/09/02 09:18:55 Looks like there are no changes for Service "kafka-zookeeper"
2020/09/02 09:18:55 Looks like there are no changes for Service "kafka-headless"
2020/09/02 09:18:55 Created a new StatefulSet called "kafka" in default
2020/09/02 09:18:55 beginning wait for 10 resources with timeout of 0s
2020/09/02 09:18:57 StatefulSet is not ready: default/kafka. 0 out of 3 expected pods are ready

P.S.: The output of helm upgrade -n kafka --reuse-values --set replicaCount=4 kafka bitnami/kafka --debug

upgrade.go:121: [debug] preparing upgrade for kafka
upgrade.go:440: [debug] reusing the old release's values
upgrade.go:129: [debug] performing update for kafka
upgrade.go:308: [debug] creating upgraded release for kafka
client.go:173: [debug] checking 10 resources for changes
client.go:436: [debug] Looks like there are no changes for PodDisruptionBudget "kafka-zookeeper"
client.go:436: [debug] Looks like there are no changes for PodDisruptionBudget "kafka"
client.go:436: [debug] Looks like there are no changes for ServiceAccount "kafka"
client.go:436: [debug] Looks like there are no changes for ConfigMap "kafka-scripts"
client.go:436: [debug] Looks like there are no changes for Service "kafka-zookeeper-headless"
client.go:436: [debug] Looks like there are no changes for Service "kafka-zookeeper"
client.go:436: [debug] Looks like there are no changes for Service "kafka-headless"
upgrade.go:136: [debug] updating status for upgraded release for kafka
Release "kafka" has been upgraded. Happy Helming!
@bacongobbler
Member

bacongobbler commented Sep 2, 2020

Check and make sure the kube client used by the upgrade action is pointing at the correct namespace. By default, it will use the kubeconfig's default namespace, which is usually default.

helm/pkg/kube/client.go

Lines 61 to 62 in 04fb358

// Namespace allows to bypass the kubeconfig file for the choice of the namespace
Namespace string

helm/pkg/kube/client.go

Lines 130 to 138 in 04fb358

func (c *Client) namespace() string {
	if c.Namespace != "" {
		return c.Namespace
	}
	if ns, _, err := c.Factory.ToRawKubeConfigLoader().Namespace(); err == nil {
		return ns
	}
	return v1.NamespaceDefault
}
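
A quick way to verify what that fallback resolves to is to inspect the concrete kube client after Init. A minimal sketch, assuming the default *kube.Client implementation sits behind actionConfig.KubeClient:

// assumes: import "helm.sh/helm/v3/pkg/kube"
if kc, ok := actionConfig.KubeClient.(*kube.Client); ok {
	// An empty Namespace field means the client falls back to the
	// kubeconfig's current-context namespace (usually "default").
	log.Printf("kube client namespace override: %q", kc.Namespace)
}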

@imilchev
Author

imilchev commented Sep 2, 2020

Well, I have set the namespace to kafka for the action configuration and also for the upgrade client.

upgradeClient := action.NewUpgrade(actionConfig)

upgradeClient.Namespace = "kafka"
upgradeClient.Atomic = true
upgradeClient.ReuseValues = true

log.Printf("%+v", upgradeClient)

This code actually says that the namespace is kafka for the upgrade client itself. The weird thing is that this only happens for part of the chart, and I really have no clue what more I can do to debug it.

2020/09/02 21:17:43 &{cfg:0xc0002a5c40 ChartPathOptions:{CaFile: CertFile: KeyFile: InsecureSkipTLSverify:false Keyring: Password: RepoURL: Username: Verify:false Version:} Install:false Devel:false Namespace:kafka SkipCRDs:false Timeout:0s Wait:false DisableHooks:false DryRun:false Force:false ResetValues:false ReuseValues:true Recreate:false MaxHistory:0 Atomic:true CleanupOnFail:false SubNotes:false Description: PostRenderer:<nil> DisableOpenAPIValidation:false}

@bacongobbler
Member

Is it possible that the resources are hard-coding a namespace parameter?

Looking at

helm/pkg/kube/client.go

Lines 181 to 197 in 04fb358

	if _, err := helper.Get(info.Namespace, info.Name, info.Export); err != nil {
		if !apierrors.IsNotFound(err) {
			return errors.Wrap(err, "could not get information about the resource")
		}
		// Append the created resource to the results, even if something fails
		res.Created = append(res.Created, info)
		// Since the resource does not exist, create it.
		if err := createResource(info); err != nil {
			return errors.Wrap(err, "failed to create resource")
		}
		kind := info.Mapping.GroupVersionKind.Kind
		c.Log("Created a new %s called %q in %s\n", kind, info.Name, info.Namespace)
		return nil
	}

It would appear that the namespace is coming from the resource rather than the kube client. Check the output of helm template.

@imilchev
Author

imilchev commented Sep 2, 2020

I also logged the KubeClient in the action config, and this was the output:

2020/09/02 21:44:01 &{Factory:0xc0001313e0 Log:0x4fc280 Namespace:}

I am trying to figure out why everything works fine with the Helm CLI but not in my code. If it failed with the Helm CLI as well, I certainly would not have spent this much time on it.

@imilchev
Author

imilchev commented Sep 2, 2020

The output of helm template is:

---
# Source: kafka/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kafka
  labels:
    app.kubernetes.io/name: kafka
    helm.sh/chart: kafka-11.8.2
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: kafka
---
# Source: kafka/templates/scripts-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-scripts
  labels:
    app.kubernetes.io/name: kafka
    helm.sh/chart: kafka-11.8.2
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
data:
  setup.sh: |-
    #!/bin/bash

    ID="${MY_POD_NAME#"kafka-"}"
    export KAFKA_CFG_BROKER_ID="$ID"

    exec /entrypoint.sh /run.sh
---
# Source: kafka/charts/zookeeper/templates/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-zookeeper-headless
  namespace: kafka
  labels:
    app.kubernetes.io/name: zookeeper
    helm.sh/chart: zookeeper-5.21.5
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: zookeeper
spec:
  type: ClusterIP
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    
    - name: tcp-client
      port: 2181
      targetPort: client
    
    
    - name: follower
      port: 2888
      targetPort: follower
    - name: tcp-election
      port: 3888
      targetPort: election
  selector:
    app.kubernetes.io/name: zookeeper
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/component: zookeeper
---
# Source: kafka/charts/zookeeper/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-zookeeper
  namespace: kafka
  labels:
    app.kubernetes.io/name: zookeeper
    helm.sh/chart: zookeeper-5.21.5
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: zookeeper
spec:
  type: ClusterIP
  ports:
    
    - name: tcp-client
      port: 2181
      targetPort: client
    
    
    - name: follower
      port: 2888
      targetPort: follower
    - name: tcp-election
      port: 3888
      targetPort: election
  selector:
    app.kubernetes.io/name: zookeeper
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/component: zookeeper
---
# Source: kafka/templates/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
  labels:
    app.kubernetes.io/name: kafka
    helm.sh/chart: kafka-11.8.2
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: kafka
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: tcp-client
      port: 9092
      protocol: TCP
      targetPort: kafka-client
    - name: tcp-internal
      port: 9093
      protocol: TCP
      targetPort: kafka-internal
  selector:
    app.kubernetes.io/name: kafka
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/component: kafka
---
# Source: kafka/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka
  labels:
    app.kubernetes.io/name: kafka
    helm.sh/chart: kafka-11.8.2
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: kafka
spec:
  type: ClusterIP
  ports:
    - name: tcp-client
      port: 9092
      protocol: TCP
      targetPort: kafka-client
      nodePort: null
  selector:
    app.kubernetes.io/name: kafka
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/component: kafka
---
# Source: kafka/charts/zookeeper/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka-zookeeper
  namespace: kafka
  labels:
    app.kubernetes.io/name: zookeeper
    helm.sh/chart: zookeeper-5.21.5
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: zookeeper
    role: zookeeper
spec:
  serviceName: kafka-zookeeper-headless
  replicas: 1
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: zookeeper
      app.kubernetes.io/instance: kafka
      app.kubernetes.io/component: zookeeper
  template:
    metadata:
      name: kafka-zookeeper
      labels:
        app.kubernetes.io/name: zookeeper
        helm.sh/chart: zookeeper-5.21.5
        app.kubernetes.io/instance: kafka
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: zookeeper
    spec:
      
      securityContext:
        fsGroup: 1001
      containers:
        - name: zookeeper
          image: docker.io/bitnami/zookeeper:3.6.1-debian-10-r88
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsUser: 1001
          command:
            - bash
            - -ec
            - |
                # Execute entrypoint as usual after obtaining ZOO_SERVER_ID based on POD hostname
                HOSTNAME=`hostname -s`
                if [[ $HOSTNAME =~ (.*)-([0-9]+)$ ]]; then
                  ORD=${BASH_REMATCH[2]}
                  export ZOO_SERVER_ID=$((ORD+1))
                else
                  echo "Failed to get index from hostname $HOST"
                  exit 1
                fi
                exec /entrypoint.sh /run.sh
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
          env:
            - name: ZOO_DATA_LOG_DIR
              value: ""
            - name: ZOO_PORT_NUMBER
              value: "2181"
            - name: ZOO_TICK_TIME
              value: "2000"
            - name: ZOO_INIT_LIMIT
              value: "10"
            - name: ZOO_SYNC_LIMIT
              value: "5"
            - name: ZOO_MAX_CLIENT_CNXNS
              value: "60"
            - name: ZOO_4LW_COMMANDS_WHITELIST
              value: "srvr, mntr, ruok"
            - name: ZOO_LISTEN_ALLIPS_ENABLED
              value: "no"
            - name: ZOO_AUTOPURGE_INTERVAL
              value: "0"
            - name: ZOO_AUTOPURGE_RETAIN_COUNT
              value: "3"
            - name: ZOO_MAX_SESSION_TIMEOUT
              value: "40000"
            - name: ZOO_SERVERS
              value: kafka-zookeeper-0.kafka-zookeeper-headless.kafka.svc.cluster.local:2888:3888 
            - name: ZOO_ENABLE_AUTH
              value: "no"
            - name: ZOO_HEAP_SIZE
              value: "1024"
            - name: ZOO_LOG_LEVEL
              value: "ERROR"
            - name: ALLOW_ANONYMOUS_LOGIN
              value: "yes"
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
          ports:
            
            - name: client
              containerPort: 2181
            
            
            - name: follower
              containerPort: 2888
            - name: election
              containerPort: 3888
          livenessProbe:
            exec:
              command: ['/bin/bash', '-c', 'echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok']
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            exec:
              command: ['/bin/bash', '-c', 'echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok']
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          volumeMounts:
            - name: data
              mountPath: /bitnami/zookeeper
      volumes:
  volumeClaimTemplates:
    - metadata:
        name: data
        annotations:
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "8Gi"
---
# Source: kafka/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
  labels:
    app.kubernetes.io/name: kafka
    helm.sh/chart: kafka-11.8.2
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: kafka
spec:
  podManagementPolicy: Parallel
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: kafka
      app.kubernetes.io/instance: kafka
      app.kubernetes.io/component: kafka
  serviceName: kafka-headless
  updateStrategy:
    type: "RollingUpdate"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kafka
        helm.sh/chart: kafka-11.8.2
        app.kubernetes.io/instance: kafka
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: kafka
    spec:      
      securityContext:
        fsGroup: 1001
        runAsUser: 1001
      serviceAccountName: kafka
      containers:
        - name: kafka
          image: docker.io/bitnami/kafka:2.6.0-debian-10-r0
          imagePullPolicy: "IfNotPresent"
          command:
            - /scripts/setup.sh
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: KAFKA_CFG_ZOOKEEPER_CONNECT
              value: "kafka-zookeeper"
            - name: KAFKA_INTER_BROKER_LISTENER_NAME
              value: "INTERNAL"
            - name: KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP
              value: "INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT"
            - name: KAFKA_CFG_LISTENERS
              value: "INTERNAL://:9093,CLIENT://:9092"
            - name: KAFKA_CFG_ADVERTISED_LISTENERS
              value: "INTERNAL://$(MY_POD_NAME).kafka-headless.kafka.svc.cluster.local:9093,CLIENT://$(MY_POD_NAME).kafka-headless.kafka.svc.cluster.local:9092"
            - name: ALLOW_PLAINTEXT_LISTENER
              value: "yes"
            - name: KAFKA_CFG_DELETE_TOPIC_ENABLE
              value: "false"
            - name: KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE
              value: "true"
            - name: KAFKA_HEAP_OPTS
              value: "-Xmx1024m -Xms1024m"
            - name: KAFKA_CFG_LOG_FLUSH_INTERVAL_MESSAGES
              value: "10000"
            - name: KAFKA_CFG_LOG_FLUSH_INTERVAL_MS
              value: "1000"
            - name: KAFKA_CFG_LOG_RETENTION_BYTES
              value: "1073741824"
            - name: KAFKA_CFG_LOG_RETENTION_CHECK_INTERVALS_MS
              value: "300000"
            - name: KAFKA_CFG_LOG_RETENTION_HOURS
              value: "168"
            - name: KAFKA_CFG_MESSAGE_MAX_BYTES
              value: "1000012"
            - name: KAFKA_CFG_LOG_SEGMENT_BYTES
              value: "1073741824"
            - name: KAFKA_CFG_LOG_DIRS
              value: "/bitnami/kafka/data"
            - name: KAFKA_CFG_DEFAULT_REPLICATION_FACTOR
              value: "1"
            - name: KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR
              value: "1"
            - name: KAFKA_CFG_TRANSACTION_STATE_LOG_REPLICATION_FACTOR
              value: "1"
            - name: KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR
              value: "1"
            - name: KAFKA_CFG_NUM_IO_THREADS
              value: "8"
            - name: KAFKA_CFG_NUM_NETWORK_THREADS
              value: "3"
            - name: KAFKA_CFG_NUM_PARTITIONS
              value: "1"
            - name: KAFKA_CFG_NUM_RECOVERY_THREADS_PER_DATA_DIR
              value: "1"
            - name: KAFKA_CFG_SOCKET_RECEIVE_BUFFER_BYTES
              value: "102400"
            - name: KAFKA_CFG_SOCKET_REQUEST_MAX_BYTES
              value: "104857600"
            - name: KAFKA_CFG_SOCKET_SEND_BUFFER_BYTES
              value: "102400"
            - name: KAFKA_CFG_ZOOKEEPER_CONNECTION_TIMEOUT_MS
              value: "6000"
          ports:
            - name: kafka-client
              containerPort: 9092
            - name: kafka-internal
              containerPort: 9093
          livenessProbe:
            tcpSocket:
              port: kafka-client
            initialDelaySeconds: 10
            timeoutSeconds: 5
            failureThreshold: 
            periodSeconds: 
            successThreshold: 
          readinessProbe:
            tcpSocket:
              port: kafka-client
            initialDelaySeconds: 5
            timeoutSeconds: 5
            failureThreshold: 6
            periodSeconds: 
            successThreshold: 
          resources:
            limits: {}
            requests: {}
          volumeMounts:
            - name: data
              mountPath: /bitnami/kafka
            - name: scripts
              mountPath: /scripts/setup.sh
              subPath: setup.sh
      volumes:
        - name: scripts
          configMap:
            name: kafka-scripts
            defaultMode: 0755
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "8Gi"

@imilchev
Author

imilchev commented Sep 2, 2020

I just confirmed that if I change the default namespace in my kubeconfig, everything starts working as expected. So the issue does seem to be with setting the correct namespace on the kube client, but I am not sure where it goes wrong. All my code is the few lines I posted above, which I took from the cli package.
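
One way to avoid depending on the kubeconfig's default is to hand actionConfig.Init a RESTClientGetter that already carries the target namespace. A minimal sketch, assuming k8s.io/cli-runtime's genericclioptions package is used in place of settings.RESTClientGetter():

// assumes: import "k8s.io/cli-runtime/pkg/genericclioptions"
ns := "kafka"
flags := genericclioptions.NewConfigFlags(false)
flags.Namespace = &ns

actionConfig := new(action.Configuration)
if err := actionConfig.Init(flags, ns, os.Getenv("HELM_DRIVER"), log.Printf); err != nil {
	log.Printf("%+v", err)
	os.Exit(1)
}

With that getter, ToRawKubeConfigLoader().Namespace() resolves to "kafka", which is what the namespace() fallback quoted above returns when the client's own Namespace field is empty.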

@bacongobbler
Member

Just checking in here. Did you happen to figure out what may be causing the issue?

@imilchev
Author

Not at all. I banged my head against it for two days and in the end I gave up... Either something very obscure is happening, or it is something really obvious that I am just not able to see. I deployed my app in Kubernetes and noticed that it behaves the same way when running from inside my cluster. Since I have to upgrade two charts that live in two different namespaces, I ended up deploying my app twice, once per namespace, and that works. Normally I would expect a single app to handle all namespaces correctly, but for some reason the upgrade still decides to create StatefulSets in the default namespace.

@github-actions

This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.

@bacongobbler
Member

closing as stale.

@mtiller

mtiller commented Jul 29, 2021

Just a note for anybody who comes across this issue: the solution proposed in #9171 worked for me (i.e., setting HELM_NAMESPACE before initializing any part of the Helm SDK). Hopefully a future version of the SDK will address this, because without that workaround using the SDK is quite difficult.
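
A minimal sketch of that workaround, assuming HELM_NAMESPACE is set before cli.New() reads the environment:

os.Setenv("HELM_NAMESPACE", "kafka")
settings := cli.New() // picks HELM_NAMESPACE up into settings.Namespace()

actionConfig := new(action.Configuration)
if err := actionConfig.Init(settings.RESTClientGetter(), settings.Namespace(), os.Getenv("HELM_DRIVER"), log.Printf); err != nil {
	log.Printf("%+v", err)
	os.Exit(1)
}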

@fnikolai

fnikolai commented Feb 27, 2024

#9171 didn't solve the problem for me.

What does seem to work is Hypher's solution:

var actionConfig action.Configuration

err := actionConfig.Init(
	configFlags,
	namespace,
	"secret",
	func(format string, v ...interface{}) {
		fmt.Printf(format, v...)
	},
)

if err != nil {
	fmt.Println(err)
	os.Exit(1)
}

// When actionConfig.Init is called it sets up the driver with the default namespace.
// We need to change the namespace to honor the release namespace.
// https://github.com/helm/helm/issues/9171
actionConfig.KubeClient.(*kube.Client).Namespace = namespace
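
This lines up with the namespace() fallback quoted earlier in the thread: once the Namespace field on the concrete *kube.Client is set, the client uses it directly and never falls back to the kubeconfig's default namespace.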
