Error: UPGRADE FAILED: cannot re-use a name that is still in use #4174

Closed
nmiculinic opened this issue Jun 5, 2018 · 30 comments
Labels
good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines.

Comments

@nmiculinic
Contributor

nmiculinic commented Jun 5, 2018

Output of helm version:

Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}

Output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:26:04Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.12-gke.1", GitCommit:"f47fa5292f604d07539ddbf7e5840b77d686051b", GitTreeState:"clean", BuildDate:"2018-05-11T16:56:15Z", GoVersion:"go1.8.3b4", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.):
GKE

I got the following error during helm upgrade:
Error: UPGRADE FAILED: cannot re-use a name that is still in use

It'd be nice to know which name that is, and of what type.
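
One way to at least see which release names Tiller still has a record of (including failed or deleted ones) might be the following (my-release is just a placeholder):

helm ls --all            # lists releases in every state, not only DEPLOYED
helm status my-release   # shows the current state of a single release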

@sah4ez

sah4ez commented Jun 6, 2018

I have the same issue.
Output of helm version:

$ helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

Output of kubectl version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.10-rancher1", GitCommit:"66aaf7681d4a74778ffae722d1f0f0f42c80a984", GitTreeState:"clean", BuildDate:"2018-03-20T16:02:56Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

I got an error during:

$ helm upgrade --force --install --timeout 600 --wait --namespace=default my-release charts/app
Error: UPGRADE FAILED: cannot re-use a name that is still in use

But previous updates were successful.

@sah4ez

sah4ez commented Jun 6, 2018

But I had some typos in my Helm charts, and helm template did not run successfully.

@SlickNik added the good first issue label Jun 7, 2018
@cchung100m

Hi @nmiculinic, @sah4ez,

It looks like a bug from the Kubernetes 1.9 API update that leads to a naming conflict in the template. This error still affects Helm versions 2.9.0 and 2.9.1. See #3134.

I'd recommend trying Helm version 2.8.2, which may work around it.

Hope that helps!

Regards,

@dragon9783

@cchung100m same issue with helm version 2.8.2

kubernetes version: 1.10.2
helm client version: 2.8.2
helm server version: 2.8.2
helm upgrade *** --debug
[debug] Created tunnel using local port: '39117'

[debug] SERVER: "127.0.0.1:39117"

Error: UPGRADE FAILED: cannot re-use a name that is still in use

@dragon9783

I see. My chart templates had a YAML parse error; after I fixed it, the upgrade works well.

@prein

prein commented Jul 4, 2018

Same here. I got this error and it took me ages to correlate it with a YAML error. Thanks @dragon9783 for sharing your finding.
I'll think about how to test the YAML before running into that misleading error.

@yuvipanda
Contributor

I could test it by using the helm template command.

@vijaygos

Could you share what kind of YAML errors you had? I have run into a similar problem, but both helm install --debug --dry-run and helm lint don't indicate anything wrong with my YAML.

@sah4ez

sah4ez commented Oct 31, 2018

@vijaygos you can try helm template or helm lint to analyze your templates.
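
For example (the chart path and values file below are placeholders):

helm lint charts/app                      # static checks on the chart itself
helm template charts/app -f values.yaml   # render templates locally; parse errors surface here instead of at upgrade time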

@johnraz

johnraz commented Dec 31, 2018

I was wondering if anything is being done to improve the user experience around this specific issue?

The error message is really misleading, and it would be nice to add some clue that it may be caused by a bad template and not necessarily by a release name that is already in use.

@elchtestAtBosch

Today I found that the --force parameter makes this problem worse. With the parameter I get "Error: UPGRADE FAILED: a released named x is in use, cannot re-use a name that is still in use"; removing the parameter returns "Error: UPGRADE FAILED: YAML parse error on ../templates/secret.yaml: error converting YAML to JSON: yaml: line 11: did not find expected key".

@sah4ez

sah4ez commented Feb 15, 2019

@elchtestAtBosch try this: #4174 (comment)
and check your line 11; maybe it contains an unexpected space or tab.
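
For reference, a misaligned key like the one below (an illustrative secret.yaml, not taken from this issue) is the kind of mistake that produces parse errors such as did not find expected key:

apiVersion: v1
kind: Secret
metadata:
  name: example-secret
type: Opaque
stringData:
  username: admin
 password: s3cr3t   # indented one space less than its sibling "username"; the YAML will not parse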

@Nowaker

Nowaker commented Mar 14, 2019

This error always happens after getting a random transport is closing error. That is, if my helm upgrade --install fails with a transport is closing error (timeout), then the release enters PENDING_UPGRADE status and all subsequent helm upgrade --install runs will result in:

  • Error: UPGRADE FAILED: "namehere" has no deployed releases if --force is NOT passed
  • Error: UPGRADE FAILED: a released named namehere is in use, cannot re-use a name that is still in use if --force IS passed

There are no YAML errors in the deployment. It deploys correctly when a different release name is provided.

@Nowaker

Nowaker commented Mar 15, 2019

The way to recover from this error is:

helm history <namehere>
# note the revision number of the revision before PENDING_UPGRADE
helm rollback <namehere> <revision number>
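
For example, the history output might look something like this (the release name, dates, and exact columns are illustrative, not taken from this issue):

$ helm history my-release
REVISION  UPDATED                   STATUS           CHART      DESCRIPTION
1         Mon Mar 11 10:02:13 2019  DEPLOYED         app-0.1.0  Install complete
2         Thu Mar 14 16:44:09 2019  PENDING_UPGRADE  app-0.1.1  Preparing upgrade

Here the revision to roll back to would be 1.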

The same helm upgrade --install that failed with the error messages above will now succeed as normal.

CC @bacongobbler. Can this be reopened and possibly fixed somehow?

@andreasevers

The problem is even bigger when your initial deployment fails. In that scenario, there's nothing to roll back to, and everything seems to be stuck.

@shcallaway

For me, the problem was that I was attempting to helm upgrade an existing release using an invalid chart.

To be specific, my chart contained an invalid Job manifest. I was updating the container image field on every commit, and as it turns out, this particular field is immutable on the Job resource. The result was that, every time I attempted to helm upgrade, Tiller would throw an error:

Cannot patch Job: "my-job" (Job.batch "my-job" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", ... }}: field is immutable)

But the error that Helm would log was completely unrelated:

Error: UPGRADE FAILED: cannot re-use a name that is still in use

TL;DR - This error is caused by an invalid Kubernetes manifest, but you may not realize what is invalid unless you look at the Tiller logs!
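
If you are not sure where to find them, something like this should surface the real error (assuming Tiller was installed by helm init into kube-system with its default labels):

kubectl logs -n kube-system -l app=helm,name=tiller --tail=100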

@illinar

illinar commented Sep 13, 2019

For those stuck when the initial deployment fails: after you fix your chart but still cannot deploy with the same release name, check the secrets with kubectl and remove the one corresponding to the release.

$ kubectl -n your-namespace get secrets

NAME                  TYPE                          DATA   AGE
your-release-name.v1  helm.sh/release               1      5m

$ kubectl -n your-namespace delete secret your-release-name.v1

PS: I am testing with helm3, not sure if helm2 stores releases exactly in the same way in the cluster.

@linbingdouzhe

Same problem here. I cannot install under the old name any more; I can only install under a new name. Really bad for me.

@iomarcovalente

I had the same issue: after I installed and removed a chart I got the error:
Error: cannot re-use a name that is still in use

Also, helm list showed that the chart was not there anymore.

I tried linting and templating, and the chart returned a successful result, but I found that changing the name highlighted leftover resources from a previous installation.

Once those resources were deleted I could redeploy with the old name.

@openJT

openJT commented Dec 11, 2019

+1 on @illinar's answer; I also had to remove ConfigMaps and Services. I am also testing with Helm 3.

Thanks!

@real-zony

@illinar +1, this method solved my problem.

@abdennour

With Helm 2, all release metadata is saved in Tiller.
With Helm 3, all release metadata is saved as Secrets in the same namespace as the release.
If you get "cannot re-use a name that is still in use", it means you may need to check for orphaned secrets and delete them:

kubectl -n ${NAMESPACE} delete secret -lname=${HELM_RELEASE}

@hickeyma
Contributor

@abdennour An update on #4174 (comment).

Helm v2 stores release data as ConfigMaps (default) or Secrets in the namespace of the Tiller instance (kube-system by default).

It could be retrieved with the command:
kubectl get configmap/secret -n <tiller_namespace> -l "OWNER=TILLER"

For example:

$ kubectl get configmap -n kube-system -l "OWNER=TILLER"
NAME         DATA   AGE
mysql-2.v1   1      4d17h
mysql-5.v1   1      4d17h

The name is the release version/revision and that is what you delete. For example: kubectl delete configmap mysql-2.v1 -n kube-system

Helm v3 stores release data as Secrets (default) or ConfigMaps in the namespace of the release.

It can be retrieved using the command:
kubectl get secret --all-namespaces -l "owner=helm"

$ kubectl get secret --all-namespaces -l "owner=helm"
NAMESPACE   NAME                               TYPE                 DATA   AGE
default     sh.helm.release.v1.bar.v1          helm.sh/release.v1   1      5d22h
default     sh.helm.release.v1.mydemo.v1       helm.sh/release.v1   1      2d16h
default     sh.helm.release.v1.mydemo.v2       helm.sh/release.v1   1      2d16h
default     sh.helm.release.v1.mysql.v1        helm.sh/release.v1   1      5d12h
default     sh.helm.release.v1.tester-del.v1   helm.sh/release.v1   1      5d12h
test        sh.helm.release.v1.foo.v1          helm.sh/release.v1   1      3d17h

The name is the release version/revision (which includes an object prefix) and that is what you delete. For example: kubectl delete secret sh.helm.release.v1.foo.v1 -n test

@qiuqiuqiubo

I think it might be a duplicate installation, or a previous failed installation that was not cleaned up completely.

@qiuqiuqiubo

In my case, a previous failed installation had not been cleaned up completely.

@minrk

minrk commented Sep 9, 2020

For anyone else who finds themselves here, I found that this less helpful error was a result of the presence of --force. Running helm upgrade --force --dry-run ... produced this:

UPGRADE FAILED
Error: a release named staging is in use, cannot re-use a name that is still in use
Error: UPGRADE FAILED: a release named staging is in use, cannot re-use a name that is still in use

while helm upgrade --dry-run ... (without --force) gave this actionable error:

UPGRADE FAILED
Error: Chart requires kubernetesVersion: >= 1.15.0 which is incompatible with Kubernetes v1.15.12-gke.2
Error: UPGRADE FAILED: Chart requires kubernetesVersion: >= 1.15.0 which is incompatible with Kubernetes v1.15.12-gke.2

In my case, it pointed to the "wontfix" bug of incorrect ordering of prereleases in #6190 (c/o Masterminds/semver#69).

@adiii717

In my case, I had the wrong name in the include:

      imagePullSecrets:
        - name: {{ include "helm-chart.fullname" . }}-my-sec

while the template name was actually defined as:

{{- define "myapp.fullname" -}}

Changing the include section fixed the issue:

- name: {{ include "myapp.fullname" . }}-my-sec
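
In other words, the name passed to include has to match a define somewhere in the chart, typically in templates/_helpers.tpl. A minimal sketch (the helper body here is an assumption, simpler than the usual generated one):

{{- /* templates/_helpers.tpl */ -}}
{{- define "myapp.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end }}

# templates/deployment.yaml (excerpt)
      imagePullSecrets:
        - name: {{ include "myapp.fullname" . }}-my-sec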

@Type1J

Type1J commented Feb 6, 2021

@illinar Thanks! I didn't know that the secret was there.

@msmith93

How does a cluster get into the state of having dangling Secrets from a Helm Release that was uninstalled? It would be great to know how to avoid getting into that state in the first place.

@Type1J

Type1J commented Jul 15, 2021

It happens for me because I'm using the Terraform Helm provider and was waiting until the deployment was complete. If it never becomes ready (which can easily happen due to a missing environment variable, etc.), then Terraform times out. I can still revert with kubectl, but I can't "fix" it by doing a new deployment. I'm in a better state not waiting until the new pod is ready, but I don't like that config (I may have dependent deploys in the future).

The problem had already occurred at this point, and if I use Terraform to destroy the deployment (something you never really want to do in prod, because your fallback, the last build's ReplicaSet, is still up while your new build is waiting for readiness), it doesn't remove the secret.

Is there a way to "clean" the secret instead of deleting it, so that I can just make a new ReplicaSet in the same deployment, as one normally would with a new version? I'd like to do it this way to avoid downtime.
