Error: UPGRADE FAILED: cannot re-use a name that is still in use #4174
I have the same issue.
Output of `helm version`:
I got an error during:
But previous updates were successful.
But I had some typos in my helm charts, and
Hi @nmiculinic, @sah4ez, it seems like a bug from the Kubernetes 1.9 API update that leads to a naming conflict in the template. This error still affects helm versions 2.9.0 and 2.9.1. #3134 I'd recommend trying helm version 2.8.2, which may work around it. Hope that helps! Regards,
@cchung100m same issue with helm version 2.8.2
I see: my chart templates had a YAML parse error; after I fixed it, everything worked well.
Same here. I got this error and it took me ages to correlate it with a YAML error. Thanks @dragon9783 for sharing your finding.
I could test it by using the
Could you share what kind of YAML errors you had? I have run into a similar problem, but both helm install --debug --dry-run and helm lint don't indicate anything wrong with my YAML.
@vijaygos you can try
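A minimal sketch of commands that often surface these template YAML errors locally before an upgrade (the chart path `./mychart` is an assumption, not from the thread):

```shell
# Static checks on the chart structure and templates.
helm lint ./mychart

# Render the templates locally; YAML parse errors in the rendered
# manifests show up here instead of at upgrade time.
helm template ./mychart

# Simulate the release without creating anything
# (Helm 2 syntax; Helm 3 also requires a release name).
helm install --debug --dry-run ./mychart
```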
I was wondering if anything is being done to improve the user experience around this specific issue? The error message is really misleading; it would be nice to add some clue that this may be caused by a bad template and not necessarily by a release name already being in use.
Today I found that the --force parameter makes this problem worse. With the parameter I get "Error: UPGRADE FAILED: a released named x is in use, cannot re-use a name that is still in use"; removing the parameter returns "Error: UPGRADE FAILED: YAML parse error on ../templates/secret.yaml: error converting YAML to JSON: yaml: line 11: did not find expected key".
@elchtestAtBosch try this: #4174 (comment)
This error always happens after getting a random
There are no YAML errors in the deployment. It deploys correctly when a different release name is provided.
The way to recover from this error is:
The same. CC @bacongobbler. Can this be reopened and possibly fixed somehow?
The problem is even bigger when your initial deployment fails. In that scenario, there is nothing to roll back to, and everything seems to be stuck.
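When a previous good revision does exist, a minimal recovery sketch is to roll back before retrying the upgrade (the release name "myapp" and revision number are hypothetical):

```shell
# List the revisions of the release.
helm history myapp

# Roll back to the last revision whose status was DEPLOYED,
# here assumed to be revision 3.
helm rollback myapp 3

# Retry the upgrade with the fixed chart.
helm upgrade myapp ./mychart
```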
For me, the problem was that I was attempting to upgrade with an invalid manifest. To be specific, my chart contained an invalid Job manifest: I was updating the container image field on every commit, and as it turns out, this particular field is immutable on the Job resource. The result was that, every time I attempted to upgrade, Kubernetes rejected the Job update.
But the error that Helm would log was completely unrelated:
TL;DR - This error is caused by an invalid Kubernetes manifest, but you may not realize what is invalid unless you look at the Tiller logs!
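A minimal sketch for pulling those Tiller logs, assuming the default Helm 2 setup where Tiller runs in kube-system with the labels applied by helm init:

```shell
# Tail Tiller's logs; the underlying Kubernetes API rejection
# (e.g. an immutable-field update on a Job) is reported here.
kubectl -n kube-system logs -l app=helm,name=tiller --tail=100
```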
For those stuck when the initial deployment fails: after you fix your chart but still cannot deploy under the same release name, check the secrets with kubectl and remove the one corresponding to the release.

```shell
$ kubectl -n your-namespace get secrets
NAME                   TYPE              DATA   AGE
your-release-name.v1   helm.sh/release   1      5m

$ kubectl -n your-namespace delete secret your-release-name.v1
```

PS: I am testing with helm 3; I am not sure whether helm 2 stores releases in the cluster in exactly the same way.
Same problem here. I cannot install under the old name any more; I can only install under a new name. Really bad for me.
I had the same issue: after I installed and removed a chart, I got the error. I also tried linting and templating, and the chart returned a successful result, but I found that changing the name would highlight leftover resources from a previous installation. Once those were deleted, I could redeploy with the old name.
+1 on @illinar's answer; I also had to remove ConfigMaps and Services. I am also testing with helm 3. Thanks!
@illinar +1, this method solved my problem.
With Helm 2, all release metadata is saved by Tiller.

```shell
kubectl -n ${NAMESPACE} delete secret -l name=${HELM_RELEASE}
```
@abdennour An update on #4174 (comment). Helm v2 stores release data as ConfigMaps (default) or Secrets in the namespace of the Tiller instance (kube-system by default). It can be retrieved with the following command. For example:

```shell
$ kubectl get configmap -n kube-system -l "OWNER=TILLER"
NAME         DATA   AGE
mysql-2.v1   1      4d17h
mysql-5.v1   1      4d17h
```

The name is the release version/revision, and that is what you delete. For example:

Helm v3 stores release data as Secrets (default) or ConfigMaps in the namespace of the release. It can be retrieved using the command:

```shell
$ kubectl get secret --all-namespaces -l "owner=helm"
NAMESPACE   NAME                               TYPE                 DATA   AGE
default     sh.helm.release.v1.bar.v1          helm.sh/release.v1   1      5d22h
default     sh.helm.release.v1.mydemo.v1       helm.sh/release.v1   1      2d16h
default     sh.helm.release.v1.mydemo.v2       helm.sh/release.v1   1      2d16h
default     sh.helm.release.v1.mysql.v1        helm.sh/release.v1   1      5d12h
default     sh.helm.release.v1.tester-del.v1   helm.sh/release.v1   1      5d12h
test        sh.helm.release.v1.foo.v1          helm.sh/release.v1   1      3d17h
```

The name is the release version/revision (which includes an object prefix), and that is what you delete. For example:
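A minimal sketch of what those deletes might look like, reusing release names from the listings above (the specific releases are just examples):

```shell
# Helm v2: delete the ConfigMap for the offending release revision
# ("mysql-5.v1" taken from the listing above).
kubectl delete configmap mysql-5.v1 -n kube-system

# Helm v3: delete the Secret for the offending release revision
# ("sh.helm.release.v1.mydemo.v2" taken from the listing above).
kubectl delete secret sh.helm.release.v1.mydemo.v2 -n default
```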
I think it may be a duplicate installation, or a previous bad installation that was not cleaned up completely.
In my case, a previous bad installation had not been cleaned up completely.
For anyone else who finds themselves here: I found that this less helpful error was the result of the presence of
while
In my case it pointed to the "wontfix" bug of incorrect ordering of prereleases in #6190 (c/o Masterminds/semver#69).
In my case, I had the wrong name in the include, where the template name was different. Changing the include section fixed the issue.
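A hypothetical illustration of this kind of mismatch (the chart and helper names are assumptions, not from the thread): the helper is defined under one name but included under another, and rendering the chart locally surfaces the template error directly.

```shell
# _helpers.tpl defines:      {{- define "mychart.fullname" -}} ... {{- end -}}
# deployment.yaml includes:  {{ include "mychart.fullName" . }}   <- case typo
# Rendering the chart locally reports the undefined template directly,
# instead of the misleading "cannot re-use a name" at upgrade time:
helm template ./mychart
```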
@illinar Thanks! I didn't know that the secret was there.
How does a cluster get into the state of having dangling Secrets from a Helm release that was uninstalled? It would be great to know how to avoid getting into that state in the first place.
It happens for me because I'm using the Terraform Helm provider, and I was waiting until the deployment was complete. If it never becomes ready (which can easily happen due to a missing environment variable, etc.), then Terraform times out. I can still revert with kubectl, but I can't "fix" it by doing a new deployment. I'm in a better state by not waiting until the new pod is ready, but I don't like that configuration (I may have dependent deploys in the future). The problem had already occurred at this point, and if I use Terraform to destroy the deployment (something you never really want to do in prod, because your fallback, the last build's ReplicaSet, is still up while your new build is waiting for readiness), it doesn't remove the secret. Is there a way to "clean" the secret instead of deleting it, so that I can just create a new ReplicaSet in the same deployment, as one normally would with a new version? I'd like to do it this way to avoid downtime.
Output of `helm version`:
Output of `kubectl version`:
Cloud Provider/Platform (AKS, GKE, Minikube etc.): GKE

I got the following error during `helm upgrade`:

Error: UPGRADE FAILED: cannot re-use a name that is still in use

It'd be nice to know which name that is, and of what type.