
2.6.0 provider version causing Error: Kubernetes cluster unreachable: exec plugin: invalid apiVersion #893

Closed
nitrocode opened this issue Jun 17, 2022 · 23 comments


nitrocode commented Jun 17, 2022

⚠️ NOTE FROM MAINTAINERS ⚠️

v1alpha1 of the client authentication API was removed in version 1.24 of the Kubernetes client. The latest release of this provider was updated to use the 1.24 Kubernetes client Go modules and version 3.9 of the upstream Helm module. We know this feels like a breaking change, but it is expected: API versions marked alpha can be removed in minor releases of the Kubernetes project.

The upstream Helm Go module was also updated to the 1.24 client in Helm 3.9, so you will see this same error if you use the helm command directly with a kubeconfig that relies on the v1alpha1 client authentication API.

AWS users will need to update their config to use the v1beta1 API. v1beta1 became the default in awscli v1.24.0, so you may need to update your awscli package and run aws eks update-kubeconfig again.

Adding this note here because users pinning to the previous version of this provider will not see a fix the next time they update: you need to update your config to the new API version and update your exec plugins. If your exec plugin still only supports v1alpha1, please open an issue with that project to get it updated.
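
For reference, a minimal sketch of an updated EKS provider configuration (it reuses the variable names from the report below, which are this reporter's and may differ in your setup):

provider "helm" {
  kubernetes {
    host                   = var.cluster_endpoint
    cluster_ca_certificate = base64decode(var.cluster_ca_cert)
    exec {
      # v1beta1 is the client authentication API version expected by the
      # Kubernetes 1.24 client used in provider releases 2.6.0 and later
      api_version = "client.authentication.k8s.io/v1beta1"
      args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
      command     = "aws"
    }
  }
}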


Terraform, Provider, Kubernetes and Helm Versions

Terraform version: 1.1.9
Provider version: 2.6.0
Kubernetes version: 1.21

Affected Resource(s)

  • helm_release

Terraform Configuration Files

Using the module https://github.com/cloudposse/terraform-aws-helm-release

This is how we configure the provider:

provider "helm" {
  kubernetes {
    host                   = var.cluster_endpoint
    cluster_ca_certificate = base64decode(var.cluster_ca_cert)
    exec {
      api_version = "client.authentication.k8s.io/v1alpha1"
      args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
      command     = "aws"
    }
  }
}

I tried changing the api_version to client.authentication.k8s.io/v1beta1, but that then gave me a mismatch error against the expected value of client.authentication.k8s.io/v1alpha1.


Steps to Reproduce

  1. terraform apply

Expected Behavior

Terraform plans correctly

Actual Behavior

Terraform fails with this error:

╷
│ Error: Kubernetes cluster unreachable: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
│
│   with module.datadog.helm_release.this[0],
│   on .terraform/modules/datadog/main.tf line 35, in resource "helm_release" "this":
│   35: resource "helm_release" "this" {
│
╵
Releasing state lock. This may take a few moments...
exit status 1

Important Factoids

Pinning the provider to the previous release, 2.5.1, works:

terraform {
  required_version = ">= 1.0.0"

  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "= 2.5.1"
    }
  }
}

A fast way to pin all of our root modules was with tfupdate:

brew install minamijoyo/tfupdate/tfupdate
tfupdate provider --version "= 2.5.1" "helm" -r .
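
After changing the pin, re-running init lets Terraform pick up the downgraded provider and refresh the dependency lock file (a sketch, assuming a standard workflow):

terraform init -upgrade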


Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@nitrocode nitrocode added the bug label Jun 17, 2022
@nitrocode (Author)

cc: @jrhouston for visibility


mkochco commented Jun 17, 2022

Is this specific to AWS EKS clusters? Are they still on v1alpha1? It seems like this also occurred with 2.4.0.

nitrocode (Author) commented Jun 17, 2022

Yes, the AWS EKS cluster is using the v1alpha1 apiVersion

⨠ aws eks get-token --cluster-name "<REDACTED>" | jq .apiVersion
"client.authentication.k8s.io/v1alpha1"

jrhouston (Contributor) commented Jun 18, 2022

It looks like the v1alpha1 authentication API was removed in Kubernetes 1.24 – we upgraded to the 0.24.0 line of k8s dependencies in the latest version of this provider. It feels like a breaking change but removal of alpha APIs is expected in minor version bumps of Kubernetes.

I was able to fix this for EKS by updating the awscli package and changing the api_version in my exec block to v1beta1.

The latest version of the awscli returns this apiVersion:

$ aws --version                                                                                                                           
aws-cli/2.7.8 Python/3.9.11 Darwin/20.6.0 exe/x86_64 prompt/off

$ aws eks get-token --cluster-name $NAME | jq '.apiVersion'                                                                       
"client.authentication.k8s.io/v1beta1"


Nuru commented Jun 18, 2022

@jrhouston as one who primarily works with AWS, I request that you track Kubernetes dependencies along the lines of the latest Kubernetes version EKS supports, currently 1.22. This would help to preserve compatibility between the provider and EKS clusters. (I understand if people not using EKS feel differently, but you can't please everyone, so I'm staking my claim.)


org-ci-cd commented Jun 20, 2022

@jrhouston how do I switch to the v1beta1 version of the API? Did it break anything with the different Helm packages you had installed while doing so?

Edit 1: ah, I think I found it.
[screenshot]

Edit 2: It works.
[screenshot]

jrhouston (Contributor) commented Jun 20, 2022

@jrhouston as one who primarily works with AWS, I request that you track Kubernetes dependencies along the lines of the latest Kubernetes version EKS supports, currently 1.22. This would help to preserve compatibility between the provider and EKS clusters. (I understand if people not using EKS feel differently, but you can't please everyone, so I'm staking my claim.)

I agree with you in principle, and we do tend to hold off on releasing things that are going to break on the older Kubernetes versions in the main cloud providers.

However, in this case the API contract is actually between the aws command and the Kubernetes client. The apiVersion here belongs to the YAML that the aws eks get-token command writes to stdout. It's not a cluster resource, so this will still work on EKS cluster versions 1.22 and below – you just need to update the api_version in the exec block of your Terraform config, and potentially update your awscli package to the latest version. You can see they moved to v1beta1 in their changelog a few versions ago.

You may also need to run the aws eks update-kubeconfig command if you are using a kubeconfig file.
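
For example (a sketch; the cluster name and region are placeholders for your own values):

aws eks update-kubeconfig --name my-cluster --region us-east-1

With an up-to-date awscli this regenerates the cluster entry in your kubeconfig, including the apiVersion in its exec section.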

Perhaps we should add a validator to check if the version specified is v1alpha1 and write out a warning message telling the user what to do here.

If there are any non-EKS users watching this issue I would appreciate if they could chime in on their situation.


ndacic commented Jun 20, 2022

Having the same issue with the helm_release Terraform resource.
[screenshot]

@lupindeterd

We encountered this issue on eks.7 (platformVersion) with 1.21 (k8s version). I tried using the AWS CLI v2 but to no avail. Pinning the Helm provider version as suggested above works for us. It looks like the Helm provider removed support for "v1alpha1", and my kubeconfig still uses it.

terraform {
  required_version = ">= 1.0.0"

  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "= 2.5.1"
    }
  }
}

jrhouston (Contributor) commented Jun 21, 2022

@lupindeterd the latest version of the awscli definitely supports v1beta1 of this API – you may need to run aws eks update-kubeconfig if you're using the kubeconfig file. I just tested this on EKS with k8s version 1.21


initanmol commented Jun 21, 2022

May be related to kubernetes-sigs/aws-iam-authenticator#439

@stevehipwell

This is due to aws/aws-cli#6940 changing the AWS CLI behaviour (there are plenty of issues in that repo regarding this and other changes).


tomjohnburton commented Jun 24, 2022

Not sure if this is the place for it, but...

Fun fact

If you're running this in CI with token auth and it's complaining that there is no kubeconfig file, simply create an empty one:

data "aws_eks_cluster" "cluster" {
  name = var.cluster_name
}
data "aws_eks_cluster_auth" "cluster" {
  name = var.cluster_name
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}

touch ~/.kube/config
terraform plan

Also, I cannot get EKS exec auth to work. I'm using the data "aws_eks_cluster_auth" source instead to get the token.


Nuru commented Jun 27, 2022

@jrhouston wrote:

... in this case the API contract is actually between the aws command and the Kubernetes client. The apiVersion here belongs to the YAML that the aws eks get-token command writes to stdout. It's not a cluster resource, so this will still work on EKS cluster versions 1.22 and below – you just need to update the api_version in the exec block of your Terraform config, and potentially update your awscli package to the latest version. You can see they moved to v1beta1 in their changelog a few versions ago.

The better behavior (already implemented in the Kubernetes provider) is to pass the configured value of the apiVersion to the aws CLI via the KUBERNETES_EXEC_INFO environment variable. This, however, requires that the Helm provider continue to accept the v1alpha1 version when configured to do so (as the Kubernetes provider does today). See also aws/aws-cli#6476
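
For illustration, this is roughly what that protocol looks like from the plugin's side: the caller advertises the apiVersion it expects via KUBERNETES_EXEC_INFO, and a plugin that honours it answers in kind (the JSON payload below is a hand-written sketch, not output captured from the provider, and the cluster name is a placeholder):

KUBERNETES_EXEC_INFO='{"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1beta1","spec":{"interactive":false}}' \
  aws eks get-token --cluster-name my-cluster | jq .apiVersion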

@ricktbaker

I'm seeing something similar. I had the original error, then upgraded the providers and the AWS CLI, and now I'm getting a different error that I can't seem to get past. I'm not sure yet whether it's related.

Kubernetes cluster unreachable: the server has asked for the client to provide credentials

@llamahunter

Also, I cannot get EKS exec auth to work. I'm using the data "aws_eks_cluster_auth" source instead to get the token.

Be careful with this approach. It caches the auth token in the state during the plan, and if you don't use it 'quickly' enough, it will expire part way through apply. We switched to the 'exec' plugin to avoid this.

@llamahunter

Note that 1.22 has the v1beta1 version of client.authentication.k8s.io. You will need to use awscli v2, as the v1 version does not seem to support anything other than v1alpha1, and the exec plugin uses the awscli.


mattduguid commented Jul 27, 2022

The combination of Helm provider 2.6.0 and AWS CLI 2.7.8 allowed us to get it working with api_version = "client.authentication.k8s.io/v1beta1". Other versions we are using: k8s = 1.22, Terraform = 1.2.5, Terraform AWS provider = 4.23.0, running under Azure DevOps.

@AxelJoly

We had the same issue; pinning the Helm provider version to 2.4.1 solved it.

@kappa8219

Changing the api version seems to be a step forward (in contrast to pinning the module version).


mamoit commented Aug 31, 2022

We couldn't use aws-cli v2 since we're running on Alpine, and it is painful to get it running there.
So we started using the aws-iam-authenticator.
With k8s v1.23 it even supports client.authentication.k8s.io/v1.

provider "helm" {
  kubernetes {
    host                   = var.cluster_endpoint
    cluster_ca_certificate = base64decode(var.cluster_ca_cert)
    exec {
      api_version = "client.authentication.k8s.io/v1"
      args        = ["token", "--cluster-id", var.cluster_name]
      command     = "aws-iam-authenticator"
    }
  }
}
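
A quick way to check which apiVersion your authenticator emits (a sketch; the cluster name is a placeholder):

aws-iam-authenticator token --cluster-id my-cluster | jq .apiVersion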

@iBrandyJackson (Member)

This issue is strictly informative, so we are closing it to prevent confusion. Anyone running into this, as the description states, will need to update their config to the new API version and update their exec plugins. If you continue to run into specific issues that updating both the config and the exec plugins does not solve, please open a new GitHub issue and we will review it. Thanks!

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Oct 22, 2022