Bootstrapping GKE cluster for k8s tests

Necessary labels

The tests under topgun/k8s depend on having at least two nodes with different labels, indicating different node images:

  • a worker node with the label nodeImage=ubuntu, and
  • a worker node with the label nodeImage=cos.

This is necessary for testing baggageclaim drivers under different node images.
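One way to set this up (a sketch, not the only option) is to create two node pools with different image types and attach the corresponding label at creation time. The pool names and node counts below are placeholders; cluster-1 and us-central1-a match the names used elsewhere on this page:

# Ubuntu node pool, labelled nodeImage=ubuntu
gcloud container node-pools create ubuntu-pool \
    --cluster cluster-1 --zone us-central1-a \
    --image-type UBUNTU \
    --node-labels=nodeImage=ubuntu \
    --num-nodes 1

# Container-Optimized OS node pool, labelled nodeImage=cos
gcloud container node-pools create cos-pool \
    --cluster cluster-1 --zone us-central1-a \
    --image-type COS \
    --node-labels=nodeImage=cos \
    --num-nodes 1

# check that the labels show up on the nodes
kubectl get nodes -L nodeImage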

ConfigMap length bug -- use a later version of GKE

At the time of writing, the default GKE version, 1.9.7, has an issue with the Helm chart: the YAML that Helm renders from the templates is too long, running into a hardcoded limit on the length of a ConfigMap (cf. https://github.com/helm/helm/issues/1413).

We found that this problem could be overcome by deploying GKE 1.10.7 instead.
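As a sketch of creating the cluster at a newer version -- the exact patch version on offer (e.g. 1.10.7-gke.N) changes over time, so list what the zone currently supports first; the node count here is a placeholder:

# list the GKE versions currently available in the zone
gcloud container get-server-config --zone us-central1-a

# create the cluster at 1.10.7 or later, picking a full version from the list above
gcloud container clusters create cluster-1 \
    --zone us-central1-a --project cf-concourse-production \
    --cluster-version 1.10.7 \
    --num-nodes 2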

Once the new cluster was up, we retrieved the correct kubeconfig from GKE with a command like

gcloud container clusters get-credentials cluster-1 --zone us-central1-a --project cf-concourse-production

helm in an RBAC-enabled cluster

Next, a few steps were needed to create an appropriately authorized service account for tiller (the server-side component of Helm):

# create the serviceaccount and bind it to the cluster-admin role
kubectl -n kube-system create sa concourse
kubectl -n kube-system create clusterrolebinding concourse --clusterrole=cluster-admin --serviceaccount=kube-system:concourse

# extract the service account's token and add it to the kubeconfig
# (base64 -D is the macOS flag; on Linux use base64 --decode)
token=$(kubectl -n kube-system get secret $(kubectl -n kube-system get sa concourse -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -D)
kubectl config set-credentials concourse --token "$token"

# switch to a context that uses the new credentials
kubectl config set-context concourse --user concourse --cluster <your-cluster-name>
kubectl config use-context concourse

# create tiller with the permissions of that service account
helm init --service-account concourse

where <your-cluster-name> is the name of the GKE cluster (you can see it in the kubeconfig, or in the output of commands like kubectl config get-contexts), something like gke_cf-concourse-production_us-central1-a_cluster-1 in our case.
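As a quick sanity check (not part of the original steps), you can confirm that tiller came up and that the helm client can reach it:

# tiller should be running as a deployment in kube-system
kubectl -n kube-system get deploy tiller-deploy

# prints both client and server versions once tiller is reachable
helm version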

Put the kubeconfig in Vault

Then you can simply copy the contents of the kubeconfig (or at least the sections relevant to your new cluster) into Vault, at the path concourse/main/main/kube_config at the time of writing.
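For example, assuming the kubeconfig has been saved to a local file (kubeconfig.yml is a placeholder name) and that the secret uses the conventional value field that Concourse's Vault integration reads by default, something like this should work:

# the @ makes the CLI read the field's value from the file
vault kv put concourse/main/main/kube_config value=@kubeconfig.yml

With older Vault CLIs, vault write concourse/main/main/kube_config value=@kubeconfig.yml is the equivalent form.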