---
layout: docs
page_title: Cluster Peering on Kubernetes
description: >-
  If you use Consul on Kubernetes, learn how to enable cluster peering, create peering CRDs, and then manage peering connections in consul-k8s.
---

# Cluster Peering on Kubernetes
To establish a cluster peering connection on Kubernetes, you need to enable the feature in the Helm chart and create custom resource definitions (CRDs) for each side of the peering.
The following CRDs are used to create and manage a peering connection:

`PeeringAcceptor`
: Generates a peering token and accepts an incoming peering connection.

`PeeringDialer`
: Uses a peering token to make an outbound peering connection with the cluster that generated the token.
As of Consul v1.14, you can also implement service failovers and redirects to control traffic between peers.
To learn how to peer clusters and connect services across peers in AWS Elastic Kubernetes Service (EKS) environments, complete the Consul Cluster Peering on Kubernetes tutorial.
## Prerequisites

You must implement the following requirements to create and use cluster peering connections with Kubernetes:

- Consul v1.14.0 or later
- At least two Kubernetes clusters
- Consul on Kubernetes v1.0.0 or later
Complete the following procedure after you have provisioned a Kubernetes cluster and set up your kubeconfig file to manage access to multiple Kubernetes clusters.
1. Use the `kubectl` command to export the Kubernetes context names and then set them to variables for future use. For more information on how to use kubeconfig and contexts, refer to the Kubernetes docs on configuring access to multiple clusters.

   You can use the following methods to get the context names for your clusters:

   - Use the `kubectl config current-context` command to get the context for the cluster you are currently in.
   - Use the `kubectl config get-contexts` command to get all configured contexts in your kubeconfig file.

   ```shell-session
   $ export CLUSTER1_CONTEXT=<CONTEXT for first Kubernetes cluster>
   $ export CLUSTER2_CONTEXT=<CONTEXT for second Kubernetes cluster>
   ```
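   For example, `kubectl config get-contexts` lists every configured context; the context names shown below are illustrative placeholders for your own clusters:

   ```shell-session
   $ kubectl config get-contexts
   CURRENT   NAME         CLUSTER      AUTHINFO     NAMESPACE
   *         cluster-01   cluster-01   cluster-01
             cluster-02   cluster-02   cluster-02
   ```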
1. To establish cluster peering through Kubernetes, create a `values.yaml` file with the following Helm values.

   ```yaml
   global:
     name: consul
     image: "hashicorp/consul:1.14.0"
     peering:
       enabled: true
   connectInject:
     enabled: true
   dns:
     enabled: true
     enableRedirection: true
   server:
     exposeService:
       enabled: true
   controller:
     enabled: true
   meshGateway:
     enabled: true
     replicas: 1
   ```
   These Helm values configure the servers in each cluster so that they expose ports over a Kubernetes load balancer service. For additional configuration options, refer to `server.exposeService`.

   When generating a peering token from one of the clusters, Consul includes a load balancer address in the token so that the peering stream goes through the load balancer in front of the servers. For additional configuration options, refer to `global.peering.tokenGeneration`.
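   As a minimal sketch of that option, the following values would embed a static server address in generated tokens instead of the load balancer address. The IP and port are placeholders, and the exact field names should be confirmed against the `global.peering.tokenGeneration` reference before use:

   ```yaml
   global:
     peering:
       enabled: true
       tokenGeneration:
         serverAddresses:
           source: "static"              # assumed setting that selects manually supplied addresses
           static: ["203.0.113.10:8502"] # placeholder address and server gRPC port
   ```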
1. Install Consul on Kubernetes by using the CLI to apply `values.yaml` to each cluster.
   1. In `cluster-01`, run the following commands:

      ```shell-session
      $ export HELM_RELEASE_NAME=cluster-01

      $ helm install ${HELM_RELEASE_NAME} hashicorp/consul --create-namespace --namespace consul --version "1.0.0" --values values.yaml --kube-context $CLUSTER1_CONTEXT
      ```
   1. In `cluster-02`, run the following commands:

      ```shell-session
      $ export HELM_RELEASE_NAME=cluster-02

      $ helm install ${HELM_RELEASE_NAME} hashicorp/consul --create-namespace --namespace consul --version "1.0.0" --values values.yaml --kube-context $CLUSTER2_CONTEXT
      ```
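Before continuing, it can help to confirm that the Consul pods in each cluster are running. This check is not part of the peering procedure itself; `consul` is the namespace created by the Helm commands above:

```shell-session
$ kubectl --context $CLUSTER1_CONTEXT get pods --namespace consul
$ kubectl --context $CLUSTER2_CONTEXT get pods --namespace consul
```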
## Create a peering token

To peer Kubernetes clusters running Consul, you need to create a peering token and share it with the other cluster. Complete the following steps to create the peer connection.

Peers identify each other using the `metadata.name` values you establish when creating the `PeeringAcceptor` and `PeeringDialer` CRDs.
1. In `cluster-01`, create the `PeeringAcceptor` custom resource and save it as `acceptor.yaml`.

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: PeeringAcceptor
   metadata:
     name: cluster-02 ## The name of the peer you want to connect to
   spec:
     peer:
       secret:
         name: "peering-token"
         key: "data"
         backend: "kubernetes"
   ```
1. Apply the `PeeringAcceptor` resource to the first cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT apply --filename acceptor.yaml
   ```
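   The acceptor writes the generated token into the Kubernetes secret named in the CRD. As an optional sanity check, you can confirm that the `peering-token` secret now exists:

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT get secret peering-token
   ```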
1. Save your peering token so that you can export it to the other cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT get secret peering-token --output yaml > peering-token.yaml
   ```
1. Apply the peering token to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename peering-token.yaml
   ```
## Establish a peering connection between clusters

1. In `cluster-02`, create the `PeeringDialer` custom resource and save it as `dialer.yaml`.

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: PeeringDialer
   metadata:
     name: cluster-01 ## The name of the peer you want to connect to
   spec:
     peer:
       secret:
         name: "peering-token"
         key: "data"
         backend: "kubernetes"
   ```
1. Apply the `PeeringDialer` resource to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename dialer.yaml
   ```
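   Once the dialer processes the token, the peering should be active. One way to confirm the connection, assuming a server pod named `consul-server-0` in the `consul` namespace, is the `consul peering list` CLI command introduced in Consul v1.14:

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT exec -it consul-server-0 --namespace consul -- consul peering list
   ```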
## Export services between clusters

The examples described in this section demonstrate how to export a service named `backend`. You should change instances of `backend` in the example code to the name of the service you want to export.
1. For the service in `cluster-02` that you want to export, add the `"consul.hashicorp.com/connect-inject": "true"` annotation to your service's pods prior to deploying. The annotation allows the workload to join the mesh. It is highlighted in the following example, saved as `backend.yaml`:

   ```yaml
   # Service to expose backend
   apiVersion: v1
   kind: Service
   metadata:
     name: backend
   spec:
     selector:
       app: backend
     ports:
       - name: http
         protocol: TCP
         port: 80
         targetPort: 9090
   ---
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: backend
   ---
   # Deployment for backend
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: backend
     labels:
       app: backend
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: backend
     template:
       metadata:
         labels:
           app: backend
         annotations:
           "consul.hashicorp.com/connect-inject": "true"
       spec:
         serviceAccountName: backend
         containers:
           - name: backend
             image: nicholasjackson/fake-service:v0.22.4
             ports:
               - containerPort: 9090
             env:
               - name: "LISTEN_ADDR"
                 value: "0.0.0.0:9090"
               - name: "NAME"
                 value: "backend"
               - name: "MESSAGE"
                 value: "Response from backend"
   ```
1. Deploy the `backend` service to the second cluster.

   ```shell-session
   $ kubectl apply --context $CLUSTER2_CONTEXT --filename backend.yaml
   ```
1. In `cluster-02`, create an `ExportedServices` custom resource and save it as `exportedsvc.yaml`.

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ExportedServices
   metadata:
     name: default ## The name of the partition containing the service
   spec:
     services:
       - name: backend ## The name of the service you want to export
         consumers:
           - peer: cluster-01 ## The name of the peer that receives the service
   ```
1. Apply the `ExportedServices` resource to the second cluster.

   ```shell-session
   $ kubectl apply --context $CLUSTER2_CONTEXT --filename exportedsvc.yaml
   ```
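   To verify that `backend` is now visible from the first cluster, you can run the same `/health` query that this page uses later from a server pod in `cluster-01`; the pod name `consul-server-0` and the `consul` namespace assume the default Helm installation above:

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT exec -it consul-server-0 --namespace consul -- curl "localhost:8500/v1/health/connect/backend?peer=cluster-02"
   ```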
## Authorize services for peers

1. Create service intentions for the second cluster and save them as `intention.yaml`.

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ServiceIntentions
   metadata:
     name: backend-deny
   spec:
     destination:
       name: backend
     sources:
       - name: "*"
         action: deny
       - name: frontend
         action: allow
   ```
1. Apply the intentions to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename intention.yaml
   ```
1. Add the `"consul.hashicorp.com/connect-inject": "true"` annotation to your service's pods before deploying the workload so that the services in `cluster-01` can dial `backend` in `cluster-02`. To dial the upstream service from an application, configure the application so that requests are sent to the correct DNS name as specified in Service Virtual IP Lookups. In the following example, saved as `frontend.yaml`, the annotation that allows the workload to join the mesh and the configuration that enables the workload to dial the upstream service using the correct DNS name are highlighted.

   ```yaml
   # Service to expose frontend
   apiVersion: v1
   kind: Service
   metadata:
     name: frontend
   spec:
     selector:
       app: frontend
     ports:
       - name: http
         protocol: TCP
         port: 9090
         targetPort: 9090
   ---
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: frontend
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: frontend
     labels:
       app: frontend
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: frontend
     template:
       metadata:
         labels:
           app: frontend
         annotations:
           "consul.hashicorp.com/connect-inject": "true"
       spec:
         serviceAccountName: frontend
         containers:
           - name: frontend
             image: nicholasjackson/fake-service:v0.22.4
             securityContext:
               capabilities:
                 add: ["NET_ADMIN"]
             ports:
               - containerPort: 9090
             env:
               - name: "LISTEN_ADDR"
                 value: "0.0.0.0:9090"
               - name: "UPSTREAM_URIS"
                 value: "http://backend.virtual.cluster-02.consul"
               - name: "NAME"
                 value: "frontend"
               - name: "MESSAGE"
                 value: "Hello World"
               - name: "HTTP_CLIENT_KEEP_ALIVES"
                 value: "false"
   ```
1. Apply the service file to the first cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT apply --filename frontend.yaml
   ```
1. Run the following command in `frontend` and then check the output to confirm that you peered your clusters successfully.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT exec -it $(kubectl --context $CLUSTER1_CONTEXT get pod -l app=frontend -o name) -- curl localhost:9090
   {
     "name": "frontend",
     "uri": "/",
     "type": "HTTP",
     "ip_addresses": [
       "10.16.2.11"
     ],
     "start_time": "2022-08-26T23:40:01.167199",
     "end_time": "2022-08-26T23:40:01.226951",
     "duration": "59.752279ms",
     "body": "Hello World",
     "upstream_calls": {
       "http://backend.virtual.cluster-02.consul": {
         "name": "backend",
         "uri": "http://backend.virtual.cluster-02.consul",
         "type": "HTTP",
         "ip_addresses": [
           "10.32.2.10"
         ],
         "start_time": "2022-08-26T23:40:01.223503",
         "end_time": "2022-08-26T23:40:01.224653",
         "duration": "1.149666ms",
         "headers": {
           "Content-Length": "266",
           "Content-Type": "text/plain; charset=utf-8",
           "Date": "Fri, 26 Aug 2022 23:40:01 GMT"
         },
         "body": "Response from backend",
         "code": 200
       }
     },
     "code": 200
   }
   ```
## End a peering connection

To end a peering connection, delete both the `PeeringAcceptor` and `PeeringDialer` resources.
1. Delete the `PeeringDialer` resource from the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT delete --filename dialer.yaml
   ```
1. Delete the `PeeringAcceptor` resource from the first cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT delete --filename acceptor.yaml
   ```
1. Confirm that you deleted your peering connection in `cluster-01` by querying the `/health` HTTP endpoint. The peered services should no longer appear.

   1. Exec into the server pod for the first cluster.

      ```shell-session
      $ kubectl exec -it consul-server-0 --context $CLUSTER1_CONTEXT -- /bin/sh
      ```

   1. If you've enabled ACLs, export an ACL token to access the `/health` HTTP endpoint for services. The bootstrap token may be used if an ACL token is not already provisioned.

      ```shell-session
      $ export CONSUL_HTTP_TOKEN=<INSERT BOOTSTRAP ACL TOKEN>
      ```

   1. Query the `/health` HTTP endpoint. The peered services should no longer appear.

      ```shell-session
      $ curl "localhost:8500/v1/health/connect/backend?peer=cluster-02"
      ```
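      If the peering was removed successfully, the query should return an empty list rather than any `backend` instances; the response body below is illustrative:

      ```shell-session
      $ curl "localhost:8500/v1/health/connect/backend?peer=cluster-02"
      []
      ```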
## Recreate or reset a peering connection

To recreate or reset the peering connection, you need to generate a new peering token from the cluster where you created the `PeeringAcceptor`.
1. In the `PeeringAcceptor` CRD, add the annotation `consul.hashicorp.com/peering-version`. If the annotation already exists, update its value to a higher version.

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: PeeringAcceptor
   metadata:
     name: cluster-02
     annotations:
       consul.hashicorp.com/peering-version: "1" ## The peering version you want to set, must be in quotes
   spec:
     peer:
       secret:
         name: "peering-token"
         key: "data"
         backend: "kubernetes"
   ```
1. After updating `PeeringAcceptor`, repeat the following steps to create a peering connection:

   1. Create a peering token
   1. Establish a peering connection between clusters
   1. Export services between clusters
   1. Authorize services for peers

   Your peering connection is re-established with the updated token.
~> **Note:** The only way to create or set a new peering token is to manually adjust the value of the annotation `consul.hashicorp.com/peering-version`. Creating a new token causes the previous token to expire.
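Rather than editing the manifest, one way to bump the annotation is with `kubectl annotate`; the version value `2` below is a placeholder for whatever the next higher version is in your cluster:

```shell-session
$ kubectl --context $CLUSTER1_CONTEXT annotate peeringacceptor cluster-02 "consul.hashicorp.com/peering-version=2" --overwrite
```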
## Traffic management between peers

As of Consul v1.14, you can use dynamic traffic management to configure your service mesh so that services automatically failover and redirect between peers.

To configure automatic service failovers and redirects, edit the `ServiceResolver` CRD so that traffic resolves to a backup service instance on a peer. The following example updates the `ServiceResolver` CRD in `cluster-01` so that Consul redirects traffic intended for the `frontend` service to a backup instance in `cluster-02` when it detects multiple connection failures to the primary instance.
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: frontend
spec:
  connectTimeout: 15s
  failover:
    '*':
      targets:
        - peer: 'cluster-02'
          service: 'backup'
          namespace: 'default'
```
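To activate the failover policy, apply the resolver to the first cluster. The filename `service-resolver.yaml` is an assumed name for the file containing the example above:

```shell-session
$ kubectl --context $CLUSTER1_CONTEXT apply --filename service-resolver.yaml
```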