---
layout: docs
page_title: Cluster Peering on Kubernetes
description: >-
  If you use Consul on Kubernetes, learn how to enable cluster peering, create peering CRDs, and then manage peering connections in consul-k8s.
---

# Cluster Peering on Kubernetes

~> Cluster peering is currently in beta: Functionality associated with cluster peering is subject to change. You should never use the beta release in secure environments or production scenarios. Features in beta may have performance issues, scaling issues, and limited support.

Cluster peering is not currently available in the HCP Consul offering.

To establish a cluster peering connection on Kubernetes, you need to enable the feature in the Helm chart and create custom resource definitions (CRDs) for each side of the peering.

The following CRDs are used to create and manage a peering connection:

- `PeeringAcceptor`: Generates a peering token and accepts an incoming peering connection.
- `PeeringDialer`: Uses a peering token to make an outbound peering connection with the cluster that generated the token.

## Prerequisites

You must meet the following requirements to create and use cluster peering connections with Kubernetes:

- Consul version 1.13.1 or later
- At least two Kubernetes clusters
- Consul on Kubernetes (`consul-k8s`) version 0.47.1 or later

## Prepare for install

1. After provisioning a Kubernetes cluster and setting up your kubeconfig file to manage access to multiple Kubernetes clusters, export the Kubernetes context names for future use with `kubectl`. For more information on how to use kubeconfig and contexts, refer to [Configure access to multiple clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) in the Kubernetes documentation.

   You can use the following methods to get the context names for your clusters:

   - Run the `kubectl config current-context` command to get the context for the cluster you are currently in.
   - Run the `kubectl config get-contexts` command to get all configured contexts in your kubeconfig file.

   ```shell-session
   $ export CLUSTER1_CONTEXT=<CONTEXT for first Kubernetes cluster>
   $ export CLUSTER2_CONTEXT=<CONTEXT for second Kubernetes cluster>
   ```
2. To establish cluster peering through Kubernetes, create a `values.yaml` file with the following Helm values.

   With these values, the servers in each cluster are exposed over a Kubernetes load balancer service. This service can be customized using `server.exposeService`.

   When generating a peering token from one of the clusters, Consul uses the addresses of the load balancer in the peering token so that the peering stream goes through the load balancer in front of the servers. To customize the addresses used in the peering token, refer to `global.peering.tokenGeneration`.

   ```yaml
   global:
     image: "hashicorp/consul:1.13.1"
     peering:
       enabled: true
   connectInject:
     enabled: true
   dns:
     enabled: true
     enableRedirection: true
   server:
     exposeService:
       enabled: true
   controller:
     enabled: true
   meshGateway:
     enabled: true
     replicas: 1
   ```
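
If the HashiCorp Helm repository is not yet configured on the machine you install from, add it before proceeding. A minimal sketch using HashiCorp's public Helm repository:

```shell-session
$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm repo update
```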

## Install Consul on Kubernetes

1. Install Consul on Kubernetes on each Kubernetes cluster by applying `values.yaml` using the Helm CLI.

   1. Install Consul on Kubernetes on `cluster-01`:

      ```shell-session
      $ export HELM_RELEASE_NAME=cluster-01
      $ helm install ${HELM_RELEASE_NAME} hashicorp/consul --create-namespace --namespace consul --version "0.47.1" --values values.yaml --kube-context $CLUSTER1_CONTEXT
      ```

   2. Install Consul on Kubernetes on `cluster-02`:

      ```shell-session
      $ export HELM_RELEASE_NAME=cluster-02
      $ helm install ${HELM_RELEASE_NAME} hashicorp/consul --create-namespace --namespace consul --version "0.47.1" --values values.yaml --kube-context $CLUSTER2_CONTEXT
      ```
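Before proceeding, you can optionally confirm that the Consul pods in each cluster are running. A quick sketch (the `consul` namespace matches the `--namespace` flag used in the install commands):

```shell-session
$ kubectl get pods --namespace consul --context $CLUSTER1_CONTEXT
$ kubectl get pods --namespace consul --context $CLUSTER2_CONTEXT
```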

## Create a peering token

To peer Kubernetes clusters running Consul, you need to create a peering token and share it with the other cluster. As part of the peering process, the peer names for each respective cluster within the peering are established by the `metadata.name` values of the `PeeringAcceptor` and `PeeringDialer` CRDs.

1. In `cluster-01`, create the `PeeringAcceptor` custom resource and save it as `acceptor.yaml`.

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: PeeringAcceptor
   metadata:
     name: cluster-02 ## The name of the peer you want to connect to
   spec:
     peer:
       secret:
         name: "peering-token"
         key: "data"
         backend: "kubernetes"
   ```
2. Apply the `PeeringAcceptor` resource to the first cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT apply --filename acceptor.yaml
   ```
3. Save your peering token so that you can export it to the other cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT get secret peering-token --output yaml > peering-token.yaml
   ```
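The token itself is stored base64-encoded under the secret's `data` key. If you want to inspect it, a minimal sketch (assuming the secret was created in the context's current namespace, as in the steps above):

```shell-session
$ kubectl --context $CLUSTER1_CONTEXT get secret peering-token --output jsonpath='{.data.data}' | base64 --decode
```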

## Establish a peering connection between clusters

1. Apply the peering token to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename peering-token.yaml
   ```
2. In `cluster-02`, create the `PeeringDialer` custom resource and save it as `dialer.yaml`.

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: PeeringDialer
   metadata:
     name: cluster-01 ## The name of the peer you want to connect to
   spec:
     peer:
       secret:
         name: "peering-token"
         key: "data"
         backend: "kubernetes"
   ```
3. Apply the `PeeringDialer` resource to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename dialer.yaml
   ```
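To check that the two clusters are now peered, one option is to read back the status reported on the custom resources. A sketch, assuming the resource names used in this guide:

```shell-session
$ kubectl --context $CLUSTER1_CONTEXT get peeringacceptor cluster-02 --output yaml
$ kubectl --context $CLUSTER2_CONTEXT get peeringdialer cluster-01 --output yaml
```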

## Export services between clusters

1. For the service in `cluster-02` that you want to export, add the `"consul.hashicorp.com/connect-inject": "true"` annotation to your service's pods, as in the following example, and save the file as `backend.yaml`.

   ```yaml
   # Service to expose backend
   apiVersion: v1
   kind: Service
   metadata:
     name: backend
   spec:
     selector:
       app: backend
     ports:
     - name: http
       protocol: TCP
       port: 80
       targetPort: 9090
   ---
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: backend
   ---
   # Deployment for backend
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: backend
     labels:
       app: backend
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: backend
     template:
       metadata:
         labels:
           app: backend
         annotations:
           "consul.hashicorp.com/connect-inject": "true"
       spec:
         serviceAccountName: backend
         containers:
         - name: backend
           image: nicholasjackson/fake-service:v0.22.4
           ports:
           - containerPort: 9090
           env:
           - name: "LISTEN_ADDR"
             value: "0.0.0.0:9090"
           - name: "NAME"
             value: "backend"
           - name: "MESSAGE"
             value: "Response from backend"
   ```
2. In `cluster-02`, create an `ExportedServices` custom resource and save it as `exportedsvc.yaml`.

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ExportedServices
   metadata:
     name: default ## The name of the partition containing the service
   spec:
     services:
       - name: backend ## The name of the service you want to export
         consumers:
         - peer: cluster-01 ## The name of the peer that receives the service
   ```
3. Apply the service file and the `ExportedServices` resource to the second cluster.

   ```shell-session
   $ kubectl apply --context $CLUSTER2_CONTEXT --filename backend.yaml --filename exportedsvc.yaml
   ```
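To confirm the export from the consuming side, one approach is to port-forward to a Consul server in `cluster-01` and query the health endpoint for the peered service, as in the verification step later on this page. A sketch, assuming the server pod follows the Helm chart's `<release>-consul-server-0` naming:

```shell-session
$ kubectl --context $CLUSTER1_CONTEXT port-forward cluster-01-consul-server-0 8500:8500 --namespace consul &
$ curl "localhost:8500/v1/health/connect/backend?peer=cluster-02"
```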

## Authorize services for peers

1. Create service intentions for the second cluster and save them as `intention.yml`.

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ServiceIntentions
   metadata:
     name: backend-deny
   spec:
     destination:
       name: backend
     sources:
       - name: "*"
         action: deny
       - name: frontend
         action: allow
   ```
2. Apply the intentions to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename intention.yml
   ```
3. For the services in `cluster-01` that you want to have access to `backend`, add the following annotations to the service file and save it as `frontend.yaml`. To dial the upstream service from an application, ensure that the requests are sent to the correct DNS name as specified in Service Virtual IP Lookups.

   ```yaml
   # Service to expose frontend
   apiVersion: v1
   kind: Service
   metadata:
     name: frontend
   spec:
     selector:
       app: frontend
     ports:
     - name: http
       protocol: TCP
       port: 9090
       targetPort: 9090
   ---
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: frontend
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: frontend
     labels:
       app: frontend
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: frontend
     template:
       metadata:
         labels:
           app: frontend
         annotations:
           "consul.hashicorp.com/connect-inject": "true"
       spec:
         serviceAccountName: frontend
         containers:
         - name: frontend
           image: nicholasjackson/fake-service:v0.22.4
           securityContext:
             capabilities:
               add: ["NET_ADMIN"]
           ports:
           - containerPort: 9090
           env:
           - name: "LISTEN_ADDR"
             value: "0.0.0.0:9090"
           - name: "UPSTREAM_URIS"
             value: "http://backend.virtual.cluster-02.consul"
           - name: "NAME"
             value: "frontend"
           - name: "MESSAGE"
             value: "Hello World"
           - name: "HTTP_CLIENT_KEEP_ALIVES"
             value: "false"
   ```
4. Apply the service file to the first cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT apply --filename frontend.yaml
   ```
5. Run the following command in the `frontend` pod and check the output to confirm that you peered your clusters successfully.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT exec -it $(kubectl --context $CLUSTER1_CONTEXT get pod -l app=frontend -o name) -- curl localhost:9090
   {
     "name": "frontend",
     "uri": "/",
     "type": "HTTP",
     "ip_addresses": [
       "10.16.2.11"
     ],
     "start_time": "2022-08-26T23:40:01.167199",
     "end_time": "2022-08-26T23:40:01.226951",
     "duration": "59.752279ms",
     "body": "Hello World",
     "upstream_calls": {
       "http://backend.virtual.cluster-02.consul": {
         "name": "backend",
         "uri": "http://backend.virtual.cluster-02.consul",
         "type": "HTTP",
         "ip_addresses": [
           "10.32.2.10"
         ],
         "start_time": "2022-08-26T23:40:01.223503",
         "end_time": "2022-08-26T23:40:01.224653",
         "duration": "1.149666ms",
         "headers": {
           "Content-Length": "266",
           "Content-Type": "text/plain; charset=utf-8",
           "Date": "Fri, 26 Aug 2022 23:40:01 GMT"
         },
         "body": "Response from backend",
         "code": 200
       }
     },
     "code": 200
   }
   ```

## End a peering connection

To end a peering connection, delete both the `PeeringAcceptor` and `PeeringDialer` resources.
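
For example, a minimal sketch that removes the resources created earlier in this guide:

```shell-session
$ kubectl --context $CLUSTER1_CONTEXT delete peeringacceptor cluster-02
$ kubectl --context $CLUSTER2_CONTEXT delete peeringdialer cluster-01
```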

To confirm that you deleted your peering connection, in `cluster-01`, query the `/health` HTTP endpoint. The peered services should no longer appear.

```shell-session
$ curl "localhost:8500/v1/health/connect/backend?peer=cluster-02"
```

## Recreate or reset a peering connection

To recreate or reset the peering connection, you need to generate a new peering token on the cluster where you created the `PeeringAcceptor` (in this example, `cluster-01`).

1. To do this, create or update the `consul.hashicorp.com/peering-version` annotation on the `PeeringAcceptor`. If the annotation already exists, update its value to a higher version. Note that Kubernetes annotation values must be strings, so quote the version number.

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: PeeringAcceptor
   metadata:
     name: cluster-02
     annotations:
       consul.hashicorp.com/peering-version: "1" ## The peering version you want to set
   spec:
     peer:
       secret:
         name: "peering-token"
         key: "data"
         backend: "kubernetes"
   ```
2. After you update the annotation, repeat the steps in the peering process, including saving your peering token so that you can export it to the other cluster. This re-establishes peering with the updated token.
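
As an alternative to editing the YAML in step 1, you can bump the version in place with `kubectl annotate`. A sketch, assuming the `PeeringAcceptor` name used in this guide and the context's current namespace:

```shell-session
$ kubectl --context $CLUSTER1_CONTEXT annotate peeringacceptor cluster-02 consul.hashicorp.com/peering-version="2" --overwrite
```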

~> Note: A new peering token is generated only when you manually set or update the value of the `consul.hashicorp.com/peering-version` annotation. Creating a new token causes the previous token to expire.