---
layout: docs
page_title: Cluster Peering - Create and Manage Connections
description: >-
  Generate a peering token to establish communication, export services, and
  authorize requests for cluster peering connections. Learn how to create,
  list, read, check, and delete peering connections.
---

# Create and Manage Cluster Peering Connections

A peering token enables cluster peering between different datacenters. Once you generate a peering token, you can use it to establish a connection between clusters. Then you can export services and create intentions so that peered clusters can call those services.

## Create a peering connection

Cluster peering is enabled by default on Consul servers as of v1.14. For additional information, including instructions for disabling peering, refer to Configuration Files.

Use the following steps to create a peering connection:

  1. Create a peering token
  2. Establish a connection between clusters
  3. Export services between clusters
  4. Authorize services for peers

You can generate peering tokens and initiate connections on any available agent using the API, the CLI, or the Consul UI. If you use the API or CLI, we recommend performing these operations through a client agent in the partition you want to connect.
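For example, the CLI can target a specific client agent with the standard `-http-addr` flag. This is only a sketch; the agent address below is a hypothetical placeholder:

```shell-session
$ consul peering generate-token -name cluster-02 -http-addr=http://10.0.0.5:8500
```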

The UI does not currently support exporting services between clusters or authorizing services for peers.

### Create a peering token

To begin the cluster peering process, generate a peering token in one of your clusters. The other cluster uses this token to establish the peering connection.

Every time you generate a peering token, a single-use establishment secret is embedded in the token. Because regenerating a peering token invalidates the previously generated secret, you must use the most recently created token to establish peering connections.

<Tabs>

<Tab heading="Consul API">

In `cluster-01`, use the `/peering/token` endpoint to issue a request for a peering token.

```shell-session
$ curl --request POST --data '{"PeerName":"cluster-02"}' --url http://localhost:8500/v1/peering/token
```

The request returns the peering token, which is a base64-encoded string containing the token details.

Create a JSON file named `peering_token.json` that contains the first cluster's name and the peering token.

```json
{
    "PeerName": "cluster-01",
    "PeeringToken": "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImF1ZCI6IlNvbHIifQ.5T7L_L1MPfQ_5FjKGa1fTPqrzwK4bNSM812nW6oyjb8"
}
```
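You can also build this file directly from the API response. The following one-liner is a sketch that assumes the `jq` utility is installed:

```shell-session
$ curl --request POST --data '{"PeerName":"cluster-02"}' --url http://localhost:8500/v1/peering/token \
    | jq '{"PeerName": "cluster-01", "PeeringToken": .PeeringToken}' > peering_token.json
```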

</Tab>

<Tab heading="Consul CLI">

In `cluster-01`, use the `consul peering generate-token` command to issue a request for a peering token.

```shell-session
$ consul peering generate-token -name cluster-02
```

The CLI outputs the peering token, which is a base64-encoded string containing the token details. Save this value to a file or clipboard to use in the next step on `cluster-02`.
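For example, you can redirect the output to a file; the filename here is an arbitrary choice:

```shell-session
$ consul peering generate-token -name cluster-02 > peering-token.txt
```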

</Tab>

<Tab heading="Consul UI">

1. In the Consul UI for the datacenter associated with `cluster-01`, click **Peers**.
1. Click **Add peer connection**.
1. In the **Generate token** tab, enter `cluster-02` in the **Name of peer** field.
1. Click the **Generate token** button.
1. Copy the token before you proceed. You cannot view it again after leaving this screen. If you lose your token, you must generate a new one.

</Tab>

</Tabs>

### Establish a connection between clusters

Next, use the peering token to establish a secure connection between the clusters.

<Tabs>

<Tab heading="Consul API">

In one of the client agents in `cluster-02`, use `peering_token.json` and the `/peering/establish` endpoint to establish the peering connection. This endpoint does not generate an output unless there is an error.

```shell-session
$ curl --request POST --data @peering_token.json http://127.0.0.1:8500/v1/peering/establish
```

When you connect server agents through cluster peering, their default behavior is to peer to the `default` partition. To establish peering connections for other partitions through server agents, you must add the `Partition` field to `peering_token.json` and specify the partitions you want to peer. For additional configuration information, refer to Cluster Peering - HTTP API.
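As a sketch, a `peering_token.json` that establishes the peering for a hypothetical partition named `finance` might look like the following; the token value is a placeholder:

```json
{
    "PeerName": "cluster-01",
    "PeeringToken": "<token generated in cluster-01>",
    "Partition": "finance"
}
```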

You can dial the peering/establish endpoint once per peering token. Peering tokens cannot be reused after being used to establish a connection. If you need to re-establish a connection, you must generate a new peering token.

</Tab>

<Tab heading="Consul CLI">

In one of the client agents in `cluster-02`, issue the `consul peering establish` command and specify the token generated in the previous step. The command prints "Successfully established peering connection with cluster-01" after the connection is established.

```shell-session
$ consul peering establish -name cluster-01 -peering-token token-from-generate
```

When you connect server agents through cluster peering, they peer their `default` partitions. To establish peering connections for other partitions through server agents, you must add the `-partition` flag to the establish command and specify the partitions you want to peer. For additional configuration information, refer to the `consul peering establish` command.
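For example, a sketch that peers a hypothetical partition named `finance`:

```shell-session
$ consul peering establish -name cluster-01 -peering-token token-from-generate -partition finance
```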

You can run the peering establish command once per peering token. Peering tokens cannot be reused after being used to establish a connection. If you need to re-establish a connection, you must generate a new peering token.

</Tab>

<Tab heading="Consul UI">

1. In the Consul UI for the datacenter associated with `cluster-02`, click **Peers** and then **Add peer connection**.
1. Click **Establish peering**.
1. In the **Name of peer** field, enter `cluster-01`. Then paste the peering token in the **Token** field.
1. Click **Add peer**.

</Tab>

</Tabs>

### Export services between clusters

After you establish a connection between the clusters, you need to create a configuration entry that defines the services that are available for other clusters. Consul uses this configuration entry to advertise service information and support service mesh connections across clusters.

First, create a configuration entry in a file named `peering-config.hcl` and specify the `Kind` as `"exported-services"`.

```hcl
Kind = "exported-services"
Name = "default"
Services = [
  {
    ## The name and namespace of the service to export.
    Name      = "service-name"
    Namespace = "default"

    ## The list of peer clusters to export the service to.
    Consumers = [
      {
        ## The peer name to reference in config is the one set
        ## during the peering process.
        Peer = "cluster-02"
      }
    ]
  }
]
```

Then, add the configuration entry to your cluster.

```shell-session
$ consul config write peering-config.hcl
```

Before you proceed, wait for the clusters to sync and make services available to their peers. You can issue an endpoint query to check the peered cluster status.
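For example, you can check the peering status (covered under Read a peering connection below) and wait for the state to report `ACTIVE`:

```shell-session
$ curl http://127.0.0.1:8500/v1/peering/cluster-02
```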

### Authorize services for peers

Before you can call services from peered clusters, you must set service intentions that authorize those clusters to use specific services. Consul prevents services from being exported to unauthorized clusters.

First, create a configuration entry and specify the `Kind` as `"service-intentions"`. Declare the service on `cluster-02` that can access the service in `cluster-01`. The following example sets service intentions so that `frontend-service` can access `backend-service`.

```hcl
Kind      = "service-intentions"
Name      = "backend-service"

Sources = [
  {
    Name   = "frontend-service"
    Peer   = "cluster-02"
    Action = "allow"
  }
]
```

If the peer's name is not specified in `Peer`, then Consul assumes that the service is in the local cluster.
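For contrast, a minimal sketch of the same intentions entry without a `Peer` field, which Consul resolves against the local cluster:

```hcl
Kind = "service-intentions"
Name = "backend-service"

Sources = [
  {
    ## No Peer field, so this source refers to a frontend-service
    ## in the local cluster.
    Name   = "frontend-service"
    Action = "allow"
  }
]
```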

Then, add the configuration entry to your cluster.

```shell-session
$ consul config write peering-intentions.hcl
```

## Manage peering connections

After you establish a peering connection, you can get a list of all active peering connections, read a specific peering connection's information, check peering connection health, and delete peering connections.

### List all peering connections

You can list all active peering connections in a cluster.

<Tabs>

<Tab heading="Consul API">

After you establish a peering connection, query the `/peerings/` endpoint to get a list of all peering connections. For example, the following command requests a list of all peering connections on localhost and returns the information as a series of JSON objects:

```shell-session
$ curl http://127.0.0.1:8500/v1/peerings
```

```json
[
    {
        "ID": "462c45e8-018e-f19d-85eb-1fc1bcc2ef12",
        "Name": "cluster-02",
        "State": "ACTIVE",
        "Partition": "default",
        "PeerID": "e83a315c-027e-bcb1-7c0c-a46650904a05",
        "PeerServerName": "server.dc1.consul",
        "PeerServerAddresses": [
            "10.0.0.1:8300"
        ],
        "CreateIndex": 89,
        "ModifyIndex": 89
    },
    {
        "ID": "1460ada9-26d2-f30d-3359-2968aa7dc47d",
        "Name": "cluster-03",
        "State": "INITIAL",
        "Partition": "default",
        "Meta": {
            "env": "production"
        },
        "CreateIndex": 109,
        "ModifyIndex": 119
    }
]
```

</Tab>

<Tab heading="Consul CLI">

After you establish a peering connection, run the `consul peering list` command to get a list of all peering connections. For example, the following command requests a list of all peering connections and returns the information in a table:

```shell-session
$ consul peering list

Name        State    Imported Svcs  Exported Svcs  Meta
cluster-02  ACTIVE   0              2              env=production
cluster-03  PENDING  0              0
```

</Tab>

<Tab heading="Consul UI">

In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in a datacenter.

The name that appears in the list is the name of the cluster in a different datacenter with an established peering connection.

</Tab>

</Tabs>

### Read a peering connection

You can get information about individual peering connections between clusters.

<Tabs>

<Tab heading="Consul API">

After you establish a peering connection, query the `/peering/` endpoint to get peering information for a specific cluster. For example, the following command requests peering connection information for `cluster-02` and returns the info as a JSON object:

```shell-session
$ curl http://127.0.0.1:8500/v1/peering/cluster-02
```

```json
{
    "ID": "462c45e8-018e-f19d-85eb-1fc1bcc2ef12",
    "Name": "cluster-02",
    "State": "INITIAL",
    "PeerID": "e83a315c-027e-bcb1-7c0c-a46650904a05",
    "PeerServerName": "server.dc1.consul",
    "PeerServerAddresses": [
        "10.0.0.1:8300"
    ],
    "CreateIndex": 89,
    "ModifyIndex": 89
}
```

</Tab>

<Tab heading="Consul CLI">

After you establish a peering connection, run the `consul peering read` command to get peering information for a specific cluster. For example, the following command requests peering connection information for `cluster-02`:

```shell-session
$ consul peering read -name cluster-02

Name:         cluster-02
ID:           3b001063-8079-b1a6-764c-738af5a39a97
State:        ACTIVE
Meta:
    env=production

Peer ID:               e83a315c-027e-bcb1-7c0c-a46650904a05
Peer Server Name:      server.dc1.consul
Peer CA Pems:          0
Peer Server Addresses:
    10.0.0.1:8300

Imported Services: 0
Exported Services: 2

Create Index: 89
Modify Index: 89
```

</Tab>

<Tab heading="Consul UI">

In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in that datacenter. Click the name of a peered cluster to view additional details about the peering connection.

</Tab>

</Tabs>

### Check peering connection health

You can perform health checks to verify the status of your peering connection.

To confirm that the peering connection between your clusters remains healthy, query the health/service endpoint of one cluster from the other cluster. For example, in "cluster-02," query the endpoint and add the peer=cluster-01 query parameter to the end of the URL.

```shell-session
$ curl \
    "http://127.0.0.1:8500/v1/health/service/<service-name>?peer=cluster-01"
```

A successful query includes service information in the output.
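For example, assuming `cluster-01` exports a service named `backend-service`:

```shell-session
$ curl "http://127.0.0.1:8500/v1/health/service/backend-service?peer=cluster-01"
```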

### Delete peering connections

You can disconnect the peered clusters by deleting their connection. Deleting a peering connection stops data replication to the peer and deletes imported data, including services and CA certificates.

In "cluster-01," request the deletion through the /peering/ endpoint.

$ curl --request DELETE http://127.0.0.1:8500/v1/peering/cluster-02

In "cluster-01," request the deletion through the consul peering delete command.

$ consul peering delete -name cluster-02

Successfully submitted peering connection, cluster-02, for deletion

</Tab>

<Tab heading="Consul UI">

In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in that datacenter.

Next to the name of the peer, click **More** (three horizontal dots) and then **Delete**. Click **Delete** to confirm and remove the peering connection.

</Tab>

</Tabs>

## L7 traffic management between peers

The following sections describe how to enable L7 traffic management features between peered clusters.

### Service resolvers for redirects and failover

As of Consul v1.14, you can use dynamic traffic management to configure your service mesh so that services automatically fail over and redirect between peers. The following examples update the `service-resolver` config entry in `cluster-01` so that Consul redirects traffic intended for the `frontend` service to a backup instance in peer `cluster-02` when it detects multiple connection failures.

<CodeTabs tabs={[ "HCL", "Kubernetes YAML", "JSON" ]}>

```hcl
Kind           = "service-resolver"
Name           = "frontend"
ConnectTimeout = "15s"
Failover = {
  "*" = {
    Targets = [
      {Peer = "cluster-02"}
    ]
  }
}
```

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: frontend
spec:
  connectTimeout: 15s
  failover:
    '*':
      targets:
        - peer: 'cluster-02'
          service: 'frontend'
          namespace: 'default'
```

```json
{
    "ConnectTimeout": "15s",
    "Kind": "service-resolver",
    "Name": "frontend",
    "Failover": {
        "*": {
            "Targets": [
                {
                    "Peer": "cluster-02"
                }
            ]
        }
    },
    "CreateIndex": 250,
    "ModifyIndex": 250
}
```

</CodeTabs>
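To apply the HCL variant, write it with the CLI as with the other configuration entries in this guide; the filename below is an arbitrary choice:

```shell-session
$ consul config write frontend-resolver.hcl
```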

### Service splitters and custom routes

The `service-splitter` and `service-router` configuration entry kinds do not support directly targeting a service instance hosted on a peer. To split or route traffic to a service on a peer, you must combine the definition with a `service-resolver` configuration entry that defines the service hosted on the peer as an upstream service. For example, to split traffic evenly between `frontend` services hosted on peers, first define the desired behavior locally:

<CodeTabs tabs={[ "HCL", "Kubernetes YAML", "JSON" ]}>

```hcl
Kind = "service-splitter"
Name = "frontend"
Splits = [
  {
    Weight  = 50
    ## defaults to service with same name as configuration entry ("frontend")
  },
  {
    Weight  = 50
    Service = "frontend-peer"
  },
]
```

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
  name: frontend
spec:
  splits:
    - weight: 50
      ## defaults to service with same name as configuration entry ("frontend")
    - weight: 50
      service: frontend-peer
```

```json
{
  "Kind": "service-splitter",
  "Name": "frontend",
  "Splits": [
    {
      "Weight": 50
    },
    {
      "Weight": 50,
      "Service": "frontend-peer"
    }
  ]
}
```

</CodeTabs>

Then, create a local `service-resolver` configuration entry named `frontend-peer` and define a redirect targeting the peer and its service:

<CodeTabs tabs={[ "HCL", "Kubernetes YAML", "JSON" ]}>

```hcl
Kind           = "service-resolver"
Name           = "frontend-peer"
Redirect {
  Service = "frontend"
  Peer    = "cluster-02"
}
```

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: frontend-peer
spec:
  redirect:
    peer: 'cluster-02'
    service: 'frontend'
```

```json
{
    "Kind": "service-resolver",
    "Name": "frontend-peer",
    "Redirect": {
        "Service": "frontend",
        "Peer": "cluster-02"
    }
}
```

</CodeTabs>
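As with the failover example, a sketch of applying the HCL variants of both entries (filenames are arbitrary choices):

```shell-session
$ consul config write frontend-splitter.hcl
$ consul config write frontend-peer-resolver.hcl
```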