Docs/cluster peering 1.15 updates (#16291)

* initial commit

* initial commit

* Overview updates

* Overview page improvements

* More Overview improvements

* improvements

* Small fixes/updates

* Updates

* Overview updates

* Nav data

* More nav updates

* Fix

* updates

* Updates + tip test

* Directory test

* refining

* Create restructure w/ k8s

* Single usage page

* Technical Specification

* k8s pages

* typo

* L7 traffic management

* Manage connections

* k8s page fix

* Create page tab corrections

* link to k8s

* intentions

* corrections

* Add-on intention descriptions

* adjustments

* Missing </CodeTabs>

* Diagram improvements

* Final diagram update

* Apply suggestions from code review

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com>
Co-authored-by: David Yu <dyu@hashicorp.com>

* diagram name fix

* Fixes

* Updates to index.mdx

* Tech specs page corrections

* Tech specs page rename

* update link to tech specs

* K8s - new pages + tech specs

* k8s - manage peering connections

* k8s L7 traffic management

* Separated establish connection pages

* Directory fixes

* Usage clean up

* k8s docs edits

* Updated nav data

* CodeBlock Component fix

* filename

* CodeBlockConfig removal

* Redirects

* Update k8s filenames

* Reshuffle k8s tech specs for clarity, fmt yaml files

* Update general cluster peering docs, reorder CLI > API > UI, cross link to kubernetes

* Fix config rendering in k8s usage docs, cross link to general usage from k8s docs

* fix legacy link

* update k8s docs

* fix nested list rendering

* redirect fix

* page error

---------

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com>
Co-authored-by: David Yu <dyu@hashicorp.com>
Co-authored-by: Tu Nguyen <im2nguyen@gmail.com>
Jeff Boruszak 2023-02-23 11:58:39 -06:00 committed by GitHub
parent d1294cf14e
commit cddf86f337
17 changed files with 1642 additions and 1220 deletions

View File

@ -48,6 +48,7 @@ $ curl \
### Sample Response
<Tabs>
<Tab heading="OSS">
```json
@ -77,6 +78,7 @@ $ curl \
```
</Tab>
<Tab heading="Enterprise">
```json

View File

@ -1,537 +0,0 @@
---
layout: docs
page_title: Cluster Peering - Create and Manage Connections
description: >-
Generate a peering token to establish communication, export services, and authorize requests for cluster peering connections. Learn how to create, list, read, check, and delete peering connections.
---
<TabProvider>
# Create and Manage Cluster Peering Connections
A peering token enables cluster peering between different datacenters. Once you generate a peering token, you can use it to establish a connection between clusters. Then you can export services and create intentions so that peered clusters can call those services.
## Create a peering connection
Cluster peering is enabled by default on Consul servers as of v1.14. For additional information, including options to disable cluster peering, refer to [Configuration Files](/consul/docs/agent/config/config-files).
The process to create a peering connection is a sequence with multiple steps:
1. Create a peering token
1. Establish a connection between clusters
1. Export services between clusters
1. Authorize services for peers
You can generate peering tokens and initiate connections on any available agent using the API, the CLI, or the Consul UI. If you use the API or CLI, we recommend performing these operations through a client agent in the partition you want to connect.
The UI does not currently support exporting services between clusters or authorizing services for peers.
### Create a peering token
To begin the cluster peering process, generate a peering token in one of your clusters. The other cluster uses this token to establish the peering connection.
Every time you generate a peering token, a single-use establishment secret is embedded in the token. Because regenerating a peering token invalidates the previously generated secret, you must use the most recently created token to establish peering connections.
<Tabs>
<Tab heading="Consul API" group="api">
In `cluster-01`, use the [`/peering/token` endpoint](/consul/api-docs/peering#generate-a-peering-token) to issue a request for a peering token.
```shell-session
$ curl --request POST --data '{"Peer":"cluster-02"}' --url http://localhost:8500/v1/peering/token
```
The call returns the peering token, which is a base64-encoded string containing the token details.
Create a JSON file that contains the first cluster's name and the peering token.
<CodeBlockConfig filename="peering_token.json" hideClipboard>
```json
{
"Peer": "cluster-01",
"PeeringToken": "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImF1ZCI6IlNvbHIifQ.5T7L_L1MPfQ_5FjKGa1fTPqrzwK4bNSM812nW6oyjb8"
}
```
</CodeBlockConfig>
</Tab>
<Tab heading="Consul CLI" group="cli">
In `cluster-01`, use the [`consul peering generate-token` command](/consul/commands/peering/generate-token) to issue a request for a peering token.
```shell-session
$ consul peering generate-token -name cluster-02
```
The CLI outputs the peering token, which is a base64-encoded string containing the token details.
Save this value to a file or clipboard to be used in the next step on `cluster-02`.
</Tab>
<Tab heading="Consul UI" group="ui">
1. In the Consul UI for the datacenter associated with `cluster-01`, click **Peers**.
1. Click **Add peer connection**.
1. In the **Generate token** tab, enter `cluster-02` in the **Name of peer** field.
1. Click the **Generate token** button.
1. Copy the token before you proceed. You cannot view it again after leaving this screen. If you lose your token, you must generate a new one.
</Tab>
</Tabs>
### Establish a connection between clusters
Next, use the peering token to establish a secure connection between the clusters.
<Tabs>
<Tab heading="Consul API" group="api">
In one of the client agents in "cluster-02," use `peering_token.json` and the [`/peering/establish` endpoint](/consul/api-docs/peering#establish-a-peering-connection) to establish the peering connection. This endpoint does not generate an output unless there is an error.
```shell-session
$ curl --request POST --data @peering_token.json http://127.0.0.1:8500/v1/peering/establish
```
When you connect server agents through cluster peering, their default behavior is to peer to the `default` partition. To establish peering connections for other partitions through server agents, you must add the `Partition` field to `peering_token.json` and specify the partitions you want to peer. For additional configuration information, refer to [Cluster Peering - HTTP API](/consul/api-docs/peering).
You can dial the `peering/establish` endpoint once per peering token. Peering tokens cannot be reused after being used to establish a connection. If you need to re-establish a connection, you must generate a new peering token.
</Tab>
<Tab heading="Consul CLI" group="cli">
In one of the client agents in "cluster-02," issue the [`consul peering establish` command](/consul/commands/peering/establish) and specify the token generated in the previous step. The command establishes the peering connection.
The command prints "Successfully established peering connection with cluster-01" after the connection is established.
```shell-session
$ consul peering establish -name cluster-01 -peering-token token-from-generate
```
When you connect server agents through cluster peering, they peer their default partitions.
To establish peering connections for other partitions through server agents, you must add the `-partition` flag to the `establish` command and specify the partitions you want to peer.
For additional configuration information, refer to the [`consul peering establish` command](/consul/commands/peering/establish).
You can run the `peering establish` command once per peering token.
Peering tokens cannot be reused after being used to establish a connection.
If you need to re-establish a connection, you must generate a new peering token.
</Tab>
<Tab heading="Consul UI" group="ui">
1. In the Consul UI for the datacenter associated with `cluster-02`, click **Peers** and then **Add peer connection**.
1. Click **Establish peering**.
1. In the **Name of peer** field, enter `cluster-01`. Then paste the peering token in the **Token** field.
1. Click **Add peer**.
</Tab>
</Tabs>
### Export services between clusters
After you establish a connection between the clusters, you need to create a configuration entry that defines the services that are available for other clusters. Consul uses this configuration entry to advertise service information and support service mesh connections across clusters.
First, create a configuration entry and specify the `Kind` as `"exported-services"`.
<CodeBlockConfig filename="peering-config.hcl" hideClipboard>
```hcl
Kind = "exported-services"
Name = "default"
Services = [
{
## The name and namespace of the service to export.
Name = "service-name"
Namespace = "default"
## The list of peer clusters to export the service to.
Consumers = [
{
## The peer name to reference in config is the one set
## during the peering process.
Peer = "cluster-02"
}
]
}
]
```
</CodeBlockConfig>
Then, add the configuration entry to your cluster.
```shell-session
$ consul config write peering-config.hcl
```
Before you proceed, wait for the clusters to sync and make services available to their peers. You can issue an endpoint query to [check the peered cluster status](#check-peered-cluster-status).
### Authorize services for peers
Before you can call services from peered clusters, you must set service intentions that authorize those clusters to use specific services. Consul prevents services from being exported to unauthorized clusters.
First, create a configuration entry and specify the `Kind` as `"service-intentions"`. Declare the service on "cluster-02" that can access the service in "cluster-01." The following example sets service intentions so that "frontend-service" can access "backend-service."
<CodeBlockConfig filename="peering-intentions.hcl" hideClipboard>
```hcl
Kind = "service-intentions"
Name = "backend-service"
Sources = [
{
Name = "frontend-service"
Peer = "cluster-02"
Action = "allow"
}
]
```
</CodeBlockConfig>
If the peer's name is not specified in `Peer`, then Consul assumes that the service is in the local cluster.
Then, add the configuration entry to your cluster.
```shell-session
$ consul config write peering-intentions.hcl
```
### Authorize service reads with ACLs
If ACLs are enabled on a Consul cluster, sidecar proxies that access exported services as an upstream must have an ACL token that grants read access.
Read access to all imported services is granted using either of the following rules associated with an ACL token:
- `service:write` permissions for any service in the sidecar's partition.
- `service:read` and `node:read` for all services and nodes, respectively, in the sidecar's namespace and partition.
For Consul Enterprise, access is granted to all imported services in the service's partition.
These permissions are satisfied when using a [service identity](/consul/docs/security/acl/acl-roles#service-identities).
Example rule files can be found in [Reading Services](/consul/docs/connect/config-entries/exported-services#reading-services) in the `exported-services` config entry documentation.
Refer to [ACLs System Overview](/consul/docs/security/acl) for more information on ACLs and their setup.
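For example, the following is a minimal sketch of ACL rules that satisfy the second option, granting `service:read` and `node:read` for all services and nodes. Adjust or scope the rules to the relevant namespace and partition if you use Consul Enterprise.
```hcl
## Read access to all services and nodes, which allows the sidecar
## to resolve imported services
service_prefix "" {
  policy = "read"
}
node_prefix "" {
  policy = "read"
}
```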
## Manage peering connections
After you establish a peering connection, you can get a list of all active peering connections, read a specific peering connection's information, check peering connection health, and delete peering connections.
### List all peering connections
You can list all active peering connections in a cluster.
<Tabs>
<Tab heading="Consul API" group="api">
After you establish a peering connection, [query the `/peerings/` endpoint](/consul/api-docs/peering#list-all-peerings) to get a list of all peering connections. For example, the following command requests a list of all peering connections on `localhost` and returns the information as a series of JSON objects:
```shell-session
$ curl http://127.0.0.1:8500/v1/peerings
[
{
"ID": "462c45e8-018e-f19d-85eb-1fc1bcc2ef12",
"Name": "cluster-02",
"State": "ACTIVE",
"Partition": "default",
"PeerID": "e83a315c-027e-bcb1-7c0c-a46650904a05",
"PeerServerName": "server.dc1.consul",
"PeerServerAddresses": [
"10.0.0.1:8300"
],
"CreateIndex": 89,
"ModifyIndex": 89
},
{
"ID": "1460ada9-26d2-f30d-3359-2968aa7dc47d",
"Name": "cluster-03",
"State": "INITIAL",
"Partition": "default",
"Meta": {
"env": "production"
},
"CreateIndex": 109,
"ModifyIndex": 119
}
]
```
</Tab>
<Tab heading="Consul CLI" group="cli">
After you establish a peering connection, run the [`consul peering list`](/consul/commands/peering/list) command to get a list of all peering connections.
For example, the following command requests a list of all peering connections and returns the information in a table:
```shell-session
$ consul peering list
Name State Imported Svcs Exported Svcs Meta
cluster-02 ACTIVE 0 2 env=production
cluster-03 PENDING 0 0
```
</Tab>
<Tab heading="Consul UI" group="ui">
In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in a datacenter.
The name that appears in the list is the name of the cluster in a different datacenter with an established peering connection.
</Tab>
</Tabs>
### Read a peering connection
You can get information about individual peering connections between clusters.
<Tabs>
<Tab heading="Consul API" group="api">
After you establish a peering connection, [query the `/peering/` endpoint](/consul/api-docs/peering#read-a-peering-connection) to get peering information for a specific cluster. For example, the following command requests peering connection information for "cluster-02" and returns the info as a JSON object:
```shell-session
$ curl http://127.0.0.1:8500/v1/peering/cluster-02
{
"ID": "462c45e8-018e-f19d-85eb-1fc1bcc2ef12",
"Name": "cluster-02",
"State": "INITIAL",
"PeerID": "e83a315c-027e-bcb1-7c0c-a46650904a05",
"PeerServerName": "server.dc1.consul",
"PeerServerAddresses": [
"10.0.0.1:8300"
],
"CreateIndex": 89,
"ModifyIndex": 89
}
```
</Tab>
<Tab heading="Consul CLI" group="cli">
After you establish a peering connection, run the [`consul peering read`](/consul/commands/peering/read) command to get peering information for a specific cluster.
For example, the following command requests peering connection information for "cluster-02":
```shell-session
$ consul peering read -name cluster-02
Name: cluster-02
ID: 3b001063-8079-b1a6-764c-738af5a39a97
State: ACTIVE
Meta:
env=production
Peer ID: e83a315c-027e-bcb1-7c0c-a46650904a05
Peer Server Name: server.dc1.consul
Peer CA Pems: 0
Peer Server Addresses:
10.0.0.1:8300
Imported Services: 0
Exported Services: 2
Create Index: 89
Modify Index: 89
```
</Tab>
<Tab heading="Consul UI" group="ui">
In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in that datacenter. Click the name of a peered cluster to view additional details about the peering connection.
</Tab>
</Tabs>
### Check peering connection health
You can check the status of your peering connection to perform health checks.
To confirm that the peering connection between your clusters remains healthy, query the [`health/service` endpoint](/consul/api-docs/health) of one cluster from the other cluster. For example, in "cluster-02," query the endpoint and add the `peer=cluster-01` query parameter to the end of the URL.
```shell-session
$ curl \
"http://127.0.0.1:8500/v1/health/service/<service-name>?peer=cluster-01"
```
A successful query includes service information in the output.
### Delete peering connections
You can disconnect the peered clusters by deleting their connection. Deleting a peering connection stops data replication to the peer and deletes imported data, including services and CA certificates.
<Tabs>
<Tab heading="Consul API" group="api">
In "cluster-01," request the deletion through the [`/peering/` endpoint](/consul/api-docs/peering#delete-a-peering-connection).
```shell-session
$ curl --request DELETE http://127.0.0.1:8500/v1/peering/cluster-02
```
</Tab>
<Tab heading="Consul CLI" group="cli">
In "cluster-01," request the deletion through the [`consul peering delete`](/consul/commands/peering/delete) command.
```shell-session
$ consul peering delete -name cluster-02
Successfully submitted peering connection, cluster-02, for deletion
```
</Tab>
<Tab heading="Consul UI" group="ui">
In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in that datacenter.
Next to the name of the peer, click **More** (three horizontal dots) and then **Delete**. Click **Delete** to confirm and remove the peering connection.
</Tab>
</Tabs>
## L7 traffic management between peers
The following sections describe how to enable L7 traffic management features between peered clusters.
### Service resolvers for redirects and failover
As of Consul v1.14, you can use [dynamic traffic management](/consul/docs/connect/l7-traffic) to configure your service mesh so that services automatically failover and redirect between peers. The following examples update the [`service-resolver` config entry](/consul/docs/connect/config-entries/) in `cluster-01` so that Consul redirects traffic intended for the `frontend` service to a backup instance in peer `cluster-02` when it detects multiple connection failures.
<CodeTabs tabs={[ "HCL", "Kubernetes YAML", "JSON" ]}>
```hcl
Kind = "service-resolver"
Name = "frontend"
ConnectTimeout = "15s"
Failover = {
"*" = {
Targets = [
{Peer = "cluster-02"}
]
}
}
```
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: frontend
spec:
  connectTimeout: 15s
  failover:
    '*':
      targets:
        - peer: 'cluster-02'
          service: 'frontend'
          namespace: 'default'
```
```json
{
"ConnectTimeout": "15s",
"Kind": "service-resolver",
"Name": "frontend",
"Failover": {
"*": {
"Targets": [
{
"Peer": "cluster-02"
}
]
}
},
"CreateIndex": 250,
"ModifyIndex": 250
}
```
</CodeTabs>
### Service splitters and custom routes
The `service-splitter` and `service-router` configuration entry kinds do not support directly targeting a service instance hosted on a peer. To split or route traffic to a service on a peer, you must combine the definition with a `service-resolver` configuration entry that defines the service hosted on the peer as an upstream service. For example, to split traffic evenly between `frontend` services hosted on peers, first define the desired behavior locally:
<CodeTabs tabs={[ "HCL", "Kubernetes YAML", "JSON" ]}>
```hcl
Kind = "service-splitter"
Name = "frontend"
Splits = [
{
Weight = 50
## defaults to service with same name as configuration entry ("frontend")
},
{
Weight = 50
Service = "frontend-peer"
},
]
```
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
  name: frontend
spec:
  splits:
    - weight: 50
      ## defaults to service with same name as configuration entry ("frontend")
    - weight: 50
      service: frontend-peer
```
```json
{
"Kind": "service-splitter",
"Name": "frontend",
"Splits": [
{
"Weight": 50
},
{
"Weight": 50,
"Service": "frontend-peer"
}
]
}
```
</CodeTabs>
Then, create a local `service-resolver` configuration entry named `frontend-peer` and define a redirect targeting the peer and its service:
<CodeTabs tabs={[ "HCL", "Kubernetes YAML", "JSON" ]}>
```hcl
Kind = "service-resolver"
Name = "frontend-peer"
Redirect {
Service = "frontend"
Peer = "cluster-02"
}
```
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: frontend-peer
spec:
  redirect:
    peer: 'cluster-02'
    service: 'frontend'
```
```json
{
"Kind": "service-resolver",
"Name": "frontend-peer",
"Redirect": {
"Service": "frontend",
"Peer": "cluster-02"
}
}
```
</CodeTabs>
</TabProvider>

View File

@ -1,32 +1,37 @@
---
layout: docs
page_title: Service Mesh - What is Cluster Peering?
page_title: Cluster Peering Overview
description: >-
Cluster peering establishes communication between independent clusters in Consul, allowing services to interact across datacenters. Learn about the cluster peering process, differences with WAN federation for multi-datacenter deployments, and technical constraints.
Cluster peering establishes communication between independent clusters in Consul, allowing services to interact across datacenters. Learn how cluster peering works, its differences with WAN federation for multi-datacenter deployments, and how to troubleshoot common issues.
---
# What is Cluster Peering?
# Cluster peering overview
You can create peering connections between two or more independent clusters so that services deployed to different partitions or datacenters can communicate.
This topic provides an overview of cluster peering, which lets you connect two or more independent Consul clusters so that services deployed to different partitions or datacenters can communicate.
Cluster peering is enabled in Consul by default. For specific information about cluster peering configuration and usage, refer to the following pages.
## Overview
## What is cluster peering?
Cluster peering is a process that allows Consul clusters to communicate with each other. The cluster peering process consists of the following steps:
Consul supports cluster peering connections between two [admin partitions](/consul/docs/enterprise/admin-partitions) _in different datacenters_. Deployments without an Enterprise license can still use cluster peering because every datacenter automatically includes a default partition. Meanwhile, admin partitions _in the same datacenter_ do not require cluster peering connections because you can export services between them without generating or exchanging a peering token.
1. Create a peering token in one cluster.
1. Use the peering token to establish peering with a second cluster.
1. Export services between clusters.
1. Create intentions to authorize services for peers.
The following diagram describes Consul's cluster peering architecture.
This process establishes cluster peering between two [admin partitions](/consul/docs/enterprise/admin-partitions). Deployments without an Enterprise license can still use cluster peering because every datacenter automatically includes a `default` partition.
![Diagram of cluster peering with admin partitions](/img/cluster-peering-diagram.png)
For detailed instructions on establishing cluster peering connections, refer to [Create and Manage Peering Connections](/consul/docs/connect/cluster-peering/create-manage-peering).
In this diagram, the `default` partition in Consul DC 1 has a cluster peering connection with the `web` partition in Consul DC 2. The connection, which each partition's mesh gateway enforces, enables `Service B` to communicate with `Service C` as a service upstream.
> To learn how to peer clusters and connect services across peers in AWS Elastic Kubernetes Service (EKS) environments, complete the [Consul Cluster Peering on Kubernetes tutorial](/consul/tutorials/developer-mesh/cluster-peering-aws?utm_source=docs).
Cluster peering leverages several components of Consul's architecture to enforce secure communication between services:
### Differences between WAN federation and cluster peering
- A _peering token_ contains an embedded secret that securely establishes communication when shared symmetrically between datacenters. Sharing this token enables each datacenter's server agents to recognize requests from authorized peers, similar to how the [gossip encryption key secures agent LAN gossip](/consul/docs/security/encryption#gossip-encryption).
- A _mesh gateway_ encrypts outgoing traffic, decrypts incoming traffic, and directs traffic to healthy services. Consul's service mesh features must be enabled in order to use mesh gateways. Mesh gateways support the specific admin partitions they are deployed on. Refer to [Mesh gateways](/consul/docs/connect/gateways/mesh-gateway) for more information.
- An _exported service_ communicates with downstreams deployed in other admin partitions. Exported services are explicitly defined in an [`exported-services` configuration entry](/consul/docs/connect/config-entries/exported-services).
- A _service intention_ secures [service-to-service communication in a service mesh](/consul/docs/connect/intentions). Intentions enable identity-based access between services by exchanging TLS certificates, which the service's sidecar proxy verifies upon each request.
WAN federation and cluster peering are different ways to connect Consul deployments. WAN federation connects multiple datacenters to make them function as if they were a single cluster, while cluster peering treats each datacenter as a separate cluster. As a result, WAN federation requires a primary datacenter to maintain and replicate global states such as ACLs and configuration entries, but cluster peering does not.
### Compared with WAN federation
WAN federation and cluster peering are different ways to connect services through mesh gateways so that they can communicate across datacenters. WAN federation connects multiple datacenters to make them function as if they were a single cluster, while cluster peering treats each datacenter as a separate cluster. As a result, WAN federation requires a primary datacenter to maintain and replicate global states such as ACLs and configuration entries, but cluster peering does not.
WAN federation and cluster peering also treat encrypted traffic differently. While mesh gateways between WAN federated datacenters use mTLS to keep data encrypted, mesh gateways between peers terminate mTLS sessions, decrypt data to HTTP services, and then re-encrypt traffic to send to services. Data must be decrypted in order to evaluate and apply dynamic routing rules at the destination cluster, which reduces coupling between peers.
Regardless of whether you connect your clusters through WAN federation or cluster peering, human and machine users can use either method to discover services in other clusters or dial them through the service mesh.
@ -42,11 +47,40 @@ Regardless of whether you connect your clusters through WAN federation or cluste
| Shares key/value stores | &#9989; | &#10060; |
| Can replicate ACL tokens, policies, and roles | &#9989; | &#10060; |
## Important Cluster Peering Constraints
## Guidance
Consider the following technical constraints:
The following resources are available to help you use Consul's cluster peering features.
**Tutorials:**
- To learn how to peer clusters and connect services across peers in AWS Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE) environments, complete the [Consul Cluster Peering on Kubernetes tutorial](/consul/tutorials/developer-mesh/cluster-peering).
**Usage documentation:**
- [Establish cluster peering connections](/consul/docs/connect/cluster-peering/usage/establish-peering)
- [Manage cluster peering connections](/consul/docs/connect/cluster-peering/usage/manage-connections)
- [Manage L7 traffic with cluster peering](/consul/docs/connect/cluster-peering/usage/peering-traffic-management)
**Kubernetes-specific documentation:**
- [Cluster peering on Kubernetes technical specifications](/consul/docs/k8s/connect/cluster-peering/tech-specs)
- [Establish cluster peering connections on Kubernetes](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering)
- [Manage cluster peering connections on Kubernetes](/consul/docs/k8s/connect/cluster-peering/usage/manage-peering)
- [Manage L7 traffic with cluster peering on Kubernetes](/consul/docs/k8s/connect/cluster-peering/usage/l7-traffic)
**Reference documentation:**
- [Cluster peering technical specifications](/consul/docs/connect/cluster-peering/tech-specs)
- [HTTP API reference: `/peering/` endpoint](/consul/api-docs/peering)
- [CLI reference: `peering` command](/consul/commands/peering)
## Basic troubleshooting
If you experience errors when using Consul's cluster peering features, refer to the following list of technical constraints.
- Peer names can only contain lowercase characters.
- Services with node, instance, and check definitions totaling more than 8MB cannot be exported to a peer.
- Two admin partitions in the same datacenter cannot be peered. Use [`exported-services`](/consul/docs/connect/config-entries/exported-services#exporting-services-to-peered-clusters) directly.
- The `consul intention` CLI command is not supported. To manage intentions that specify services in peered clusters, use [configuration entries](/consul/docs/connect/config-entries/service-intentions).
- Accessing key/value stores across peers is not supported.
- Two admin partitions in the same datacenter cannot be peered. Use the [`exported-services` configuration entry](/consul/docs/connect/config-entries/exported-services#exporting-services-to-peered-clusters) instead.
- To manage intentions that specify services in peered clusters, use [configuration entries](/consul/docs/connect/config-entries/service-intentions). The `consul intention` CLI command is not supported.
- The Consul UI does not support exporting services between clusters or creating service intentions. Use either the API or the CLI to complete these required steps when establishing new cluster peering connections.
- Accessing key/value stores across peers is not supported.

View File

@ -1,593 +0,0 @@
---
layout: docs
page_title: Cluster Peering on Kubernetes
description: >-
If you use Consul on Kubernetes, learn how to enable cluster peering, create peering CRDs, and then manage peering connections in consul-k8s.
---
# Cluster Peering on Kubernetes
To establish a cluster peering connection on Kubernetes, you need to enable several prerequisite values in the Helm chart and create custom resource definitions (CRDs) for each side of the peering.
The following Helm values are mandatory for cluster peering:
- [`global.tls.enabled = true`](/consul/docs/k8s/helm#v-global-tls-enabled)
- [`meshGateway.enabled = true`](/consul/docs/k8s/helm#v-meshgateway-enabled)
The following CRDs are used to create and manage a peering connection:
- `PeeringAcceptor`: Generates a peering token and accepts an incoming peering connection.
- `PeeringDialer`: Uses a peering token to make an outbound peering connection with the cluster that generated the token.
Peering connections, including both data plane and control plane traffic, are routed through mesh gateways.
As of Consul v1.14, you can also [implement service failovers and redirects to control traffic](/consul/docs/connect/l7-traffic) between peers.
> To learn how to peer clusters and connect services across peers in AWS Elastic Kubernetes Service (EKS) environments, complete the [Consul Cluster Peering on Kubernetes tutorial](/consul/tutorials/developer-mesh/cluster-peering-aws?utm_source=docs).
## Prerequisites
You must implement the following requirements to create and use cluster peering connections with Kubernetes:
- Consul v1.14.0 or later
- At least two Kubernetes clusters
- The installation must be running on Consul on Kubernetes version 1.0.0 or later
### Prepare for installation
Complete the following procedure after you have provisioned a Kubernetes cluster and set up your kubeconfig file to manage access to multiple Kubernetes clusters.
1. Use the `kubectl` command to export the Kubernetes context names and then set them to variables for future use. For more information on how to use kubeconfig and contexts, refer to the [Kubernetes docs on configuring access to multiple clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
You can use the following methods to get the context names for your clusters:
- Use the `kubectl config current-context` command to get the context for the cluster you are currently in.
- Use the `kubectl config get-contexts` command to get all configured contexts in your kubeconfig file.
```shell-session
$ export CLUSTER1_CONTEXT=<CONTEXT for first Kubernetes cluster>
$ export CLUSTER2_CONTEXT=<CONTEXT for second Kubernetes cluster>
```
1. To establish cluster peering through Kubernetes, create a `values.yaml` file with the following Helm values. **NOTE:** Mesh gateway replicas default to 1. You can adjust the `meshGateway.replicas` value for higher availability.
<CodeBlockConfig filename="values.yaml">
```yaml
global:
  name: consul
  image: "hashicorp/consul:1.14.1"
  peering:
    enabled: true
  tls:
    enabled: true
meshGateway:
  enabled: true
```
</CodeBlockConfig>
### Install Consul on Kubernetes
Install Consul on Kubernetes by using the CLI to apply `values.yaml` to each cluster.
1. In `cluster-01`, run the following commands:
```shell-session
$ export HELM_RELEASE_NAME=cluster-01
```
```shell-session
$ helm install ${HELM_RELEASE_NAME} hashicorp/consul --create-namespace --namespace consul --version "1.0.1" --values values.yaml --set global.datacenter=dc1 --kube-context $CLUSTER1_CONTEXT
```
1. In `cluster-02`, run the following commands:
```shell-session
$ export HELM_RELEASE_NAME=cluster-02
```
```shell-session
$ helm install ${HELM_RELEASE_NAME} hashicorp/consul --create-namespace --namespace consul --version "1.0.1" --values values.yaml --set global.datacenter=dc2 --kube-context $CLUSTER2_CONTEXT
```
## Create a peering connection for Consul on Kubernetes
To peer Kubernetes clusters running Consul, you need to create a peering token on one cluster (`cluster-01`) and share
it with the other cluster (`cluster-02`). The generated peering token from `cluster-01` will include the addresses of
the servers for that cluster. The servers for `cluster-02` will use that information to dial the servers in
`cluster-01`. Complete the following steps to create the peer connection.
### Using mesh gateways for the peering connection
If the servers in `cluster-01` are not directly routable from the dialing cluster `cluster-02`, then you'll need to set up peering through mesh gateways.
1. In `cluster-01`, apply the `Mesh` custom resource so that the generated token includes the mesh gateway addresses, which are routable from the other cluster.
<CodeBlockConfig filename="mesh.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: Mesh
metadata:
  name: mesh
spec:
  peering:
    peerThroughMeshGateways: true
```
</CodeBlockConfig>
```shell-session
$ kubectl --context $CLUSTER1_CONTEXT apply -f mesh.yaml
```
1. In `cluster-02` apply the `Mesh` custom resource so that the servers for `cluster-02` will use their local mesh gateway to dial the servers for `cluster-01`.
<CodeBlockConfig filename="mesh.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: Mesh
metadata:
  name: mesh
spec:
  peering:
    peerThroughMeshGateways: true
```
</CodeBlockConfig>
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT apply -f mesh.yaml
```
### Create a peering token
Peers identify each other using the `metadata.name` values you establish when creating the `PeeringAcceptor` and `PeeringDialer` CRDs.
1. In `cluster-01`, create the `PeeringAcceptor` custom resource.
<CodeBlockConfig filename="acceptor.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringAcceptor
metadata:
  name: cluster-02 ## The name of the peer you want to connect to
spec:
  peer:
    secret:
      name: "peering-token"
      key: "data"
      backend: "kubernetes"
```
</CodeBlockConfig>
1. Apply the `PeeringAcceptor` resource to the first cluster.
```shell-session
$ kubectl --context $CLUSTER1_CONTEXT apply --filename acceptor.yaml
```
1. Save your peering token so that you can export it to the other cluster.
```shell-session
$ kubectl --context $CLUSTER1_CONTEXT get secret peering-token --output yaml > peering-token.yaml
```
### Establish a peering connection between clusters
1. Apply the peering token to the second cluster.
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT apply --filename peering-token.yaml
```
1. In `cluster-02`, create the `PeeringDialer` custom resource.
<CodeBlockConfig filename="dialer.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringDialer
metadata:
  name: cluster-01 ## The name of the peer you want to connect to
spec:
  peer:
    secret:
      name: "peering-token"
      key: "data"
      backend: "kubernetes"
```
</CodeBlockConfig>
1. Apply the `PeeringDialer` resource to the second cluster.
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT apply --filename dialer.yaml
```
### Configure the mesh gateway mode for traffic between services
Mesh gateways are required for service-to-service traffic between peered clusters. By default, a service dialing another service in a remote peer dials the remote mesh gateway to reach that service. To configure the mesh gateway mode so that this traffic always leaves through the local mesh gateway, use the `ProxyDefaults` CRD.
1. In `cluster-01` apply the following `ProxyDefaults` CRD to configure the mesh gateway mode.
<CodeBlockConfig filename="proxy-defaults.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ProxyDefaults
metadata:
  name: global
spec:
  meshGateway:
    mode: local
```
</CodeBlockConfig>
```shell-session
$ kubectl --context $CLUSTER1_CONTEXT apply -f proxy-defaults.yaml
```
1. In `cluster-02` apply the following `ProxyDefaults` CRD to configure the mesh gateway mode.
<CodeBlockConfig filename="proxy-defaults.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ProxyDefaults
metadata:
  name: global
spec:
  meshGateway:
    mode: local
```
</CodeBlockConfig>
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT apply -f proxy-defaults.yaml
```
### Export services between clusters
The examples described in this section demonstrate how to export a service named `backend`. You should change instances of `backend` in the example code to the name of the service you want to export.
1. For the service in `cluster-02` that you want to export, add the `"consul.hashicorp.com/connect-inject": "true"` annotation to your service's pods prior to deploying. The annotation allows the workload to join the mesh. It is highlighted in the following example:
<CodeBlockConfig filename="backend.yaml" highlight="37">
```yaml
# Service to expose backend
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9090
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend
---
# Deployment for backend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
    spec:
      serviceAccountName: backend
      containers:
        - name: backend
          image: nicholasjackson/fake-service:v0.22.4
          ports:
            - containerPort: 9090
          env:
            - name: "LISTEN_ADDR"
              value: "0.0.0.0:9090"
            - name: "NAME"
              value: "backend"
            - name: "MESSAGE"
              value: "Response from backend"
```
</CodeBlockConfig>
1. Deploy the `backend` service to the second cluster.
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT apply --filename backend.yaml
```
1. In `cluster-02`, create an `ExportedServices` custom resource.
<CodeBlockConfig filename="exported-service.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ExportedServices
metadata:
  name: default ## The name of the partition containing the service
spec:
  services:
    - name: backend ## The name of the service you want to export
      consumers:
        - peer: cluster-01 ## The name of the peer that receives the service
```
</CodeBlockConfig>
1. Apply the `ExportedServices` resource to the second cluster.
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT apply --filename exported-service.yaml
```
### Authorize services for peers
1. Create service intentions for the second cluster.
<CodeBlockConfig filename="intention.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: backend-deny
spec:
  destination:
    name: backend
  sources:
    - name: "*"
      action: deny
    - name: frontend
      action: allow
      peer: cluster-01 ## The peer of the source service
```
</CodeBlockConfig>
1. Apply the intentions to the second cluster.
<CodeBlockConfig>
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT apply --filename intention.yaml
```
</CodeBlockConfig>
1. Add the `"consul.hashicorp.com/connect-inject": "true"` annotation to your service's pods before deploying the workload so that the services in `cluster-01` can dial `backend` in `cluster-02`. To dial the upstream service from an application, configure the application so that requests are sent to the correct DNS name as specified in [Service Virtual IP Lookups](/consul/docs/discovery/dns#service-virtual-ip-lookups). In the following example, the annotation that allows the workload to join the mesh and the configuration that enables the workload to dial the upstream service using the correct DNS name are highlighted. [Service Virtual IP Lookups for Consul Enterprise](/consul/docs/discovery/dns#service-virtual-ip-lookups-for-consul-enterprise) details how to similarly format a DNS name that includes partitions and namespaces.
<CodeBlockConfig filename="frontend.yaml" highlight="36,51">
```yaml
# Service to expose frontend
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
    - name: http
      protocol: TCP
      port: 9090
      targetPort: 9090
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
    spec:
      serviceAccountName: frontend
      containers:
        - name: frontend
          image: nicholasjackson/fake-service:v0.22.4
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]
          ports:
            - containerPort: 9090
          env:
            - name: "LISTEN_ADDR"
              value: "0.0.0.0:9090"
            - name: "UPSTREAM_URIS"
              value: "http://backend.virtual.cluster-02.consul"
            - name: "NAME"
              value: "frontend"
            - name: "MESSAGE"
              value: "Hello World"
            - name: "HTTP_CLIENT_KEEP_ALIVES"
              value: "false"
```
</CodeBlockConfig>
1. Apply the service file to the first cluster.
```shell-session
$ kubectl --context $CLUSTER1_CONTEXT apply --filename frontend.yaml
```
1. Run the following command in `frontend` and then check the output to confirm that you peered your clusters successfully.
<CodeBlockConfig highlight="31">
```shell-session
$ kubectl --context $CLUSTER1_CONTEXT exec -it $(kubectl --context $CLUSTER1_CONTEXT get pod -l app=frontend -o name) -- curl localhost:9090
{
"name": "frontend",
"uri": "/",
"type": "HTTP",
"ip_addresses": [
"10.16.2.11"
],
"start_time": "2022-08-26T23:40:01.167199",
"end_time": "2022-08-26T23:40:01.226951",
"duration": "59.752279ms",
"body": "Hello World",
"upstream_calls": {
"http://backend.virtual.cluster-02.consul": {
"name": "backend",
"uri": "http://backend.virtual.cluster-02.consul",
"type": "HTTP",
"ip_addresses": [
"10.32.2.10"
],
"start_time": "2022-08-26T23:40:01.223503",
"end_time": "2022-08-26T23:40:01.224653",
"duration": "1.149666ms",
"headers": {
"Content-Length": "266",
"Content-Type": "text/plain; charset=utf-8",
"Date": "Fri, 26 Aug 2022 23:40:01 GMT"
},
"body": "Response from backend",
"code": 200
}
},
"code": 200
}
```
</CodeBlockConfig>
## End a peering connection
To end a peering connection, delete both the `PeeringAcceptor` and `PeeringDialer` resources.
1. Delete the `PeeringDialer` resource from the second cluster.
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT delete --filename dialer.yaml
```
1. Delete the `PeeringAcceptor` resource from the first cluster.
```shell-session
$ kubectl --context $CLUSTER1_CONTEXT delete --filename acceptor.yaml
```
1. Confirm that you deleted your peering connection in `cluster-01` by querying the `/health` HTTP endpoint. The peered services should no longer appear.
1. Exec into the server pod for the first cluster.
```shell-session
$ kubectl exec -it consul-server-0 --context $CLUSTER1_CONTEXT -- /bin/sh
```
1. If you've enabled ACLs, export an ACL token to access the `/health` HTTP endpoint for services. The bootstrap token may be used if an ACL token is not already provisioned.
```shell-session
$ export CONSUL_HTTP_TOKEN=<INSERT BOOTSTRAP ACL TOKEN>
```
1. Query the `/health` HTTP endpoint. The peered services should no longer appear.
```shell-session
$ curl "localhost:8500/v1/health/connect/backend?peer=cluster-02"
```
## Recreate or reset a peering connection
To recreate or reset the peering connection, you need to generate a new peering token from the cluster where you created the `PeeringAcceptor`.
1. In the `PeeringAcceptor` CRD, add the annotation `consul.hashicorp.com/peering-version`. If the annotation already exists, update its value to a higher version.
<CodeBlockConfig filename="acceptor.yml" highlight="6" hideClipboard>
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringAcceptor
metadata:
  name: cluster-02
  annotations:
    consul.hashicorp.com/peering-version: "1" ## The peering version you want to set, must be in quotes
spec:
  peer:
    secret:
      name: "peering-token"
      key: "data"
      backend: "kubernetes"
```
</CodeBlockConfig>
1. After updating `PeeringAcceptor`, repeat the following steps to create a peering connection:
1. [Create a peering token](#create-a-peering-token)
1. [Establish a peering connection between clusters](#establish-a-peering-connection-between-clusters)
1. [Export services between clusters](#export-services-between-clusters)
1. [Authorize services for peers](#authorize-services-for-peers)
Your peering connection is re-established with the updated token.
~> **Note:** The only way to create or set a new peering token is to manually adjust the value of the annotation `consul.hashicorp.com/peering-version`. Creating a new token causes the previous token to expire.
## Traffic management between peers
As of Consul v1.14, you can use [dynamic traffic management](/consul/docs/connect/l7-traffic) to configure your service mesh so that services automatically failover and redirect between peers.
To configure automatic service failovers and redirect, edit the `ServiceResolver` CRD so that traffic resolves to a backup service instance on a peer. The following example updates the `ServiceResolver` CRD in `cluster-01` so that Consul redirects traffic intended for the `frontend` service to a backup instance in `cluster-02` when it detects multiple connection failures to the primary instance.
<CodeBlockConfig filename="service-resolver.yaml" hideClipboard>
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: frontend
spec:
  connectTimeout: 15s
  failover:
    '*':
      targets:
        - peer: 'cluster-02'
          service: 'backup'
          namespace: 'default'
```
</CodeBlockConfig>

View File

@ -0,0 +1,69 @@
---
layout: docs
page_title: Cluster Peering Technical Specifications
description: >-
Cluster peering connections in Consul interact with mesh gateways, sidecar proxies, exported services, and ACLs. Learn about the configuration requirements for these components.
---
# Cluster peering technical specifications
This reference topic describes the technical specifications associated with using cluster peering in your deployments. These specifications include required Consul components and their configurations.
## Requirements
To use cluster peering features, make sure your Consul environment meets the following prerequisites:
- Consul v1.14 or higher.
- A local Consul agent is required to manage mesh gateway configuration.
- Use [Envoy proxies](/consul/docs/connect/proxies/envoy). Envoy is the only proxy with mesh gateway capabilities in Consul.
In addition, the following service mesh components are required in order to establish cluster peering connections:
- [Mesh gateways](#mesh-gateway-requirements)
- [Sidecar proxies](#sidecar-proxy-requirements)
- [Exported services](#exported-service-requirements)
- [ACLs](#acl-requirements)
### Mesh gateway requirements
Mesh gateways are required for routing service mesh traffic between partitions with cluster peering connections. Consider the following general requirements for mesh gateways when using cluster peering:
- A cluster requires a registered mesh gateway in order to export services to peers.
- For Enterprise, this mesh gateway must also be registered in the same partition as the exported services and their `exported-services` configuration entry.
- To use the `local` mesh gateway mode, you must register a mesh gateway in the importing cluster.
In addition, you must define the `Proxy.Config` settings using opaque parameters compatible with your proxy. Refer to the [Gateway options](/consul/docs/connect/proxies/envoy#gateway-options) and [Escape-hatch Overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides) documentation for additional Envoy proxy configuration information.
#### Mesh gateway modes
By default, all cluster peering connections use mesh gateways in [remote mode](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters#remote). Be aware of these additional requirements when changing a mesh gateway's mode.
- For mesh gateways that connect peered clusters, you can set the `mode` as either `remote` or `local`.
- The `none` mode is invalid for mesh gateways with cluster peering connections.
Refer to [mesh gateway modes](/consul/docs/connect/gateways/mesh-gateway#modes) for more information.
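For example, the following sketch of a `proxy-defaults` configuration entry switches proxies in the importing cluster to `local` mode. It assumes you apply the entry with `consul config write`.
```hcl
Kind = "proxy-defaults"
Name = "global"

MeshGateway {
  ## Route outgoing traffic through the local mesh gateway
  Mode = "local"
}
```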
## Sidecar proxy requirements
The Envoy proxies that function as sidecars in your service mesh require configuration in order to properly route traffic to peers. Sidecar proxies are defined in the [service definition](/consul/docs/discovery/services).
- Configure the `proxy.upstreams` parameters to route traffic to the correct service, namespace, and peer. Refer to the [`upstreams`](/consul/docs/connect/registration/service-registration#upstream-configuration-reference) documentation for details.
- The `proxy.upstreams.destination_name` parameter is always required.
- The `proxy.upstreams.destination_peer` parameter must be configured to enable cross-cluster traffic.
- The `proxy.upstreams.destination_namespace` configuration is only necessary if the destination service is in a non-default namespace.
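For example, the following service definition sketch registers a `frontend` service whose sidecar reaches an upstream `backend` service imported from the peer `cluster-02`. The service names, ports, and peer name are illustrative.
```hcl
service {
  name = "frontend"
  port = 9090

  connect {
    sidecar_service {
      proxy {
        upstreams = [
          {
            ## Name of the imported service (always required)
            destination_name = "backend"
            ## Peer that exports the service (required for cross-cluster traffic)
            destination_peer = "cluster-02"
            ## Port the local application uses to reach the upstream
            local_bind_port = 8080
          }
        ]
      }
    }
  }
}
```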
## Exported service requirements
The `exported-services` configuration entry is required in order for services to communicate across partitions with cluster peering connections.
Basic guidance on using the `exported-services` configuration entry is included in [Establish cluster peering connections](/consul/docs/connect/cluster-peering/usage/establish-peering).
Refer to the [`exported-services` configuration entry](/consul/docs/connect/config-entries/exported-services) reference for more information.
## ACL requirements
If ACLs are enabled, you must add tokens to grant the following permissions:
- Grant `service:write` permissions to services that define mesh gateways in their service definition.
- Grant `service:read` permissions for all services on the partition.
- Grant `mesh:write` permissions to the mesh gateways that participate in cluster peering connections. This permission allows a leaf certificate to be issued for mesh gateways to terminate TLS sessions for HTTP requests.
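For example, the following is a sketch of ACL rules for a token assigned to a mesh gateway that participates in cluster peering. The gateway's registered service name is illustrative.
```hcl
## Allow the gateway to obtain a leaf certificate and terminate TLS sessions
mesh = "write"

## Write permission for the gateway's own service registration
service "mesh-gateway" {
  policy = "write"
}

## Read access to the other services in the partition
service_prefix "" {
  policy = "read"
}
```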

View File

@ -0,0 +1,271 @@
---
layout: docs
page_title: Establish Cluster Peering Connections
description: >-
Generate a peering token to establish communication, export services, and authorize requests for cluster peering connections. Learn how to establish peering connections with Consul's HTTP API, CLI or UI.
---
# Establish cluster peering connections
This page details the process for establishing a cluster peering connection between services deployed to different datacenters. You can interact with Consul's cluster peering features using the CLI, the HTTP API, or the UI. The overall process for establishing a cluster peering connection consists of the following steps:
1. Create a peering token in one cluster.
1. Use the peering token to establish peering with a second cluster.
1. Export services between clusters.
1. Create intentions to authorize services for peers.
Cluster peering between services cannot be established until all four steps are complete.
For Kubernetes-specific guidance for establishing cluster peering connections, refer to [Establish cluster peering connections on Kubernetes](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering).
## Requirements
You must meet the following requirements to use cluster peering:
- Consul v1.14.1 or higher
- Services hosted in admin partitions on separate datacenters
If you need to make services available to an admin partition in the same datacenter, do not use cluster peering. Instead, use the [`exported-services` configuration entry](/consul/docs/connect/config-entries/exported-services) to make service upstreams available to other admin partitions in a single datacenter.
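For example, the following sketch of an `exported-services` configuration entry exports a service to another admin partition in the same datacenter rather than to a peer. The service and partition names are placeholders.
```hcl
Kind = "exported-services"
Name = "default"
Services = [
  {
    Name = "service-name"
    Consumers = [
      {
        ## Export to a local admin partition instead of a peer
        Partition = "partition-name"
      }
    ]
  }
]
```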
### Mesh gateway requirements
Mesh gateways are required for all cluster peering connections. Consider the following architectural requirements when creating mesh gateways:
- A registered mesh gateway is required in order to export services to peers.
- For Enterprise, the mesh gateway that exports services must be registered in the same partition as the exported services and their `exported-services` configuration entry.
- To use the `local` mesh gateway mode, you must register a mesh gateway in the importing cluster.
## Create a peering token
To begin the cluster peering process, generate a peering token in one of your clusters. The other cluster uses this token to establish the peering connection.
Every time you generate a peering token, a single-use establishment secret is embedded in the token. Because regenerating a peering token invalidates the previously generated secret, you must use the most recently created token to establish peering connections.
<Tabs>
<Tab heading="Consul CLI" group="cli">
1. In `cluster-01`, use the [`consul peering generate-token` command](/consul/commands/peering/generate-token) to issue a request for a peering token.
```shell-session
$ consul peering generate-token -name cluster-02
```
The CLI outputs the peering token, which is a base64-encoded string containing the token details.
1. Save this value to a file or clipboard to use in the next step on `cluster-02`.
</Tab>
<Tab heading="HTTP API" group="api">
1. In `cluster-01`, use the [`/peering/token` endpoint](/consul/api-docs/peering#generate-a-peering-token) to issue a request for a peering token.
```shell-session
$ curl --request POST --data '{"Peer":"cluster-02"}' --url http://localhost:8500/v1/peering/token
```
The call returns the peering token, which is a base64-encoded string containing the token details.
1. Create a JSON file that contains the first cluster's name and the peering token.
<CodeBlockConfig filename="peering_token.json" hideClipboard>
```json
{
"Peer": "cluster-01",
"PeeringToken": "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImF1ZCI6IlNvbHIifQ.5T7L_L1MPfQ_5FjKGa1fTPqrzwK4bNSM812nW6oyjb8"
}
```
</CodeBlockConfig>
</Tab>
<Tab heading="Consul UI" group="ui">
To begin the cluster peering process, generate a peering token in one of your clusters. The other cluster uses this token to establish the peering connection.
Every time you generate a peering token, a single-use establishment secret is embedded in the token. Because regenerating a peering token invalidates the previously generated secret, you must use the most recently created token to establish peering connections.
1. In the Consul UI for the datacenter associated with `cluster-01`, click **Peers**.
1. Click **Add peer connection**.
1. In the **Generate token** tab, enter `cluster-02` in the **Name of peer** field.
1. Click the **Generate token** button.
1. Copy the token before you proceed. You cannot view it again after leaving this screen. If you lose your token, you must generate a new one.
</Tab>
</Tabs>
## Establish a connection between clusters
Next, use the peering token to establish a secure connection between the clusters.
<Tabs>
<Tab heading="Consul CLI" group="cli">
1. In one of the client agents deployed to "cluster-02," issue the [`consul peering establish` command](/consul/commands/peering/establish) and specify the token generated in the previous step.
```shell-session
$ consul peering establish -name cluster-01 -peering-token token-from-generate
"Successfully established peering connection with cluster-01"
```
When you connect server agents through cluster peering, they peer their default partitions. To establish peering connections for other partitions through server agents, you must add the `-partition` flag to the `establish` command and specify the partitions you want to peer. For additional configuration information, refer to [`consul peering establish` command](/consul/commands/peering/establish).
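For example, the following sketch adds the `-partition` flag to the command. The partition name is a placeholder, and `token-from-generate` stands in for the token created in the previous step.
```shell-session
$ consul peering establish -name cluster-01 -peering-token token-from-generate -partition "partition-name"
```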
You can run the `peering establish` command once per peering token. Peering tokens cannot be reused after being used to establish a connection. If you need to re-establish a connection, you must generate a new peering token.
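For example, the following sketch peers a hypothetical `finance` partition instead of the default partition:
```shell-session
# "finance" is a hypothetical partition name
$ consul peering establish -name cluster-01 -partition "finance" -peering-token token-from-generate
```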
</Tab>
<Tab heading="HTTP API" group="api">
1. In one of the client agents in "cluster-02," use `peering_token.json` and the [`/peering/establish` endpoint](/consul/api-docs/peering#establish-a-peering-connection) to establish the peering connection. This endpoint does not generate an output unless there is an error.
```shell-session
$ curl --request POST --data @peering_token.json http://127.0.0.1:8500/v1/peering/establish
```
When you connect server agents through cluster peering, their default behavior is to peer to the `default` partition. To establish peering connections for other partitions through server agents, you must add the `Partition` field to `peering_token.json` and specify the partitions you want to peer. For additional configuration information, refer to [Cluster Peering - HTTP API](/consul/api-docs/peering).
You can dial the `peering/establish` endpoint once per peering token. Peering tokens cannot be reused after being used to establish a connection. If you need to re-establish a connection, you must generate a new peering token.
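For example, the following sketch adds a `Partition` field to the payload. The partition name is hypothetical, and `<token>` stands in for the generated peering token:
```shell-session
# "finance" is a hypothetical partition name; replace <token> with the generated peering token
$ curl --request POST --data '{"Peer":"cluster-01","PeeringToken":"<token>","Partition":"finance"}' http://127.0.0.1:8500/v1/peering/establish
```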
</Tab>
<Tab heading="Consul UI" group="ui">
1. In the Consul UI for the datacenter associated with `cluster-02`, click **Peers** and then **Add peer connection**.
1. Click **Establish peering**.
1. In the **Name of peer** field, enter `cluster-01`. Then paste the peering token in the **Token** field.
1. Click **Add peer**.
</Tab>
</Tabs>
## Export services between clusters
After you establish a connection between the clusters, you need to create an `exported-services` configuration entry that defines the services that are available for other clusters. Consul uses this configuration entry to advertise service information and support service mesh connections across clusters.
An `exported-services` configuration entry makes services available to another admin partition. While the configuration entry can target admin partitions either locally or remotely, cluster peers always export services to remote partitions. Refer to [exported service consumers](/consul/docs/connect/config-entries/exported-services#consumers-1) for more information.
You must use the Consul CLI to complete this step. The HTTP API and the Consul UI do not support `exported-services` configuration entries.
<Tabs>
<Tab heading="Consul CLI" group="cli">
1. Create a configuration entry and specify the `Kind` as `"exported-services"`.
<CodeBlockConfig filename="peering-config.hcl" hideClipboard>
```hcl
Kind = "exported-services"
Name = "default"
Services = [
{
## The name and namespace of the service to export.
Name = "service-name"
Namespace = "default"
## The list of peer clusters to export the service to.
Consumers = [
{
## The peer name to reference in config is the one set
## during the peering process.
Peer = "cluster-02"
}
]
}
]
```
</CodeBlockConfig>
1. Add the configuration entry to your cluster.
```shell-session
$ consul config write peering-config.hcl
```
Before you proceed, wait for the clusters to sync and make services available to their peers. To check the peered cluster status, [read the cluster peering connection](/consul/docs/connect/cluster-peering/usage/manage-connections#read-a-peering-connection).
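For example, you can confirm the export with the CLI. The `Exported Services` count for the peer should reflect the services you exported. This sketch assumes the peer is named `cluster-02`:
```shell-session
$ consul peering read -name cluster-02
```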
</Tab>
</Tabs>
## Authorize services for peers
Before you can call services from peered clusters, you must set service intentions that authorize those clusters to use specific services. Consul prevents services from being exported to unauthorized clusters.
You must use the HTTP API or the Consul CLI to complete this step. The Consul UI supports intentions for local clusters only.
<Tabs>
<Tab heading="Consul CLI" group="cli">
1. Create a configuration entry and specify the `Kind` as `"service-intentions"`. Declare the service on "cluster-02" that can access the service in "cluster-01." In the following example, the service intentions configuration entry authorizes the `backend-service` to communicate with the `frontend-service` that is hosted on remote peer `cluster-02`:
<CodeBlockConfig filename="peering-intentions.hcl" hideClipboard>
```hcl
Kind = "service-intentions"
Name = "backend-service"
Sources = [
{
Name = "frontend-service"
Peer = "cluster-02"
Action = "allow"
}
]
```
</CodeBlockConfig>
If the peer's name is not specified in `Peer`, then Consul assumes that the service is in the local cluster.
1. Add the configuration entry to your cluster.
```shell-session
$ consul config write peering-intentions.hcl
```
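To confirm that the intention was stored, you can read the configuration entry back. A minimal check:
```shell-session
$ consul config read -kind service-intentions -name backend-service
```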
</Tab>
<Tab heading="HTTP API" group="api">
1. Create a configuration entry and specify the `Kind` as `"service-intentions"`. Because the HTTP API expects a JSON payload, format the configuration entry as JSON. Declare the service on "cluster-02" that can access the service in "cluster-01." In the following example, the service intentions configuration entry authorizes the `backend-service` to communicate with the `frontend-service` that is hosted on remote peer `cluster-02`:
<CodeBlockConfig filename="peering-intentions.json" hideClipboard>
```json
{
  "Kind": "service-intentions",
  "Name": "backend-service",
  "Sources": [
    {
      "Name": "frontend-service",
      "Peer": "cluster-02",
      "Action": "allow"
    }
  ]
}
```
</CodeBlockConfig>
If the peer's name is not specified in `Peer`, then Consul assumes that the service is in the local cluster.
1. Add the configuration entry to your cluster.
```shell-session
$ curl --request PUT --data @peering-intentions.json http://127.0.0.1:8500/v1/config
```
</Tab>
</Tabs>
### Authorize service reads with ACLs
If ACLs are enabled on a Consul cluster, sidecar proxies that access exported services as an upstream must have an ACL token that grants read access.
Read access to all imported services is granted using either of the following rules associated with an ACL token:
- `service:write` permissions for any service in the sidecar's partition.
- `service:read` and `node:read` for all services and nodes, respectively, in the sidecar's namespace and partition.
For Consul Enterprise, the permissions apply to all imported services in the service's partition. These permissions are satisfied when using a [service identity](/consul/docs/security/acl/acl-roles#service-identities).
Refer to [Reading services](/consul/docs/connect/config-entries/exported-services#reading-services) in the `exported-services` configuration entry documentation for example rules.
For additional information about how to configure and use ACLs, refer to [ACLs system overview](/consul/docs/security/acl).
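For example, the following sketch issues a token with a service identity for a hypothetical `backend-service` sidecar, which satisfies the read requirements described above:
```shell-session
# "backend-service" is a placeholder for the sidecar's service name
$ consul acl token create -description "backend-service sidecar token" -service-identity "backend-service"
```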
@ -0,0 +1,137 @@
---
layout: docs
page_title: Manage Cluster Peering Connections
description: >-
Learn how to list, read, and delete cluster peering connections using Consul. You can use the HTTP API, the CLI, or the Consul UI to manage cluster peering connections.
---
# Manage cluster peering connections
This usage topic describes how to manage cluster peering connections using the CLI, the HTTP API, and the UI.
After you establish a cluster peering connection, you can get a list of all active peering connections, read a specific peering connection's information, and delete peering connections.
For Kubernetes-specific guidance for managing cluster peering connections, refer to [Manage cluster peering connections on Kubernetes](/consul/docs/k8s/connect/cluster-peering/usage/manage-peering).
## List all peering connections
You can list all active peering connections in a cluster.
<Tabs>
<Tab heading="Consul CLI" group="cli">
```shell-session
$ consul peering list
Name State Imported Svcs Exported Svcs Meta
cluster-02 ACTIVE 0 2 env=production
cluster-03 PENDING 0 0
```
For more information, including optional flags and parameters, refer to the [`consul peering list` CLI command reference](/consul/commands/peering/list).
</Tab>
<Tab heading="HTTP API" group="api">
The following example shows how to format an API request to list peering connections:
```shell-session
$ curl --header "X-Consul-Token: 0137db51-5895-4c25-b6cd-d9ed992f4a52" http://127.0.0.1:8500/v1/peerings
```
For more information, including optional parameters and sample responses, refer to the [`/peering` endpoint reference](/consul/api-docs/peering#list-all-peerings).
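If you only need the peer names, you can filter the response. The following sketch assumes `jq` is installed; replace `<token>` with an ACL token that has the required permissions:
```shell-session
# Assumes jq is installed; replace <token> with an ACL token that has the required permissions
$ curl --header "X-Consul-Token: <token>" http://127.0.0.1:8500/v1/peerings | jq -r '.[].Name'
```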
</Tab>
<Tab heading="Consul UI" group="ui">
In the Consul UI, click **Peers**.
The UI lists peering connections you created for clusters in a datacenter. The name that appears in the list is the name of the cluster in a different datacenter with an established peering connection.
</Tab>
</Tabs>
## Read a peering connection
You can get information about individual peering connections between clusters.
<Tabs>
<Tab heading="Consul CLI" group="cli">
The following example outputs information about a peering connection locally referred to as "cluster-02":
```shell-session
$ consul peering read -name cluster-02
Name: cluster-02
ID: 3b001063-8079-b1a6-764c-738af5a39a97
State: ACTIVE
Meta:
env=production
Peer ID: e83a315c-027e-bcb1-7c0c-a46650904a05
Peer Server Name: server.dc1.consul
Peer CA Pems: 0
Peer Server Addresses:
10.0.0.1:8300
Imported Services: 0
Exported Services: 2
Create Index: 89
Modify Index: 89
```
For more information, including optional flags and parameters, refer to the [`consul peering read` CLI command reference](/consul/commands/peering/read).
</Tab>
<Tab heading="HTTP API" group="api">
```shell-session
$ curl --header "X-Consul-Token: b23b3cad-5ea1-4413-919e-c76884b9ad60" http://127.0.0.1:8500/v1/peering/cluster-02
```
For more information, including optional parameters and sample responses, refer to the [`/peering` endpoint reference](/consul/api-docs/peering#read-a-peering-connection).
</Tab>
<Tab heading="Consul UI" group="ui">
1. In the Consul UI, click **Peers**.
1. Click the name of a peered cluster to view additional details about the peering connection.
</Tab>
</Tabs>
## Delete peering connections
You can disconnect the peered clusters by deleting their connection. Deleting a peering connection stops data replication to the peer and deletes imported data, including services and CA certificates.
<Tabs>
<Tab heading="Consul CLI" group="cli">
The following example deletes a peering connection to a cluster locally referred to as "cluster-02":
```shell-session
$ consul peering delete -name cluster-02
Successfully submitted peering connection, cluster-02, for deletion
```
For more information, including optional flags and parameters, refer to the [`consul peering delete` CLI command reference](/consul/commands/peering/delete).
</Tab>
<Tab heading="HTTP API" group="api">
```shell-session
$ curl --request DELETE --header "X-Consul-Token: b23b3cad-5ea1-4413-919e-c76884b9ad60" http://127.0.0.1:8500/v1/peering/cluster-02
```
This endpoint does not return a response. For more information, including optional parameters, refer to the [`/peering` endpoint reference](/consul/api-docs/peering#delete-a-peering-connection).
</Tab>
<Tab heading="Consul UI" group="ui">
1. In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in that datacenter.
1. Next to the name of the peer, click **More** (three horizontal dots) and then **Delete**.
1. Click **Delete** to confirm and remove the peering connection.
</Tab>
</Tabs>
@ -0,0 +1,168 @@
---
layout: docs
page_title: Cluster Peering L7 Traffic Management
description: >-
Combine service resolver configurations with splitter and router configurations to manage L7 traffic in Consul deployments with cluster peering connections. Learn how to define dynamic traffic rules to target peers for redirects and failover.
---
# Manage L7 traffic with cluster peering
This usage topic describes how to configure and apply the [`service-resolver` configuration entry](/consul/docs/connect/config-entries/service-resolver) to set up redirects and failovers between services that have an existing cluster peering connection.
For Kubernetes-specific guidance for managing L7 traffic with cluster peering, refer to [Manage L7 traffic with cluster peering on Kubernetes](/consul/docs/k8s/connect/cluster-peering/usage/l7-traffic).
## Service resolvers for redirects and failover
When you use cluster peering to connect datacenters through their admin partitions, you can use [dynamic traffic management](/consul/docs/connect/l7-traffic) to configure your service mesh so that services automatically forward traffic to services hosted on peer clusters.
However, the `service-splitter` and `service-router` configuration entry kinds do not natively support directly targeting a service instance hosted on a peer. Before you can split or route traffic to a service on a peer, you must define the service hosted on the peer as an upstream service by configuring a failover in the `service-resolver` configuration entry. Then, you can set up a redirect in a second service resolver to interact with the peer service by name.
For more information about formatting, updating, and managing configuration entries in Consul, refer to [How to use configuration entries](/consul/docs/agent/config-entries).
## Configure dynamic traffic between peers
To configure L7 traffic management behavior in deployments with cluster peering connections, complete the following steps in order:
1. Define the peer cluster as a failover target in the service resolver configuration.
The following examples update the [`service-resolver` configuration entry](/consul/docs/connect/config-entries/service-resolver) in `cluster-01` so that Consul redirects traffic intended for the `frontend` service to a backup instance in peer `cluster-02` when it detects multiple connection failures.
<CodeTabs tabs={[ "HCL", "JSON", "YAML" ]}>
```hcl
Kind = "service-resolver"
Name = "frontend"
ConnectTimeout = "15s"
Failover = {
"*" = {
Targets = [
{Peer = "cluster-02"}
]
}
}
```
```json
{
"ConnectTimeout": "15s",
"Kind": "service-resolver",
"Name": "frontend",
"Failover": {
"*": {
"Targets": [
{
"Peer": "cluster-02"
}
]
}
},
"CreateIndex": 250,
"ModifyIndex": 250
}
```
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
name: frontend
spec:
connectTimeout: 15s
failover:
'*':
targets:
- peer: 'cluster-02'
service: 'frontend'
namespace: 'default'
```
</CodeTabs>
1. Define the desired behavior in `service-splitter` or `service-router` configuration entries.
The following example splits traffic evenly between `frontend` services hosted on peers by defining the desired behavior locally:
<CodeTabs tabs={[ "HCL", "JSON", "YAML" ]}>
```hcl
Kind = "service-splitter"
Name = "frontend"
Splits = [
{
Weight = 50
## defaults to service with same name as configuration entry ("frontend")
},
{
Weight = 50
Service = "frontend-peer"
},
]
```
```json
{
"Kind": "service-splitter",
"Name": "frontend",
"Splits": [
{
"Weight": 50
},
{
"Weight": 50,
"Service": "frontend-peer"
}
]
}
```
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
name: frontend
spec:
splits:
- weight: 50
## defaults to service with same name as configuration entry ("frontend")
- weight: 50
service: frontend-peer
```
</CodeTabs>
1. Create a local `service-resolver` configuration entry named `frontend-peer` and define a redirect targeting the peer and its service:
<CodeTabs tabs={[ "HCL", "JSON", "YAML" ]}>
```hcl
Kind = "service-resolver"
Name = "frontend-peer"
Redirect {
Service = "frontend"
Peer = "cluster-02"
}
```
```json
{
"Kind": "service-resolver",
"Name": "frontend-peer",
"Redirect": {
"Service": "frontend",
"Peer": "cluster-02"
}
}
```
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
name: frontend-peer
spec:
redirect:
peer: 'cluster-02'
service: 'frontend'
```
</CodeTabs>
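After you define all three configuration entries, write them to the cluster where the traffic originates. The following sketch uses the Consul CLI with hypothetical filenames:
```shell-session
# Filenames are hypothetical
$ consul config write frontend-resolver.hcl
$ consul config write frontend-splitter.hcl
$ consul config write frontend-peer-resolver.hcl
```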
@ -335,6 +335,57 @@ spec:
```
</CodeTabs>
### Cluster peering
When using cluster peering connections, intentions secure your deployments with authorized service-to-service communication between remote datacenters. In the following example, the service intentions configuration entry authorizes the `backend-service` to communicate with the `frontend-service` that is hosted on remote peer `cluster-02`:
<CodeTabs tabs={[ "HCL", "Kubernetes YAML", "JSON" ]}>
```hcl
Kind = "service-intentions"
Name = "backend-service"
Sources = [
{
Name = "frontend-service"
Peer = "cluster-02"
Action = "allow"
}
]
```
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: backend-service
spec:
  destination:
    name: backend-service
  sources:
    - name: frontend-service
      action: allow
      peer: cluster-02 ## The peer of the source service
```
```json
{
"Kind": "service-intentions",
"Name": "backend-service",
"Sources": [
{
"Name": "frontend-service",
"Peer": "cluster-02",
"Action": "allow"
}
]
}
```
</CodeTabs>
## Available Fields
<ConfigEntryReference
@ -1,60 +0,0 @@
---
layout: docs
page_title: Enabling Service-to-service Traffic Across Peered Clusters
description: >-
Mesh gateways are specialized proxies that route data between services that cannot communicate directly. Learn how to enable service-to-service traffic across clusters in different datacenters or admin partitions that have an established peering connection.
---
# Enabling Service-to-service Traffic Across Peered Clusters
Mesh gateways are required for you to route service mesh traffic between peered Consul clusters. Clusters can reside in different clouds or runtime environments where general interconnectivity between all services in all clusters is not feasible.
At a minimum, a peered cluster exporting a service must have a mesh gateway registered.
For Enterprise, this mesh gateway must also be registered in the same partition as the exported service(s).
To use the `local` mesh gateway mode, there must also be a mesh gateway registered in the importing cluster.
Unlike mesh gateways for WAN-federated datacenters and partitions, mesh gateways between peers terminate mTLS sessions to decrypt data to HTTP services and then re-encrypt traffic to send to services. Data must be decrypted in order to evaluate and apply dynamic routing rules at the destination cluster, which reduces coupling between peers.
## Prerequisites
To configure mesh gateways for cluster peering, make sure your Consul environment meets the following requirements:
- Consul version 1.14.0 or newer.
- A local Consul agent is required to manage mesh gateway configuration.
- Use [Envoy proxies](/consul/docs/connect/proxies/envoy). Envoy is the only proxy with mesh gateway capabilities in Consul.
## Configuration
Configure the following settings to register and use the mesh gateway as a service in Consul.
### Gateway registration
- Specify `mesh-gateway` in the `kind` field to register the gateway with Consul.
- Define the `Proxy.Config` settings using opaque parameters compatible with your proxy. For Envoy, refer to the [Gateway Options](/consul/docs/connect/proxies/envoy#gateway-options) and [Escape-hatch Overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides) documentation for additional configuration information.
Alternatively, you can also use the CLI to spin up and register a gateway in Consul. For additional information, refer to the [`consul connect envoy` command](/consul/commands/connect/envoy#mesh-gateways).
### Sidecar registration
- Configure the `proxy.upstreams` parameters to route traffic to the correct service, namespace, and peer. Refer to the [`upstreams` documentation](/consul/docs/connect/registration/service-registration#upstream-configuration-reference) for details.
- The service `proxy.upstreams.destination_name` is always required.
- The `proxy.upstreams.destination_peer` must be configured to enable cross-cluster traffic.
- The `proxy.upstreams.destination_namespace` configuration is only necessary if the destination service is in a non-default namespace.
### Service exports
- Include the `exported-services` configuration entry to enable Consul to export services contained in a cluster to one or more additional clusters. For additional information, refer to the [Exported Services documentation](/consul/docs/connect/config-entries/exported-services).
### ACL configuration
If ACLs are enabled, you must add a token granting `service:write` for the gateway's service name and `service:read` for all services in the Enterprise admin partition or OSS datacenter to the gateway's service definition.
These permissions authorize the token to route communications for other Consul service mesh services.
You must also grant `mesh:write` to mesh gateways routing peering traffic in the data plane.
This permission allows a leaf certificate to be issued for mesh gateways to terminate TLS sessions for HTTP requests.
### Modes
Modes are configurable as either `remote` or `local` for mesh gateways that connect peered clusters.
The `none` setting is invalid for mesh gateways in peered clusters and will be ignored by the gateway.
By default, all proxies connecting to peered clusters use mesh gateways in [remote mode](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters#remote).
@ -0,0 +1,180 @@
---
layout: docs
page_title: Cluster Peering on Kubernetes Technical Specifications
description: >-
In Kubernetes deployments, cluster peering connections interact with mesh gateways, sidecar proxies, exported services, and ACLs. Learn about requirements specific to k8s, including required Helm values and CRDs.
---
# Cluster peering on Kubernetes technical specifications
This reference topic describes the technical specifications associated with using cluster peering in your Kubernetes deployments. These specifications include [required Helm values](#helm-requirements) and [required custom resource definitions (CRDs)](#crd-requirements), as well as required Consul components and their configurations.
For cluster peering requirements in non-Kubernetes deployments, refer to [cluster peering technical specifications](/consul/docs/connect/cluster-peering/tech-specs).
## General requirements
To use cluster peering features, make sure your Consul environment meets the following prerequisites:
- Consul v1.14 or higher
- Consul on Kubernetes v1.0.0 or higher
- At least two Kubernetes clusters
In addition, the following service mesh components are required in order to establish cluster peering connections:
- [Helm](#helm-requirements)
- [Custom resource definitions (CRD)](#crd-requirements)
- [Mesh gateways](#mesh-gateway-requirements)
- [Exported services](#exported-service-requirements)
- [ACLs](#acl-requirements)
## Helm requirements
Mesh gateways are required when establishing cluster peering connections. The following values must be set in the Helm chart to enable mesh gateways:
- [`global.tls.enabled = true`](/consul/docs/k8s/helm#v-global-tls-enabled)
- [`meshGateway.enabled = true`](/consul/docs/k8s/helm#v-meshgateway-enabled)
Refer to the following example Helm configuration:
<CodeBlockConfig filename="values.yaml">
```yaml
global:
name: consul
image: "hashicorp/consul:1.14.1"
peering:
enabled: true
tls:
enabled: true
meshGateway:
enabled: true
```
</CodeBlockConfig>
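To apply these values, run a Helm install or upgrade. The following sketch assumes a release named `consul` in the `consul` namespace:
```shell-session
# The release name and namespace are assumptions
$ helm install consul hashicorp/consul --create-namespace --namespace consul --values values.yaml
```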
After mesh gateways are enabled in the Helm chart, you can separately [configure Mesh CRDs](#mesh-gateway-configuration-for-kubernetes).
## CRD requirements
You must create the following CRDs in order to establish a peering connection:
- `PeeringAcceptor`: Generates a peering token and accepts an incoming peering connection.
- `PeeringDialer`: Uses a peering token to make an outbound peering connection with the cluster that generated the token.
Refer to the following example CRDs:
<CodeTabs tabs={[ "PeeringAcceptor", "PeeringDialer" ]}>
<CodeBlockConfig filename="acceptor.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringAcceptor
metadata:
name: cluster-02 ## The name of the peer you want to connect to
spec:
peer:
secret:
name: "peering-token"
key: "data"
backend: "kubernetes"
```
</CodeBlockConfig>
<CodeBlockConfig filename="dialer.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringDialer
metadata:
name: cluster-01 ## The name of the peer you want to connect to
spec:
peer:
secret:
name: "peering-token"
key: "data"
backend: "kubernetes"
```
</CodeBlockConfig>
</CodeTabs>
## Mesh gateway requirements
Mesh gateways are required for routing service mesh traffic between partitions with cluster peering connections. Consider the following general requirements for mesh gateways when using cluster peering:
- A cluster requires a registered mesh gateway in order to export services to peers.
- For Enterprise, this mesh gateway must also be registered in the same partition as the exported services and their `exported-services` configuration entry.
- To use the `local` mesh gateway mode, you must register a mesh gateway in the importing cluster.
In addition, you must define the `Proxy.Config` settings using opaque parameters compatible with your proxy. Refer to the [Gateway options](/consul/docs/connect/proxies/envoy#gateway-options) and [Escape-hatch Overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides) documentation for additional Envoy proxy configuration information.
### Mesh gateway modes
By default, all cluster peering connections use mesh gateways in [remote mode](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters#remote). Be aware of these additional requirements when changing a mesh gateway's mode.
- For mesh gateways that connect peered clusters, you can set the `mode` as either `remote` or `local`.
- The `none` mode is invalid for mesh gateways with cluster peering connections.
Refer to [mesh gateway modes](/consul/docs/connect/gateways/mesh-gateway#modes) for more information.
### Mesh gateway configuration for Kubernetes
Mesh gateways are required for cluster peering connections. Complete the following steps to add mesh gateways to your deployment so that you can establish cluster peering connections:
1. In `cluster-01`, create the `Mesh` custom resource with `peerThroughMeshGateways` set to `true`.
<CodeBlockConfig filename="mesh.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: Mesh
metadata:
name: mesh
spec:
peering:
peerThroughMeshGateways: true
```
</CodeBlockConfig>
1. Apply the mesh gateway to `cluster-01`. Replace `CLUSTER1_CONTEXT` to target the first Consul cluster.
```shell-session
$ kubectl --context $CLUSTER1_CONTEXT apply -f mesh.yaml
```
1. Repeat the process to create and apply a mesh gateway with cluster peering enabled to `cluster-02`. Replace `CLUSTER2_CONTEXT` to target the second Consul cluster.
<CodeBlockConfig filename="mesh.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: Mesh
metadata:
name: mesh
spec:
peering:
peerThroughMeshGateways: true
```
</CodeBlockConfig>
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT apply -f mesh.yaml
```
## Exported service requirements
The `exported-services` CRD is required in order for services to communicate across partitions with cluster peering connections.
Refer to the [`exported-services` configuration entry](/consul/docs/connect/config-entries/exported-services) reference for more information.
## ACL requirements
If ACLs are enabled, you must add tokens to grant the following permissions:
- Grant `service:write` permissions to services that define mesh gateways in their server definition.
- Grant `service:read` permissions for all services on the partition.
- Grant `mesh:write` permissions to the mesh gateways that participate in cluster peering connections. This permission allows a leaf certificate to be issued for mesh gateways to terminate TLS sessions for HTTP requests.
@ -0,0 +1,453 @@
---
layout: docs
page_title: Establish Cluster Peering Connections on Kubernetes
description: >-
To establish a cluster peering connection on Kubernetes, generate a peering token to establish communication. Then export services and authorize requests with service intentions.
---
# Establish cluster peering connections on Kubernetes
This page details the process for establishing a cluster peering connection between services in a Consul on Kubernetes deployment.
The overall process for establishing a cluster peering connection consists of the following steps:
1. Create a peering token in one cluster.
1. Use the peering token to establish peering with a second cluster.
1. Export services between clusters.
1. Create intentions to authorize services for peers.
Cluster peering between services cannot be established until all four steps are complete.
For general guidance for establishing cluster peering connections, refer to [Establish cluster peering connections](/consul/docs/connect/cluster-peering/usage/establish-peering).
## Prerequisites
You must meet the following requirements to use Consul's cluster peering features with Kubernetes:
- Consul v1.14.1 or higher
- Consul on Kubernetes v1.0.0 or higher
- At least two Kubernetes clusters
In Consul on Kubernetes, peers identify each other using the `metadata.name` values you establish when creating the `PeeringAcceptor` and `PeeringDialer` CRDs. For additional information about requirements for cluster peering on Kubernetes deployments, refer to [Cluster peering on Kubernetes technical specifications](/consul/docs/k8s/connect/cluster-peering/tech-specs).
### Assign cluster IDs to environment variables
After you provision a Kubernetes cluster and set up your kubeconfig file to manage access to multiple Kubernetes clusters, you can assign your clusters to environment variables for future use.
1. Get the context names for your Kubernetes clusters using one of these methods:
- Run the `kubectl config current-context` command to get the context for the cluster you are currently in.
- Run the `kubectl config get-contexts` command to get all configured contexts in your kubeconfig file.
1. Use the `kubectl` command to export the Kubernetes context names and then set them to variables. For more information on how to use kubeconfig and contexts, refer to the [Kubernetes docs on configuring access to multiple clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
```shell-session
$ export CLUSTER1_CONTEXT=<CONTEXT for first Kubernetes cluster>
$ export CLUSTER2_CONTEXT=<CONTEXT for second Kubernetes cluster>
```
### Update the Helm chart
To use cluster peering with Consul on Kubernetes deployments, update the Helm chart with [the required values](/consul/docs/k8s/connect/cluster-peering/tech-specs#helm-requirements). After updating the Helm chart, you can use the `consul-k8s` CLI to apply `values.yaml` to each cluster.
1. In `cluster-01`, run the following commands:
```shell-session
$ export HELM_RELEASE_NAME1=cluster-01
```
```shell-session
$ helm install ${HELM_RELEASE_NAME1} hashicorp/consul --create-namespace --namespace consul --version "1.0.1" --values values.yaml --set global.datacenter=dc1 --kube-context $CLUSTER1_CONTEXT
```
1. In `cluster-02`, run the following commands:
```shell-session
$ export HELM_RELEASE_NAME2=cluster-02
```
```shell-session
$ helm install ${HELM_RELEASE_NAME2} hashicorp/consul --create-namespace --namespace consul --version "1.0.1" --values values.yaml --set global.datacenter=dc2 --kube-context $CLUSTER2_CONTEXT
```
### Configure the mesh gateway mode for traffic between services
In Kubernetes deployments, you can configure mesh gateways to use `local` mode so that a service dialing a service in a remote peer dials the remote mesh gateway instead of the local mesh gateway. To configure the mesh gateway mode so that this traffic always leaves through the local mesh gateway, you can use the `ProxyDefaults` CRD.
1. In `cluster-01` apply the following `ProxyDefaults` CRD to configure the mesh gateway mode.
<CodeBlockConfig filename="proxy-defaults.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ProxyDefaults
metadata:
name: global
spec:
meshGateway:
mode: local
```
</CodeBlockConfig>
```shell-session
$ kubectl --context $CLUSTER1_CONTEXT apply -f proxy-defaults.yaml
```
1. In `cluster-02` apply the following `ProxyDefaults` CRD to configure the mesh gateway mode.
<CodeBlockConfig filename="proxy-defaults.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ProxyDefaults
metadata:
name: global
spec:
meshGateway:
mode: local
```
</CodeBlockConfig>
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT apply -f proxy-defaults.yaml
```
## Create a peering token
To begin the cluster peering process, generate a peering token in one of your clusters. The other cluster uses this token to establish the peering connection.
Every time you generate a peering token, a single-use secret for establishing the connection is embedded in the token. Because regenerating a peering token invalidates the previously generated secret, you must use the most recently created token to establish peering connections.
1. In `cluster-01`, create the `PeeringAcceptor` custom resource. To ensure cluster peering connections are secure, the `metadata.name` field cannot be duplicated. Refer to the peer by a specific name.
<CodeBlockConfig filename="acceptor.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringAcceptor
metadata:
name: cluster-02 ## The name of the peer you want to connect to
spec:
peer:
secret:
name: "peering-token"
key: "data"
backend: "kubernetes"
```
</CodeBlockConfig>
1. Apply the `PeeringAcceptor` resource to the first cluster.
```shell-session
$ kubectl --context $CLUSTER1_CONTEXT apply --filename acceptor.yaml
```
1. Save your peering token so that you can export it to the other cluster.
```shell-session
$ kubectl --context $CLUSTER1_CONTEXT get secret peering-token --output yaml > peering-token.yaml
```
## Establish a connection between clusters
Next, use the peering token to establish a secure connection between the clusters.
1. Apply the peering token to the second cluster.
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT apply --filename peering-token.yaml
```
1. In `cluster-02`, create the `PeeringDialer` custom resource. To ensure cluster peering connections are secure, the `metadata.name` field cannot be duplicated. Refer to the peer by a specific name.
<CodeBlockConfig filename="dialer.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringDialer
metadata:
name: cluster-01 ## The name of the peer you want to connect to
spec:
peer:
secret:
name: "peering-token"
key: "data"
backend: "kubernetes"
```
</CodeBlockConfig>
1. Apply the `PeeringDialer` resource to the second cluster.
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT apply --filename dialer.yaml
```
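Before you continue, you can verify that the token was applied by confirming the `peering-token` secret exists in the second cluster:
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT get secret peering-token
```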
## Export services between clusters
After you establish a connection between the clusters, you need to create an `exported-services` CRD that defines the services that are available to another admin partition.
While the CRD can target admin partitions either locally or remotely, cluster peering always exports services to remote admin partitions. Refer to [exported service consumers](/consul/docs/connect/config-entries/exported-services#consumers-1) for more information.
1. For the service in `cluster-02` that you want to export, add the `"consul.hashicorp.com/connect-inject": "true"` annotation to your service's pods prior to deploying. The annotation allows the workload to join the mesh. It is highlighted in the following example:
<CodeBlockConfig filename="backend.yaml" highlight="37">
```yaml
# Service to expose backend
apiVersion: v1
kind: Service
metadata:
name: backend
spec:
selector:
app: backend
ports:
- name: http
protocol: TCP
port: 80
targetPort: 9090
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: backend
---
# Deployment for backend
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
labels:
app: backend
spec:
replicas: 1
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
annotations:
"consul.hashicorp.com/connect-inject": "true"
spec:
serviceAccountName: backend
containers:
- name: backend
image: nicholasjackson/fake-service:v0.22.4
ports:
- containerPort: 9090
env:
- name: "LISTEN_ADDR"
value: "0.0.0.0:9090"
- name: "NAME"
value: "backend"
- name: "MESSAGE"
value: "Response from backend"
```
</CodeBlockConfig>
1. Deploy the `backend` service to the second cluster.
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT apply --filename backend.yaml
```
1. In `cluster-02`, create an `ExportedServices` custom resource. The name of the peer that consumes the service should be identical to the name set in the `PeeringDialer` CRD.
<CodeBlockConfig filename="exported-service.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ExportedServices
metadata:
name: default ## The name of the partition containing the service
spec:
services:
- name: backend ## The name of the service you want to export
consumers:
- peer: cluster-01 ## The name of the peer that receives the service
```
</CodeBlockConfig>
1. Apply the `ExportedServices` resource to the second cluster.
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT apply --filename exported-service.yaml
```
## Authorize services for peers
Before you can call services from peered clusters, you must set service intentions that authorize those clusters to use specific services. Consul prevents services from being exported to unauthorized clusters.
1. Create service intentions for the second cluster. The name of the peer should match the name set in the `PeeringDialer` CRD.
<CodeBlockConfig filename="intention.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
name: backend-deny
spec:
destination:
name: backend
sources:
- name: "*"
action: deny
- name: frontend
action: allow
peer: cluster-01 ## The peer of the source service
```
</CodeBlockConfig>
1. Apply the intentions to the second cluster.
<CodeBlockConfig>
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT apply --filename intention.yaml
```
</CodeBlockConfig>
1. Add the `"consul.hashicorp.com/connect-inject": "true"` annotation to your service's pods before deploying the workload so that the services in `cluster-01` can dial `backend` in `cluster-02`. To dial the upstream service from an application, configure the application so that that requests are sent to the correct DNS name as specified in [Service Virtual IP Lookups](/consul/docs/discovery/dns#service-virtual-ip-lookups). In the following example, the annotation that allows the workload to join the mesh and the configuration provided to the workload that enables the workload to dial the upstream service using the correct DNS name is highlighted. [Service Virtual IP Lookups for Consul Enterprise](/consul/docs/discovery/dns#service-virtual-ip-lookups-for-consul-enterprise) details how you would similarly format a DNS name including partitions and namespaces.
<CodeBlockConfig filename="frontend.yaml" highlight="36,51">
```yaml
# Service to expose frontend
apiVersion: v1
kind: Service
metadata:
name: frontend
spec:
selector:
app: frontend
ports:
- name: http
protocol: TCP
port: 9090
targetPort: 9090
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
labels:
app: frontend
spec:
replicas: 1
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
annotations:
"consul.hashicorp.com/connect-inject": "true"
spec:
serviceAccountName: frontend
containers:
- name: frontend
image: nicholasjackson/fake-service:v0.22.4
securityContext:
capabilities:
add: ["NET_ADMIN"]
ports:
- containerPort: 9090
env:
- name: "LISTEN_ADDR"
value: "0.0.0.0:9090"
- name: "UPSTREAM_URIS"
value: "http://backend.virtual.cluster-02.consul"
- name: "NAME"
value: "frontend"
- name: "MESSAGE"
value: "Hello World"
- name: "HTTP_CLIENT_KEEP_ALIVES"
value: "false"
```
</CodeBlockConfig>
1. Apply the service file to the first cluster.
```shell-session
$ kubectl --context $CLUSTER1_CONTEXT apply --filename frontend.yaml
```
1. Run the following command to send a request from the `frontend` pod, and then check the output to confirm that you peered your clusters successfully.
```shell-session
$ kubectl --context $CLUSTER1_CONTEXT exec -it $(kubectl --context $CLUSTER1_CONTEXT get pod -l app=frontend -o name) -- curl localhost:9090
```
<CodeBlockConfig highlight="29" hideClipboard>
```json
{
"name": "frontend",
"uri": "/",
"type": "HTTP",
"ip_addresses": [
"10.16.2.11"
],
"start_time": "2022-08-26T23:40:01.167199",
"end_time": "2022-08-26T23:40:01.226951",
"duration": "59.752279ms",
"body": "Hello World",
"upstream_calls": {
"http://backend.virtual.cluster-02.consul": {
"name": "backend",
"uri": "http://backend.virtual.cluster-02.consul",
"type": "HTTP",
"ip_addresses": [
"10.32.2.10"
],
"start_time": "2022-08-26T23:40:01.223503",
"end_time": "2022-08-26T23:40:01.224653",
"duration": "1.149666ms",
"headers": {
"Content-Length": "266",
"Content-Type": "text/plain; charset=utf-8",
"Date": "Fri, 26 Aug 2022 23:40:01 GMT"
},
"body": "Response from backend",
"code": 200
}
},
"code": 200
}
```
</CodeBlockConfig>
### Authorize service reads with ACLs
If ACLs are enabled on a Consul cluster, sidecar proxies that access exported services as an upstream must have an ACL token that grants read access.
Read access to all imported services is granted using either of the following rules associated with an ACL token:
- `service:write` permissions for any service in the sidecar's partition.
- `service:read` and `node:read` for all services and nodes, respectively, in the sidecar's namespace and partition.
For Consul Enterprise, the permissions apply to all imported services in the service's partition. These permissions are satisfied when using a [service identity](/consul/docs/security/acl/acl-roles#service-identities).
Refer to [Reading services](/consul/docs/connect/config-entries/exported-services#reading-services) in the `exported-services` configuration entry documentation for example rules.
For additional information about how to configure and use ACLs, refer to [ACLs system overview](/consul/docs/security/acl).
@ -0,0 +1,75 @@
---
layout: docs
page_title: Manage L7 Traffic With Cluster Peering on Kubernetes
description: >-
Combine service resolver configurations with splitter and router configurations to manage L7 traffic in Consul on Kubernetes deployments with cluster peering connections. Learn how to define dynamic traffic rules to target peers for redirects in k8s.
---
# Manage L7 traffic with cluster peering on Kubernetes
This usage topic describes how to configure the `service-resolver` custom resource definition (CRD) to set up and manage L7 traffic between services that have an existing cluster peering connection in Consul on Kubernetes deployments.
For general guidance for managing L7 traffic with cluster peering, refer to [Manage L7 traffic with cluster peering](/consul/docs/connect/cluster-peering/usage/peering-traffic-management).
## Service resolvers for redirects and failover
When you use cluster peering to connect datacenters through their admin partitions, you can use [dynamic traffic management](/consul/docs/connect/l7-traffic) to configure your service mesh so that services automatically forward traffic to services hosted on peer clusters.
However, the `service-splitter` and `service-router` CRDs do not natively support directly targeting a service instance hosted on a peer. Before you can split or route traffic to a service on a peer, you must define the service hosted on the peer as an upstream service by configuring a failover in a `service-resolver` CRD. Then, you can set up a redirect in a second service resolver to interact with the peer service by name.
For more information about formatting, updating, and managing configuration entries in Consul, refer to [How to use configuration entries](/consul/docs/agent/config-entries).
## Configure dynamic traffic between peers
To configure L7 traffic management behavior in deployments with cluster peering connections, complete the following steps in order:
1. Define the peer cluster as a failover target in the service resolver configuration.
The following example updates the [`service-resolver` CRD](/consul/docs/connect/config-entries/) in `cluster-01` so that Consul redirects traffic intended for the `frontend` service to a backup instance in peer `cluster-02` when it detects multiple connection failures.
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
name: frontend
spec:
connectTimeout: 15s
failover:
'*':
targets:
- peer: 'cluster-02'
service: 'frontend'
namespace: 'default'
```
1. Define the desired behavior in `service-splitter` or `service-router` CRD.
The following example splits traffic evenly between `frontend` and `frontend-peer`:
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
name: frontend
spec:
splits:
- weight: 50
## defaults to service with same name as configuration entry ("frontend")
- weight: 50
service: frontend-peer
```
1. Create a second `service-resolver` configuration entry on the local cluster that resolves the name of the peer service you used when splitting or routing the traffic.
The following example uses the name `frontend-peer` to define a redirect targeting the `frontend` service on the peer `cluster-02`:
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
name: frontend-peer
spec:
redirect:
peer: 'cluster-02'
service: 'frontend'
```
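After you define the three CRDs, apply them to the cluster where the traffic originates. The following sketch assumes the resources are saved to hypothetical filenames and that `$CLUSTER1_CONTEXT` targets the first cluster:
```shell-session
# Filenames are hypothetical
$ kubectl --context $CLUSTER1_CONTEXT apply --filename frontend-resolver.yaml
$ kubectl --context $CLUSTER1_CONTEXT apply --filename frontend-splitter.yaml
$ kubectl --context $CLUSTER1_CONTEXT apply --filename frontend-peer-resolver.yaml
```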
@ -0,0 +1,121 @@
---
layout: docs
page_title: Manage Cluster Peering Connections on Kubernetes
description: >-
Learn how to list, read, and delete cluster peering connections using Consul on Kubernetes. You can also reset cluster peering connections on k8s deployments.
---
# Manage cluster peering connections on Kubernetes
This usage topic describes how to manage cluster peering connections on Kubernetes deployments.
After you establish a cluster peering connection, you can get a list of all active peering connections, read a specific peering connection's information, and delete peering connections.
For general guidance for managing cluster peering connections, refer to [Manage cluster peering connections](/consul/docs/connect/cluster-peering/usage/manage-connections).
## Reset a peering connection
To reset the cluster peering connection, you need to generate a new peering token from the cluster where you created the `PeeringAcceptor` CRD. The only way to create or set a new peering token is to manually adjust the value of the annotation `consul.hashicorp.com/peering-version`. Creating a new token causes the previous token to expire.
1. In the `PeeringAcceptor` CRD, add the annotation `consul.hashicorp.com/peering-version`. If the annotation already exists, update its value to a higher version.
<CodeBlockConfig filename="acceptor.yml" highlight="6" hideClipboard>
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringAcceptor
metadata:
name: cluster-02
annotations:
consul.hashicorp.com/peering-version: "1" ## The peering version you want to set, must be in quotes
spec:
peer:
secret:
name: "peering-token"
key: "data"
backend: "kubernetes"
```
</CodeBlockConfig>
1. After updating `PeeringAcceptor`, repeat all of the steps to [establish a new peering connection](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering).
## List all peering connections
In Consul on Kubernetes deployments, you can list all active peering connections in a cluster using the Consul CLI.
1. If necessary, [configure your CLI to interact with the Consul cluster](/consul/tutorials/get-started-kubernetes/kubernetes-gs-deploy#configure-your-cli-to-interact-with-consul-cluster).
1. Run the [`consul peering list` CLI command](/consul/commands/peering/list).
```shell-session
$ consul peering list
Name State Imported Svcs Exported Svcs Meta
cluster-02 ACTIVE 0 2 env=production
cluster-03 PENDING 0 0
```
## Read a peering connection
In Consul on Kubernetes deployments, you can get information about individual peering connections between clusters using the Consul CLI.
1. If necessary, [configure your CLI to interact with the Consul cluster](/consul/tutorials/get-started-kubernetes/kubernetes-gs-deploy#configure-your-cli-to-interact-with-consul-cluster).
1. Run the [`consul peering read` CLI command](/consul/commands/peering/read).
```shell-session
$ consul peering read -name cluster-02
Name: cluster-02
ID: 3b001063-8079-b1a6-764c-738af5a39a97
State: ACTIVE
Meta:
env=production
Peer ID: e83a315c-027e-bcb1-7c0c-a46650904a05
Peer Server Name: server.dc1.consul
Peer CA Pems: 0
Peer Server Addresses:
10.0.0.1:8300
Imported Services: 0
Exported Services: 2
Create Index: 89
Modify Index: 89
```
## Delete peering connections
To end a peering connection in Kubernetes deployments, delete both the `PeeringAcceptor` and `PeeringDialer` resources.
1. Delete the `PeeringDialer` resource from the second cluster.
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT delete --filename dialer.yaml
```
1. Delete the `PeeringAcceptor` resource from the first cluster.
```shell-session
$ kubectl --context $CLUSTER1_CONTEXT delete --filename acceptor.yaml
```
To confirm that you deleted your peering connection in `cluster-01`, query the `/health` HTTP endpoint:
1. Exec into the server pod for the first cluster.
```shell-session
$ kubectl exec -it consul-server-0 --context $CLUSTER1_CONTEXT -- /bin/sh
```
1. If you've enabled ACLs, export an ACL token to access the `/health` HTTP endpoint for services. The bootstrap token may be used if an ACL token is not already provisioned.
```shell-session
$ export CONSUL_HTTP_TOKEN=<INSERT BOOTSTRAP ACL TOKEN>
```
1. Query the `/health` HTTP endpoint. Peered services with deleted connections should no longer appear.
```shell-session
$ curl "localhost:8500/v1/health/connect/backend?peer=cluster-02"
```
@ -527,10 +527,6 @@
{
"title": "Enabling Peering Control Plane Traffic",
"path": "connect/gateways/mesh-gateway/peering-via-mesh-gateways"
},
{
"title": "Enabling Service-to-service Traffic Across Peered Clusters",
"path": "connect/gateways/mesh-gateway/service-to-service-traffic-peers"
}
]
},
@ -548,16 +544,29 @@
"title": "Cluster Peering",
"routes": [
{
"title": "What is Cluster Peering?",
"title": "Overview",
"path": "connect/cluster-peering"
},
{
"title": "Create and Manage Peering Connections",
"path": "connect/cluster-peering/create-manage-peering"
"title": "Technical Specifications",
"path": "connect/cluster-peering/tech-specs"
},
{
"title": "Cluster Peering on Kubernetes",
"path": "connect/cluster-peering/k8s"
"title": "Usage",
"routes": [
{
"title": "Establish Cluster Peering Connections",
"path": "connect/cluster-peering/usage/establish-cluster-peering"
},
{
"title": "Manage Cluster Peering Connections",
"path": "connect/cluster-peering/usage/manage-connections"
},
{
"title": "Manage L7 Traffic With Cluster Peering",
"path": "connect/cluster-peering/usage/peering-traffic-management"
}
]
}
]
},
@ -1006,6 +1015,36 @@
"title": "Admin Partitions",
"href": "/docs/enterprise/admin-partitions"
},
{
"title": "Cluster Peering",
"routes": [
{
"title": "Overview",
"href": "/docs/connect/cluster-peering"
},
{
"title": "Technical Specifications",
"path": "k8s/connect/cluster-peering/tech-specs"
},
{
"title": "Usage",
"routes": [
{
"title": "Establish Cluster Peering Connections",
"path": "k8s/connect/cluster-peering/usage/establish-peering"
},
{
"title": "Manage Cluster Peering Connections",
"path": "k8s/connect/cluster-peering/usage/manage-peering"
},
{
"title": "Manage L7 Traffic With Cluster Peering",
"path": "k8s/connect/cluster-peering/usage/l7-traffic"
}
]
}
]
},
{
"title": "Transparent Proxy",
"href": "/docs/connect/transparent-proxy"
@ -4,4 +4,16 @@
// modify or delete existing redirects without first verifying internally.
// Next.js redirect documentation: https://nextjs.org/docs/api-reference/next.config.js/redirects
module.exports = []
module.exports = [
{
source: '/docs/connect/cluster-peering/create-manage-peering',
destination:
'/docs/connect/cluster-peering/usage/establish-cluster-peering',
permanent: true,
},
{
source: '/docs/connect/cluster-peering/k8s',
destination: '/docs/k8s/connect/cluster-peering/k8s-tech-specs',
permanent: true,
},
]