---
layout: docs
page_title: Establish Cluster Peering Connections on Kubernetes
description: >-
  To establish a cluster peering connection on Kubernetes, generate a peering token to establish communication. Then export services and authorize requests with service intentions.
---

# Establish cluster peering connections on Kubernetes

This page details the process for establishing a cluster peering connection between services in a Consul on Kubernetes deployment.

The overall process for establishing a cluster peering connection consists of the following steps:

1. Create a peering token in one cluster.
1. Use the peering token to establish peering with a second cluster.
1. Export services between clusters.
1. Create intentions to authorize services for peers.

Cluster peering between services cannot be established until all four steps are complete.

If you want to establish cluster peering connections and create sameness groups at the same time, refer to the guidance in [create sameness groups](/consul/docs/k8s/connect/cluster-peering/usage/create-sameness-groups).

For general guidance for establishing cluster peering connections, refer to [Establish cluster peering connections](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering).

## Prerequisites

You must meet the following requirements to use Consul's cluster peering features with Kubernetes:

- Consul v1.14.1 or higher
- Consul on Kubernetes v1.0.0 or higher
- At least two Kubernetes clusters

In Consul on Kubernetes, peers identify each other using the `metadata.name` values you establish when creating the `PeeringAcceptor` and `PeeringDialer` CRDs. For additional information about requirements for cluster peering on Kubernetes deployments, refer to [Cluster peering on Kubernetes technical specifications](/consul/docs/k8s/connect/cluster-peering/tech-specs).

### Assign cluster IDs to environment variables

After you provision a Kubernetes cluster and set up your kubeconfig file to manage access to multiple Kubernetes clusters, you can assign your clusters to environment variables for future use.

1. Get the context names for your Kubernetes clusters using one of these methods:

   - Run the `kubectl config current-context` command to get the context for the cluster you are currently in.
   - Run the `kubectl config get-contexts` command to get all configured contexts in your kubeconfig file.

1. Use the `kubectl` command to export the Kubernetes context names and then set them to variables. For more information on how to use kubeconfig and contexts, refer to the [Kubernetes docs on configuring access to multiple clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).

   ```shell-session
   $ export CLUSTER1_CONTEXT=<context-of-first-kubernetes-cluster>
   $ export CLUSTER2_CONTEXT=<context-of-second-kubernetes-cluster>
   ```

### Install Consul using Helm and configure peering over mesh gateways

To use cluster peering with Consul on Kubernetes deployments, update the Helm chart with [the required values](/consul/docs/k8s/connect/cluster-peering/tech-specs#helm-requirements). After updating the Helm chart, you can apply `values.yaml` to each cluster with the `helm` CLI.
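The authoritative list of required values is on the Helm requirements page linked above. As a minimal sketch only, and not a substitute for that list, a `values.yaml` that enables peering over mesh gateways might look like the following:

```yaml
# values.yaml -- minimal sketch of the peering-related Helm values.
# Confirm the authoritative list on the Helm requirements page linked above.
global:
  peering:
    enabled: true   # enable cluster peering
  tls:
    enabled: true   # cluster peering requires TLS
connectInject:
  enabled: true     # inject sidecar proxies so workloads can join the mesh
meshGateway:
  enabled: true     # mesh gateways carry peering traffic between clusters
```

Step 3 of the following list applies a `Mesh` configuration entry so that peering connections are established over the mesh gateways. The following sketch of that CRD assumes the settings described in the Mesh Gateway Specifications page and the required `mesh` name; confirm the exact values on that page before applying it.

```yaml
# mesh.yaml -- sketch of the Mesh configuration entry referenced in step 3 below.
apiVersion: consul.hashicorp.com/v1alpha1
kind: Mesh
metadata:
  name: mesh
spec:
  peering:
    peerThroughMeshGateways: true
```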
1. In `cluster-01`, run the following commands:

   ```shell-session
   $ export HELM_RELEASE_NAME1=cluster-01
   ```

   ```shell-session
   $ helm install ${HELM_RELEASE_NAME1} hashicorp/consul --create-namespace --namespace consul --version "1.2.0" --values values.yaml --set global.datacenter=dc1 --kube-context $CLUSTER1_CONTEXT
   ```

1. In `cluster-02`, run the following commands:

   ```shell-session
   $ export HELM_RELEASE_NAME2=cluster-02
   ```

   ```shell-session
   $ helm install ${HELM_RELEASE_NAME2} hashicorp/consul --create-namespace --namespace consul --version "1.2.0" --values values.yaml --set global.datacenter=dc2 --kube-context $CLUSTER2_CONTEXT
   ```

1. For both clusters, apply the `Mesh` configuration entry values provided in [Mesh Gateway Specifications](/consul/docs/k8s/connect/cluster-peering/tech-specs#mesh-gateway-specifications) to allow establishing peering connections over mesh gateways. A sketch of this configuration entry appears before this list.

### Configure the mesh gateway mode for traffic between services

In Kubernetes deployments, you can configure mesh gateways to use `local` mode so that a service dialing a service in a remote peer dials the local mesh gateway instead of the remote mesh gateway. To configure the mesh gateway mode so that this traffic always leaves through the local mesh gateway, you can use the `ProxyDefaults` CRD.

1. In `cluster-01`, apply the following `ProxyDefaults` CRD to configure the mesh gateway mode.

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ProxyDefaults
   metadata:
     name: global
   spec:
     meshGateway:
       mode: local
   ```

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT apply -f proxy-defaults.yaml
   ```

1. In `cluster-02`, apply the following `ProxyDefaults` CRD to configure the mesh gateway mode.

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ProxyDefaults
   metadata:
     name: global
   spec:
     meshGateway:
       mode: local
   ```

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply -f proxy-defaults.yaml
   ```

## Create a peering token

To begin the cluster peering process, generate a peering token in one of your clusters. The other cluster uses this token to establish the peering connection.

Every time you generate a peering token, a single-use secret for establishing the peering connection is embedded in the token. Because regenerating a peering token invalidates the previously generated secret, you must use the most recently created token to establish peering connections.

1. In `cluster-01`, create the `PeeringAcceptor` custom resource. To ensure cluster peering connections are secure, the `metadata.name` field cannot be duplicated. Refer to the peer by a specific name.

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: PeeringAcceptor
   metadata:
     name: cluster-02 ## The name of the peer you want to connect to
   spec:
     peer:
       secret:
         name: "peering-token"
         key: "data"
         backend: "kubernetes"
   ```

1. Apply the `PeeringAcceptor` resource to the first cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT apply --filename acceptor.yaml
   ```

1. Save your peering token so that you can export it to the other cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT get secret peering-token --output yaml > peering-token.yaml
   ```

## Establish a connection between clusters

Next, use the peering token to establish a secure connection between the clusters.

1. Apply the peering token to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename peering-token.yaml
   ```
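   The `peering-token.yaml` file you are applying is an ordinary Kubernetes Secret generated by the `PeeringAcceptor` in the first cluster. The following is an illustrative sketch only: the name and key match the `PeeringAcceptor` definition above, while the metadata and the base64-encoded value in your copy will differ.

   ```yaml
   # Illustrative sketch of the exported peering token Secret. Your copy of
   # peering-token.yaml contains the real base64-encoded token and additional metadata.
   apiVersion: v1
   kind: Secret
   metadata:
     name: peering-token
   data:
     data: <base64-encoded peering token>
   ```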
1. In `cluster-02`, create the `PeeringDialer` custom resource. To ensure cluster peering connections are secure, the `metadata.name` field cannot be duplicated. Refer to the peer by a specific name.

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: PeeringDialer
   metadata:
     name: cluster-01 ## The name of the peer you want to connect to
   spec:
     peer:
       secret:
         name: "peering-token"
         key: "data"
         backend: "kubernetes"
   ```

1. Apply the `PeeringDialer` resource to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename dialer.yaml
   ```

## Export services between clusters

After you establish a connection between the clusters, you need to create an `exported-services` CRD that defines the services that are available to another admin partition. While the CRD can target admin partitions either locally or remotely, cluster peering always exports services to remote admin partitions. Refer to [exported service consumers](/consul/docs/connect/config-entries/exported-services#consumers-1) for more information.

1. For the service in `cluster-02` that you want to export, add the `"consul.hashicorp.com/connect-inject": "true"` annotation to your service's pods prior to deploying. The annotation allows the workload to join the mesh. It is highlighted in the following example:

   ```yaml
   # Service to expose backend
   apiVersion: v1
   kind: Service
   metadata:
     name: backend
   spec:
     selector:
       app: backend
     ports:
       - name: http
         protocol: TCP
         port: 80
         targetPort: 9090
   ---
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: backend
   ---
   # Deployment for backend
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: backend
     labels:
       app: backend
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: backend
     template:
       metadata:
         labels:
           app: backend
         annotations:
           "consul.hashicorp.com/connect-inject": "true"
       spec:
         serviceAccountName: backend
         containers:
           - name: backend
             image: nicholasjackson/fake-service:v0.22.4
             ports:
               - containerPort: 9090
             env:
               - name: "LISTEN_ADDR"
                 value: "0.0.0.0:9090"
               - name: "NAME"
                 value: "backend"
               - name: "MESSAGE"
                 value: "Response from backend"
   ```

1. Deploy the `backend` service to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename backend.yaml
   ```

1. In `cluster-02`, create an `ExportedServices` custom resource. The name of the peer that consumes the service should be identical to the name set in the `PeeringDialer` CRD.

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ExportedServices
   metadata:
     name: default ## The name of the partition containing the service
   spec:
     services:
       - name: backend ## The name of the service you want to export
         consumers:
           - peer: cluster-01 ## The name of the peer that receives the service
   ```

1. Apply the `ExportedServices` resource to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename exported-service.yaml
   ```

## Authorize services for peers

Before you can call services from peered clusters, you must set service intentions that authorize those clusters to use specific services. Consul prevents services from being exported to unauthorized clusters.

1. Create service intentions for the second cluster. The name of the peer should match the name set in the `PeeringDialer` CRD.

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ServiceIntentions
   metadata:
     name: backend-deny
   spec:
     destination:
       name: backend
     sources:
       - name: "*"
         action: deny
       - name: frontend
         action: allow
         peer: cluster-01 ## The peer of the source service
   ```

1. Apply the intentions to the second cluster.
   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename intention.yaml
   ```

1. Add the `"consul.hashicorp.com/connect-inject": "true"` annotation to your service's pods before deploying the workload so that the services in `cluster-01` can dial `backend` in `cluster-02`. To dial the upstream service from an application, configure the application so that requests are sent to the correct DNS name as specified in [Service Virtual IP Lookups](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups). In the following example, the annotation that allows the workload to join the mesh and the configuration that enables the workload to dial the upstream service using the correct DNS name are highlighted. [Service Virtual IP Lookups for Consul Enterprise](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups-for-consul-enterprise) details how to similarly format DNS names that include partitions and namespaces.

   ```yaml
   # Service to expose frontend
   apiVersion: v1
   kind: Service
   metadata:
     name: frontend
   spec:
     selector:
       app: frontend
     ports:
       - name: http
         protocol: TCP
         port: 9090
         targetPort: 9090
   ---
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: frontend
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: frontend
     labels:
       app: frontend
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: frontend
     template:
       metadata:
         labels:
           app: frontend
         annotations:
           "consul.hashicorp.com/connect-inject": "true"
       spec:
         serviceAccountName: frontend
         containers:
           - name: frontend
             image: nicholasjackson/fake-service:v0.22.4
             securityContext:
               capabilities:
                 add: ["NET_ADMIN"]
             ports:
               - containerPort: 9090
             env:
               - name: "LISTEN_ADDR"
                 value: "0.0.0.0:9090"
               - name: "UPSTREAM_URIS"
                 value: "http://backend.virtual.cluster-02.consul"
               - name: "NAME"
                 value: "frontend"
               - name: "MESSAGE"
                 value: "Hello World"
               - name: "HTTP_CLIENT_KEEP_ALIVES"
                 value: "false"
   ```

1. Apply the service file to the first cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT apply --filename frontend.yaml
   ```

1. Run the following command in `frontend` and then check the output to confirm that you peered your clusters successfully.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT exec -it $(kubectl --context $CLUSTER1_CONTEXT get pod -l app=frontend -o name) -- curl localhost:9090
   ```

   ```json
   {
     "name": "frontend",
     "uri": "/",
     "type": "HTTP",
     "ip_addresses": [
       "10.16.2.11"
     ],
     "start_time": "2022-08-26T23:40:01.167199",
     "end_time": "2022-08-26T23:40:01.226951",
     "duration": "59.752279ms",
     "body": "Hello World",
     "upstream_calls": {
       "http://backend.virtual.cluster-02.consul": {
         "name": "backend",
         "uri": "http://backend.virtual.cluster-02.consul",
         "type": "HTTP",
         "ip_addresses": [
           "10.32.2.10"
         ],
         "start_time": "2022-08-26T23:40:01.223503",
         "end_time": "2022-08-26T23:40:01.224653",
         "duration": "1.149666ms",
         "headers": {
           "Content-Length": "266",
           "Content-Type": "text/plain; charset=utf-8",
           "Date": "Fri, 26 Aug 2022 23:40:01 GMT"
         },
         "body": "Response from backend",
         "code": 200
       }
     },
     "code": 200
   }
   ```

### Authorize service reads with ACLs

If ACLs are enabled on a Consul cluster, sidecar proxies that access exported services as an upstream must have an ACL token that grants read access.

Read access to all imported services is granted using either of the following rules associated with an ACL token:

- `service:write` permissions for any service in the sidecar's partition.
- `service:read` and `node:read` for all services and nodes, respectively, in the sidecar's namespace and partition.

For Consul Enterprise, the permissions apply to all imported services in the service's partition. These permissions are satisfied when using a [service identity](/consul/docs/security/acl/acl-roles#service-identities).

Refer to [Reading services](/consul/docs/connect/config-entries/exported-services#reading-services) in the `exported-services` configuration entry documentation for example rules.

For additional information about how to configure and use ACLs, refer to [ACLs system overview](/consul/docs/security/acl).