---
layout: docs
page_title: Cluster Peering on Kubernetes
description: >-
  If you use Consul on Kubernetes, learn how to enable cluster peering, create peering CRDs, and then manage peering connections in consul-k8s.
---

# Cluster Peering on Kubernetes

~> **Cluster peering is currently in beta:** Functionality associated with cluster peering is subject to change. You should never use the beta release in secure environments or production scenarios. Features in beta may have performance issues, scaling issues, and limited support.<br/><br/>Cluster peering is not currently available in the HCP Consul offering.

To establish a cluster peering connection on Kubernetes, you need to enable the feature in the Helm chart and create custom resource definitions (CRDs) for each side of the peering.

The following CRDs are used to create and manage a peering connection:

- `PeeringAcceptor`: Generates a peering token and accepts an incoming peering connection.
- `PeeringDialer`: Uses a peering token to make an outbound peering connection with the cluster that generated the token.

As of Consul v1.14, you can also [implement service failovers and redirects to control traffic](/consul/docs/connect/l7-traffic) between peers.

> To learn how to peer clusters and connect services across peers in AWS Elastic Kubernetes Service (EKS) environments, complete the [Consul Cluster Peering on Kubernetes tutorial](https://learn.hashicorp.com/tutorials/consul/cluster-peering-aws?utm_source=docs).

## Prerequisites

You must meet the following requirements to create and use cluster peering connections with Kubernetes:

- Consul v1.13.1 or later
- At least two Kubernetes clusters
- Consul on Kubernetes v0.47.1 or later

### Prepare for installation

Complete the following procedure after you have provisioned a Kubernetes cluster and set up your kubeconfig file to manage access to multiple Kubernetes clusters.

1. Use the `kubectl` command to export the Kubernetes context names and then set them to variables for future use. For more information on how to use kubeconfig and contexts, refer to the [Kubernetes docs on configuring access to multiple clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).

   You can use the following methods to get the context names for your clusters:

   - Use the `kubectl config current-context` command to get the context for the cluster you are currently in.
   - Use the `kubectl config get-contexts` command to get all configured contexts in your kubeconfig file.

   ```shell-session
   $ export CLUSTER1_CONTEXT=<CONTEXT for first Kubernetes cluster>
   $ export CLUSTER2_CONTEXT=<CONTEXT for second Kubernetes cluster>
   ```

1. To establish cluster peering through Kubernetes, create a `values.yaml` file with the following Helm values.

   <CodeBlockConfig filename="values.yaml">

   ```yaml
   global:
     name: consul
     image: "hashicorp/consul:1.13.1"
     peering:
       enabled: true
   connectInject:
     enabled: true
   dns:
     enabled: true
     enableRedirection: true
   server:
     exposeService:
       enabled: true
   controller:
     enabled: true
   meshGateway:
     enabled: true
     replicas: 1
   ```

   </CodeBlockConfig>

   These Helm values configure the servers in each cluster so that they expose ports over a Kubernetes load balancer service. For additional configuration options, refer to [`server.exposeService`](/docs/k8s/helm#v-server-exposeservice).

   When generating a peering token from one of the clusters, Consul includes a load balancer address in the token so that the peering stream goes through the load balancer in front of the servers. For additional configuration options, refer to [`global.peering.tokenGeneration`](/docs/k8s/helm#v-global-peering-tokengeneration).
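
The Helm commands in the next section install the `hashicorp/consul` chart, so they assume the HashiCorp Helm repository is already configured on your machine. If it is not, you can add it first; this is a general Helm setup step rather than part of the peering procedure.

```shell-session
$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm repo update
```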

### Install Consul on Kubernetes

Install Consul on Kubernetes by using the Helm CLI to apply `values.yaml` to each cluster.

1. In `cluster-01`, run the following commands:

   ```shell-session
   $ export HELM_RELEASE_NAME=cluster-01
   ```

   ```shell-session
   $ helm install ${HELM_RELEASE_NAME} hashicorp/consul --create-namespace --namespace consul --version "0.47.1" --values values.yaml --kube-context $CLUSTER1_CONTEXT
   ```

1. In `cluster-02`, run the following commands:

   ```shell-session
   $ export HELM_RELEASE_NAME=cluster-02
   ```

   ```shell-session
   $ helm install ${HELM_RELEASE_NAME} hashicorp/consul --create-namespace --namespace consul --version "0.47.1" --values values.yaml --kube-context $CLUSTER2_CONTEXT
   ```
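
Before creating the peering, you can optionally confirm that each installation is healthy and that the server expose service received an external address. The service name below assumes the default naming that results from setting `global.name: consul` in `values.yaml`; adjust it if your deployment names resources differently.

```shell-session
$ kubectl --context $CLUSTER1_CONTEXT get pods --namespace consul
$ kubectl --context $CLUSTER1_CONTEXT get service consul-expose-servers --namespace consul
```

Repeat the same checks with `$CLUSTER2_CONTEXT` for the second cluster.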

## Create a peering connection for Consul on Kubernetes

To peer Kubernetes clusters running Consul, you need to create a peering token and share it with the other cluster. Complete the following steps to create the peer connection.

### Create a peering token

Peers identify each other using the `metadata.name` values you establish when creating the `PeeringAcceptor` and `PeeringDialer` CRDs.

1. In `cluster-01`, create the `PeeringAcceptor` custom resource.

   <CodeBlockConfig filename="acceptor.yaml">

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: PeeringAcceptor
   metadata:
     name: cluster-02 ## The name of the peer you want to connect to
   spec:
     peer:
       secret:
         name: "peering-token"
         key: "data"
         backend: "kubernetes"
   ```

   </CodeBlockConfig>

1. Apply the `PeeringAcceptor` resource to the first cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT apply --filename acceptor.yaml
   ```

1. Save your peering token so that you can export it to the other cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT get secret peering-token --output yaml > peering-token.yaml
   ```
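
Before moving on, you can optionally verify that the controller processed the `PeeringAcceptor` and generated the token. This check is a sketch rather than part of the official procedure; it assumes the secret lives in the namespace your current context defaults to, which is also where the previous step reads it from.

```shell-session
$ kubectl --context $CLUSTER1_CONTEXT get secret peering-token
```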

### Establish a peering connection between clusters

1. Apply the peering token to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename peering-token.yaml
   ```

1. In `cluster-02`, create the `PeeringDialer` custom resource.

   <CodeBlockConfig filename="dialer.yaml">

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: PeeringDialer
   metadata:
     name: cluster-01 ## The name of the peer you want to connect to
   spec:
     peer:
       secret:
         name: "peering-token"
         key: "data"
         backend: "kubernetes"
   ```

   </CodeBlockConfig>

1. Apply the `PeeringDialer` resource to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename dialer.yaml
   ```
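
At this point the peering should be established. As an optional check, you can query the Consul HTTP API for the list of peerings. This sketch assumes the default server pod name `consul-server-0` that results from setting `global.name: consul`, and it forwards port 8500 to your local machine; the response should include the peer you just configured.

```shell-session
$ kubectl --context $CLUSTER1_CONTEXT port-forward consul-server-0 8500:8500 --namespace consul &
$ curl --silent "localhost:8500/v1/peerings"
```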

### Export services between clusters

The examples described in this section demonstrate how to export a service named `backend`. You should change instances of `backend` in the example code to the name of the service you want to export.

1. For the service in `cluster-02` that you want to export, add the following [annotations](/docs/k8s/annotations-and-labels) to your service's pods.

   <CodeBlockConfig filename="backend.yaml">

   ```yaml
   # Service to expose backend
   apiVersion: v1
   kind: Service
   metadata:
     name: backend
   spec:
     selector:
       app: backend
     ports:
       - name: http
         protocol: TCP
         port: 80
         targetPort: 9090
   ---
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: backend
   ---
   # Deployment for backend
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: backend
     labels:
       app: backend
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: backend
     template:
       metadata:
         labels:
           app: backend
         annotations:
           "consul.hashicorp.com/connect-inject": "true"
       spec:
         serviceAccountName: backend
         containers:
           - name: backend
             image: nicholasjackson/fake-service:v0.22.4
             ports:
               - containerPort: 9090
             env:
               - name: "LISTEN_ADDR"
                 value: "0.0.0.0:9090"
               - name: "NAME"
                 value: "backend"
               - name: "MESSAGE"
                 value: "Response from backend"
   ```

   </CodeBlockConfig>

1. In `cluster-02`, create an `ExportedServices` custom resource.

   <CodeBlockConfig filename="exportedsvc.yaml">

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ExportedServices
   metadata:
     name: default ## The name of the partition containing the service
   spec:
     services:
       - name: backend ## The name of the service you want to export
         consumers:
           - peer: cluster-01 ## The name of the peer that receives the service
   ```

   </CodeBlockConfig>

1. Apply the service file and the `ExportedServices` resource to the second cluster.

   ```shell-session
   $ kubectl apply --context $CLUSTER2_CONTEXT --filename backend.yaml --filename exportedsvc.yaml
   ```
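
To confirm that `cluster-01` can see the exported service, you can query the health endpoint for the imported `backend` service. This is the same endpoint this guide later uses to verify that a peering was removed, and it assumes you can reach the Consul HTTP API of `cluster-01` on `localhost:8500`, for example through a port forward as sketched earlier.

```shell-session
$ curl "localhost:8500/v1/health/connect/backend?peer=cluster-02"
```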

### Authorize services for peers

1. Create service intentions for the second cluster.

   <CodeBlockConfig filename="intention.yml">

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ServiceIntentions
   metadata:
     name: backend-deny
   spec:
     destination:
       name: backend
     sources:
       - name: "*"
         action: deny
       - name: frontend
         action: allow
   ```

   </CodeBlockConfig>

1. Apply the intentions to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename intention.yml
   ```

1. For the services in `cluster-01` that you want to access `backend`, add the following annotations to the service file. To dial the upstream service from an application, ensure that the requests are sent to the correct DNS name as specified in [Service Virtual IP Lookups](/docs/discovery/dns#service-virtual-ip-lookups).

   <CodeBlockConfig filename="frontend.yaml">

   ```yaml
   # Service to expose frontend
   apiVersion: v1
   kind: Service
   metadata:
     name: frontend
   spec:
     selector:
       app: frontend
     ports:
       - name: http
         protocol: TCP
         port: 9090
         targetPort: 9090
   ---
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: frontend
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: frontend
     labels:
       app: frontend
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: frontend
     template:
       metadata:
         labels:
           app: frontend
         annotations:
           "consul.hashicorp.com/connect-inject": "true"
       spec:
         serviceAccountName: frontend
         containers:
           - name: frontend
             image: nicholasjackson/fake-service:v0.22.4
             securityContext:
               capabilities:
                 add: ["NET_ADMIN"]
             ports:
               - containerPort: 9090
             env:
               - name: "LISTEN_ADDR"
                 value: "0.0.0.0:9090"
               - name: "UPSTREAM_URIS"
                 value: "http://backend.virtual.cluster-02.consul"
               - name: "NAME"
                 value: "frontend"
               - name: "MESSAGE"
                 value: "Hello World"
               - name: "HTTP_CLIENT_KEEP_ALIVES"
                 value: "false"
   ```

   </CodeBlockConfig>

1. Apply the service file to the first cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT apply --filename frontend.yaml
   ```

1. Run the following command in the `frontend` pod and then check the output to confirm that you peered your clusters successfully.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT exec -it $(kubectl --context $CLUSTER1_CONTEXT get pod -l app=frontend -o name) -- curl localhost:9090

   {
     "name": "frontend",
     "uri": "/",
     "type": "HTTP",
     "ip_addresses": [
       "10.16.2.11"
     ],
     "start_time": "2022-08-26T23:40:01.167199",
     "end_time": "2022-08-26T23:40:01.226951",
     "duration": "59.752279ms",
     "body": "Hello World",
     "upstream_calls": {
       "http://backend.virtual.cluster-02.consul": {
         "name": "backend",
         "uri": "http://backend.virtual.cluster-02.consul",
         "type": "HTTP",
         "ip_addresses": [
           "10.32.2.10"
         ],
         "start_time": "2022-08-26T23:40:01.223503",
         "end_time": "2022-08-26T23:40:01.224653",
         "duration": "1.149666ms",
         "headers": {
           "Content-Length": "266",
           "Content-Type": "text/plain; charset=utf-8",
           "Date": "Fri, 26 Aug 2022 23:40:01 GMT"
         },
         "body": "Response from backend",
         "code": 200
       }
     },
     "code": 200
   }
   ```

## End a peering connection

To end a peering connection, delete both the `PeeringAcceptor` and `PeeringDialer` resources.
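
For example, assuming you still have the `acceptor.yaml` and `dialer.yaml` files created earlier in this guide, delete the resources from each cluster:

```shell-session
$ kubectl --context $CLUSTER1_CONTEXT delete --filename acceptor.yaml
$ kubectl --context $CLUSTER2_CONTEXT delete --filename dialer.yaml
```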

To confirm that you deleted your peering connection, in `cluster-01`, query the `/health` HTTP endpoint. The peered services should no longer appear.

```shell-session
$ curl "localhost:8500/v1/health/connect/backend?peer=cluster-02"
```

## Recreate or reset a peering connection

To recreate or reset the peering connection, you need to generate a new peering token from the cluster where you created the `PeeringAcceptor`.

1. In the `PeeringAcceptor` CRD, add the annotation `consul.hashicorp.com/peering-version`. If the annotation already exists, update its value to a higher version.

   <CodeBlockConfig filename="acceptor.yml" highlight="6" hideClipboard>

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: PeeringAcceptor
   metadata:
     name: cluster-02
     annotations:
       consul.hashicorp.com/peering-version: "1" ## The peering version you want to set.
   spec:
     peer:
       secret:
         name: "peering-token"
         key: "data"
         backend: "kubernetes"
   ```

   </CodeBlockConfig>

1. After updating `PeeringAcceptor`, repeat the following steps to create a peering connection:
   1. [Create a peering token](#create-a-peering-token)
   1. [Establish a peering connection between clusters](#establish-a-peering-connection-between-clusters)
   1. [Export services between clusters](#export-services-between-clusters)
   1. [Authorize services for peers](#authorize-services-for-peers)

   Your peering connection is re-established with the updated token.

~> **Note:** The only way to create or set a new peering token is to manually adjust the value of the annotation `consul.hashicorp.com/peering-version`. Creating a new token causes the previous token to expire.
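
If you prefer not to edit the manifest, you could also bump the annotation in place with `kubectl annotate`. This is a convenience sketch rather than the documented procedure, and it assumes the CRD resolves under the resource name `peeringacceptor`:

```shell-session
$ kubectl --context $CLUSTER1_CONTEXT annotate peeringacceptor cluster-02 consul.hashicorp.com/peering-version="2" --overwrite
```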

## Traffic management between peers

As of Consul v1.14, you can use [dynamic traffic management](/consul/docs/connect/l7-traffic) to configure your service mesh so that services automatically failover and redirect between peers.

To configure automatic service failover and redirects, edit the `ServiceResolver` CRD so that traffic resolves to a backup service instance on a peer. The following example updates the `ServiceResolver` CRD in `cluster-01` so that Consul redirects traffic intended for the `frontend` service to a backup instance in `cluster-02` when it detects multiple connection failures to the primary instance.

<CodeBlockConfig filename="service-resolver.yaml" hideClipboard>

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: frontend
spec:
  connectTimeout: 15s
  failover:
    '*':
      targets:
        - peer: 'cluster-02'
          service: 'backup'
          namespace: 'default'
```

</CodeBlockConfig>
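
To put the resolver into effect, apply it to the first cluster in the same way as the other custom resources in this guide:

```shell-session
$ kubectl --context $CLUSTER1_CONTEXT apply --filename service-resolver.yaml
```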