From 72a1aea56cb0fa0b70a2b367053bb42d9b66a125 Mon Sep 17 00:00:00 2001
From: Kyle Schochenmaier
Date: Wed, 25 May 2022 13:34:56 -0500
Subject: [PATCH] update docs for single-dc-multi-k8s install (#13008)

* update docs for single-dc-multi-k8s install

Co-authored-by: David Yu
Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
---
 .../single-dc-multi-k8s.mdx | 30 +++++++++++++------
 1 file changed, 21 insertions(+), 9 deletions(-)

diff --git a/website/content/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s.mdx b/website/content/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s.mdx
index 4e47aeeda0..b6c680cb20 100644
--- a/website/content/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s.mdx
+++ b/website/content/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s.mdx
@@ -14,7 +14,23 @@ In this example, we will use two Kubernetes clusters, but this approach could be

~> **Note:** This deployment topology requires that your Kubernetes clusters have a flat network for both pods and nodes, so that pods or nodes from one cluster can connect
-to pods or nodes in another.
+to pods or nodes in another. If a flat network is not available across all Kubernetes clusters, follow the instructions for using [Admin Partitions](/docs/enterprise/admin-partitions), which is a Consul Enterprise feature.
+
+## Prepare Helm release names ahead of installs
+
+The Helm release name must be unique for each Kubernetes cluster.
+The Helm chart uses the Helm release name as a prefix for the
+ACL resources that it creates, such as tokens and auth methods. If the names of the Helm releases
+are identical, subsequent Consul on Kubernetes clusters overwrite existing ACL resources and cause the clusters to fail.
+
+Before you proceed with installation, prepare the Helm release names as environment variables for both the server and client installs to use.
+
+```shell-session
+$ export HELM_RELEASE_SERVER=server
+$ export HELM_RELEASE_CLIENT=client
+...
+$ export HELM_RELEASE_CLIENT2=client2
+```

## Deploying Consul servers and clients in the first cluster

@@ -60,14 +76,10 @@ $ kubectl create secret generic consul-gossip-encryption-key --from-literal=key=
```

Now we can install our Consul cluster with Helm:
-```shell
-$ helm install cluster1 --values cluster1-config.yaml hashicorp/consul
+```shell-session
+$ helm install ${HELM_RELEASE_SERVER} --values cluster1-config.yaml hashicorp/consul
```

-~> **Note:** The Helm release name must be unique for each Kubernetes cluster.
-That is because the Helm chart will use the Helm release name as a prefix for the
-ACL resources that it creates, such as tokens and auth methods. If the names of the Helm releases
-are the same, the Helm installation in subsequent clusters will clobber existing ACL resources.

Once the installation finishes and all components are running and ready, we need to extract the gossip encryption key we've created, the CA certificate
@@ -75,7 +87,7 @@ and the ACL bootstrap token generated during installation, so that we can apply
them to our second Kubernetes cluster.

```shell-session
-$ kubectl get secret consul-gossip-encryption-key cluster1-consul-ca-cert cluster1-consul-bootstrap-acl-token --output yaml > cluster1-credentials.yaml
+$ kubectl get secret consul-gossip-encryption-key ${HELM_RELEASE_SERVER}-consul-ca-cert ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token --output yaml > cluster1-credentials.yaml
```

## Deploying Consul clients in the second cluster

@@ -183,7 +195,7 @@ for more details. Now we're ready to install!

```shell-session
-$ helm install cluster2 --values cluster2-config.yaml hashicorp/consul
+$ helm install ${HELM_RELEASE_CLIENT} --values cluster2-config.yaml hashicorp/consul
```

## Verifying the Consul Service Mesh works
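
One quick check, sketched here under the assumption that the `HELM_RELEASE_SERVER` variable from the earlier step is still set and that the chart names server pods `<release>-consul-server-<n>`, is to list the Consul members from a server pod and confirm that client agents from both Kubernetes clusters have joined the datacenter:

```shell-session
$ kubectl exec ${HELM_RELEASE_SERVER}-consul-server-0 -- consul members
```

Nodes from the second cluster appearing in the output with status `alive` indicate that its clients reached the servers over the flat network.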