Document how clients not on K8s can join a DC in K8s. (#9438)

Commit dd81e4a1cc (parent a9285ad76a)

Consul clients running on non-Kubernetes nodes can join a Consul cluster running within Kubernetes.

## Networking

Within one datacenter, Consul typically requires a fully connected
[network](/docs/architecture). This means the IPs of every client and server
agent should be routable by every other client and server agent in the
datacenter. Clients need to be able to [gossip](/docs/architecture/gossip) with
every other agent and make RPC calls to servers. Servers need to be able to
gossip with every other agent. See [Architecture](/docs/architecture) for more details.

-> **Consul Enterprise customers** may use [network
segments](/docs/enterprise/network-segments) to enable non-fully-connected
topologies. However, out-of-cluster nodes must still be able to communicate
with the server pod or host IP addresses.

## Auto-join

The recommended way to join a cluster running within Kubernetes is to
use the ["k8s" cloud auto-join provider](/docs/agent/cloud-auto-join#kubernetes-k8s).

The following example joins a Consul cluster started using the [official Helm chart](/docs/k8s/helm):
```shell-session
$ consul agent -retry-join 'provider=k8s label_selector="app=consul,component=server"'
```

-> **Note:** This auto-join command only connects on the default gossip port
8301, whether you are joining on the pod network or via host ports. Either a
Consul server or client that is already a member of the datacenter should be
listening on this port for the external client agent to be able to use
auto-join.
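
If your kubeconfig is not at the default location, or the Consul pods run in a
non-default namespace, the `k8s` provider also accepts `kubeconfig` and
`namespace` parameters (see the ["k8s" cloud auto-join
provider](/docs/agent/cloud-auto-join#kubernetes-k8s) docs). A sketch, with a
placeholder path and namespace:

```shell-session
$ consul agent -retry-join 'provider=k8s kubeconfig=/path/to/kubeconfig namespace=consul label_selector="app=consul,component=server"'
```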

### Auto-join on the Pod network

In the default Consul Helm chart installation, Consul clients and servers are
routable only via their pod IPs for server RPCs and gossip (HTTP
API calls to Consul clients can also be made through host IPs). This means any
external client agent joining the Consul cluster running on Kubernetes must
be able to reach those pod IPs.

In many hosted Kubernetes environments, you will need to explicitly configure
your hosting provider to ensure that pod IPs are routable from external VMs.
See [Azure AKS
CNI](https://docs.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking),
[AWS EKS
CNI](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html) and
[GKE VPC-native
clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips).
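
One way to sanity-check this connectivity (a sketch: the label selector
matches the default Helm chart's server pods, and `nc -z` only tests TCP,
while gossip also uses UDP on 8301): list the server pod IPs with `kubectl`,
then probe the gossip and RPC ports from the external VM.

```shell-session
$ kubectl get pods -l app=consul,component=server -o wide  # note the pod IPs
$ nc -zv <server-pod-ip> 8301                              # gossip (TCP) from the VM
$ nc -zv <server-pod-ip> 8300                              # server RPC from the VM
```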

Assuming you have the [official Helm chart](/docs/k8s/helm) installed with the default values, do the following to join an external client agent.

1. Make sure the pod IPs of the clients and servers in Kubernetes are
   routable from the VM and that the VM can access port 8301 (for gossip) and
   port 8300 (for server RPC) on those pod IPs.

1. Make sure that the client and server pods running in Kubernetes can route
   to the VM's advertise IP on its gossip port (default 8301).

1. Make sure you have the `kubeconfig` file for the Kubernetes cluster in `$HOME/.kube/config` on the external VM.

1. On the external VM, run:

   ```bash
   # $ADVERTISE_IP must be an IP on the VM that the Kubernetes pods can route to
   consul agent \
     -advertise="$ADVERTISE_IP" \
     -retry-join='provider=k8s label_selector="app=consul,component=server"' \
     -bind=0.0.0.0 \
     -hcl='leave_on_terminate = true' \
     -hcl='ports { grpc = 8502 }' \
     -config-dir=$CONFIG_DIR \
     -datacenter=$DATACENTER \
     -data-dir=$DATA_DIR
   ```

1. Check if the join was successful by running `consul members`. Sample output:

   ```shell-session
   / $ consul members
   Node                                           Address           Status  Type    Build  Protocol  DC   Segment
   consul-consul-server-0                         10.138.0.43:9301  alive   server  1.9.1  2         dc1  <all>
   external-agent                                 10.138.0.38:8301  alive   client  1.9.0  2         dc1  <default>
   gke-external-agent-default-pool-32d15192-grs4  10.138.0.43:8301  alive   client  1.9.1  2         dc1  <default>
   gke-external-agent-default-pool-32d15192-otge  10.138.0.44:8301  alive   client  1.9.1  2         dc1  <default>
   gke-external-agent-default-pool-32d15192-vo7k  10.138.0.42:8301  alive   client  1.9.1  2         dc1  <default>
   ```
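
As an extra check (a sketch: the command goes through the local agent's HTTP
API and is answered by the servers, so it also exercises the client-to-server
RPC path, not just gossip), run the following on the external VM and look for
the new node:

```shell-session
$ consul catalog nodes
```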

### Auto-join via host ports

If your external VMs can't connect to Kubernetes pod IPs but can connect to
the internal host IPs of the nodes in the Kubernetes cluster, you have the
option to expose the client and server ports on the host IPs instead.

1. Install the [official Helm chart](/docs/k8s/helm) with the following values:

   ```yaml
   client:
     exposeGossipPorts: true # exposes client gossip ports as hostPorts
   server:
     exposeGossipAndRPCPorts: true # exposes the server gossip and RPC ports as hostPorts
     ports:
       # Configures the server gossip port
       serflan:
         # Note that this needs to be different than 8301, to avoid conflicting with the client gossip hostPort
         port: 9301
   ```

   This will expose the client gossip ports, the server gossip ports and the server RPC port at `hostIP:hostPort`. Note that `hostIP` is the **internal** IP of the VM that the client/server pods are deployed on.
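
   For example (a sketch; assumes the chart comes from the `hashicorp` Helm
   repository, the values above are saved as `values.yaml`, and `consul` is
   your chosen release name):

   ```shell-session
   $ helm repo add hashicorp https://helm.releases.hashicorp.com
   $ helm install consul hashicorp/consul --values values.yaml
   ```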

1. Make sure the IPs of the Kubernetes nodes are routable from the VM and
   that the VM can access ports 8301 and 9301 (for gossip) and port 8300 (for
   server RPC) on those node IPs.

1. Make sure the client and server pods running in Kubernetes can route to
   the VM's advertise IP on its gossip port (default 8301).

1. Make sure you have the `kubeconfig` file for the Kubernetes cluster in `$HOME/.kube/config` on the external VM.

1. On the external VM, run (note the addition of `host_network=true` in the retry-join argument):

   ```bash
   # $ADVERTISE_IP must be an IP on the VM that the Kubernetes pods can route to
   consul agent \
     -advertise="$ADVERTISE_IP" \
     -retry-join='provider=k8s host_network=true label_selector="app=consul,component=server"' \
     -bind=0.0.0.0 \
     -hcl='leave_on_terminate = true' \
     -hcl='ports { grpc = 8502 }' \
     -config-dir=$CONFIG_DIR \
     -datacenter=$DATACENTER \
     -data-dir=$DATA_DIR
   ```

1. Check if the join was successful by running `consul members`. Sample output:

   ```shell-session
   / $ consul members
   Node                                           Address           Status  Type    Build  Protocol  DC   Segment
   consul-consul-server-0                         10.138.0.43:9301  alive   server  1.9.1  2         dc1  <all>
   external-agent                                 10.138.0.38:8301  alive   client  1.9.0  2         dc1  <default>
   gke-external-agent-default-pool-32d15192-grs4  10.138.0.43:8301  alive   client  1.9.1  2         dc1  <default>
   gke-external-agent-default-pool-32d15192-otge  10.138.0.44:8301  alive   client  1.9.1  2         dc1  <default>
   gke-external-agent-default-pool-32d15192-vo7k  10.138.0.42:8301  alive   client  1.9.1  2         dc1  <default>
   ```

## Manual join

If you are unable to use auto-join, you can follow the instructions in
either of the auto-join sections, but instead of using a `provider` key in the
`-retry-join` flag, pass the address of at least one
Consul server, e.g.: `-retry-join=$CONSUL_SERVER_IP:$SERVER_SERFLAN_PORT`.

However, rather than hardcoding the IP, it's recommended to set up a DNS entry
that resolves to the Consul servers' pod IPs (if using the pod network) or to
the host IPs that the server pods are running on (if using host ports).
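
A sketch, with a hypothetical DNS name `consul-server.example.internal` that
resolves to the server addresses, joining on the server serf port from the
host-ports example above:

```shell-session
$ consul agent \
    -advertise="$ADVERTISE_IP" \
    -retry-join='consul-server.example.internal:9301' \
    -bind=0.0.0.0 \
    -datacenter=$DATACENTER \
    -data-dir=$DATA_DIR
```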