Merge branch 'master' of github.com:hashicorp/consul

commit 0df516aa52
@@ -56,7 +56,7 @@ The following guides are available:

* [Monitoring Consul with Telegraf](/docs/guides/monitoring-telegraf.html) - This guide demonstrates how to set up Consul for monitoring with Telegraf.

* [Network Segments](/docs/guides/segments.html) - Configuring Consul to support partial LAN connectivity using Network Segments.
* [Network Segments](/docs/guides/network-segments.html) - Configuring Consul to support partial LAN connectivity using Network Segments.

* [Outage Recovery](https://learn.hashicorp.com/consul/day-2-operations/advanced-operations/outage) - This guide covers recovering a cluster that has become unavailable due to server failures.
@@ -10,14 +10,9 @@ description: |-
segment.
---

# Network Segments
# Network Segments [Enterprise Only]

## Partial LAN Connectivity with Network Segments

[//]: # ( ~> The network segment functionality described here is available only in )
[//]: # ( [Consul Enterprise](https://www.hashicorp.com/products/consul/) version 0.9.3 and later. )

<%= enterprise_alert :consul %>
~> Note, the network segment functionality described here is available only in [Consul Enterprise](https://www.hashicorp.com/products/consul/) version 0.9.3 and later.

Many advanced Consul users need to run clusters with segmented networks, meaning that
not all agents can be in a full mesh. This is usually the result of business policies enforced
@@ -25,21 +20,30 @@ via network rules or firewalls. Prior to Consul 0.9.3 this was only possible thr
which for some users is too heavyweight or expensive as it requires running multiple servers per
segment.

This guide will cover the basic configuration for setting up multiple segments, as well as
how to configure a prepared query to limit service discovery to the services in the local agent's
network segment.

To complete this guide you will need to complete the
[Deployment Guide](https://learn.hashicorp.com/consul/advanced/day-1-operations/deployment-guide).

## Partial LAN Connectivity with Network Segments

By default, all Consul agents in one datacenter are part of a shared gossip pool over the LAN;
this means that the partial connectivity caused by segmented networks would cause health flapping
as nodes failed to communicate. In this guide we will cover the Network Segments feature, added
in [Consul Enterprise](https://www.hashicorp.com/products/consul/) version 0.9.3, which allows users
to configure Consul to support this kind of segmented network topology.

This guide will cover the basic configuration for setting up multiple segments, as well as
how to configure a prepared query to limit service discovery to the services in the local agent's
network segment.

## Configuring Network Segments
### Network Segments Overview

All Consul agents are part of the default network segment, `""`, unless a segment is specified in
their configuration. In a standard cluster setup all agents will normally be part of this default
segment and as a result, part of one shared LAN gossip pool. Network segments can be used to break
All Consul agents are part of the default network segment, unless a segment is specified in
their configuration. In a standard cluster setup, all agents will normally be part of this default
segment and as a result, part of one shared LAN gossip pool.

Network segments can be used to break
up the LAN gossip pool into multiple isolated smaller pools by specifying the configuration for segments
on the servers. Each desired segment must be given a name and port, as well as optionally a custom
bind and advertise address for that segment's gossip listener to bind to on the server.
@@ -62,15 +66,16 @@ agent within the same segment. If joining to a Consul server, client will need t
port for their segment along with the address of the server when performing the join (for example,
`consul agent -retry-join "consul.domain.internal:1234"`).

## Getting Started
## Setup Network Segments

To get started, follow the [bootstrapping guide](/docs/guides/bootstrapping.html) to
start a server or group of servers, with the following section added to the configuration (you may need to
adjust the bind/advertise addresses for your setup):
### Configure Consul Servers

To get started,
start a server or group of servers, with the following section added to the configuration. Note that you may need to
adjust the bind/advertise addresses for your setup.
```json
{
  ...
  "segments": [
    {"name": "alpha", "bind": "{{GetPrivateIP}}", "advertise": "{{GetPrivateIP}}", "port": 8303},
    {"name": "beta", "bind": "{{GetPrivateIP}}", "advertise": "{{GetPrivateIP}}", "port": 8304}
@@ -78,9 +83,9 @@ adjust the bind/advertise addresses for your setup):
}
```
You should see a log message on the servers for each segment's listener as the agent starts up:
You should see a log message on the servers for each segment's listener as the agent starts up.

```text
```sh
2017/08/30 19:05:13 [INFO] serf: EventMemberJoin: server1.dc1 192.168.0.4
2017/08/30 19:05:13 [INFO] serf: EventMemberJoin: server1 192.168.0.4
2017/08/30 19:05:13 [INFO] consul: Started listener for LAN segment "alpha" on 192.168.0.4:8303
@@ -89,38 +94,40 @@ You should see a log message on the servers for each segment's listener as the a
2017/08/30 19:05:13 [INFO] serf: EventMemberJoin: server1 192.168.0.4
```
Running `consul members` should show the server as being part of all segments:
Running `consul members` should show the server as being part of all segments.

```text
```sh
(server1) $ consul members
Node     Address           Status  Type    Build      Protocol  DC   Segment
server1  192.168.0.4:8301  alive   server  0.9.3+ent  2         dc1  <all>
```
Next, start a client agent in the 'alpha' segment, with `-join` set to the server's segment
address/port for that segment:
### Configure Consul Clients in Different Network Segments

```text
Next, start a client agent in the 'alpha' segment, with `-join` set to the server's segment
address/port for that segment.

```sh
(client1) $ consul agent ... -join 192.168.0.4:8303 -node client1 -segment alpha
```
After the join is successful, we should see the client show up by running the `consul members` command
on the server again:
on the server again.

```text
```sh
(server1) $ consul members
Node     Address           Status  Type    Build      Protocol  DC   Segment
server1  192.168.0.4:8301  alive   server  0.9.3+ent  2         dc1  <all>
client1  192.168.0.5:8301  alive   client  0.9.3+ent  2         dc1  alpha
```
Now join another client in segment 'beta' and run the `consul members` command another time:
Now join another client in segment 'beta' and run the `consul members` command another time.

```text
```sh
(client2) $ consul agent ... -join 192.168.0.4:8304 -node client2 -segment beta
```

```text
```sh
(server1) $ consul members
Node     Address           Status  Type    Build      Protocol  DC   Segment
server1  192.168.0.4:8301  alive   server  0.9.3+ent  2         dc1  <all>
@@ -128,10 +135,12 @@ client1 192.168.0.5:8301 alive client 0.9.3+ent 2 dc1 alpha
client2  192.168.0.6:8301  alive   client  0.9.3+ent  2         dc1  beta
```
If we pass the `-segment` flag when running `consul members`, we can limit the view to agents
in a specific segment:
### Filter Segmented Nodes

```text
If we pass the `-segment` flag when running `consul members`, we can limit the view to agents
in a specific segment.

```sh
(server1) $ consul members -segment alpha
Node     Address           Status  Type    Build      Protocol  DC   Segment
client1  192.168.0.5:8301  alive   client  0.9.3+ent  2         dc1  alpha
@@ -139,9 +148,9 @@ server1 192.168.0.4:8303 alive server 0.9.3+ent 2 dc1 alpha
```
Using the `consul catalog nodes` command, we can filter on an internal metadata key,
`consul-network-segment`, which stores the network segment of the node:
`consul-network-segment`, which stores the network segment of the node.

```text
```sh
(server1) $ consul catalog nodes -node-meta consul-network-segment=alpha
Node     ID        Address      DC
client1  4c29819c  192.168.0.5  dc1
@@ -150,25 +159,31 @@ client1 4c29819c 192.168.0.5 dc1
With this metadata key, we can construct a [Prepared Query](/api/query.html) that can be used
for DNS to return only services within the same network segment as the local agent.

First, register a service on each of the client nodes:
## Configure a Prepared Query to Limit Service Discovery

```text
### Create Services

First, register a service on each of the client nodes.

```sh
(client1) $ curl \
    --request PUT \
    --data '{"Name": "redis", "Port": 8000}' \
    localhost:8500/v1/agent/service/register
```

```text
```sh
(client2) $ curl \
    --request PUT \
    --data '{"Name": "redis", "Port": 9000}' \
    localhost:8500/v1/agent/service/register
```

Next, write the following to `query.json` and create the query using the HTTP endpoint:
### Create the Prepared Query

```text
Next, write the following to `query.json` and create the query using the HTTP endpoint.

```sh
(server1) $ curl \
    --request POST \
    --data \
@@ -186,12 +201,14 @@ Next, write the following to `query.json` and create the query using the HTTP en
{"ID":"6f49dd24-de9b-0b6c-fd29-525eca069419"}
```
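The `query.json` payload itself falls outside the hunk shown above. As a minimal sketch only, assuming the `consul-network-segment` node-meta key used earlier and Consul's `${agent.segment}` prepared query template variable (not the guide's literal payload), such a query might look like this:

```sh
# Hypothetical sketch -- not the literal payload elided by the diff above.
# A catch-all query template that returns only service instances whose node
# is in the same network segment as the agent answering the query.
(server1) $ cat <<'EOF' > query.json
{
  "Name": "",
  "Template": {
    "Type": "name_prefix_match"
  },
  "Service": {
    "Service": "${name.full}",
    "NodeMeta": {"consul-network-segment": "${agent.segment}"}
  }
}
EOF
```

With a query of this shape, `redis.query.consul` on each client would resolve only to the `redis` instance registered in that client's own segment, which is consistent with the dig output below.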
### Test the Segments with DNS Lookups

Now, we can replace any DNS lookups of the form `<service>.service.consul` with
`<service>.query.consul` to look up only services within the same network segment:
`<service>.query.consul` to look up only services within the same network segment.

**Client 1:**

```text
```sh
(client1) $ dig @127.0.0.1 -p 8600 redis.query.consul SRV

; <<>> DiG 9.8.3-P1 <<>> @127.0.0.1 -p 8600 redis.query.consul SRV
@@ -214,7 +231,7 @@ client1.node.dc1.consul. 0 IN A 192.168.0.5

**Client 2:**

```text
```sh
(client2) $ dig @127.0.0.1 -p 8600 redis.query.consul SRV

; <<>> DiG 9.8.3-P1 <<>> @127.0.0.1 -p 8600 redis.query.consul SRV
@@ -234,3 +251,10 @@ redis.query.consul. 0 IN SRV 1 1 9000 client2.node.dc1.consul.
;; ADDITIONAL SECTION:
client2.node.dc1.consul. 0 IN A 192.168.0.6
```
## Summary

In this guide you configured the Consul agents to participate in partial
LAN gossip based on network segments. You then set up a couple of services and
a prepared query to test the segments.
@@ -36,7 +36,40 @@ EOF
```

-> **Note:** The `stubDomain` can only point to a static IP. If the cluster IP
of the `consul-dns` service changes, then it must be updated to continue
of the `consul-dns` service changes, then it must be updated in the config map to
match the new service IP for this to continue
working. This can happen if the service is deleted and recreated, such as
in full cluster rebuilds.
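If the IP does change, a quick way to see the new value before correcting the config map is a `kubectl` lookup. This is a sketch only; the service name and default namespace below are assumptions based on the text above:

```sh
# Sketch: print the current cluster IP of the consul-dns service so the
# stubDomains entry in the kube-dns ConfigMap can be updated to match it.
kubectl get svc consul-dns -o jsonpath='{.spec.clusterIP}'
```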
## CoreDNS Configuration

If you are using CoreDNS instead of kube-dns in your Kubernetes cluster, you will
need to update your existing `coredns` ConfigMap in the `kube-system` namespace to
include a proxy definition for `consul` that points to the cluster IP of the
`consul-dns` service.

```
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
      <Existing CoreDNS definition>
    }
    consul {
      errors
      cache 30
      proxy . <consul-dns service cluster ip>
    }
```

-> **Note:** The consul proxy can only point to a static IP. If the cluster IP
of the `consul-dns` service changes, then it must be updated to the new IP to continue
working. This can happen if the service is deleted and recreated, such as
in full cluster rebuilds.
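The same approach applies here: look up the service's current cluster IP, then update the `proxy` line to point at it. A sketch only; the service name, namespaces, and ConfigMap name are assumptions from the text above:

```sh
# Sketch: fetch the new consul-dns cluster IP, then edit the coredns
# ConfigMap so its proxy line points at that address.
kubectl get svc consul-dns -o jsonpath='{.spec.clusterIP}'
kubectl edit configmap coredns --namespace kube-system
```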
@@ -472,7 +472,7 @@
<a href="/docs/guides/monitoring-telegraf.html">Monitoring Consul with Telegraf</a>
</li>
<li<%= sidebar_current("docs-guides-segments") %>>
<a href="/docs/guides/segments.html">Network Segments</a>
<a href="/docs/guides/network-segments.html">Network Segments</a>
</li>
<li<%= sidebar_current("docs-guides-outage") %>>
<a href="https://learn.hashicorp.com/consul/day-2-operations/advanced-operations/outage">Outage Recovery</a>