---
layout: docs
page_title: Adding & Removing Servers
description: >-
  Consul is designed to require minimal operator involvement; however, any
  changes to the set of Consul servers must be handled carefully. To better
  understand why, reading about the consensus protocol will be useful. In short,
  the Consul servers perform leader election and replication. For changes to be
  processed, a minimum quorum of servers (N/2)+1 must be available. That means
  if there are 3 server nodes, at least 2 must be available.
---
# Adding & Removing Servers

Consul is designed to require minimal operator involvement; however, any changes
to the set of Consul servers must be handled carefully. To better understand
why, reading about the [consensus protocol](/docs/internals/consensus) will
be useful. In short, the Consul servers perform leader election and replication.
For changes to be processed, a minimum quorum of servers (N/2)+1 must be available.
That means if there are 3 server nodes, at least 2 must be available.
In general, if you are ever adding and removing nodes simultaneously, it is better
to first add the new nodes and then remove the old nodes.

In this guide, we will cover the different methods for adding and removing servers.

## Manually Add a New Server

Manually adding new servers is generally straightforward: start the new
agent with the `-server` flag. At this point the server will not be a member of
any cluster, and should emit something like:
```shell
$ consul agent -server
[WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
```
This means that it does not know about any peers and is not configured to elect itself.
This is expected, and we can now add this node to the existing cluster using `join`.
From the new server, we can join any member of the existing cluster:
```shell
$ consul join <Existing Node Address>
Successfully joined cluster by contacting 1 nodes.
```
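To confirm the join succeeded, you can list the members the agent knows about; the exact columns vary a bit between Consul versions:

```shell
$ consul members
```

The new server should appear with a `Type` of `server` and a `Status` of `alive`.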
It is important to note that any node, including a non-server, may be specified for
the join. Generally, this method is good for testing purposes but is not recommended for production
deployments. For production clusters, you will likely want to use the agent configuration
option to add additional servers.

## Add a Server with Agent Configuration

In production environments, you should use the [agent configuration](https://www.consul.io/docs/agent/options.html) option `retry_join`. `retry_join` can be used as a command-line flag or in the agent configuration file.
With the Consul CLI:

```shell
$ consul agent -retry-join=52.10.110.11 -retry-join=52.10.110.12 -retry-join=52.10.100.13
```
In the agent configuration file:

```json
{
"bootstrap": false,
"bootstrap_expect": 3,
"server": true,
"retry_join": ["52.10.110.11", "52.10.110.12", "52.10.100.13"]
}
```
[`retry_join`](https://www.consul.io/docs/agent/options.html#retry-join)
will ensure that if any server loses connection
with the cluster for any reason, including the node restarting, it can
rejoin when it comes back. In addition to working with static IPs, it
can also be used with other discovery mechanisms, such as cloud auto-joining
based on instance metadata. Both servers and clients can use this method.
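For example, with cloud auto-join the `retry_join` values are provider strings rather than IP addresses. The sketch below assumes AWS EC2 instances tagged `consul-server=true` and an agent whose credentials allow it to describe instances; substitute whatever tag key and value your environment actually uses:

```shell
# Example only: the tag key and value here are assumptions for illustration.
$ consul agent -server -retry-join "provider=aws tag_key=consul-server tag_value=true"
```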
### Server Coordination

To ensure Consul servers are joining the cluster properly, you should monitor
server coordination. The gossip protocol is used to discover all
the nodes in the cluster. Once the node has joined, the existing cluster
leader should log something like:
```text
[INFO] raft: Added peer 127.0.0.2:8300, starting replication
```
This means that Raft, the underlying consensus protocol, has added the peer and begun
replicating state. Since the existing cluster may be very far ahead, it can take some
time for the new node to catch up. To check on this, run `info` on the leader:
```text
$ consul info
...
raft:
applied_index = 47244
commit_index = 47244
fsm_pending = 0
last_log_index = 47244
last_log_term = 21
last_snapshot_index = 40966
last_snapshot_term = 20
num_peers = 4
state = Leader
term = 21
...
```
This will provide various information about the state of Raft. In particular,
the `last_log_index` shows the index of the last log entry on disk. The same `info` command
can be run on the new server to see how far behind it is. Eventually the server
will be caught up, and the values should match.
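Another way to watch a new server become a voting member is the Raft operator subcommand, available in Consul 0.7 and later (if ACLs are enabled, it requires operator read permission):

```shell
$ consul operator raft list-peers
```

Once the new server has caught up, it should be reported as a voter.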
It is best to add servers one at a time, allowing them to catch up. This avoids
the possibility of data loss in case the existing servers fail while bringing
the new servers up-to-date.
## Manually Remove a Server
Removing servers must be done carefully to avoid causing an availability outage.
For a cluster of N servers, at least (N/2)+1 must be available for the cluster
to function. See this [deployment table](/docs/internals/consensus#toc_4).
If you have 3 servers and 1 of them is currently failing, removing any other servers
will cause the cluster to become unavailable.
To avoid this, it may be necessary to first add new servers to the cluster,
increasing the failure tolerance of the cluster, and then to remove old servers.
Even if all 3 nodes are functioning, removing one leaves the cluster in a state
that cannot tolerate the failure of any node.
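One way to check how many failures the cluster can currently absorb is the autopilot health endpoint, available in Consul 0.8 and later; the example below assumes the HTTP API is reachable on the default local port:

```shell
# The response includes a FailureTolerance field; it should be at least 1
# before you remove a server.
$ curl http://localhost:8500/v1/operator/autopilot/health
```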
Once you have verified the existing servers are healthy, and that the cluster
can handle a node leaving, the actual process is simple: issue a
`leave` command on the server you want to remove.
```shell
$ consul leave
```
The server that is leaving should log something like:
```text
...
[INFO] consul: server starting leave
...
[INFO] raft: Removed ourself, transitioning to follower
...
```
The leader should also emit various logs, including:
```text
...
[INFO] consul: member 'node-10-0-1-8' left, deregistering
[INFO] raft: Removed peer 10.0.1.8:8300, stopping replication
...
```
At this point the node has been gracefully removed from the cluster, and
will shut down.
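If you want to double-check that the Raft configuration no longer includes the departed server, you can query the status endpoint from any remaining server, shown here against the default local HTTP port:

```shell
# Returns the addresses of the current Raft peers; the removed server
# should no longer be listed.
$ curl http://localhost:8500/v1/status/peers
```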
~> Explicitly running `consul leave` on a server reduces the quorum size. Even if the cluster used `bootstrap_expect` to set a quorum size initially, issuing `consul leave` on a server reconfigures the cluster to have fewer servers. This means you could end up with just one server that is still able to commit writes because quorum is only 1, but those writes might be lost if that server fails before more are added.
To remove all agents that accidentally joined the wrong set of servers, clear out the contents of the data directory (`-data-dir`) on both client and server nodes.
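The exact path depends on how the agent was started; the directory below is only an example, so substitute your own `-data-dir` value. Stop the agent first, and note the warning below before doing this on any server node:

```shell
# Hypothetical data directory used for illustration only.
$ rm -rf /opt/consul/data/*
```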
These graceful methods to remove servers assume you have a healthy cluster.
If the cluster has no leader due to loss of quorum or data corruption, you should
plan for [outage recovery](/docs/guides/outage#manual-recovery-using-peers-json).
!> **WARNING** Removing data on server nodes will destroy all state in the cluster.
## Manual Forced Removal
In some cases, it may not be possible to gracefully remove a server. For example,
if the server simply fails, there is no way for it to issue a graceful leave. Instead,
the cluster will detect the failure and replication will continuously retry.
If the server can be recovered, it is best to bring it back online and then gracefully
leave the cluster. However, if this is not a possibility, the `force-leave` command
can be used to force removal of a server.
```shell
$ consul force-leave <node>
```
Invoke the command with the name of the failed node. At this point,
the cluster leader will mark the node as having left the cluster and will stop attempting to replicate to it.
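The same operation is also available through the agent's HTTP API, which can be useful when the Consul binary is not available on the machine you are working from; this assumes the default API address:

```shell
# Equivalent to the CLI command above; <node> is the name of the failed node.
$ curl --request PUT http://localhost:8500/v1/agent/force-leave/<node>
```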
## Summary

In this guide we learned the straightforward process of adding and removing servers, including
manually adding servers, adding servers through the agent configuration, gracefully removing
servers, and forcing removal of servers. Finally, we should restate that manually adding servers
is good for testing purposes; however, for production it is recommended to add servers with
the agent configuration.