---
layout: "intro"
page_title: "Consul Cluster"
sidebar_current: "gettingstarted-join"
description: >
  When a Consul agent is started, it begins as an isolated cluster of its own.
  To learn about other cluster members, the agent must join one or more other
  nodes using a provided join address. In this step, we will set up a two-node
  cluster and join the nodes together.
---
# Consul Cluster
We've started our first agent and registered and queried a service on that
agent. This showed how easy it is to use Consul but didn't show how this could
be extended to a scalable, production-grade service discovery infrastructure.
In this step, we'll create our first real cluster with multiple members.
When a Consul agent is started, it begins without knowledge of any other node:
it is an isolated cluster of one. To learn about other cluster members, the
agent must _join_ an existing cluster. To do so, it only needs to know about
a _single_ existing member; after joining, the agent will gossip with this
member and quickly discover the other members in the cluster. A Consul agent
can join any other agent, not just agents in server mode.

## Starting the Agents

To simulate a more realistic cluster, we will start a two-node cluster via
[Vagrant](https://www.vagrantup.com/). The Vagrantfile we will be using can
be found in the [demo section of the Consul repo](https://github.com/hashicorp/consul/tree/master/demo/vagrant-cluster).
We first boot our two nodes:

```text
$ vagrant up
```

Once the systems are available, we can SSH into them to begin configuration
of our cluster. We start by logging in to the first node:

```text
$ vagrant ssh n1
```
In our previous examples, we used the [`-dev`
flag](/docs/agent/options.html#_dev) to quickly set up a development server.
However, this is not sufficient for use in a clustered environment. We will
omit the `-dev` flag from here on, and instead specify our clustering flags as
outlined below.
Each node in a cluster must have a unique name. By default, Consul uses the
hostname of the machine, but we'll manually override it using the [`-node`
command-line option](/docs/agent/options.html#_node).

We will also specify a [`bind` address](/docs/agent/options.html#_bind):
this is the address that Consul listens on, and it *must* be accessible by
all other nodes in the cluster. While a `bind` address is not strictly
necessary, it's always best to provide one. Consul will by default attempt
to listen on all IPv4 interfaces on a system, but will fail to start with an
error if multiple private IPs are found. Since production servers often
have multiple interfaces, specifying a `bind` address assures that you will
never bind Consul to the wrong interface.
The first node will act as our sole server in this cluster, and we indicate
this with the [`server` switch](/docs/agent/options.html#_server).

The [`-bootstrap-expect` flag](/docs/agent/options.html#_bootstrap_expect)
hints to the Consul server the number of additional server nodes we are
expecting to join. The purpose of this flag is to delay the bootstrapping of
the replicated log until the expected number of servers has successfully joined.
You can read more about this in the [bootstrapping
guide](/docs/guides/bootstrapping.html).

Finally, we add the [`-config-dir` flag](/docs/agent/options.html#_config_dir),
marking where service and check definitions can be found.
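As a reminder, the files in this directory are plain JSON service and check
definitions. A minimal, hypothetical `web.json` might look like this (the
service name and port are just examples, not something this step requires):

```javascript
{
  "service": {
    "name": "web",
    "port": 80
  }
}
```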
All together, these settings yield a
[`consul agent`](/docs/commands/agent.html) command like this:

```text
vagrant@n1:~$ consul agent -server -bootstrap-expect=1 \
    -data-dir=/tmp/consul -node=agent-one -bind=172.20.20.10 \
    -config-dir=/etc/consul.d
...
```

Now, in another terminal, we will connect to the second node:

```text
$ vagrant ssh n2
```
This time, we set the [`bind` address](/docs/agent/options.html#_bind)
to match the IP of the second node as specified in the Vagrantfile
and the [`node` name](/docs/agent/options.html#_node) to be `agent-two`.
Since this node will not be a Consul server, we don't provide a
[`server` switch](/docs/agent/options.html#_server).

All together, these settings yield a
[`consul agent`](/docs/commands/agent.html) command like this:

```text
vagrant@n2:~$ consul agent -data-dir=/tmp/consul -node=agent-two \
    -bind=172.20.20.11 -config-dir=/etc/consul.d
...
```
At this point, you have two Consul agents running: one server and one client.
The two Consul agents still don't know anything about each other and are each
part of their own single-node clusters. You can verify this by running
[`consul members`](/docs/commands/members.html) against each agent and noting
that only one member is visible to each agent.
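For example, before any join, the first agent sees only itself. The output
will be along these lines (the Build and Protocol columns will reflect your
Consul version):

```text
vagrant@n1:~$ consul members
Node       Address            Status  Type    Build  Protocol
agent-one  172.20.20.10:8301  alive   server  0.5.0  2
```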

## Joining a Cluster

Now, we'll tell the first agent to join the second agent by running
the following commands in a new terminal:

```text
$ vagrant ssh n1
...
vagrant@n1:~$ consul join 172.20.20.11
Successfully joined cluster by contacting 1 nodes.
```

You should see some log output in each of the agent logs. If you read
carefully, you'll see that they received join information. If you
run [`consul members`](/docs/commands/members.html) against each agent,
you'll see that both agents now know about each other:

```text
vagrant@n2:~$ consul members
Node       Address            Status  Type    Build  Protocol
agent-two  172.20.20.11:8301  alive   client  0.5.0  2
agent-one  172.20.20.10:8301  alive   server  0.5.0  2
```

-> **Remember:** To join a cluster, a Consul agent only needs to
learn about <em>one existing member</em>. After joining the cluster, the
agents gossip with each other to propagate full membership information.
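If you ever need to consume this table from a script, its whitespace-aligned
columns are easy to split apart. The following Python sketch (our own helper,
not part of Consul) parses the sample output shown above:

```python
# Parse the whitespace-aligned table printed by `consul members`.
# None of the values in this table contain spaces, so splitting each
# row on runs of whitespace is sufficient.
SAMPLE = """\
Node       Address            Status  Type    Build  Protocol
agent-two  172.20.20.11:8301  alive   client  0.5.0  2
agent-one  172.20.20.10:8301  alive   server  0.5.0  2
"""

def parse_members(text):
    lines = [line for line in text.splitlines() if line.strip()]
    headers = lines[0].split()
    # Zip the header row with each data row to build one dict per member.
    return [dict(zip(headers, row.split())) for row in lines[1:]]

members = parse_members(SAMPLE)
print([m["Node"] for m in members if m["Type"] == "server"])
# prints ['agent-one']
```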

## Auto-joining a Cluster on Start

Ideally, whenever a new node is brought up in your datacenter, it should automatically join the Consul cluster without human intervention. Consul facilitates auto-join by enabling the auto-discovery of instances in AWS or Google Cloud with a given tag key/value. To use the integration, add the [`retry_join_ec2`](/docs/agent/options.html#retry_join_ec2) or the [`retry_join_gce`](/docs/agent/options.html#retry_join_gce) nested object to your Consul configuration file. This will allow a new node to join the cluster without any hardcoded configuration. Alternatively, you can join a cluster at startup using the [`-join` flag](/docs/agent/options.html#_join) or [`start_join` setting](/docs/agent/options.html#start_join) with hardcoded addresses of other known Consul agents.
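For example, a configuration file enabling EC2 auto-discovery might contain a
fragment along these lines (the tag key and value here are made-up examples;
see the option documentation for the full set of sub-keys):

```javascript
{
  "retry_join_ec2": {
    "tag_key": "consul_cluster",
    "tag_value": "demo"
  }
}
```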
## Querying Nodes

Just like querying services, Consul has an API for querying the
nodes themselves. You can do this via the DNS or HTTP API.

For the DNS API, the structure of the names is `NAME.node.consul` or
`NAME.node.DATACENTER.consul`. If the datacenter is omitted, Consul
will only search the local datacenter.
For example, from "agent-one", we can query for the address of the
node "agent-two":

```text
vagrant@n1:~$ dig @127.0.0.1 -p 8600 agent-two.node.consul
...
;; QUESTION SECTION:
;agent-two.node.consul.		IN	A

;; ANSWER SECTION:
agent-two.node.consul.	0	IN	A	172.20.20.11
```
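The same lookup is available over the HTTP API via the node catalog endpoint
(a sketch, assuming the agent's HTTP interface is listening on the default
port 8500):

```text
vagrant@n1:~$ curl http://localhost:8500/v1/catalog/node/agent-two
```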

The ability to look up nodes in addition to services is incredibly
useful for system administration tasks. For example, knowing the address
of the node to SSH into is as easy as making the node a part of the
Consul cluster and querying it.
## Leaving a Cluster

To leave the cluster, you can either gracefully quit an agent (using
`Ctrl-C`) or force kill one of the agents. Gracefully leaving allows
the node to transition into the _left_ state; otherwise, other nodes
will detect it as having _failed_. The difference is covered
in more detail [here](/intro/getting-started/agent.html#stopping).
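A graceful leave can also be triggered explicitly with the
[`consul leave`](/docs/commands/leave.html) command against a running agent:

```text
vagrant@n2:~$ consul leave
```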

## Next Steps

We now have a multi-node Consul cluster up and running. Let's make
our services more robust by giving them [health checks](checks.html)!