---
layout: docs
page_title: Upgrading to Latest 1.2.x
description: >-
  Specific versions of Consul may have additional information about the upgrade
  process beyond the standard flow.
---

# Upgrading to Latest 1.2.x

## Introduction
This guide explains how to best upgrade a multi-datacenter Consul deployment that is using
a version of Consul >= 0.8.5 and < 1.2.4 while maintaining replication. If you are on a version
older than 0.8.5, but are in the 0.8.x series, please upgrade to 0.8.5 by following our
[General Upgrade Process](/docs/upgrading/instructions/general-process). If you are on a version
older than 0.8.0, please [contact support](https://support.hashicorp.com).

In this guide, we will be using an example with two datacenters (DCs) and will be
referring to them as DC1 and DC2. DC1 will be the primary datacenter.

## Requirements
- All Consul servers should be on a version of Consul >= 0.8.5 and < 1.2.4.
- You need a Consul cluster with at least 3 nodes to perform this upgrade as documented. If
  you either have a single node cluster or several single node clusters joined via WAN, the
  servers will come up in a `No cluster leader` loop after upgrading. If that happens, you will
  need to recover the cluster using the method described [here](/consul/tutorials/datacenter-operations/recovery-outage?utm_source=docs#manual-recovery-using-peers-json)
  (see the sketch after this list). You can avoid this issue entirely by growing your cluster
  to 3 nodes prior to upgrading.
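
If you do hit the `No cluster leader` loop, the linked recovery method boils down to stopping
every server, writing a `peers.json` file into the `raft/` directory under each server's data
directory, and restarting. The sketch below assumes Raft protocol 3; on Raft protocol 2 the file
is instead a plain JSON array of `"ip:port"` strings. The IDs (taken from each server's `node-id`
file) and addresses here are illustrative:

```json
[
  {
    "id": "adf4238a-882b-9ddc-4a9d-5b6758e4159e",
    "address": "10.1.0.1:8300",
    "non_voter": false
  },
  {
    "id": "8b6dda82-3103-11e7-93ae-92361f002671",
    "address": "10.1.0.2:8300",
    "non_voter": false
  },
  {
    "id": "97e17742-3103-11e7-93ae-92361f002671",
    "address": "10.1.0.3:8300",
    "non_voter": false
  }
]
```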
## Assumptions
This guide makes the following assumptions:

- You have at least two datacenters configured and have ACL replication enabled (see the
  configuration sketch after this list). If you are not using multiple datacenters, you can
  follow along and simply skip the instructions related to replication.
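
For reference, with the legacy ACL system used by these versions, replication is enabled by
pointing every server at the primary DC via `acl_datacenter` and giving the DC2 servers a
replication token. A minimal sketch of the relevant server config in DC2, where the token value
is a placeholder for a token created in DC1:

```json
{
  "acl_datacenter": "dc1",
  "acl_replication_token": "<token-created-in-dc1>"
}
```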
## Considerations
Not many major changes are likely to cause issues when upgrading from 1.0.8, but notable changes
are called out in our [Specific Version Details](/docs/upgrading/upgrade-specific#consul-1-1-0)
page. You can find more granular details in the full [changelog](https://github.com/hashicorp/consul/blob/main/CHANGELOG.md#124-november-27-2018).
Reviewing these changes prior to upgrading is highly recommended.
## Procedure
**1.** Check replication status in DC1 by issuing the following curl command from a
Consul server in that DC:

```shell-session
$ curl --silent --header "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```

You should receive output similar to this:

```json
{
  "Enabled": false,
  "Running": false,
  "SourceDatacenter": "",
  "ReplicatedIndex": 0,
  "LastSuccess": "0001-01-01T00:00:00Z",
  "LastError": "0001-01-01T00:00:00Z"
}
```

-> The primary datacenter (indicated by `acl_datacenter`) will always show as having replication
disabled, so this is normal even if replication is happening.

**2.** Check replication status in DC2 by issuing the following curl command from a
Consul server in that DC:

```shell-session
$ curl --silent --header "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```

You should receive output similar to this:

```json
{
  "Enabled": true,
  "Running": true,
  "SourceDatacenter": "dc1",
  "ReplicatedIndex": 9,
  "LastSuccess": "2020-09-10T21:16:15Z",
  "LastError": "0001-01-01T00:00:00Z"
}
```

**3.** Upgrade the Consul agents in all DCs to version 1.2.4 by following our [General Upgrade Process](/docs/upgrading/instructions/general-process).
This should be done one DC at a time, leaving the primary DC for last. A quick version check you
can run between DCs is sketched below.
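
One quick way to confirm a DC is fully upgraded before moving on to the next is `consul members`,
whose `Build` column reports each agent's version. The node names and addresses below are
illustrative:

```shell-session
$ consul members
Node    Address        Status  Type    Build  Protocol  DC   Segment
node-1  10.2.0.1:8301  alive   server  1.2.4  2         dc2  <all>
node-2  10.2.0.2:8301  alive   server  1.2.4  2         dc2  <all>
node-3  10.2.0.3:8301  alive   server  1.2.4  2         dc2  <all>
```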
**4.** Confirm that replication is still working in DC2 by issuing the following curl command from a
Consul server in that DC:

```shell-session
$ curl --silent --header "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```

You should receive output similar to this:

```json
{
  "Enabled": true,
  "Running": true,
  "SourceDatacenter": "dc1",
  "ReplicatedIndex": 9,
  "LastSuccess": "2020-09-10T21:16:15Z",
  "LastError": "0001-01-01T00:00:00Z"
}
```
## Post-Upgrade Configuration Changes
If you moved from a pre-1.0.0 version of Consul, you will find that _many_ of the configuration
options were renamed. Backwards compatibility has been maintained, so your old config options
will continue working after upgrading, but you will want to update them now to avoid issues when
moving to newer versions.

The full list of changes is available here:

- [Upgrade Specific Versions: Consul 1.0 - Deprecated Options](/docs/upgrading/upgrade-specific#deprecated-options-have-been-removed)
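
As one example of the kind of rename involved (your own changes will depend on which deprecated
options you use), the old top-level telemetry settings such as `statsd_addr` moved under a
`telemetry` block:

```json
{
  "telemetry": {
    "statsd_address": "127.0.0.1:8125"
  }
}
```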
You can make sure your config changes are valid by copying your existing configuration files,
making the changes, and then verifying them by using `consul validate $CONFIG_FILE1_PATH $CONFIG_FILE2_PATH ...`.
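
For instance, with copies staged in a scratch directory (the paths here are illustrative):

```shell-session
$ cp -r /etc/consul.d /tmp/consul-staged
$ # ...edit the copies in /tmp/consul-staged to use the new option names...
$ consul validate /tmp/consul-staged
Configuration is valid!
```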
Once your config is passing the validation check, replace your old config files with the new ones
and slowly roll your cluster again one server at a time – leaving the leader agent for last in each
datacenter.