---
layout: docs
page_title: 'Commands: Keyring'
sidebar_title: 'keyring'
sidebar_current: docs-commands-keyring
---

# Consul Keyring

Command: `consul keyring`

The `keyring` command is used to examine and modify the encryption keys used in
Consul's [Gossip Pools](/docs/internals/gossip.html). It is capable of
distributing new encryption keys to the cluster, retiring old encryption keys,
and changing the keys used by the cluster to encrypt messages.

Consul allows multiple encryption keys to be in use simultaneously. This is
intended to provide a transition state while the cluster converges. It is the
responsibility of the operator to ensure that only the required encryption keys
are installed on the cluster. You can review the installed keys using the
`-list` argument, and remove unneeded keys with `-remove` (a full rotation
example follows the output example below).

All operations performed by this command can only be run against server nodes,
and they affect both the LAN and WAN keyrings in lock-step. The only exception
is the `-list` operation: you may set the `-local-only` flag to query only the
local server nodes.

All variations of the `keyring` command return 0 if all nodes reply and there
are no errors. If any node fails to reply or reports failure, the exit code
will be 1.

## Usage

Usage: `consul keyring [options]`

Only one actionable argument may be specified per run, including `-list`,
`-install`, `-remove`, and `-use`.

#### API Options

@include 'http_api_options_client.mdx'

#### Command Options

- `-list` - List all keys currently in use within the cluster.

- `-install` - Install a new encryption key. This will broadcast the new key to
  all members in the cluster.

- `-use` - Change the primary encryption key, which is used to encrypt
  messages. The key must already be installed before this operation can
  succeed.

- `-remove` - Remove the given key from the cluster. This operation may only be
  performed on keys which are not currently the primary key.

- `-relay-factor` - Added in Consul 0.7.4, setting this to a non-zero value
  will cause nodes to relay their responses to the operation through this many
  randomly-chosen other nodes in the cluster. The maximum allowed value is 5.

- `-local-only` - Setting this to true will force a keyring list query to only
  hit local servers (no WAN traffic). Setting `local-only` is only allowed for
  list queries.

## Output

The output of the `consul keyring -list` command consolidates information from
all nodes and all datacenters to provide a simple and easy to understand view
of the cluster. The following is some example output from a cluster with two
datacenters, each of which consists of one server and one client:

```text
==> Gathering installed encryption keys...
==> Done!

  WAN:
    a1i101sMY8rxB+0eAKD/gw== [2/2]

  dc2 (LAN):
    a1i101sMY8rxB+0eAKD/gw== [2/2]

  dc1 (LAN):
    a1i101sMY8rxB+0eAKD/gw== [2/2]

  dc1 (LAN) [alpha]:
    a1i101sMY8rxB+0eAKD/gw== [2/2]
```

As you can see, the output above is divided first by gossip pool, including any
network segments, and then by encryption key. The indicator to the right of
each key displays the number of nodes the key is installed on over the total
number of nodes in the pool.
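Because only one actionable argument may be given per run, rotating to a new
encryption key takes several invocations. The following is a minimal sketch of
that sequence; the old key value is reused from the example above, the new key
comes from [`consul keygen`](/docs/commands/keygen.html), and the output shown
is abbreviated:

```shell-session
$ NEW_KEY="$(consul keygen)"
$ consul keyring -install "$NEW_KEY"
==> Installing new gossip encryption key...
$ consul keyring -use "$NEW_KEY"
==> Changing primary gossip encryption key...
$ consul keyring -remove 'a1i101sMY8rxB+0eAKD/gw=='
==> Removing gossip encryption key...
```

Running `consul keyring -list` afterwards should show only the new key in every
pool. Note that a key cannot be removed while it is still the primary, as the
error example below illustrates.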
## Errors

If any errors are encountered while performing a keyring operation, no key
information is displayed; instead, only error information is shown. The error
information is arranged in a similar fashion, organized first by datacenter,
followed by a simple list of nodes which had errors, along with the actual text
of the error. Below is sample output from the same cluster as above when we try
to do something that causes an error; in this case, trying to remove the
primary key:

```text
==> Removing gossip encryption key...

dc1 (LAN) error: 2/2 nodes reported failure

  server1: Removing the primary key is not allowed
  client1: Removing the primary key is not allowed

WAN error: 2/2 nodes reported failure

  server1.dc1: Removing the primary key is not allowed
  server2.dc2: Removing the primary key is not allowed

dc2 (LAN) error: 2/2 nodes reported failure

  server2: Removing the primary key is not allowed
  client2: Removing the primary key is not allowed
```

As you can see, each node with a failure reported what went wrong.
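Because any failed node causes the command to exit with status 1, failures like
the one above are straightforward to detect from scripts. A minimal sketch,
reusing the placeholder key from the examples above (error output elided):

```shell-session
$ consul keyring -remove 'a1i101sMY8rxB+0eAKD/gw=='
==> Removing gossip encryption key...
...
$ echo $?
1
```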