diff --git a/website/content/api-docs/acl-legacy.mdx b/website/content/api-docs/acl-legacy.mdx
index 2cdf0e0491..678b60ce32 100644
--- a/website/content/api-docs/acl-legacy.mdx
+++ b/website/content/api-docs/acl-legacy.mdx
@@ -210,7 +210,7 @@ $ curl \
### Sample Response
-```json
+```text
true
```
diff --git a/website/content/api-docs/acl/index.mdx b/website/content/api-docs/acl/index.mdx
index e31bf14723..ecfac75e11 100644
--- a/website/content/api-docs/acl/index.mdx
+++ b/website/content/api-docs/acl/index.mdx
@@ -204,7 +204,7 @@ The table below shows this endpoint's support for
### Sample Payload
-```text
+```hcl
agent "" {
policy = "read"
}
@@ -218,7 +218,7 @@ $ curl -X POST -d @rules.hcl http://127.0.0.1:8500/v1/acl/rules/translate
### Sample Response
-```text
+```hcl
agent_prefix "" {
policy = "read"
}
@@ -257,7 +257,7 @@ $ curl -X GET http://127.0.0.1:8500/v1/acl/rules/translate/4f48f7e6-9359-4890-8e
### Sample Response
-```text
+```hcl
agent_prefix "" {
policy = "read"
}
diff --git a/website/content/api-docs/agent/index.mdx b/website/content/api-docs/agent/index.mdx
index 3332814fc7..cc4d903696 100644
--- a/website/content/api-docs/agent/index.mdx
+++ b/website/content/api-docs/agent/index.mdx
@@ -601,7 +601,7 @@ $ curl \
### Sample Response
-```text
+```log
YYYY/MM/DD HH:MM:SS [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:127.0.0.1:8300 Address:127.0.0.1:8300}]
YYYY/MM/DD HH:MM:SS [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state (Leader: "")
YYYY/MM/DD HH:MM:SS [INFO] serf: EventMemberJoin: machine-osx 127.0.0.1
diff --git a/website/content/api-docs/coordinate.mdx b/website/content/api-docs/coordinate.mdx
index 9f47924764..f3ff23bae7 100644
--- a/website/content/api-docs/coordinate.mdx
+++ b/website/content/api-docs/coordinate.mdx
@@ -213,7 +213,7 @@ The table below shows this endpoint's support for
### Sample Payload
-```text
+```json
{
"Node": "agent-one",
"Segment": "",
diff --git a/website/content/commands/snapshot/agent.mdx b/website/content/commands/snapshot/agent.mdx
index 2be0cf4367..2be5a849b2 100644
--- a/website/content/commands/snapshot/agent.mdx
+++ b/website/content/commands/snapshot/agent.mdx
@@ -33,7 +33,7 @@ leader and starting saving snapshots.
As snapshots are saved, they will be reported in the log produced by the agent:
-```text
+```log
2016/11/16 21:21:13 [INFO] Snapshot agent running
2016/11/16 21:21:13 [INFO] Waiting to obtain leadership...
2016/11/16 21:21:13 [INFO] Obtained leadership
@@ -67,8 +67,7 @@ If ACLs are enabled the following privileges are required:
The following is an example least privilege policy which allows the snapshot agent
to run on a node named `server-1234`.
-
-
+
```hcl
# Required to read and snapshot ACL data
@@ -89,9 +88,6 @@ service "consul-snapshot" {
}
```
-
-
-
```json
{
"acl": "write",
@@ -113,8 +109,7 @@ service "consul-snapshot" {
}
```
-
-
+
Additional `session` rules should be created, or `session_prefix` used, if the
snapshot agent is deployed across more than one host.
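A hedged sketch of such a broader rule using `session_prefix` (the empty prefix matches any node name, so scope it down where you can):

```hcl
# Illustrative only: allow the snapshot agent to create sessions on any node
session_prefix "" {
  policy = "write"
}
```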
diff --git a/website/content/docs/agent/config-entries.mdx b/website/content/docs/agent/config-entries.mdx
index 000a2db7f0..e95ea16f29 100644
--- a/website/content/docs/agent/config-entries.mdx
+++ b/website/content/docs/agent/config-entries.mdx
@@ -63,7 +63,9 @@ create and update configuration entries. This command will load either a JSON or
HCL file holding the configuration entry definition and then will push this
configuration to Consul.
-Example HCL Configuration File - `proxy-defaults.hcl`:
+Example HCL Configuration File:
+
+
```hcl
Kind = "proxy-defaults"
@@ -74,6 +76,8 @@ Config {
}
```
+
+
Then to apply this configuration, run:
```shell-session
diff --git a/website/content/docs/agent/options.mdx b/website/content/docs/agent/options.mdx
index ab0240cfd2..feca4c9c8d 100644
--- a/website/content/docs/agent/options.mdx
+++ b/website/content/docs/agent/options.mdx
@@ -538,7 +538,7 @@ definitions support being updated during a reload.
#### Example Configuration File
-```javascript
+```json
{
"datacenter": "east-aws",
"data_dir": "/opt/consul",
@@ -1099,24 +1099,19 @@ Valid time units are 'ns', 'us' (or 'µs'), 'ms', 's', 'm', 'h'."
within a double quoted string value must be escaped with a backslash `\`.
Some example templates:
-
-
+
```hcl
bind_addr = "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}"
```
-
-
-
```json
{
"bind_addr": "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}"
}
```
-
-
+
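Another minimal sketch, assuming the host has an interface named `eth0`:

```hcl
bind_addr = "{{ GetInterfaceIP \"eth0\" }}"
```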
- `cache` configuration for client agents. The configurable values are the following:
@@ -2312,7 +2307,7 @@ signed by the CA can be used to gain full access to Consul.
encryption and authentication. Failing to set [`verify_incoming`](#verify_incoming) or [`verify_outgoing`](#verify_outgoing)
will result in TLS not being enabled at all, even when specifying a [`ca_file`](#ca_file), [`cert_file`](#cert_file), and [`key_file`](#key_file).
-```javascript
+```json
{
"datacenter": "east-aws",
"data_dir": "/opt/consul",
@@ -2336,7 +2331,7 @@ will result in TLS not being enabled at all, even when specifying a [`ca_file`](
See, especially, the use of the `ports` setting:
-```javascript
+```json
"ports": {
"https": 8501
}
diff --git a/website/content/docs/agent/telemetry.mdx b/website/content/docs/agent/telemetry.mdx
index 8d24385696..9d4e0c4b4a 100644
--- a/website/content/docs/agent/telemetry.mdx
+++ b/website/content/docs/agent/telemetry.mdx
@@ -41,7 +41,7 @@ format or using [Prometheus](https://prometheus.io/) format.
Below is sample output of a telemetry dump:
-```text
+```log
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.num_goroutines': 19.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.alloc_bytes': 755960.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.malloc_count': 7550.000
diff --git a/website/content/docs/connect/proxies/built-in.mdx b/website/content/docs/connect/proxies/built-in.mdx
index 0d31688fea..21b84d42cb 100644
--- a/website/content/docs/connect/proxies/built-in.mdx
+++ b/website/content/docs/connect/proxies/built-in.mdx
@@ -23,7 +23,7 @@ To get started with the built-in proxy and see a working example you can follow
Below is a complete example of all the configuration options available
for the built-in proxy.
-```javascript
+```json
{
"service": {
...
diff --git a/website/content/docs/connect/registration/service-registration.mdx b/website/content/docs/connect/registration/service-registration.mdx
index d22c895dab..bbb53e7e97 100644
--- a/website/content/docs/connect/registration/service-registration.mdx
+++ b/website/content/docs/connect/registration/service-registration.mdx
@@ -385,8 +385,7 @@ To connect to a service via local Unix Domain Socket instead of a
port, add `local_bind_socket_path` and optionally `local_bind_socket_mode`
to the upstream config for a service:
-
-
+
```hcl
upstreams = [
@@ -398,9 +397,6 @@ upstreams = [
]
```
-
-
-
```json
"upstreams": [
{
@@ -411,8 +407,7 @@ upstreams = [
]
```
-
-
+
This will cause Envoy to create a socket with the path and mode
provided, and connect that to service-1.
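As a hedged illustration, once such an upstream socket exists a local process can reach service-1 through it; the socket path below is a hypothetical value for `local_bind_socket_path`:

```shell-session
$ curl --unix-socket /tmp/upstream-service-1.sock http://localhost/
```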
@@ -431,8 +426,8 @@ mesh, use either the `socket_path` field in the service definition or the
`local_service_socket_path` field in the proxy definition. These
fields are analogous to the `port` and `service_port` fields in their
respective locations.
-
-
+
+
```hcl
services {
@@ -441,9 +436,6 @@ services {
}
```
-
-
-
```json
"services": {
"name": "service-2",
@@ -451,11 +443,11 @@ services {
}
```
-
-
+
+
Or in the proxy definition:
-
-
+
+
```hcl
services {
@@ -472,9 +464,6 @@ services {
}
```
-
-
-
```json
"services": {
"name": "socket_service_2",
@@ -490,8 +479,7 @@ services {
}
```
-
-
+
There is no mode field since the service is expected to create the
socket it is listening on, not the Envoy proxy.
diff --git a/website/content/docs/discovery/checks.mdx b/website/content/docs/discovery/checks.mdx
index 336e587fae..e4967e2ef5 100644
--- a/website/content/docs/discovery/checks.mdx
+++ b/website/content/docs/discovery/checks.mdx
@@ -141,7 +141,7 @@ There are several different kinds of checks:
A script check:
-```javascript
+```json
{
"check": {
"id": "mem-util",
@@ -155,7 +155,7 @@ A script check:
An HTTP check:
-```javascript
+```json
{
"check": {
"id": "api",
@@ -174,7 +174,7 @@ A HTTP check:
A TCP check:
-```javascript
+```json
{
"check": {
"id": "ssh",
@@ -188,7 +188,7 @@ A TCP check:
A TTL check:
-```javascript
+```json
{
"check": {
"id": "web-app",
@@ -201,7 +201,7 @@ A TTL check:
A Docker check:
-```javascript
+```json
{
"check": {
"id": "mem-util",
@@ -216,7 +216,7 @@ A Docker check:
A gRPC check for the whole application:
-```javascript
+```json
{
"check": {
"id": "mem-util",
@@ -230,7 +230,7 @@ A gRPC check for the whole application:
A gRPC check for the specific `my_service` service:
-```javascript
+```json
{
"check": {
"id": "mem-util",
@@ -244,7 +244,7 @@ A gRPC check for the specific `my_service` service:
An h2ping check:
-```javascript
+```json
{
"check": {
"id": "h2ping-check",
@@ -257,7 +257,7 @@ A h2ping check:
An alias check for a local service:
-```javascript
+```json
{
"check": {
"id": "web-alias",
@@ -340,7 +340,7 @@ to be healthy. In certain cases, it may be desirable to specify the initial
state of a health check. This can be done by specifying the `status` field in a
health check definition, like so:
-```javascript
+```json
{
"check": {
"id": "mem",
@@ -361,7 +361,7 @@ that the status of the health check will only affect the health status of the
given service instead of the entire node. Service-bound health checks may be
provided by adding a `service_id` field to a check configuration:
-```javascript
+```json
{
"check": {
"id": "web-app",
@@ -387,7 +387,7 @@ to use the agent's credentials when configured for TLS.
Multiple check definitions can be defined using the `checks` (plural)
key in your configuration file.
-```javascript
+```json
{
"checks": [
{
diff --git a/website/content/docs/discovery/services.mdx b/website/content/docs/discovery/services.mdx
index f9e549c9c1..d15d9d018d 100644
--- a/website/content/docs/discovery/services.mdx
+++ b/website/content/docs/discovery/services.mdx
@@ -33,7 +33,7 @@ using the [HTTP API](/api).
A service definition is a configuration that looks like the following. This
example shows all possible fields, but note that only a few are required.
-```javascript
+```json
{
"service": {
"id": "redis",
@@ -281,7 +281,7 @@ Multiple services definitions can be provided at once when registering services
via the agent configuration by using the plural `services` key (registering
multiple services in this manner is not supported using the HTTP API).
-```javascript
+```json
{
"services": [
{
diff --git a/website/content/docs/dynamic-app-config/watches.mdx b/website/content/docs/dynamic-app-config/watches.mdx
index 560f04fc2f..f56595c680 100644
--- a/website/content/docs/dynamic-app-config/watches.mdx
+++ b/website/content/docs/dynamic-app-config/watches.mdx
@@ -126,7 +126,7 @@ This maps to the `/v1/kv/` API internally.
Here is an example configuration:
-```javascript
+```json
{
"type": "key",
"key": "foo/bar/baz",
diff --git a/website/content/docs/enterprise/audit-logging.mdx b/website/content/docs/enterprise/audit-logging.mdx
index 1fae8845c6..2c7f8b367b 100644
--- a/website/content/docs/enterprise/audit-logging.mdx
+++ b/website/content/docs/enterprise/audit-logging.mdx
@@ -45,8 +45,7 @@ The following example configures a destination called "My Sink" which stores aud
events at the file `/tmp/audit.json`. The log file will be rotated either every
24 hours, or when the log file size is greater than 25165824 bytes (24 megabytes).
-
-
+
```hcl
audit {
@@ -62,8 +61,6 @@ audit {
}
}
```
-
-
```json
{
@@ -83,8 +80,8 @@ audit {
}
}
```
-
-
+
+
@@ -93,8 +90,7 @@ audit {
The following example configures a destination called "My Sink" which emits audit
logs to standard out.
-
-
+
```hcl
audit {
@@ -107,8 +103,6 @@ audit {
}
}
```
-
-
```json
{
@@ -126,8 +120,8 @@ audit {
}
```
-
-
+
+
@@ -143,6 +137,8 @@ request.
The value of the `payload.auth.accessor_id` field is the accessor ID of the
[ACL token](/docs/security/acl/acl-system#acl-tokens) which issued the request.
+
+
```json
{
"created_at": "2020-12-08T12:30:29.196365-05:00",
@@ -169,10 +165,14 @@ The value of the `payload.auth.accessor_id` field is the accessor ID of the
}
```
+
+
After the request is processed, a corresponding log entry is written for the HTTP
response. The `stage` field is set to `OperationComplete` which indicates the agent
has completed processing the request.
+
+
```json
{
"created_at": "2020-12-08T12:30:29.202935-05:00",
@@ -201,3 +201,5 @@ has completed processing the request.
}
}
```
+
+
diff --git a/website/content/docs/enterprise/namespaces.mdx b/website/content/docs/enterprise/namespaces.mdx
index b6a7d4a507..cac17b8e71 100644
--- a/website/content/docs/enterprise/namespaces.mdx
+++ b/website/content/docs/enterprise/namespaces.mdx
@@ -29,7 +29,7 @@ The HTTP API accepts only JSON formatted definitions while the CLI will parse ei
An example namespace definition looks like the following:
-JSON:
+
```json
{
@@ -59,8 +59,6 @@ JSON:
}
```
-HCL:
-
```hcl
Name = "team-1"
Description = "Namespace for Team 1"
@@ -87,6 +85,8 @@ Meta {
}
```
+
+
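Either definition can then be pushed with the CLI; a minimal sketch with an assumed filename:

```shell-session
$ consul namespace write team-1-namespace.hcl
```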
### Fields
- `Name` `(string: <required>)` - The namespace's name must be a valid DNS hostname label.
diff --git a/website/content/docs/install/bootstrapping.mdx b/website/content/docs/install/bootstrapping.mdx
index 749bf2c7a8..7f9c355763 100644
--- a/website/content/docs/install/bootstrapping.mdx
+++ b/website/content/docs/install/bootstrapping.mdx
@@ -45,7 +45,7 @@ or specify no value at all on all the servers. Only servers that specify a value
Suppose we are starting a three server cluster. We can start `Node A`, `Node B`, and `Node C` with each
providing the `-bootstrap-expect 3` flag. Once the nodes are started, you should see a warning message in the service output.
-```text
+```log
[WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
```
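The servers are then joined to one another; a hedged sketch of the join, run from any one of the servers, with placeholder addresses:

```shell-session
$ consul join <Node A Address> <Node B Address> <Node C Address>
```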
@@ -82,7 +82,7 @@ Successfully joined cluster by contacting 3 nodes.
Since a join operation is symmetric, it does not matter which node initiates it. Once the join is successful, one of the nodes will output something like:
-```text
+```log
[INFO] consul: adding server foo (Addr: 127.0.0.2:8300) (DC: dc1)
[INFO] consul: adding server bar (Addr: 127.0.0.1:8300) (DC: dc1)
[INFO] consul: Attempting bootstrap with nodes: [127.0.0.3:8300 127.0.0.2:8300 127.0.0.1:8300]
diff --git a/website/content/docs/install/cloud-auto-join.mdx b/website/content/docs/install/cloud-auto-join.mdx
index 3362aa9936..077affac71 100644
--- a/website/content/docs/install/cloud-auto-join.mdx
+++ b/website/content/docs/install/cloud-auto-join.mdx
@@ -41,8 +41,9 @@ segment you wish to join.
For example, given the following segment configuration on the server agents:
+
+
```hcl
-# server-config.hcl
segments = [
{
name = "alpha"
@@ -59,6 +60,8 @@ segments = [
]
```
+
+
A Consul client agent wishing to join the "alpha" segment would need to be configured
to use port `8303` as its Serf LAN port prior to attempting to join the cluster.
@@ -68,13 +71,16 @@ to use port `8303` as its Serf LAN port prior to attempting to join the cluster.
The following example configuration overrides the default Serf LAN port using the
[`ports.serf_lan`](/docs/agent/options#serf_lan_port) configuration option.
+
+
```hcl
-# client-config.hcl
ports {
serf_lan = 8303
}
```
+
+
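With that override in place, the client can be started against a cloud auto-join provider string; a minimal sketch with hypothetical AWS tag values:

```shell-session
$ consul agent -config-file=client-config.hcl -retry-join "provider=aws tag_key=consul tag_value=server"
```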
diff --git a/website/content/docs/install/manual-bootstrap.mdx b/website/content/docs/install/manual-bootstrap.mdx
index f5ff5f77c1..63c238456f 100644
--- a/website/content/docs/install/manual-bootstrap.mdx
+++ b/website/content/docs/install/manual-bootstrap.mdx
@@ -30,7 +30,7 @@ This is necessary because at this point, there are no other servers running in
the datacenter! Let's call this first server `Node A`. When starting `Node A`,
something like the following will be logged:
-```text
+```log
2014/02/22 19:23:32 [INFO] consul: cluster leadership acquired
```
@@ -43,7 +43,7 @@ in a failure scenario. We start the next servers **without** specifying
bootstrap mode. Once `Node B` and `Node C` are started, you should see a
message to the effect of:
-```text
+```log
[WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
```
@@ -66,7 +66,7 @@ Successfully joined cluster by contacting 2 nodes.
Once the join is successful, `Node A` should output something like:
-```text
+```log
[INFO] raft: Added peer 127.0.0.2:8300, starting replication
....
[INFO] raft: Added peer 127.0.0.3:8300, starting replication
diff --git a/website/content/docs/install/performance.mdx b/website/content/docs/install/performance.mdx
index 42317f05a1..85fa3161c7 100644
--- a/website/content/docs/install/performance.mdx
+++ b/website/content/docs/install/performance.mdx
@@ -32,7 +32,7 @@ expense of some performance in leader failure detection and leader election time
The default performance configuration is equivalent to this:
-```javascript
+```json
{
"performance": {
"raft_multiplier": 5
@@ -50,7 +50,7 @@ timeouts by a factor of 5, so it can be quite slow during these events.
The high performance configuration is simple and looks like this:
-```javascript
+```json
{
"performance": {
"raft_multiplier": 1
diff --git a/website/content/docs/k8s/connect/connect-ca-provider.mdx b/website/content/docs/k8s/connect/connect-ca-provider.mdx
index 930b13d762..9eaad0bd04 100644
--- a/website/content/docs/k8s/connect/connect-ca-provider.mdx
+++ b/website/content/docs/k8s/connect/connect-ca-provider.mdx
@@ -89,8 +89,9 @@ $ kubectl create secret generic vault-config --from-file=config=vault-config.jso
We will provide this secret and the Vault CA secret to the Consul server via the
`server.extraVolumes` Helm value.
+
+
```yaml
-# config.yaml
global:
name: consul
server:
@@ -108,6 +109,8 @@ connectInject:
enabled: true
```
+
+
Finally, [install](/docs/k8s/installation/install#installing-consul) the Helm chart using the above config file:
```shell-session
diff --git a/website/content/docs/k8s/connect/ingress-gateways.mdx b/website/content/docs/k8s/connect/ingress-gateways.mdx
index dfae0bef24..54c561c3d7 100644
--- a/website/content/docs/k8s/connect/ingress-gateways.mdx
+++ b/website/content/docs/k8s/connect/ingress-gateways.mdx
@@ -26,6 +26,8 @@ Adding an ingress gateway is a multi-step process that consists of the following
When deploying the Helm chart you must provide Helm with a custom YAML file that contains your environment configuration.
+
+
```yaml
global:
name: consul
@@ -41,6 +43,8 @@ ingressGateways:
type: LoadBalancer
```
+
+
~> **Note:** This will create a public, unauthenticated LoadBalancer in your cluster, so please take appropriate security precautions.
The YAML snippet is the launching point for a valid configuration that must be supplied when installing using the [official consul-helm chart](https://hub.helm.sh/charts/hashicorp/consul).
@@ -66,6 +70,8 @@ you can configure the gateways via the [`IngressGateway`](/docs/connect/config-e
Here is an example `IngressGateway` resource:
+
+
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: IngressGateway
@@ -79,6 +85,8 @@ spec:
- name: static-server
```
+
+
Apply the `IngressGateway` resource with `kubectl apply`:
```shell-session
@@ -89,6 +97,8 @@ ingressgateway.consul.hashicorp.com/ingress-gateway created
Since we're using `protocol: http`, we also need to set the protocol of our service
`static-server` to http. To do that, we create a [`ServiceDefaults`](/docs/connect/config-entries/service-defaults) custom resource:
+
+
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
@@ -98,6 +108,8 @@ spec:
protocol: http
```
+
+
Apply the `ServiceDefaults` resource with `kubectl apply`:
```shell-session
@@ -137,6 +149,8 @@ to allow the ingress gateway to route to the upstream services defined in the `I
To create an intention that allows the ingress gateway to route to the service `static-server`, create a [`ServiceIntentions`](/docs/connect/config-entries/service-intentions)
resource:
+
+
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
@@ -150,6 +164,8 @@ spec:
action: allow
```
+
+
Apply the `ServiceIntentions` resource with `kubectl apply`:
```shell-session
@@ -163,6 +179,8 @@ For detailed instructions on how to configure zero-trust networking with intenti
Now you will deploy a sample application which echoes “hello world”.
+
+
```yaml
apiVersion: v1
kind: Service
@@ -210,6 +228,8 @@ spec:
serviceAccountName: static-server
```
+
+
```shell-session
$ kubectl apply -f static-server.yaml
```
@@ -233,7 +253,9 @@ $ curl -H "Host: static-server.ingress.consul" "http://$EXTERNAL_IP:8080"
~> **Security Warning:** Please be sure to delete the application and services created here as they represent a security risk by
leaving an open and unauthenticated load balancer alive in your cluster.
-To delete the ingress gateway, set enabled to false in your Helm configuration:
+To delete the ingress gateway, set enabled to `false` in your Helm configuration:
+
+
```yaml
global:
@@ -250,6 +272,8 @@ ingressGateways:
type: LoadBalancer
```
+
+
And run Helm upgrade:
```shell-session
diff --git a/website/content/docs/k8s/connect/terminating-gateways.mdx b/website/content/docs/k8s/connect/terminating-gateways.mdx
index b2daf51c0d..800b5b3912 100644
--- a/website/content/docs/k8s/connect/terminating-gateways.mdx
+++ b/website/content/docs/k8s/connect/terminating-gateways.mdx
@@ -21,6 +21,8 @@ Adding a terminating gateway is a multi-step process:
Minimum required Helm options:
+
+
```yaml
global:
name: consul
@@ -32,6 +34,8 @@ terminatingGateways:
enabled: true
```
+
+
## Deploying the Helm chart
Ensure you have the latest consul-helm chart and install Consul via helm using the following
@@ -91,6 +95,8 @@ service to that node.
Create a sample external service and register it with Consul.
+
+
```json
{
"Node": "example_com",
@@ -108,6 +114,8 @@ Create a sample external service and register it with Consul.
}
```
+
+
- `"Node": "example_com"` is our made up node name.
- `"Address": "example.com"` is the address of our node. Services registered to that node will use this address if
their own address isn't specified. If you're registering multiple external services, ensure you
@@ -141,12 +149,16 @@ being represented by the gateway:
~> The CLI command should be run with the `-merge-policies`, `-merge-roles`, and `-merge-service-identities` flags so that
nothing is removed from the terminating gateway token.
+
+
```hcl
service "example-https" {
policy = "write"
}
```
+
+
```shell-session
$ consul acl policy create -name "example-https-write-policy" -rules @write-policy.hcl
ID: xxxxxxxxxxxxxxx
@@ -186,7 +198,9 @@ Policies:
Once the tokens have been updated, create the [TerminatingGateway](/docs/connect/config-entries/terminating-gateway)
resource to configure the terminating gateway:
-```hcl
+
+
+```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: TerminatingGateway
metadata:
@@ -197,6 +211,8 @@ spec:
caFile: /etc/ssl/cert.pem
```
+
+
~> If TLS is enabled, a `caFile` must be provided; it must point to the system trust store of the terminating gateway
container (`/etc/ssl/cert.pem`).
@@ -208,6 +224,8 @@ $ kubectl apply -f terminating-gateway.yaml
If using ACLs and TLS, create a [`ServiceIntentions`](/docs/connect/config-entries/service-intentions) resource to allow access from services in the mesh to the external service.
+
+
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
@@ -221,6 +239,8 @@ spec:
action: allow
```
+
+
Apply the `ServiceIntentions` resource with `kubectl apply`:
```shell-session
@@ -232,6 +252,8 @@ $ kubectl apply -f service-intentions.yaml
Finally, define and deploy the external services as upstreams for the internal mesh services that wish to talk to them.
An example deployment is provided which will serve as a static client for the terminating gateway service.
+
+
```yaml
apiVersion: v1
kind: Service
@@ -274,6 +296,8 @@ spec:
serviceAccountName: static-client
```
+
+
Run the service via `kubectl apply`:
```shell-session
diff --git a/website/content/docs/k8s/crds/index.mdx b/website/content/docs/k8s/crds/index.mdx
index eaec361b2f..3ce2e06754 100644
--- a/website/content/docs/k8s/crds/index.mdx
+++ b/website/content/docs/k8s/crds/index.mdx
@@ -49,6 +49,8 @@ Update Complete. ⎈Happy Helming!⎈
Next, you must configure consul-helm via your `values.yaml` to install the custom resource definitions
and enable the controller that acts on them:
+
+
```yaml
global:
name: consul
@@ -60,6 +62,8 @@ connectInject:
enabled: true
```
+
+
Note that:
1. `controller.enabled: true` installs the CRDs and enables the controller.
@@ -215,6 +219,8 @@ name of the resource doesn't matter. For other resources, the name of the resour
determines which service it configures. For example, this resource configures
the service `web`:
+
+
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
@@ -224,11 +230,15 @@ spec:
protocol: http
```
+
+
For `ServiceIntentions`, because we need to support the ability to create
wildcard intentions (e.g. `foo => * (allow)` meaning that `foo` can talk to **any** service),
and because `*` is not a valid Kubernetes resource name, we instead use the field `spec.destination.name`
to configure the destination service for the intention:
+
+
```yaml
# foo => * (allow)
apiVersion: consul.hashicorp.com/v1alpha1
@@ -255,6 +265,8 @@ spec:
action: allow
```
+
+
~> **NOTE:** If two `ServiceIntentions` resources set the same `spec.destination.name`, the
last one created will not be synced.
@@ -275,6 +287,8 @@ The details on each configuration are:
This is configured via [`connectInject.consulNamespaces`](/docs/k8s/helm#v-connectinject-consulnamespaces):
+
+
```yaml
global:
name: consul
@@ -285,6 +299,8 @@ The details on each configuration are:
mirroringK8S: true
```
+
+
1. **Mirroring with prefix** - The Kubernetes namespace will be "mirrored" into Consul
with a prefix added to the Consul namespace, i.e.
if the prefix is `k8s-` then service `web` in Kubernetes namespace `web-ns` will be registered as service `web`
@@ -293,6 +309,8 @@ The details on each configuration are:
This is configured via [`connectInject.consulNamespaces`](/docs/k8s/helm#v-connectinject-consulnamespaces):
+
+
```yaml
global:
name: consul
@@ -304,6 +322,8 @@ The details on each configuration are:
mirroringK8SPrefix: k8s-
```
+
+
1. **Single destination namespace** - The Kubernetes namespace is ignored and all services
will be registered into the same Consul namespace, i.e. if the destination Consul
namespace is `my-ns` then service `web` in Kubernetes namespace `web-ns` will
@@ -317,6 +337,8 @@ The details on each configuration are:
This is configured via [`connectInject.consulNamespaces`](/docs/k8s/helm#v-connectinject-consulnamespaces):
+
+
```yaml
global:
name: consul
@@ -327,6 +349,8 @@ The details on each configuration are:
consulDestinationNamespace: 'my-ns'
```
+
+
~> **NOTE:** In this configuration, if two custom resources of the same kind **and** the same name are attempted to
be created in two Kubernetes namespaces, the last one created will not be synced.
@@ -337,6 +361,8 @@ name of the resource doesn't matter. For other resources, the name of the resour
determines which service it configures. For example, this resource configures
the service `web`:
+
+
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
@@ -346,11 +372,15 @@ spec:
protocol: http
```
+
+
For `ServiceIntentions`, because we need to support the ability to create
wildcard intentions (e.g. `foo => * (allow)` meaning that `foo` can talk to **any** service),
and because `*` is not a valid Kubernetes resource name, we instead use the field `spec.destination.name`
to configure the destination service for the intention:
+
+
```yaml
# foo => * (allow)
apiVersion: consul.hashicorp.com/v1alpha1
@@ -377,6 +407,8 @@ spec:
action: allow
```
+
+
In addition, we support the field `spec.destination.namespace` to configure
the destination service's Consul namespace. If `spec.destination.namespace`
is empty, then the Consul namespace used will be the same as the other
diff --git a/website/content/docs/k8s/dns.mdx b/website/content/docs/k8s/dns.mdx
index 5163e438c1..0dac3a2eae 100644
--- a/website/content/docs/k8s/dns.mdx
+++ b/website/content/docs/k8s/dns.mdx
@@ -141,6 +141,8 @@ in full cluster rebuilds.
To verify DNS works, run a simple job to query DNS. Save the following
job to the file `job.yaml` and run it:
+
+
```yaml
apiVersion: batch/v1
kind: Job
@@ -157,6 +159,8 @@ spec:
backoffLimit: 4
```
+
+
```shell-session
$ kubectl apply -f job.yaml
```
diff --git a/website/content/docs/k8s/installation/deployment-configurations/consul-enterprise.mdx b/website/content/docs/k8s/installation/deployment-configurations/consul-enterprise.mdx
index 49e48e72d8..3ec152f5e8 100644
--- a/website/content/docs/k8s/installation/deployment-configurations/consul-enterprise.mdx
+++ b/website/content/docs/k8s/installation/deployment-configurations/consul-enterprise.mdx
@@ -21,16 +21,20 @@ kubectl create secret generic consul-ent-license --from-literal="key=${secret}"
In your `config.yaml`, change the value of `global.image` to one of the enterprise [release tags](https://hub.docker.com/r/hashicorp/consul-enterprise/tags).
+
+
```yaml
-# config.yaml
global:
image: 'hashicorp/consul-enterprise:1.10.0-ent'
```
+
+
Add the name and key of the secret you just created to `server.enterpriseLicense`, if using Consul version 1.10+.
+
+
```yaml
-# config.yaml
global:
image: 'hashicorp/consul-enterprise:1.10.0-ent'
server:
@@ -39,12 +43,15 @@ server:
secretKey: 'key'
```
+
+
If the version of Consul is < 1.10, use the following config with the name and key of the secret you just created.
-> **Note:** The value of `server.enterpriseLicense.enableLicenseAutoload` must be set to `false`.
+
+
```yaml
-# config.yaml
global:
image: 'hashicorp/consul-enterprise:1.8.3-ent'
server:
@@ -54,6 +61,8 @@ server:
enableLicenseAutoload: false
```
+
+
Now run `helm install`:
```shell-session
diff --git a/website/content/docs/k8s/installation/deployment-configurations/servers-outside-kubernetes.mdx b/website/content/docs/k8s/installation/deployment-configurations/servers-outside-kubernetes.mdx
index cc74f5af10..dd3031d0f3 100644
--- a/website/content/docs/k8s/installation/deployment-configurations/servers-outside-kubernetes.mdx
+++ b/website/content/docs/k8s/installation/deployment-configurations/servers-outside-kubernetes.mdx
@@ -27,8 +27,9 @@ example above, a fake [cloud auto-join](/docs/agent/cloud-auto-join)
value is specified. This should be set to resolve to the proper addresses of
your existing Consul cluster.
+
+
```yaml
-# config.yaml
global:
enabled: false
@@ -41,6 +42,8 @@ client:
- 'provider=my-cloud config=val ...'
```
+
+
-> **Networking:** Note that for the Kubernetes nodes to join an existing
cluster, the nodes (and specifically the agent pods) must be able to connect
to all other server and client agents inside and _outside_ of Kubernetes over [LAN](/docs/glossary#lan-gossip).
@@ -63,6 +66,8 @@ If you would like to use this feature with external Consul servers, you need to
so that it can retrieve the clients' CA to use for securing the rest of the cluster.
To do that, you must add the following values, in addition to the values mentioned above:
+
+
```yaml
global:
tls:
@@ -74,6 +79,8 @@ externalServers:
- 'provider=my-cloud config=val ...'
```
+
+
In most cases, `externalServers.hosts` will be the same as `client.join`; however, both keys must be set because
they are used for different purposes: one for Serf LAN and the other for HTTPS connections.
Please see the [reference documentation](/docs/k8s/helm#v-externalservers-hosts)
@@ -99,6 +106,8 @@ kubectl create secret generic bootstrap-token --from-literal='token=
+
```yaml
global:
acls:
@@ -108,6 +117,8 @@ global:
secretKey: token
```
+
+
The bootstrap token requires the following minimal permissions:
- `acl:write`
@@ -120,6 +131,8 @@ to create policies, tokens, and an auth method. If you are [enabling Consul Conn
so that the Consul servers can validate a Kubernetes service account token when using the [Kubernetes auth method](/docs/acl/auth-methods/kubernetes)
with `consul login`.
+
+
```yaml
externalServers:
enabled: true
@@ -128,8 +141,12 @@ externalServers:
k8sAuthMethodHost: 'https://kubernetes.example.com:443'
```
+
+
Your resulting Helm configuration will end up looking similar to this:
+
+
```yaml
global:
enabled: false
@@ -152,11 +169,15 @@ externalServers:
k8sAuthMethodHost: 'https://kubernetes.example.com:443'
```
+
+
### Bootstrapping ACLs via the Helm chart
If you would like the Helm chart to call the bootstrapping API and set the server tokens for you, then the steps are similar.
The only difference is that you don't need to set the bootstrap token. The Helm chart will save the bootstrap token as a Kubernetes secret.
+
+
```yaml
global:
enabled: false
@@ -175,3 +196,5 @@ externalServers:
- 'provider=my-cloud config=val ...'
k8sAuthMethodHost: 'https://kubernetes.example.com:443'
```
+
+
diff --git a/website/content/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s.mdx b/website/content/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s.mdx
index 158252e9e7..22950ab96a 100644
--- a/website/content/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s.mdx
+++ b/website/content/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s.mdx
@@ -21,8 +21,9 @@ to pods or nodes in another.
First, we will deploy the Consul servers with Consul clients in the first cluster.
For that, we will use the following Helm configuration:
+
+
```yaml
-# cluster1-config.yaml
global:
datacenter: dc1
tls:
@@ -42,6 +43,8 @@ ui:
type: NodePort
```
+
+
Note that we are deploying in a secure configuration, with gossip encryption,
TLS for all components, and ACLs. We are enabling the Consul Service Mesh and the controller for CRDs
so that we can use them to later verify that our services can connect with each other across clusters.
@@ -88,8 +91,9 @@ $ kubectl apply -f cluster1-credentials.yaml
```
To deploy in the second cluster, we will use the following Helm configuration:
+
+
```yaml
-# cluster2-config.yaml
global:
enabled: false
datacenter: dc1
@@ -127,6 +131,8 @@ connectInject:
enabled: true
```
+
+
Note that we're referencing secrets from the first cluster in ACL, gossip, and TLS configuration.
Next, we need to set up the `externalServers` configuration.
@@ -182,13 +188,15 @@ helm install cluster2 -f cluster2-config.yaml hashicorp/consul
## Verifying the Consul Service Mesh works
-~> When Transparent proxy is enabled, services in one Kubernetes cluster that need to communicate with a service in another Kubernetes cluster must have a explicit upstream configured through the ["consul.hashicorp.com/connect-service-upstreams"](https://www.consul.io/docs/k8s/connect#consul-hashicorp-com-connect-service-upstreams) annotation.
+~> When Transparent proxy is enabled, services in one Kubernetes cluster that need to communicate with a service in another Kubernetes cluster must have an explicit upstream configured through the ["consul.hashicorp.com/connect-service-upstreams"](https://www.consul.io/docs/k8s/connect#consul-hashicorp-com-connect-service-upstreams) annotation.
Now that we have our Consul cluster in multiple k8s clusters up and running, we will
deploy two services and verify that they can connect to each other.
First, we'll deploy `static-server` service in the first cluster:
+
+
```yaml
---
apiVersion: consul.hashicorp.com/v1alpha1
@@ -249,10 +257,14 @@ spec:
serviceAccountName: static-server
```
+
+
Note that we're defining a Service intention so that our services are allowed to talk to each other.
Then we'll deploy `static-client` in the second cluster with the following configuration:
+
+
```yaml
apiVersion: v1
kind: Service
@@ -295,6 +307,8 @@ spec:
serviceAccountName: static-client
```
+
+
Once both services are up and running, we can connect to the `static-server` from `static-client`:
```shell
diff --git a/website/content/docs/k8s/installation/install.mdx b/website/content/docs/k8s/installation/install.mdx
index 146311095f..71068a087f 100644
--- a/website/content/docs/k8s/installation/install.mdx
+++ b/website/content/docs/k8s/installation/install.mdx
@@ -98,8 +98,9 @@ or by reading the [Helm Chart Reference](/docs/k8s/helm).
For example, if you want to enable the [Consul Connect](/docs/k8s/connect) feature,
use the following config file:
+
+
```yaml
-# config.yaml
global:
name: consul
connectInject:
@@ -108,6 +109,8 @@ controller:
enabled: true
```
+
+
Once you've created your `config.yaml` file, run `helm install` with the `-f` flag:
```shell-session
@@ -218,6 +221,8 @@ spec:
An example `Deployment` is also shown below to show how the host IP can
be accessed from nested pod specifications:
+
+
```yaml
apiVersion: apps/v1
kind: Deployment
@@ -249,6 +254,8 @@ spec:
consul kv put hello world
```
+
+
## Architecture
Consul runs on Kubernetes with the same
diff --git a/website/content/docs/k8s/installation/multi-cluster/kubernetes.mdx b/website/content/docs/k8s/installation/multi-cluster/kubernetes.mdx
index 76cdd9cdfa..acf96338a4 100644
--- a/website/content/docs/k8s/installation/multi-cluster/kubernetes.mdx
+++ b/website/content/docs/k8s/installation/multi-cluster/kubernetes.mdx
@@ -34,6 +34,8 @@ support federation, see [Upgrading An Existing Cluster](#upgrading-an-existing-c
You will need to use the following `config.yaml` file for your primary cluster,
with the possible modifications listed below.
+
+
```yaml
global:
name: consul
@@ -77,11 +79,15 @@ meshGateway:
enabled: true
```
+
+
Modifications:
1. The Consul datacenter name is `dc1`. The datacenter name in each federated
cluster **must be unique**.
1. ACLs are enabled in the above config file. They can be disabled by setting:
+
+
```yaml
global:
acls:
@@ -125,6 +131,8 @@ to install Consul on your primary cluster.
If you have an existing cluster, you will need to upgrade it to ensure it has
the following config:
+
+
```yaml
global:
tls:
@@ -139,6 +147,8 @@ meshGateway:
enabled: true
```
+
+
1. `global.tls.enabled` must be `true`. See [Configuring TLS on an Existing Cluster](/docs/k8s/operations/tls-on-existing-cluster)
for more information on safely upgrading a cluster to use TLS.
@@ -200,12 +210,16 @@ The federation secret is a Kubernetes secret containing information needed
for secondary datacenters/clusters to federate with the primary. This secret is created
automatically by setting:
+
+
```yaml
global:
federation:
createFederationSecret: true
```
+
+
After the installation into your primary cluster you will need to export
this secret:
@@ -291,6 +305,8 @@ with the possible modifications listed below.
-> **NOTE:** You must use a separate Helm config file for each cluster (primary and secondaries) since their
settings are different.
+
+
```yaml
global:
name: consul
@@ -340,6 +356,8 @@ server:
load: true
```
+
+
Modifications:
1. The Consul datacenter name is `dc2`. The primary datacenter's name was `dc1`.
diff --git a/website/content/docs/k8s/installation/multi-cluster/vms-and-kubernetes.mdx b/website/content/docs/k8s/installation/multi-cluster/vms-and-kubernetes.mdx
index 2dc4af4d82..8eb7b9798a 100644
--- a/website/content/docs/k8s/installation/multi-cluster/vms-and-kubernetes.mdx
+++ b/website/content/docs/k8s/installation/multi-cluster/vms-and-kubernetes.mdx
@@ -74,13 +74,16 @@ The following sections detail how to export this data.
1. These certificates can be used in your server config file:
+
+
```hcl
- # server.hcl
cert_file = "vm-dc-server-consul-0.pem"
key_file = "vm-dc-server-consul-0-key.pem"
ca_file = "consul-agent-ca.pem"
```
+
+
1. For clients, you can generate TLS certs with:
```shell-session
diff --git a/website/content/docs/k8s/upgrade/index.mdx b/website/content/docs/k8s/upgrade/index.mdx
index e500ef7dfb..1a0d4d5fe0 100644
--- a/website/content/docs/k8s/upgrade/index.mdx
+++ b/website/content/docs/k8s/upgrade/index.mdx
@@ -117,11 +117,16 @@ to update to the new version.
1. Read our [Compatibility Matrix](/docs/k8s/upgrade/compatibility) to ensure
your current Helm chart version supports this Consul version. If it does not,
you may need to also upgrade your Helm chart version at the same time.
-1. Set `global.consul` in your `values.yaml` to the desired version:
+1. Set `global.image` in your `values.yaml` to the desired version:
+
+
+
```yaml
global:
image: consul:1.8.3
```
+
+
1. Determine your current installed chart version:
```bash
diff --git a/website/content/docs/nia/terraform-modules.mdx b/website/content/docs/nia/terraform-modules.mdx
index b877bd9d4b..5ba3b42f07 100644
--- a/website/content/docs/nia/terraform-modules.mdx
+++ b/website/content/docs/nia/terraform-modules.mdx
@@ -36,6 +36,8 @@ First, create a directory to organize Terraform configuration files that make up
The `main.tf` is the entry point of the module and this is where you can begin authoring your module. It can contain multiple Terraform resources related to an automation task that uses Consul service discovery information, particularly the required [`services` input variable](#services-variable). The code example below shows a resource using the `services` variable. When this example is used in automation with Consul-Terraform-Sync, the content of the local file would dynamically update as Consul service discovery information changes.
+
+
```hcl
# Create a file with service names and their node addresses
resource "local_file" "consul_services" {
@@ -46,12 +48,16 @@ resource "local_file" "consul_services" {
}
```
+
+
Something important to consider before authoring your module is deciding the [condition under which it will execute](/docs/nia/tasks#task-execution). This will allow you to potentially include other types of Consul-Terraform-Sync provided input variables in your module. It will also help inform your documentation and how users should configure their task for your module.
### Services Variable
To satisfy the specification requirements for a compatible module, copy the `services` variable declaration to the `variables.tf` file. Your module can optionally have other [variable declarations](#module-input-variables) and [Consul-Terraform-Sync provided input variables](/docs/nia/terraform-modules#optional-input-variables) in addition to `var.services`.
+
+
```hcl
variable "services" {
description = "Consul services monitored by Consul-Terraform-Sync"
@@ -80,6 +86,8 @@ variable "services" {
}
```
+
+
Keys of the `services` map are unique identifiers of the service across Consul agents and data centers. Keys follow the format `service-id.node.datacenter` (or `service-id.node.namespace.datacenter` for Consul Enterprise). A complete list of attributes available for the `services` variable is included in the [documentation for Consul-Terraform-Sync tasks](/docs/nia/tasks#services-condition).
Terraform variables when passed as module arguments can be [lossy for object types](https://www.terraform.io/docs/configuration/types.html#conversion-of-complex-types). This allows Consul-Terraform-Sync to declare the full variable with every object attribute in the generated root module, and pass the variable to a child module that contains a subset of these attributes for its variable declaration. Modules compatible with Consul-Terraform-Sync may simplify the `var.services` declaration within the module by omitting unused attributes. For example, the following services variable has 4 attributes with the rest omitted.
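A hedged sketch of what such a trimmed declaration might look like; the four attributes kept here (`id`, `name`, `address`, `port`) are illustrative assumptions:

```hcl
variable "services" {
  description = "Consul services monitored by Consul-Terraform-Sync"
  type = map(
    object({
      id      = string
      name    = string
      address = string
      port    = number
    })
  )
}
```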
diff --git a/website/content/docs/security/acl/acl-legacy.mdx b/website/content/docs/security/acl/acl-legacy.mdx
index e8f60c7f90..d2e63c5334 100644
--- a/website/content/docs/security/acl/acl-legacy.mdx
+++ b/website/content/docs/security/acl/acl-legacy.mdx
@@ -325,7 +325,7 @@ Once the ACL system is bootstrapped, ACL tokens can be managed through the
After the servers are restarted above, you will see new entries in the logs of the Consul
servers related to permission-denied errors:
-```text
+```log
2017/07/08 23:38:24 [WARN] agent: Node info update blocked by ACLs
2017/07/08 23:38:44 [WARN] agent: Coordinate update blocked by ACLs
```
@@ -379,7 +379,7 @@ $ curl \
With that ACL agent token set, the servers will be able to sync themselves with the
catalog:
-```text
+```log
2017/07/08 23:42:59 [INFO] agent: Synced node info
```
@@ -640,7 +640,7 @@ operator = "read"
This is equivalent to the following JSON input:
-```javascript
+```json
{
"key": {
"": {
@@ -784,15 +784,15 @@ recursive reads via [the KV API](/api/kv#recurse) with an invalid token result i
```hcl
key "" {
- policy = "deny"
+ policy = "deny"
}
key "bar" {
- policy = "list"
+ policy = "list"
}
key "baz" {
- policy = "read"
+ policy = "read"
}
```
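A hedged sketch of exercising the `list` rule above by listing keys under `bar` (token value hypothetical):

```shell-session
$ curl -H "X-Consul-Token: <token>" 'http://127.0.0.1:8500/v1/kv/bar?keys'
```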
@@ -828,7 +828,7 @@ The `keyring` policy controls access to keyring operations in the
Keyring rules look like this:
-```text
+```hcl
keyring = "write"
```
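As a hedged illustration, a token carrying this rule could manage the gossip keyring from the CLI (key value elided):

```shell-session
$ consul keyring -list
$ consul keyring -install=<new key>
```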
@@ -900,7 +900,7 @@ The `operator` policy controls access to cluster-level operations in the
Operator rules look like this:
-```text
+```hcl
operator = "read"
```
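For instance, `operator = "read"` is enough for read-only operator commands; a minimal sketch:

```shell-session
$ consul operator raft list-peers
```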
diff --git a/website/content/docs/upgrading/instructions/upgrade-to-1-6-x.mdx b/website/content/docs/upgrading/instructions/upgrade-to-1-6-x.mdx
index f05b3eff52..5251b5968e 100644
--- a/website/content/docs/upgrading/instructions/upgrade-to-1-6-x.mdx
+++ b/website/content/docs/upgrading/instructions/upgrade-to-1-6-x.mdx
@@ -125,6 +125,8 @@ Take note of the `ReplicatedIndex` value.
Create a new file named `test-ui-token.json` containing the payload for creating a new token,
with the following contents:
+
+
```json
{
"Name": "UI Token",
@@ -133,6 +135,8 @@ with the following contents:
}
```
+
+
From a Consul server in DC1, create a new token using that file:
```shell
diff --git a/website/content/docs/upgrading/upgrade-specific.mdx b/website/content/docs/upgrading/upgrade-specific.mdx
index e2f5b946f1..baaba52b4d 100644
--- a/website/content/docs/upgrading/upgrade-specific.mdx
+++ b/website/content/docs/upgrading/upgrade-specific.mdx
@@ -21,7 +21,7 @@ upgrade flow.
Consul Enterprise 1.10 has removed temporary licensing capabilities from the binaries
found on https://releases.hashicorp.com. Servers will no longer load a license previously
set through the CLI or API. Instead the license must be present in the server's configuration
-or environment prior to starting. See the [licensing documentation](/docs/enterprise/license/overview)
+or environment prior to starting. See the [licensing documentation](/docs/enterprise/license/overview)
for more information about how to configure the license. Client agents previously retrieved their
license from the servers in the cluster within 30 minutes of starting and the snapshot agent
would similarly retrieve its license from the server or client agent it was configured to use. As
@@ -820,7 +820,7 @@ defaulting [`allow_stale`](/docs/agent/options#allow_stale) to true for
better utilization of available servers. If you want to retain the previous
behavior, set the following configuration:
-```javascript
+```json
{
"dns_config": {
"allow_stale": false
@@ -837,7 +837,7 @@ To continue to use the high-performance settings that were the default prior to
Consul 0.7 (recommended for production servers), add the following
configuration to all Consul servers when upgrading:
-```javascript
+```json
{
"performance": {
"raft_multiplier": 1