mirror of https://github.com/status-im/consul.git
Merge pull request #270 from lra/another_typo
website: Missing "it" and other small fixes
commit 508c6ede42
@@ -19,7 +19,7 @@ inconsistencies and split-brain situations, all servers should specify the same
 or specify no value at all. Any server that does not specify a value will not attempt to
 bootstrap the cluster.

-There is a [deployment table](/docs/internals/consensus.html#toc_3) that covers various options,
+There is a [deployment table](/docs/internals/consensus.html#toc_4) that covers various options,
 but it is recommended to have 3 or 5 total servers per data center. A single server deployment is _**highly**_
 discouraged as data loss is inevitable in a failure scenario.
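For context on the hunk above, a minimal sketch of the "same value on every server" rule it describes, assuming the value in question is the expected server count passed via `-bootstrap-expect` and that the data directory is a placeholder:

# Run the equivalent command on each server; any server that passes a value
# must pass the same one, otherwise it should pass none at all.
$ consul agent -server -bootstrap-expect 3 -data-dir /tmp/consul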
@@ -24,7 +24,7 @@ When starting `Node A` something like the following will be logged:

    2014/02/22 19:23:32 [INFO] consul: cluster leadership acquired

-Once `Node A` is running, we can start the next set of servers. There is a [deployment table](/docs/internals/consensus.html#toc_3)
+Once `Node A` is running, we can start the next set of servers. There is a [deployment table](/docs/internals/consensus.html#toc_4)
 that covers various options, but it is recommended to have 3 or 5 total servers per data center.
 A single server deployment is _**highly**_ discouraged as data loss is inevitable in a failure scenario.
 We start the next servers **without** specifying `-bootstrap`. This is critical, since only one server
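A minimal sketch of the start-up sequence this hunk refers to, with the `-data-dir` path as a placeholder:

# Node A only: started in bootstrap mode so it can elect itself leader.
$ consul agent -server -bootstrap -data-dir /tmp/consul

# Every other server: the same command, but without -bootstrap.
$ consul agent -server -data-dir /tmp/consul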
@@ -7,7 +7,7 @@ sidebar_current: "docs-guides-outage"
 # Outage Recovery

 Do not panic! This is a critical first step. Depending on your
-[deployment configuration](/docs/internals/consensus.html#toc_3), it may
+[deployment configuration](/docs/internals/consensus.html#toc_4), it may
 take only a single server failure for cluster unavailability. Recovery
 requires an operator to intervene, but is straightforward.
@@ -71,7 +71,7 @@ the new servers up-to-date.

 Removing servers must be done carefully to avoid causing an availability outage.
 For a cluster of N servers, at least (N/2)+1 must be available for the cluster
-to function. See this [deployment table](/docs/internals/consensus.html#toc_3).
+to function. See this [deployment table](/docs/internals/consensus.html#toc_4).
 If you have 3 servers, and 1 of them is currently failed, removing any servers
 will cause the cluster to become unavailable.
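To make the quorum arithmetic in this hunk concrete: with N=3 servers, (3/2)+1 = 2 must remain available, so only one failure is tolerated; if one server is already down, removing another leaves a single server and the cluster loses quorum. With N=5, quorum is 3 and two failures can be tolerated.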
@@ -14,7 +14,7 @@ real cluster with multiple members.

 When starting a Consul agent, it begins without knowledge of any other node, and is
 an isolated cluster of one. To learn about other cluster members, the agent must
-_join_ an existing cluster. To join an existing cluster, only needs to know
+_join_ an existing cluster. To join an existing cluster, it only needs to know
 about a _single_ existing member. After it joins, the agent will gossip with this
 member and quickly discover the other members in the cluster. A Consul
 agent can join any other agent, it doesn't have to be an agent in server mode.
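A minimal sketch of the join step this hunk describes, using a placeholder address for the single existing member:

# Point the local agent at any one existing member; gossip discovers the rest.
$ consul join 10.0.0.2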
@@ -85,7 +85,7 @@ $ curl http://localhost:8500/v1/kv/web?recurse

 A key can be updated by setting a new value by issuing the same PUT request.
 Additionally, Consul provides a Check-And-Set operation, enabling atomic
-key updates. This is done by providing the `?cas=` paramter with the last
+key updates. This is done by providing the `?cas=` parameter with the last
 `ModifyIndex` value from the GET request. For example, suppose we wanted
 to update "web/key1":
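A sketch of the Check-And-Set update described in this hunk, assuming the `ModifyIndex` returned by the earlier GET was 97 (a placeholder) and `newval` is the replacement value:

# The write is applied only if the key's ModifyIndex still equals 97.
$ curl -X PUT -d 'newval' http://localhost:8500/v1/kv/web/key1?cas=97

Consul answers `true` if the index matched and the update was applied, or `false` if another write got there first.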