diff --git a/website/source/docs/agent/basics.html.markdown b/website/source/docs/agent/basics.html.markdown
index 2d2df6a937..5fb96ae39f 100644
--- a/website/source/docs/agent/basics.html.markdown
+++ b/website/source/docs/agent/basics.html.markdown
@@ -50,7 +50,7 @@ There are several important components that `consul agent` outputs:
* **Datacenter**: This is the datacenter the agent is configured to run
in. Consul has first-class support for multiple datacenters, but to work efficiently
- each node must be configured to correctly report it's datacenter. The `-dc` flag
+ each node must be configured to correctly report its datacenter. The `-dc` flag
can be used to set the datacenter. For single-DC configurations, the agent
will default to "dc1".
@@ -61,7 +61,7 @@ There are several important components that `consul agent` outputs:
-servers to join the cluster. Multiple servers cannot be in bootstrap mode,
-otherwise the cluster state will be inconsistent.
-* **Client Addr**: This is the addressused for client interfaces to the agent.
+servers to join the cluster. Multiple servers cannot be in bootstrap mode;
+otherwise, the cluster state will be inconsistent.
+* **Client Addr**: This is the address used for client interfaces to the agent.
This includes the ports for the HTTP, DNS, and RPC interfaces. The RPC
address is used for other `consul` commands. Other Consul commands such
as `consul members` connect to a running agent and use RPC to query and
diff --git a/website/source/docs/agent/dns.html.markdown b/website/source/docs/agent/dns.html.markdown
index 2ee62c13b7..73c36fa9d4 100644
--- a/website/source/docs/agent/dns.html.markdown
+++ b/website/source/docs/agent/dns.html.markdown
@@ -42,7 +42,7 @@ the following format:
So, for example, if we have a "foo" node with default settings, we could look for
"foo.node.dc1.consul." The datacenter is an optional part of the FQDN, and if not
-provided defaults to the datacenter of the agent. So if know "foo" is running in our
+provided defaults to the datacenter of the agent. So if we know "foo" is running in our
same datacenter, we can instead use "foo.node.consul." Alternatively, we can do a
DNS lookup for nodes in other datacenters, with no additional effort.
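+
+As a quick sketch (assuming an agent serving DNS on its default port of 8600),
+such a lookup can be tried with `dig`:
+
+    $ dig @127.0.0.1 -p 8600 foo.node.consul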
diff --git a/website/source/docs/agent/http.html.markdown b/website/source/docs/agent/http.html.markdown
index 44c317713c..ec616ea7a1 100644
--- a/website/source/docs/agent/http.html.markdown
+++ b/website/source/docs/agent/http.html.markdown
@@ -22,7 +22,7 @@ Each of the categories and their respective endpoints are documented below.
## Blocking Queries
-Certain endpoints support a feature called a "blocking query". A blocking query
+Certain endpoints support a feature called a "blocking query." A blocking query
is used to wait for a change to potentially take place using long polling.
-Queries that support this will mention it specifically, however the use of this
+Queries that support this will mention it specifically; however, the use of this
@@ -52,7 +52,7 @@ a single endpoint:
/v1/kv/
This is the only endpoint that is used with the Key/Value store.
-It's use depends on the HTTP method. The `GET`, `PUT` and `DELETE` methods
+Its use depends on the HTTP method. The `GET`, `PUT` and `DELETE` methods
are all supported. It is important to note that each datacenter has its
own K/V store, and that there is no replication between datacenters.
-By default the datacenter of the agent is queried, however the dc can
+By default, the datacenter of the agent is queried; however, the dc can
diff --git a/website/source/docs/commands/leave.html.markdown b/website/source/docs/commands/leave.html.markdown
index f897867529..4d44397a17 100644
--- a/website/source/docs/commands/leave.html.markdown
+++ b/website/source/docs/commands/leave.html.markdown
@@ -15,8 +15,8 @@ This is used to ensure other nodes see the agent as "left" instead of
on restarting with a snapshot.
For nodes in server mode, the node is removed from the Raft peer set
-in a graceful manner. This is critical, as in certain situation a
-non-graceful can affect cluster availability.
+in a graceful manner. This is critical, as in certain situations a
+non-graceful leave can affect cluster availability.
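+
+For example, a graceful departure can be triggered on the node itself (a
+sketch; by default an interrupt signal such as Ctrl-C causes the same
+graceful leave):
+
+    $ consul leave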
## Usage
diff --git a/website/source/docs/internals/architecture.html.markdown b/website/source/docs/internals/architecture.html.markdown
index 366eec833c..1154433454 100644
--- a/website/source/docs/internals/architecture.html.markdown
+++ b/website/source/docs/internals/architecture.html.markdown
@@ -25,7 +25,7 @@ clarify what is being discussed:
-* Agent - An agent is the long running daemon on every member of the Consul cluster.
-It is started by running `consul agent`. The agent is able to run in either *client*,
-or *server* mode. Since all nodes must be running an agent, it is simpler to refer to
-the node as either being a client or server, but other are instances of the agent. All
+* Agent - An agent is the long-running daemon on every member of the Consul cluster.
+It is started by running `consul agent`. The agent is able to run in either *client*
+or *server* mode. Since all nodes must be running an agent, it is simpler to refer to
+the node as either being a client or server, but there are other instances of the agent. All
agents can run the DNS or HTTP interfaces, and are responsible for running checks and
keeping services in sync.
@@ -34,12 +34,12 @@ stateless. The only background activity a client performs is taking part of LAN
This has a minimal resource overhead and consumes only a small amount of network bandwidth.
-* Server - An agent that is server mode. When in server mode, there is an expanded set
-of responsibilities including participated in the Raft quorum, maintaining cluster state,
-responding to RPC queries, WAN gossip to other datacenters, forwarding of queries to leaders
+* Server - An agent that is in server mode. When in server mode, there is an expanded set
+of responsibilities including participating in the Raft quorum, maintaining cluster state,
+responding to RPC queries, WAN gossip to other datacenters, and forwarding queries to leaders
or remote datacenters.
-* Datacenter - A data center seems obvious, but there are subtle details such as multiple
-availability zones in EC2. We define a data center to be a networking environment that is
-private, low latency, and high badwidth. This excludes communication that would traverse
+* Datacenter - A datacenter seems obvious, but there are subtle details such as multiple
+availability zones in EC2. We define a datacenter to be a networking environment that is
+private, low latency, and high bandwidth. This excludes communication that would traverse
the public internet.
@@ -53,7 +53,7 @@ as well as our [implementation here](/docs/internals/consensus.html).
[gossip protocol](http://en.wikipedia.org/wiki/Gossip_protocol) that is used for multiple purposes.
Serf provides membership, failure detection, and event broadcast mechanisms. Our use of these
is described more in the [gossip documentation](/docs/internals/gossip.html). It is enough to know
-gossip involves random node-to-node communication, primary over UDP.
+gossip involves random node-to-node communication, primarily over UDP.
* LAN Gossip - This is used to mean that there is a gossip pool, containing nodes that
are all located on the same local area network or datacenter.
@@ -73,16 +73,16 @@ From a 10,000 foot altitude the architecture of Consul looks like this:
-Lets break down this image and describe each piece. First of all we can see
-that there are two datacenters, one and two respectively. Consul has first
-class support for multiple data centers and expects this to be the common case.
+Let's break down this image and describe each piece. First of all, we can see
+that there are two datacenters, one and two respectively. Consul has first-class
+support for multiple datacenters and expects this to be the common case.
-Within each datacenter we have a mixture of clients, and servers. It is expected
+Within each datacenter we have a mixture of clients and servers. It is expected
that there be between three and five servers. This strikes a balance between
availability in the case of failure and performance, as consensus gets progressively
slower as more machines are added. However, there is no limit to the number of clients,
and they can easily scale into the thousands or tens of thousands.
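+
+As a worked example of this tradeoff: a quorum requires a majority, or
+floor(n/2) + 1 servers, so a cluster of 3 tolerates 1 server failure while a
+cluster of 5 tolerates 2.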
All the nodes that are in a datacenter participate in a [gossip protocol](/docs/internals/gossip.html).
-This means is there is a gossip pool that contains all the nodes for a given datacenter. This serves
+This means there is a gossip pool that contains all the nodes for a given datacenter. This serves
-a few purposes: first, there is no need to configure clients with the addresses of servers,
+a few purposes: first, there is no need to configure clients with the addresses of servers;
discovery is done automatically. Second, the work of detecting node failures
is not placed on the servers but is distributed. This makes the failure detection much more
@@ -111,7 +111,7 @@ connection caching and multiplexing, cross-datacenter requests are relatively fa
-At this point we've covered the high level architecture of Consul, but there are much
-more details to each of the sub-systems. The [consensus protocol](/docs/internals/consensus.html) is
+At this point we've covered the high-level architecture of Consul, but there are many
+more details to each of the sub-systems. The [consensus protocol](/docs/internals/consensus.html) is
documented in detail, as is the [gossip protocol](/docs/internals/gossip.html). The [documentation](/docs/internals/security.html)
-for the security model and protocols used for is also available.
+for the security model and protocols used is also available.
-For other details, either consult the code, ask in IRC or reach out to the mailing list.
+For other details, consult the code, ask in IRC, or reach out to the mailing list.
diff --git a/website/source/docs/internals/consensus.html.markdown b/website/source/docs/internals/consensus.html.markdown
index ebd9d5a13f..ec52296593 100644
--- a/website/source/docs/internals/consensus.html.markdown
+++ b/website/source/docs/internals/consensus.html.markdown
@@ -32,8 +32,7 @@ the entries and their order.
* FSM - [Finite State Machine](http://en.wikipedia.org/wiki/Finite-state_machine).
An FSM is a collection of finite states with transitions between them. As new logs
are applied, the FSM is allowed to transition between states. Application of the
-same sequence of logs must result in the same state, meaning non-deterministic
-behavior is not permitted.
+same sequence of logs must result in the same state, meaning behavior must be deterministic.
* Peer set - The peer set is the set of all members participating in log replication.
For Consul's purposes, all server nodes are in the peer set of the local datacenter.
@@ -97,7 +96,7 @@ run 3 or 5 Consul servers per datacenter. This maximizes availability without
greatly sacrificing performance. See below for a deployment table.
-In terms of performance, Raft is comprable to Paxos. Assuming stable leadership,
-a committing a log entry requires a single round trip to half of the cluster.
+In terms of performance, Raft is comparable to Paxos. Assuming stable leadership,
+committing a log entry requires a single round trip to half of the cluster.
Thus performance is bound by disk I/O and network latency. Although Consul is
not designed to be a high-throughput write system, it should handle on the order
of hundreds to thousands of transactions per second depending on network and