diff --git a/website/source/intro/index.html.markdown b/website/source/intro/index.html.markdown index d3658a373e..e9884d9c1c 100644 --- a/website/source/intro/index.html.markdown +++ b/website/source/intro/index.html.markdown @@ -14,7 +14,7 @@ of a reference for all available features. ## What is Consul? -Consul has multiple components, but as a whole, it is tool for discovering +Consul has multiple components, but as a whole, it is a tool for discovering and configuring services in your infrastructure. It provides several key features: @@ -30,7 +30,7 @@ key features: discovery components to route traffic away from unhealthy hosts. * **Key/Value Store**: Applications can make use of Consul's hierarchical key/value - store for any number of purposes including dynamic configuration, feature flagging, + store for any number of purposes, including dynamic configuration, feature flagging, coordination, leader election, etc. The simple HTTP API makes it easy to use. * **Multi Datacenter**: Consul supports multiple datacenters out of the box. This @@ -70,6 +70,6 @@ forward the request to the remote datacenter and return the result. ## Next Steps See the page on [how Consul compares to other software](/intro/vs/index.html) -to see just how it fits into your existing infrastructure. Or continue onwards with +to see how it fits into your existing infrastructure. Or continue onwards with the [getting started guide](/intro/getting-started/install.html) to get Consul up and running and see how it works. diff --git a/website/source/intro/vs/chef-puppet.html.markdown b/website/source/intro/vs/chef-puppet.html.markdown index 4f007eb681..dca6b7bb54 100644 --- a/website/source/intro/vs/chef-puppet.html.markdown +++ b/website/source/intro/vs/chef-puppet.html.markdown @@ -24,8 +24,8 @@ Consul is designed specifically as a service discovery tool. As such, it is much more dynamic and responsive to the state of the cluster. 
Nodes can register and deregister the services they provide, enabling dependent applications and services to rapidly discover all providers. By using the -integrating health checking, Consul can route traffic away from unhealthy -nodes, and allowing systems and services to gracefully recover. Static configuration +integrated health checking, Consul can route traffic away from unhealthy +nodes, allowing systems and services to gracefully recover. Static configuration that may be provided by configuraiton management tools can be moved into the dynamic key/value store. This allows application configuration to be updated without a slow convergence run. Lastly, because each datacenter runs indepedently, diff --git a/website/source/intro/vs/nagios-sensu.html.markdown b/website/source/intro/vs/nagios-sensu.html.markdown index 760c5d5b5f..d3bad9284f 100644 --- a/website/source/intro/vs/nagios-sensu.html.markdown +++ b/website/source/intro/vs/nagios-sensu.html.markdown @@ -12,7 +12,7 @@ to quickly notify operators when an issue occurs. Nagios uses a group of central servers that are configured to perform checks on remote hosts. This design makes it difficult to scale Nagios, as large fleets quickly reach the limit of vertical scaling, and Nagios -does not easily horizontal scale either. Nagios is also notoriously +does not easily scale horizontally. Nagios is also notoriously difficult to use with modern DevOps and configuration management tools, as local configurations must be updated when remote servers are added or removed. @@ -31,10 +31,10 @@ a burden on central servers. The status of checks is maintained by the Consul servers, which are fault tolerant and have no single point of failure. Lastly, Consul can scale to vastly more checks because it relies on edge triggered updates. This means only when a check transitions from "passing" to "failing" -or visa versa an update is triggered. +or vice versa is an update triggered. 
In a large fleet, the majority of checks are passing, and even the minority -that are failing are persistent. By capturing only changes, Consul reduces +that are failing are persistent. By capturing changes only, Consul reduces the amount of networking and compute resources used by the health checks, allowing the system to be much more scalable. diff --git a/website/source/intro/vs/smartstack.html.markdown b/website/source/intro/vs/smartstack.html.markdown index f32cadef7f..f28f2fadcd 100644 --- a/website/source/intro/vs/smartstack.html.markdown +++ b/website/source/intro/vs/smartstack.html.markdown @@ -34,7 +34,7 @@ matching tags. The systems also differ in how they manage health checking. Nerve's performs local health checks in a manner similar to Consul agents. -However, Consul maintains seperate catalog and health systems, which allow +However, Consul maintains separate catalog and health systems, which allow operators to see which nodes are in each service pool, as well as providing insight into failing checks. Nerve simply deregisters nodes on failed checks, providing limited operator insight. Synapse also configures HAProxy to perform diff --git a/website/source/intro/vs/zookeeper.html.markdown b/website/source/intro/vs/zookeeper.html.markdown index d6a313e046..6355897652 100644 --- a/website/source/intro/vs/zookeeper.html.markdown +++ b/website/source/intro/vs/zookeeper.html.markdown @@ -18,7 +18,7 @@ as well as a more complex gossip system that links server nodes and clients. If any of these systems are used for pure key/value storage, then they all roughly provide the same semantics. Reads are strongly consistent, and availability -is sacraficed for consistency in the face of a network partition. However, the differences +is sacrificed for consistency in the face of a network partition. However, the differences become more apparent when these systems are used for advanced cases. 
The semantics provided by these systems are attractive for building @@ -47,15 +47,15 @@ These clients are part of a [gossip pool](/docs/internals/gossip.html), which serves several functions including distributed health checking. The gossip protocol implements an efficient failure detector that can scale to clusters of any size without concentrating the work on any select group of servers. The clients also enable a much richer set of health checks to be run locally, -where ZooKeeper ephemeral nodes are a very primitve check of liveness. Clients can check that +whereas ZooKeeper ephemeral nodes are a very primitive check of liveness. Clients can check that a web server is returning 200 status codes, that memory utilization is not critical, there is sufficient disk space, etc. The Consul clients expose a simple HTTP interface and avoid exposing the complexity of the system is to clients in the same way as ZooKeeper. Consul provides first class support for service discovery, health checking, -K/V storage, and multiple datacenters. To support anything more that simple K/V storage, +K/V storage, and multiple datacenters. To support anything more than simple K/V storage, all these other systems require additional tools and libraries to be built on -top. By using client nodes, Consul provides a simple API than only requires thin clients. +top. By using client nodes, Consul provides a simple API that only requires thin clients. Additionally, the API can be avoided entirely by using configuration files and the DNS interface to have a complete service discovery solution with no development at all.
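The intro hunk above notes that the K/V store's "simple HTTP API makes it easy to use." As a minimal sketch of consuming that API (the key `web/max-connections` and its value are hypothetical, not from the docs): a `GET` on Consul's `/v1/kv/<key>` endpoint returns a JSON array of entries whose `Value` fields are base64-encoded, so a client decodes them after parsing:

```python
import base64
import json

# Sample payload shaped like a response from Consul's K/V endpoint
# (GET /v1/kv/<key>): a JSON array of entries, each with a base64-encoded
# "Value". The key and value here are made up for illustration.
raw = '[{"Key": "web/max-connections", "Value": "NTEy", "Flags": 0}]'

for entry in json.loads(raw):
    value = base64.b64decode(entry["Value"]).decode("utf-8")
    print(f'{entry["Key"]} = {value}')  # web/max-connections = 512
```

The base64 layer is how Consul transports arbitrary (possibly binary) values safely inside JSON; writes go to the same path via an HTTP `PUT` with the raw value as the request body.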