diff --git a/website/source/intro/vs/index.html.markdown b/website/source/intro/vs/index.html.markdown
index e262e5ef69..3100524cda 100644
--- a/website/source/intro/vs/index.html.markdown
+++ b/website/source/intro/vs/index.html.markdown
@@ -1,16 +1,16 @@
 ---
 layout: "intro"
-page_title: "Serf vs. Other Software"
+page_title: "Consul vs. Other Software"
 sidebar_current: "vs-other"
 ---
 
-# Serf vs. Other Software
+# Consul vs. Other Software
 
-The problems Serf solves are not new; they've existed for a long time.
-It should come as no surprise then that there are other options available
-to solve some of these problems. In this section, we compare Serf to some
-other options. In most cases, Serf can be used alongside these other systems, strengthening
-them in areas they are weak.
+The problems Consul solves are varied, but each individual feature has been
+solved by many different systems. Although there is no single system that provides
+all the features of Consul, there are other options available to solve some of these problems.
+In this section, we compare Consul to some other options. In most cases, Consul is not
+mutually exclusive with any other system.
 
-Use the navigation to the left to read the comparison of Serf to specific
+Use the navigation to the left to read the comparison of Consul to specific
 systems.
diff --git a/website/source/intro/vs/zookeeper.html.markdown b/website/source/intro/vs/zookeeper.html.markdown
index b3a8090036..b3c8b604b6 100644
--- a/website/source/intro/vs/zookeeper.html.markdown
+++ b/website/source/intro/vs/zookeeper.html.markdown
@@ -1,44 +1,61 @@
 ---
 layout: "intro"
-page_title: "Serf vs. ZooKeeper, doozerd, etcd"
+page_title: "Consul vs. ZooKeeper, doozerd, etcd"
 sidebar_current: "vs-other-zk"
 ---
 
-# Serf vs. ZooKeeper, doozerd, etcd
+# Consul vs. ZooKeeper, doozerd, etcd
 
-ZooKeeper, doozerd and etcd are all similar in their client/server
-architecture. All three have server nodes that require a quorum of
-nodes to operate (usually a simple majority). They are strongly consistent,
-and expose various primitives that can be used through client libraries within
-applications to build complex distributed systems.
+ZooKeeper, doozerd and etcd are all similar in their architecture.
+All three have server nodes that require a quorum of nodes to operate (usually a simple majority).
+They are strongly consistent, and expose various primitives that can be used
+through client libraries within applications to build complex distributed systems.
 
-Serf has a radically different architecture based on gossip and provides a
-smaller feature set. Serf only provides membership, failure detection,
-and user events. Serf is designed to operate under network partitions
-and embraces eventual consistency. Designed as a tool, it is friendly
-for both system administrators and application developers.
+Consul's server nodes work in a similar way within a single datacenter.
+In each datacenter, Consul servers require a quorum to operate
+and provide strong consistency. However, Consul has native support for multiple datacenters,
+as well as a more complex gossip system that links server nodes and clients.
 
-ZooKeeper et al. by contrast are much more complex, and cannot be used directly
-as a tool. Application developers must use libraries to build the features
-they need, although some libraries exist for common patterns. Most failure
-detection schemes built on these systems also have intrinsic scalability issues.
-
-Most naive failure detection schemes depend on heartbeating, which use
-periodic updates and timeouts. These schemes require work linear to
-the number of nodes and place the demand on a fixed number of servers.
-Additionally, the failure detection window is at least as long as the timeout,
-meaning that in many cases failures may not be detected for a long time.
-Additionally, ZooKeeper ephemeral nodes require that many active connections
-be maintained to a few nodes.
+If any of these systems are used for pure key/value storage, then they all
+roughly provide the same semantics. Reads are strongly consistent, and availability
+is sacrificed for consistency in the face of a network partition. However, the differences
+become more apparent when these systems are used for advanced cases.
 
-The strong consistency provided by these systems is essential for building leader
-election and other types of coordination for distributed systems, but it limits
-their ability to operate under network partitions. At a minimum, if a majority of
-nodes are not available, writes are disallowed. Since a failure is indistinguishable
-from a slow response, the performance of these systems may rapidly degrade
-under certain network conditions. All of these issues can be highly
-problematic when partition tolerance is needed, for example in a service
-discovery layer.
+The semantics provided by these systems are attractive for building
+service discovery systems. ZooKeeper et al. provide only a primitive K/V store,
+and require that application developers build their own system to provide service
+discovery. Consul provides an opinionated framework for service discovery, and
+eliminates the guesswork and development effort. Clients simply register services
+and then perform discovery using a DNS or HTTP interface. Other systems
+require a home-rolled solution.
+
+A compelling service discovery framework must incorporate health checking and the
+possibility of failures as well. It is not useful to know that Node A
+provides the Foo service if that node has failed or the service crashed. Naive systems
+make use of heartbeating, using periodic updates and TTLs. These schemes require work linear
+to the number of nodes and place the demand on a fixed number of servers. Additionally, the
+failure detection window is at least as long as the TTL. ZooKeeper provides ephemeral
+nodes, which are K/V entries that are removed when a client disconnects. These are more
+sophisticated than a heartbeat system, but also have inherent scalability issues and add
+client-side complexity. All clients must maintain active connections to the ZooKeeper servers
+and perform keep-alives. Additionally, this requires "thick clients", which are difficult
+to write and often result in difficult-to-debug issues.
+
+Consul uses a very different architecture for health checking. Instead of having only
+server nodes, Consul runs a client on every node in the cluster.
+These clients are part of a [gossip pool](/docs/internals/gossip.html), which
+serves several functions, including distributed health checking. The gossip protocol implements
+an efficient failure detector that can scale to clusters of any size without concentrating
+the work on any select group of servers. The clients also enable a much richer set of health
+checks to be run locally, whereas ZooKeeper ephemeral nodes are a very primitive check of
+liveness. Clients can check that a web server is returning 200 status codes, that memory
+utilization is not critical, that there is sufficient disk space, etc.
+The Consul clients expose a simple HTTP interface and avoid exposing the
+complexity of the system to clients in the way that ZooKeeper does.
+
+Consul provides first-class support for service discovery, health checking,
+K/V storage, and multiple datacenters. To support anything more than simple K/V storage,
+all these other systems require additional tools and libraries to be built on
+top. By using client nodes, Consul provides a simple API that only requires thin clients.
+Additionally, the API can be avoided entirely by using configuration files and the
+DNS interface to have a complete service discovery solution with no development at all.
 
-Additionally, Serf is not mutually exclusive with any of these strongly
-consistent systems. Instead, they can be used in combination to create systems
-that are more scalable and fault tolerant, without sacrificing features.
diff --git a/website/source/layouts/intro.erb b/website/source/layouts/intro.erb
index cdbabb5bd8..771a91302d 100644
--- a/website/source/intro.erb
+++ b/website/source/layouts/intro.erb
@@ -3,7 +3,7 @@
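
The registration flow the new zookeeper.html.markdown text describes ("clients simply register services and then perform discovery using a DNS or HTTP interface") can be sketched against the agent's HTTP API. Below is a minimal sketch in Go, assuming a local agent on the default `127.0.0.1:8500` and the `/v1/agent/service/register` endpoint; the `web` service name, port, and check script are illustrative placeholders, not part of the diff.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// A minimal service definition with a locally run health check.
	// The name, port, and check script are hypothetical examples.
	payload := []byte(`{
	  "Name": "web",
	  "Port": 8000,
	  "Check": {
	    "Script": "curl -sf http://localhost:8000/health",
	    "Interval": "10s"
	  }
	}`)

	// Register with the local Consul agent; assumes the default HTTP address.
	req, err := http.NewRequest("PUT",
		"http://127.0.0.1:8500/v1/agent/service/register",
		bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("register status:", resp.Status)
}
```

Note how the check runs on the node itself, which is the "richer set of health checks" point above: the agent executes the script locally and gossips the result, rather than inferring liveness from a held connection.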
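For the DNS side of discovery, a sketch of how thin the client can be: a single SRV lookup returns the nodes providing a service, with no Consul-specific library involved. This assumes the system resolver forwards `*.consul` queries to a Consul agent's DNS interface (port 8600 by default), and `web.service.consul` is again a hypothetical service name.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Look up SRV records for the hypothetical "web" service through
	// Consul's DNS interface. With empty service and proto arguments,
	// LookupSRV queries the name directly, so this resolves
	// "web.service.consul" via the system resolver.
	_, addrs, err := net.LookupSRV("", "", "web.service.consul")
	if err != nil {
		panic(err)
	}
	for _, srv := range addrs {
		// Each record carries the providing node's name and the service port.
		fmt.Printf("%s:%d\n", srv.Target, srv.Port)
	}
}
```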