diff --git a/website/source/intro/vs/skydns.html.markdown b/website/source/intro/vs/skydns.html.markdown
new file mode 100644
index 0000000000..125f9e3b01
--- /dev/null
+++ b/website/source/intro/vs/skydns.html.markdown
@@ -0,0 +1,39 @@
+---
+layout: "intro"
+page_title: "Consul vs. SkyDNS"
+sidebar_current: "vs-other-skydns"
+---
+
+# Consul vs. SkyDNS
+
+SkyDNS is a relatively new tool designed to solve the service discovery problem.
+It uses multiple central servers that are strongly consistent and
+fault tolerant. Nodes register services using an HTTP API, and
+queries can be made over HTTP or DNS to perform discovery.
+
+Consul is very similar, but provides a superset of features. Consul
+also relies on multiple central servers to provide strong consistency
+and fault tolerance. Nodes can use an HTTP API or an agent to
+register services, and queries are made over HTTP or DNS.
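+
+As a concrete illustration, here is a minimal sketch of that workflow, using
+Python purely for brevity. It assumes a local Consul agent listening on the
+default HTTP port (127.0.0.1:8500) and a hypothetical "web" service; treat it
+as an example of using the HTTP API rather than a full reference.
+
+```python
+import json
+import urllib.request
+
+CONSUL = "http://127.0.0.1:8500"  # assumed local agent address
+
+# Register a hypothetical "web" service with the local agent.
+service = {"Name": "web", "Port": 80}
+req = urllib.request.Request(
+    f"{CONSUL}/v1/agent/service/register",
+    data=json.dumps(service).encode(),
+    method="PUT",
+)
+urllib.request.urlopen(req)
+
+# Discover providers of the service through the catalog over HTTP.
+with urllib.request.urlopen(f"{CONSUL}/v1/catalog/service/web") as resp:
+    print(json.loads(resp.read()))
+
+# The same lookup is available over DNS as "web.service.consul".
+```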
+
+However, the systems differ in many ways. Consul provides a much richer
+health checking framework, with support for arbitrary checks and
+a highly scalable failure detection scheme. SkyDNS relies on naive
+heartbeating and TTLs, which have known scalability issues. Additionally,
+the heartbeat only provides a limited liveness check.
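+
+As a rough sketch of what an arbitrary check can look like, the snippet below
+registers a script-based check through the agent's HTTP API. The script path
+and interval are purely hypothetical, and the exact field names may vary
+between Consul versions.
+
+```python
+import json
+import urllib.request
+
+CONSUL = "http://127.0.0.1:8500"  # assumed local agent address
+
+# Register an arbitrary script check: the agent runs the script on an
+# interval and uses its exit code to decide whether the service is healthy.
+check = {
+    "Name": "web-health",
+    "Script": "/usr/local/bin/check_web.sh",  # hypothetical check script
+    "Interval": "10s",
+}
+req = urllib.request.Request(
+    f"{CONSUL}/v1/agent/check/register",
+    data=json.dumps(check).encode(),
+    method="PUT",
+)
+urllib.request.urlopen(req)
+```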
+
+Multiple datacenters can be supported by using "regions" in SkyDNS;
+however, the data is managed and queried from a single cluster. If servers
+are split between datacenters, the replication protocol will suffer from
+very long commit times. If all the SkyDNS servers are instead placed in a central
+datacenter, then connectivity issues can cause entire datacenters to lose
+availability. Additionally, even without a connectivity issue, query performance
+will suffer as requests must always be performed in a remote datacenter.
+
+Consul supports multiple datacenters out of the box, and it purposely
+scopes the managed data to be per-datacenter. This means each datacenter
+runs an independent cluster of servers, and requests are forwarded to remote
+datacenters only when necessary. As a result, requests for services within a
+datacenter never go over the WAN, and connectivity issues between datacenters
+do not affect availability within a datacenter.
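+
+As a small, hedged example of that forwarding behavior (again assuming a local
+agent on 127.0.0.1:8500 and a hypothetical remote datacenter named "dc2"), a
+query can explicitly target another datacenter:
+
+```python
+import json
+import urllib.request
+
+CONSUL = "http://127.0.0.1:8500"  # assumed local agent address
+
+# Queries without a "dc" parameter stay within the local datacenter. Naming a
+# remote datacenter causes the local servers to forward the request over the WAN.
+with urllib.request.urlopen(f"{CONSUL}/v1/catalog/service/web?dc=dc2") as resp:
+    print(json.loads(resp.read()))
+
+# The DNS form of the same query is "web.service.dc2.consul".
+```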
+
diff --git a/website/source/intro/vs/smartstack.html.markdown b/website/source/intro/vs/smartstack.html.markdown
new file mode 100644
index 0000000000..f32cadef7f
--- /dev/null
+++ b/website/source/intro/vs/smartstack.html.markdown
@@ -0,0 +1,57 @@
+---
+layout: "intro"
+page_title: "Consul vs. SmartStack"
+sidebar_current: "vs-other-smartstack"
+---
+
+# Consul vs. SmartStack
+
+SmartStack is another tool which tackles the service discovery problem.
+It has a rather unique architecture with four major components: ZooKeeper,
+HAProxy, Synapse, and Nerve. The ZooKeeper servers are responsible for storing
+cluster state in a consistent and fault-tolerant manner. Each node in the
+SmartStack cluster then runs both Nerve and Synapse. Nerve is responsible for
+running health checks against a service and registering it with the ZooKeeper
+servers. Synapse queries ZooKeeper for service providers and dynamically
+configures HAProxy. Finally, clients speak to HAProxy, which performs health
+checking and load balancing across service providers.
+
+Consul is a much simpler and more contained system, as it does not rely on any
+external components. Consul uses an integrated [gossip protocol](/docs/internals/gossip.html)
+to track all nodes and perform server discovery. This means that server addresses
+do not need to be hardcoded and updated fleet-wide on changes, unlike SmartStack.
+
+Service registration for both Consul and Nerve can be done with a configuration file,
+but Consul also supports an API to dynamically change the services and checks that are in use.
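+
+As a minimal sketch of that dynamic API (the "web" service, its ID, and port
+are hypothetical, and a local agent on 127.0.0.1:8500 is assumed), a service
+can be added and removed at runtime without touching any configuration files:
+
+```python
+import json
+import urllib.request
+
+CONSUL = "http://127.0.0.1:8500"  # assumed local agent address
+
+def put(path, body=None):
+    data = json.dumps(body).encode() if body is not None else None
+    req = urllib.request.Request(f"{CONSUL}{path}", data=data, method="PUT")
+    return urllib.request.urlopen(req)
+
+# Add a service at runtime...
+put("/v1/agent/service/register", {"ID": "web-1", "Name": "web", "Port": 8080})
+
+# ...and remove it again later, all through the HTTP API.
+put("/v1/agent/service/deregister/web-1")
+```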
+
+For discovery, SmartStack clients must use HAProxy, requiring that Synapse be
+configured with all desired endpoints in advance. Consul clients instead
+use the DNS or HTTP APIs without any configuration needed in advance. Consul
+also provides a "tag" abstraction, allowing services to provide metadata such
+as versions, primary/secondary designations, or opaque labels that can be used for
+filtering. Clients can then request only the service providers which have
+matching tags.
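+
+For a rough illustration of tag-based filtering (the "v2" tag and "web"
+service are hypothetical, and a local agent on 127.0.0.1:8500 is assumed),
+a client can restrict a lookup to matching providers:
+
+```python
+import json
+import urllib.request
+
+CONSUL = "http://127.0.0.1:8500"  # assumed local agent address
+
+# Tags are attached at registration time via the service's "Tags" field.
+# Here we ask the catalog only for "web" providers carrying the "v2" tag.
+with urllib.request.urlopen(f"{CONSUL}/v1/catalog/service/web?tag=v2") as resp:
+    print(json.loads(resp.read()))
+
+# The DNS form of the same filtered lookup is "v2.web.service.consul".
+```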
+
+The systems also differ in how they manage health checking.
+Nerve performs local health checks in a manner similar to Consul agents.
+However, Consul maintains separate catalog and health systems, which allow
+operators to see which nodes are in each service pool and gain insight into
+failing checks. Nerve simply deregisters nodes on failed checks, providing
+limited operator insight. Synapse also configures HAProxy to perform
+additional health checks, which causes all potential service clients to check
+for liveness. With large fleets, this N-to-N style health checking may be
+prohibitively expensive.
+
+Consul generally provides a much richer health checking system. Consul supports
+Nagios-style plugins, enabling a vast catalog of checks to be used. It also
+allows for both service- and host-level checks. There is also a "dead man's switch"
+check that allows applications to easily integrate custom health checks. All of this
+is integrated into the catalog and health systems, with APIs enabling operators
+to gain insight into the broader system.
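+
+As a small, hedged sketch of the "dead man's switch" (TTL) style of check (the
+check ID, name, and 30-second window are hypothetical, and a local agent on
+127.0.0.1:8500 is assumed):
+
+```python
+import json
+import urllib.request
+
+CONSUL = "http://127.0.0.1:8500"  # assumed local agent address
+
+def put(path, body=None):
+    data = json.dumps(body).encode() if body is not None else None
+    req = urllib.request.Request(f"{CONSUL}{path}", data=data, method="PUT")
+    return urllib.request.urlopen(req)
+
+# Register a TTL check: if the application stops reporting in, the check
+# flips to critical after 30 seconds.
+put("/v1/agent/check/register", {"ID": "app-ttl", "Name": "app alive", "TTL": "30s"})
+
+# The application "heartbeats" by periodically hitting the pass endpoint.
+put("/v1/agent/check/pass/app-ttl")
+```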
+
+In addition to service discovery and health checking, Consul also provides
+an integrated key/value store for configuration and multi-datacenter support.
+While it may be possible to configure SmartStack for multiple datacenters,
+the central ZooKeeper cluster would be a serious impediment to a fault-tolerant
+deployment.
+