40336fd353
The observed bug was that a full restart of a Consul datacenter (servers and clients), in conjunction with a restart of a connect-flavored application with bring-your-own-service-registration logic, would very frequently cause the envoy sidecar service check to never reflect the aliased service. Over the course of investigation, several bugs and unfortunate interactions were corrected:

(1) local.CheckState objects were only shallow copied, but the key piece of data that gets read and updated is one of the things not copied (the underlying Check with a Status field). When the stock code was run with the race detector enabled, this highly-relevant-to-the-test-scenario field was found to be racy.

Changes:

a) Update the existing Clone method to include the Check field.
b) Copy-on-write when those fields need to change rather than incrementally updating them in place.

This made the observed behavior occur slightly less often.

(2) If anything about how the runLocal method for node-local alias check logic was ever flawed, there was no fallback option. Those checks are purely edge-triggered, and failure to properly notice a single edge transition would leave the alias check incorrect until the next flap of the aliased check.

The change was to introduce a fallback timer to act as a control loop that double-checks that the alias check matches the aliased check every minute (borrowing the duration from the non-local alias check logic body). This made the observed behavior eventually go away when it did occur.

(3) Originally I thought there were two main actions involved in the data race:

A. The act of adding the original check (from disk recovery) and its first health evaluation.
B. The act of the HTTP API requests coming in and resetting the local state when re-registering the same services and checks.

It took a while for me to realize that there's a third action at work:

C. The goroutines associated with the original check and the later checks.

The actual sequence of actions causing the bad behavior was that the API actions result in the original check being removed and re-added _without waiting for the original goroutine to terminate_. This means that for brief windows of time during check definition edits there are two goroutines that can be sending updates for the alias check status.

In extremely unlikely scenarios, the original goroutine sees the aliased check start up in `critical` before being removed, but does not get the notification about the nearly immediate update of that check to `passing`. This is interlaced with the new goroutine coming up, initializing its base case to `passing` from the current state, and then listening for new notifications of edge triggers. If the original goroutine "finishes" its update, it then commits one more write of `critical` into the local state and exits, leaving the alias check no longer reflecting the underlying check.

The correction here is to enforce that the old goroutine must terminate before spawning the new one for alias checks.
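To make fix (1) concrete, here is a minimal sketch of a deep Clone plus copy-on-write; the `Check` and `CheckState` types are simplified stand-ins for the real structures in the `agent/local` package, not the actual Consul code:

```go
package local

import "time"

// Check is a stand-in for the health check data the agent tracks.
// Status is the field that was being read and mutated concurrently.
type Check struct {
	CheckID string
	Status  string
	Output  string
}

// CheckState wraps a Check with sync metadata.
type CheckState struct {
	Check    *Check
	Deferred time.Time
}

// Clone returns a deep copy. The original shallow copy shared the
// underlying *Check pointer, so two holders of "copies" still raced
// on Check.Status; copying the pointed-to struct removes the sharing.
func (c *CheckState) Clone() *CheckState {
	c2 := &CheckState{Deferred: c.Deferred}
	if c.Check != nil {
		chk := *c.Check // copy the struct, not just the pointer
		c2.Check = &chk
	}
	return c2
}

// setStatus illustrates copy-on-write: rather than mutating the shared
// Check in place, build a fresh Check and swap the pointer (under
// whatever lock guards the state map), so readers holding the old
// pointer always see a consistent snapshot.
func (c *CheckState) setStatus(status, output string) {
	chk := *c.Check
	chk.Status = status
	chk.Output = output
	c.Check = &chk
}
```

Fix (2) can be sketched as an edge-triggered loop with a one-minute reconciliation ticker; the channel and function names here are illustrative, not the real agent API:

```go
package local

import "time"

// runLocal stays edge-triggered via notifyCh, but a one-minute ticker
// acts as a control loop so a single missed edge can no longer leave
// the alias check wrong until the next flap of the aliased check.
func runLocal(stopCh, notifyCh <-chan struct{}, sync func()) {
	ticker := time.NewTicker(time.Minute)
	defer ticker.Stop()
	for {
		select {
		case <-stopCh:
			return
		case <-notifyCh: // edge trigger: the aliased check changed
		case <-ticker.C: // fallback: periodic reconciliation
		}
		sync() // recompute the alias status from the aliased check
	}
}
```

And a sketch of the goroutine handoff in fix (3), assuming a hypothetical `aliasCheck` wrapper and omitting the real agent's locking:

```go
package local

// aliasCheck tracks the lifecycle of one alias-check goroutine.
type aliasCheck struct {
	stopCh chan struct{} // closed to ask the goroutine to exit
	doneCh chan struct{} // closed by the goroutine once it has exited
}

// replaceAliasCheck stops the old goroutine and waits for it to fully
// terminate before starting its replacement, so a stale goroutine can
// never commit one last `critical` write after the handoff.
func replaceAliasCheck(old *aliasCheck, run func(stop <-chan struct{})) *aliasCheck {
	if old != nil {
		close(old.stopCh)
		<-old.doneCh // block until the previous goroutine is gone
	}
	next := &aliasCheck{
		stopCh: make(chan struct{}),
		doneCh: make(chan struct{}),
	}
	go func() {
		defer close(next.doneCh)
		run(next.stopCh)
	}()
	return next
}
```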
Top-level contents of the repository:

.circleci
.github
acl
agent
api
bench
build-support
command
connect
demo
ipaddr
lib
logger
sdk
sentinel
service_os
snapshot
terraform
test
testrpc
tlsutil
types
ui-v2
vendor
version
website
.dockerignore
.gitignore
.travis.yml
CHANGELOG.md
GNUmakefile
INTERNALS.md
LICENSE
NOTICE.md
README.md
Vagrantfile
go.mod
go.sum
main.go
main_test.go
# Consul
- Website: https://www.consul.io
- Chat: Gitter
- Mailing list: Google Groups
Consul is a tool for service discovery and configuration. Consul is distributed, highly available, and extremely scalable.
Consul provides several key features:
- **Service Discovery** - Consul makes it simple for services to register themselves and to discover other services via a DNS or HTTP interface. External services such as SaaS providers can be registered as well.
- **Health Checking** - Health Checking enables Consul to quickly alert operators about any issues in a cluster. The integration with service discovery prevents routing traffic to unhealthy hosts and enables service level circuit breakers.
- **Key/Value Storage** - A flexible key/value store enables storing dynamic configuration, feature flagging, coordination, leader election and more. The simple HTTP API makes it easy to use anywhere; a small Go client sketch follows this list.
- **Multi-Datacenter** - Consul is built to be datacenter aware, and can support any number of regions without complex configuration.
- **Service Segmentation** - Consul Connect enables secure service-to-service communication with automatic TLS encryption and identity-based authorization.
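As a taste of the HTTP API, here is a minimal sketch using the official Go client, github.com/hashicorp/consul/api, against a local agent; the service name "web" and the key "config/motd" are illustrative:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local agent (default address 127.0.0.1:8500).
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Service discovery: register this process as a service.
	err = client.Agent().ServiceRegister(&api.AgentServiceRegistration{
		Name: "web", // illustrative service name
		Port: 8080,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Key/value storage: write and read back a config value.
	kv := client.KV()
	if _, err := kv.Put(&api.KVPair{Key: "config/motd", Value: []byte("hello")}, nil); err != nil {
		log.Fatal(err)
	}
	pair, _, err := kv.Get("config/motd", nil)
	if err != nil || pair == nil {
		log.Fatal("kv get failed")
	}
	fmt.Printf("%s = %s\n", pair.Key, pair.Value)
}
```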
Consul runs on Linux, Mac OS X, FreeBSD, Solaris, and Windows. A commercial version called Consul Enterprise is also available.
Please note: We take Consul's security and our users' trust very seriously. If you believe you have found a security issue in Consul, please responsibly disclose by contacting us at security@hashicorp.com.
## Quick Start
An extensive quick start is viewable on the Consul website:
https://www.consul.io/intro/getting-started/install.html
## Documentation
Full, comprehensive documentation is viewable on the Consul website:
https://www.consul.io/docs
## Contributing
Thank you for your interest in contributing! Please refer to CONTRIBUTING.md for guidance.