Add support for an X-Consul-Token HTTP request header to specify the
token with which this request should be fulfilled. The header would have
precedence over the responding Agent's default token, but would have
lower precedence than a token specified in the query string.
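A minimal sketch of that precedence, assuming a hypothetical resolveToken()
helper (the actual HTTP handler code in Consul is organized differently):

```go
package agent

import "net/http"

// resolveToken illustrates the intended precedence for picking the ACL token
// of a request: ?token= query parameter, then the X-Consul-Token header, then
// the agent's default token. (Hypothetical helper, not Consul's actual code.)
func resolveToken(req *http.Request, agentDefault string) string {
	if tok := req.URL.Query().Get("token"); tok != "" {
		return tok
	}
	if tok := req.Header.Get("X-Consul-Token"); tok != "" {
		return tok
	}
	return agentDefault
}
```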
Two of the changes are in tests; the one of consequence is in the API.
As explained in #1308, this can cause conflicts with downstream programs.
Fixes #1308.
See: https://github.com/hashicorp/consul/issues/1173
Reasoning: at some point during Consul's development, Pause()/Resume()
and PauseSync()/ResumeSync() were added to protect larger changes to
the agent's localState. A few of the places they try to protect are:
- (a *Agent) AddService(...) # part of the method
- (c *Command) handleReload(...) # almost the whole method
- (l *localState) antiEntropy(...) # isPaused() prevents syncChanges()
The main problem is that, in the middle of handleReload(...)'s
critical section, it indirectly calls AddService(...) via
loadServices(). AddService() in turn calls Pause() to protect itself
against syncChanges(). At the end of AddService() a deferred call to
Resume() is made.
With the current implementation, this releases the isPaused() "lock"
in the middle of handleReload(), allowing antiEntropy to kick in while
the configuration reload is still in progress. Specifically, almost
all services and probably all checks are unloaded at the moment
syncChanges() is allowed to run.
This in turn can cause massive service/check de-/re-registration,
and since checks are registered in the critical state by default,
a majority of the services on a node can be marked as failing.
This is made worse by automation that calls `consul reload` at nearly
the same time on many nodes across the cluster.
This change essentially turns Pause()/Resume() into the P()/V()
operations of a garden-variety counting semaphore: Pause() may be
called multiple times, and isPaused() is released only after all
matching (deferred) Resume() calls have been made as well.
TODO/NOTE: as with many semaphore implementations, it might be reasonable
to panic() if l.paused ever becomes negative.
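A minimal sketch of the counting scheme described above, assuming a simple
mutex-guarded counter; the real localState holds far more state and may use
different primitives:

```go
package agent

import "sync"

// localState is reduced here to the fields needed to show the counting
// Pause()/Resume() scheme; the real struct carries much more state.
type localState struct {
	mu     sync.Mutex
	paused int // number of outstanding Pause() calls
}

// Pause increments the counter; nested callers (e.g. handleReload ->
// loadServices -> AddService) each take their own "reference" on the pause.
func (l *localState) Pause() {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.paused++
}

// Resume decrements the counter. Anti-entropy stays paused until every
// matching (typically deferred) Resume() has run.
func (l *localState) Resume() {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.paused--
	if l.paused < 0 {
		// See the TODO/NOTE above: a negative count means unbalanced calls.
		panic("localState: Resume() called more often than Pause()")
	}
}

// isPaused reports whether antiEntropy should skip syncChanges().
func (l *localState) isPaused() bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	return l.paused > 0
}
```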
The previous fix to `consul lock` (commit 6875e8d) didn't completely
eliminate the race that could occur if the lock was acquired around the
same time SIGTERM was received: it was still possible for
Run() to spawn the process via startChild() after killChild() had
released the shared mutex.
Now, when SIGTERM is received, we acquire a mutex that prevents
spawning a new process, and we never release it.
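A stripped-down sketch of that pattern, with a hypothetical childManager and
spawnLock standing in for the command's real state (the actual `consul lock`
code signals the child more gracefully and tracks additional bookkeeping):

```go
package lock

import (
	"os/exec"
	"sync"
)

// childManager is a simplified stand-in for the state the lock command keeps
// around its child process; the struct and field names here are illustrative.
type childManager struct {
	spawnLock sync.Mutex
	child     *exec.Cmd
}

// startChild spawns the watched process while holding spawnLock, so it can
// never race with a concurrent killChild.
func (c *childManager) startChild(name string, args ...string) error {
	c.spawnLock.Lock()
	defer c.spawnLock.Unlock()
	cmd := exec.Command(name, args...)
	if err := cmd.Start(); err != nil {
		return err
	}
	c.child = cmd
	return nil
}

// killChild runs once SIGTERM is received. It acquires spawnLock and
// deliberately never releases it: any startChild that arrives afterwards
// blocks forever instead of spawning a process that would outlive the lock.
func (c *childManager) killChild() {
	c.spawnLock.Lock() // intentionally never unlocked
	if c.child != nil && c.child.Process != nil {
		_ = c.child.Process.Kill()
	}
}
```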
We've tested this fix pretty thoroughly and believe it completely
resolves the issue.