Commit Graph

1439 Commits

Paul Banks 0b5a078b95
Optimize health watching to single chan/goroutine. (#5449)
Refs #4984.

Watching chans for every node we touch in a health query is wasteful. #4984 shows that if there are more than 682 service instances we always fall back to watching all services, which kills performance.

We already have a record in MemDB that is reliably updated whenever the service health result should change, thanks to per-service watch indexes.

So in general, provided there is at least one service instance and we actually have a service index for it (we always do now), we only ever need to watch a single channel.

This saves us from ever falling back to the general index and causing the performance cliff in #4984, and it also means fewer goroutines and less work done for every blocking health query.

It also saves some allocations made during the query because we no longer have to populate a WatchSet with 3 chans per service instance, which saves the internal map allocation.

This passes all state store tests except the one that explicitly checked for the fallback behaviour we've now optimized away, and in general it seems safe.
2019-03-15 20:18:48 +00:00
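A minimal, hypothetical sketch of the single-channel watch pattern the commit above describes; `serviceIndex` and `blockingHealthQuery` are illustrative stand-ins, not Consul's actual state-store code:

```
package main

import (
	"context"
	"fmt"
	"time"
)

// serviceIndex is a stand-in for the per-service watch index kept in MemDB:
// one channel that is closed whenever any health result for the service changes.
type serviceIndex struct {
	watchCh chan struct{}
}

// blockingHealthQuery shows the shape of the optimization: instead of
// collecting three channels per service instance into a WatchSet, the query
// blocks on the single per-service index channel (or the query deadline).
func blockingHealthQuery(ctx context.Context, idx *serviceIndex) error {
	select {
	case <-idx.watchCh: // per-service index changed; recompute the result
		return nil
	case <-ctx.Done(): // blocking-query timeout elapsed with no change
		return ctx.Err()
	}
}

func main() {
	idx := &serviceIndex{watchCh: make(chan struct{})}

	// Simulate a health change for the service after 100ms.
	go func() {
		time.Sleep(100 * time.Millisecond)
		close(idx.watchCh)
	}()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	fmt.Println("query returned:", blockingHealthQuery(ctx, idx))
}
```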
Pierre Souchay 88d4383410 Ensure we remove Connect proxy before deregistering the service itself (#5482)
This will fix https://github.com/hashicorp/consul/issues/5351
2019-03-15 20:14:46 +00:00
Valentin Fritz 21f149de8b Fix checks removal when removing service (#5457)
Fix my recently discovered issue described here: #5456
2019-03-14 11:02:49 -04:00
R.B. Boyer cd96af4fc0
acl: reduce complexity of token resolution process with alternative singleflighting (#5480)
acl: reduce complexity of token resolution process with alternative singleflighting

Switches acl resolution to use golang.org/x/sync/singleflight. For the
identity/legacy lookups this is a drop-in replacement with the same
overall approach to request coalescing.

For policies this is technically a change in behavior, but when
considered holistically is approximately performance neutral (with the
benefit of less code).

There are two goals with this blob of code (speaking specifically of
policy resolution here):

  1) Minimize cross-DC requests.
  2) Minimize client-to-server LAN requests.

The previous iteration of this code was optimizing for the case of many
possibly different tokens being resolved concurrently that have a
significant overlap in linked policies such that deduplication would be
worth the complexity. While this is laudable there are some things to
consider that can help to adjust expectations:

  1) For v1.4+ policies are always replicated, and once a single policy
  shows up in a secondary DC the replicated data is considered
  authoritative for requests made in that DC. This means that our
  earlier concerns about minimizing cross-DC requests are irrelevant
  because there will be no cross-DC policy reads that occur.

  2) For Server nodes the in-memory ACL policy cache is capped at zero,
  meaning it has no caching. Only Client nodes run with a cache. This
  means that instead of having an entire DC's worth of tokens (what a
  Server might see) that can have policy resolutions coalesced, these
  nodes will only ever see node-local token resolutions. In a
  reasonable worst-case scenario where a scheduler like Kubernetes has
  "filled" a node with Connect services, even that will only schedule
  ~100 connect services per node. If every service has a unique token
  there will only be 100 tokens to coalesce and even then those requests
  have to occur concurrently AND be hitting an empty consul cache.

Instead of seeing a great coalescing opportunity for cutting down on
redundant Policy resolutions, in practice it's far more likely given
node densities that you'd see requests for the same token concurrently
than you would for two tokens sharing a policy concurrently (to a degree
that would warrant the overhead of the current variation of
singleflighting).

Given that, this patch switches the Policy resolution process to
singleflight only by the requesting token (but keeps the cache keyed by policy).
2019-03-14 09:35:34 -05:00
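A small sketch of request coalescing with golang.org/x/sync/singleflight, in the spirit of the change above; `resolveTokenRemote` is a hypothetical stand-in for the server-side RPC:

```
package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

// resolveTokenRemote is a hypothetical stand-in for the RPC that resolves a
// token's identity and policies on the servers.
func resolveTokenRemote(secretID string) (interface{}, error) {
	time.Sleep(50 * time.Millisecond) // pretend this is a LAN round trip
	return "policies-for-" + secretID, nil
}

func main() {
	var g singleflight.Group
	var wg sync.WaitGroup

	// Concurrent requests for the same token share one in-flight resolution.
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			v, _, shared := g.Do("token-abc", func() (interface{}, error) {
				return resolveTokenRemote("token-abc")
			})
			fmt.Println(v, "shared:", shared)
		}()
	}
	wg.Wait()
}
```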
Hans Hasselberg 7e11dd82aa
agent: enable reloading of tls config (#5419)
This PR introduces reloading of the TLS configuration. Consul can now reload its TLS configuration, which previously required a restart. It is not yet possible to turn TLS on or off with these changes; the configuration can only be reloaded when TLS is already turned on. Most importantly, the certificates and CAs can be reloaded.
2019-03-13 10:29:06 +01:00
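One way such a reload can work, sketched with the standard crypto/tls callbacks; this is an assumption-laden illustration, not the tlsutil code from the PR:

```
package main

import (
	"crypto/tls"
	"sync"
)

// reloadableTLS keeps the current *tls.Config behind a lock and hands it out
// per connection via GetConfigForClient, so certificates can be swapped
// without restarting listeners.
type reloadableTLS struct {
	mu  sync.RWMutex
	cfg *tls.Config
}

// Reload re-reads the certificate pair and atomically replaces the config.
func (r *reloadableTLS) Reload(certFile, keyFile string) error {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return err
	}
	r.mu.Lock()
	r.cfg = &tls.Config{Certificates: []tls.Certificate{cert}}
	r.mu.Unlock()
	return nil
}

// ServerConfig returns a config whose GetConfigForClient callback always
// resolves to the most recently loaded certificates.
func (r *reloadableTLS) ServerConfig() *tls.Config {
	return &tls.Config{
		GetConfigForClient: func(*tls.ClientHelloInfo) (*tls.Config, error) {
			r.mu.RLock()
			defer r.mu.RUnlock()
			return r.cfg, nil
		},
	}
}

func main() {
	r := &reloadableTLS{cfg: &tls.Config{}}
	_ = r.ServerConfig() // pass this to tls.NewListener or http.Server.TLSConfig
}
```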
R.B. Boyer 2e175be41b
acl: correctly extend the cache for acl identities during resolution (#5475) 2019-03-12 10:23:43 -05:00
Aestek 4bea29f15a [catalog] Update the node's services indexes on update (#5458)
Node updates were not updating the service indexes, which are used for
service-related queries. This caused the X-Consul-Index to stay the same
after a node update as seen from a service query even though the node
data is returned in health queries. If that happened in between queries
the client would miss this change.
We now update the indexes of the services on the node when it is
updated.

Fixes: #5450
2019-03-11 14:48:19 +00:00
Alvin Huang 8cb8108b1b fix typos 2019-03-06 14:47:33 -05:00
R.B. Boyer f4a3b9d518
fix typos reported by golangci-lint:misspell (#5434) 2019-03-06 11:13:28 -06:00
R.B. Boyer 2ffbea41c8 improve flaky LANReap tests by explicitly configuring the tombstone timeout
In TestServer_LANReap autopilot is running, so the alternate flow
through the serf reaping function is possible. In that situation the
ReconnectTimeout is not relevant so for parity also override the
TombstoneTimeout value as well.

For additional parity update the TestServer_WANReap and
TestClient_LANReap versions of this test in the same way even though
autopilot is irrelevant here.
2019-03-05 14:34:03 -06:00
R.B. Boyer 5bea49ecb0 tests: avoid leaking child processes from agent/proxyprocess package 2019-03-05 14:29:25 -06:00
Matt Keeler 567e41ff6b
Release v1.4.3 2019-03-04 19:21:20 +00:00
Matt Keeler 90040f8bff Fixes for CVE-2019-8336
Fix error in detecting raft replication errors.

Detect redacted token secrets and prevent attempting to insert.

Add a Redacted field to the TokenBatchRead and TokenRead RPC endpoints

This will indicate whether token secrets have been redacted.

Ensure any token with a redacted secret in secondary datacenters is removed.

Test that redacted tokens cannot be replicated.
2019-03-04 19:13:24 +00:00
Hans Hasselberg d35824b1fa default to tls 1.2 as promised. (#5340) 2019-03-04 09:42:04 -05:00
Aestek 2aac4d5168 Register and deregisters services and their checks atomically in the local state (#5012)
Prevent race between register and deregister requests by saving them
together in the local state on registration.
Also adds more cleaning in case of failure when registering services
/ checks.
2019-03-04 09:34:05 -05:00
Matt Keeler 6e6910ea11
Dont modify memdb owned token data for get/list requests of tokens (#5412)
Previously we were fixing up the token links directly on the *ACLToken returned by memdb. This violated the assumption that a snapshot is immutable and could potentially cause a crash.

The fix here is to give the policy-link-fixing function copy-on-write semantics. When no fixes are necessary we can return the memdb object directly; otherwise we copy it and create a new list of links.

Eventually we might find a better way to keep those policy links in sync but for now this fixes the issue.
2019-03-04 09:28:46 -05:00
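A generic sketch of the copy-on-write idea described above, with simplified stand-in types rather than the real ACLToken structures:

```
package main

import "fmt"

// Simplified stand-ins for the ACL token and its policy links.
type policyLink struct{ ID, Name string }
type token struct {
	ID       string
	Policies []policyLink
}

// fixupPolicyLinks has copy-on-write semantics: if every link already carries
// a name, the memdb-owned object is returned untouched; otherwise a copy is
// made and only the copy is modified.
func fixupPolicyLinks(t *token, nameByID map[string]string) *token {
	var fixed *token
	for i, link := range t.Policies {
		if link.Name != "" {
			continue
		}
		if fixed == nil {
			dup := *t
			dup.Policies = append([]policyLink(nil), t.Policies...)
			fixed = &dup
		}
		fixed.Policies[i].Name = nameByID[link.ID]
	}
	if fixed == nil {
		return t // no fixes needed: safe to hand back the shared snapshot object
	}
	return fixed
}

func main() {
	orig := &token{ID: "t1", Policies: []policyLink{{ID: "p1"}}}
	out := fixupPolicyLinks(orig, map[string]string{"p1": "global-management"})
	// The memdb-owned object is untouched; only the copy carries the fix.
	fmt.Println(orig.Policies[0].Name == "", out.Policies[0].Name)
}
```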
Aestek 02f991843f Fix race condition in DNS when using cache (#5398)
* Fix race condition in DNS when using cache

The healthy node filtering was modifying the result from the cache, which
caused a crash when multiple queries were made to the same service
simultaneously.
We now copy the node slice before filtering to ensure we do not modify
the data stored in the cache.

* Fix wording in dns cache config doc

s/dns_max_age/cache_max_age/
2019-03-04 09:22:01 -05:00
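A tiny sketch of the fix's core idea, copying the cached slice before filtering so the shared cache entry is never mutated (simplified types, not the actual DNS code):

```
package main

import "fmt"

type node struct {
	Name    string
	Healthy bool
}

// filterHealthy copies the cached slice before filtering so concurrent readers
// of the shared cache entry never observe it being mutated in place.
func filterHealthy(cached []node) []node {
	nodes := make([]node, len(cached))
	copy(nodes, cached)

	out := nodes[:0] // reuses the copy's backing array, never the cache's
	for _, n := range nodes {
		if n.Healthy {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	cache := []node{{"a", true}, {"b", false}, {"c", true}}
	// The cache slice is left exactly as it was.
	fmt.Println(filterHealthy(cache), cache)
}
```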
Matt Keeler 200c0fb3e9
Call RemoveServer for reap events (#5317)
This ensures that servers are removed from RPC routing when they are reaped.
2019-03-04 09:19:35 -05:00
R.B. Boyer 409c901f8e test: fix concurrent map access when setting up test vault 2019-03-01 14:30:19 -06:00
R.B. Boyer 6955186239 fix ignored errors in state store internals as reported by errcheck 2019-03-01 14:18:00 -06:00
R.B. Boyer c7067645dd fix a few leap-year related clock math inaccuracies and failing tests 2019-03-01 13:51:49 -06:00
Matt Keeler 118adbb123
ACL Token Persistence and Reloading (#5328)
This PR adds two features which will be useful for operators when ACLs are in use.

1. Tokens set in configuration files are now reloadable.
2. If `acl.enable_token_persistence` is set to `true` in the configuration, tokens set via the `v1/agent/token` endpoint are now persisted to disk and loaded when the agent starts (or during configuration reload)

Note that token persistence is opt-in so our users who do not want tokens on the local disk will see no change.

Some other secondary changes:

* Refactored a bunch of places where the replication token is retrieved from the token store. This token isn't just for replicating ACLs and now it is named accordingly.
* Allowed better paths in the `v1/agent/token/` API. Instead of paths like `v1/agent/token/acl_replication_token` the path can now be just `v1/agent/token/replication`. The old paths remain valid.
* Added a couple new API functions to set tokens via the new paths. Deprecated the old ones and pointed to the new names. The new names are also generally better and don't imply that what you are setting is for ACLs, but rather that you are setting ACL tokens. There is a minor semantic difference there, especially for the replication token as, again, it's no longer used only for ACL token/policy replication. The new functions will detect 404s and fall back to using the older token paths when talking to pre-1.4.3 agents.
* Docs updated to reflect the API additions and to show using the new endpoints.
* Updated the ACL CLI set-agent-tokens command to use the non-deprecated APIs.
2019-02-27 14:28:31 -05:00
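For illustration, setting the replication token through the new path with a plain HTTP PUT; the `Token` body field and the header token used here are assumptions about the request shape, and the new Go api package helpers mentioned above can be used instead:

```
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// New-style path: /v1/agent/token/replication instead of
	// /v1/agent/token/acl_replication_token (both remain valid per the PR).
	body := bytes.NewBufferString(`{"Token": "<replication-token-secret>"}`)
	req, err := http.NewRequest(http.MethodPut,
		"http://127.0.0.1:8500/v1/agent/token/replication", body)
	if err != nil {
		panic(err)
	}
	// A token with sufficient agent write privileges (assumption).
	req.Header.Set("X-Consul-Token", "<management-token>")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```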
Kyle Havlovitz f07e928afc
Merge pull request #5325 from hashicorp/consul-ca-panic
connect/ca: fix a potential panic in the Consul provider
2019-02-27 09:43:44 -08:00
Hans Hasselberg 80e7d63fc2
Centralise tls configuration part 2 (#5374)
This PR is based on #5366 and continues to centralise the tls configuration in order to be reloadable eventually!

This PR is another refactoring. No tests are changed, beyond calling other functions or cosmetic stuff. I added a bunch of tests, even though they might be redundant.
2019-02-27 10:14:59 +01:00
Hans Hasselberg 786b3b1095
Centralise tls configuration part 1 (#5366)
In order to be able to reload the TLS configuration, we need one way to generate the different configurations.

This PR introduces a `tlsutil.Configurator` which holds a `tlsutil.Config`. Afterwards it is responsible for rendering every `tls.Config`. In this particular PR I moved `IncomingHTTPSConfig`, `IncomingTLSConfig`, and `OutgoingTLSWrapper` into `tlsutil.Configurator`.

This PR is a pure refactoring - not a single feature added. And not a single test added. I only slightly modified existing tests as necessary.
2019-02-26 16:52:07 +01:00
Aestek f1cdfbe40e Allow DNS interface to use agent cache (#5300)
Adds two new configuration parameters "dns_config.use_cache" and
"dns_config.cache_max_age" controlling how DNS requests use the agent
cache when querying servers.
2019-02-25 14:06:01 -05:00
Alvin Huang 77eecf1046 add wait to TestClient_JoinLAN 2019-02-22 17:34:45 -05:00
Alvin Huang 136df63e2c add retry to TestResetSessionTimerLocked 2019-02-22 17:34:45 -05:00
Alvin Huang a7180f715a add serf check to testDNSServiceLookupResponseLimits, checkDNSService 2019-02-22 17:34:45 -05:00
Alvin Huang d10b5a396b add wait to TestOperator_AutopilotCASConfiguration 2019-02-22 17:34:45 -05:00
Alvin Huang dc200daf21 add wait to TestSnapshot 2019-02-22 17:34:45 -05:00
Alvin Huang c2a19e5090 add wait to TestAgent_RPCPing 2019-02-22 17:34:45 -05:00
Alvin Huang c23eb91262 fix TestAgent_CheckCriticalTime and better error output 2019-02-22 17:34:45 -05:00
Alvin Huang 6c9b516a29 skip TestCheckTCPPassing on CircleCI 2019-02-22 17:34:45 -05:00
R.B. Boyer c2a30c5fdd fix incorrect body of TestACLEndpoint_PolicyBatchRead
Lifted from PR #5307 as it was an unrelated drive-by fix on that PR anyway.

s/token/policy/
2019-02-22 09:32:51 -06:00
R.B. Boyer b569f222f9 update agent/agent_endpoint_test.go to use V2 tokens with attached policies 2019-02-20 11:11:47 -06:00
Nicholas Jackson 99fe9dabce Envoy config cluster (#5308)
* Start adding tests for cluster override

* Refactor tests for clusters

* Passing tests for custom upstream cluster override

* Added capability to customise local app cluster

* Rename config for local cluster override
2019-02-19 13:45:33 +00:00
Kainoa Seto b2af8862c7 Deferred updating response meta with consul headers (#5355) 2019-02-19 11:45:36 +00:00
R.B. Boyer ef8258cd4e test: switch test file from assert -> require for consistency
Also in acl_endpoint_test.go:

* convert logical blocks in some token tests to subtests
* remove use of require.New

This removes a lot of noise in a later PR.
2019-02-14 14:21:19 -06:00
Matt Keeler 766d771017
Pass a testing.T into NewTestAgent and TestAgent.Start (#5342)
This way we can avoid unnecessary panics which cause other tests not to run.

This doesn't remove all the possibilities for panics causing other tests not to run; it just fixes the TestAgent.
2019-02-14 10:59:14 -05:00
R.B. Boyer adbe8ed370 correct some typos 2019-02-13 13:02:12 -06:00
R.B. Boyer 88bb53d001 ensure that we plumb our configured logger into all parts of the raft library 2019-02-13 13:02:09 -06:00
R.B. Boyer 2c983902be reduce the local scope of variable 2019-02-13 11:54:28 -06:00
R.B. Boyer de0f585583
agent: only enable TLS on gRPC if the HTTPS API port is enabled (#5287)
Currently the gRPC server assumes that if you have configured TLS
certs on the agent (for RPC) that you want gRPC to be encrypted.
If gRPC is bound to localhost this can be overkill. For the API we
let the user choose to offer HTTP or HTTPS API endpoints
independently of the TLS cert configuration for a similar reason.

This setting will let someone encrypt RPC traffic with TLS but avoid
encrypting local gRPC traffic if that is what they want to do by only
enabling TLS on gRPC if the HTTPS API port is enabled.
2019-02-13 11:49:54 -06:00
R.B. Boyer f2ed3a3777
clarify the ACL.PolicyDelete endpoint (#5337)
There was an errant early-return in PolicyDelete() that bypassed the
rest of the function.  This was ok because the only caller of this
function ignores the results.

This removes the early-return making it structurally behave like
TokenDelete() and for both PolicyDelete and TokenDelete clarify the lone
callers to indicate that the return values are ignored.

We may wish to avoid the entire return value as well, but this patch
doesn't go that far.
2019-02-13 09:16:30 -06:00
R.B. Boyer 324ba5df17
update TestStateStore_ACLBootstrap to not rely upon request mutation (#5335) 2019-02-12 16:09:26 -06:00
Matt Keeler 7073ba4ed2
Move autopilot initialization to prevent race (#5322)
`establishLeadership` invoked during leadership monitoring may use autopilot to do promotions etc. There was a race with doing that and having autopilot initialized and this fixes it.
2019-02-11 11:12:24 -05:00
Kyle Havlovitz 29e4c17b07
connect/ca: fix a potential panic in the Consul provider 2019-02-07 10:43:54 -08:00
Matt Keeler acfd87c673
Improve Connect with Prepared Queries (#5291)
Given a query like:

```
{
   "Name": "tagged-connect-query",
   "Service": {
      "Service": "foo",
      "Tags": ["tag"],
      "Connect": true
   }
}
```

And a Consul configuration like:

```
{
   "services": [
      "name": "foo",
      "port": 8080,
      "connect": { "sidecar_service": {} },
      "tags": ["tag"]
   ]
}
```

If you executed the query it would always turn up with 0 results. This was because the sidecar service was being created without any tags. You could instead make your config look like:

```
{
   "services": [
      "name": "foo",
      "port": 8080,
      "connect": { "sidecar_service": {
         "tags": ["tag"]
      } },
      "tags": ["tag"]
   ]
}
```

However that is a bit redundant for most cases. This PR ensures that the tags and service meta of the parent service get copied to the sidecar service. If there are any tags or service meta set in the sidecar service definition then this copying does not take place. After the changes, the query will now return the expected results.

A second change was made to prepared queries in this PR which is to allow filtering on ServiceMeta just like we allow for filtering on NodeMeta.
2019-02-04 09:36:51 -05:00
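A simplified sketch of the copying behaviour described above; `inheritSidecarDefaults` and `serviceDef` are illustrative, not the actual Consul structures:

```
package main

import "fmt"

type serviceDef struct {
	Name string
	Tags []string
	Meta map[string]string
}

// inheritSidecarDefaults copies the parent service's Tags and Meta to the
// sidecar only when the sidecar definition leaves them unset, matching the
// behaviour described in the commit above.
func inheritSidecarDefaults(parent, sidecar *serviceDef) {
	if len(sidecar.Tags) == 0 {
		sidecar.Tags = append([]string(nil), parent.Tags...)
	}
	if len(sidecar.Meta) == 0 && parent.Meta != nil {
		sidecar.Meta = make(map[string]string, len(parent.Meta))
		for k, v := range parent.Meta {
			sidecar.Meta[k] = v
		}
	}
}

func main() {
	parent := &serviceDef{Name: "foo", Tags: []string{"tag"}, Meta: map[string]string{"team": "a"}}
	sidecar := &serviceDef{Name: "foo-sidecar-proxy"}
	inheritSidecarDefaults(parent, sidecar)
	fmt.Println(sidecar.Tags, sidecar.Meta)
}
```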
R.B. Boyer e1e4249e90
testutil: redirect some test agent logs to testing.T.Logf (#5304)
When tests fail, only the logs for the failing run are dumped to the
console which helps in diagnosis. This is easily added to other test
scenarios as they come up.
2019-02-01 09:21:54 -06:00
R.B. Boyer db8a871309
Merge pull request #5237 from hashicorp/term-grpc-stream-on-token-failure
Check ACLs more often for xDS endpoints.
2019-01-29 14:52:26 -06:00
mkeeler c97c712e96
Release v1.4.2 2019-01-28 21:46:00 +00:00
Kyle Havlovitz 7118f42950
Fix failing TestAgent_PurgeCheckOnDuplicate after merge 2019-01-28 13:19:38 -08:00
Matt Keeler 1736e24fb3
Don't generate TXT records just to discard them (#5272)
* Don't generate TXT records just to discard them

* Add unit test for formatNodeRecord to ensure it prevents returning TXT records
2019-01-28 14:59:58 -05:00
Kyle Havlovitz 928b7ec60d
Merge branch 'healthcheck-duration-fix' 2019-01-28 10:34:34 -08:00
Kyle Havlovitz 1a4978fb94
Re-add ReadableDuration types to health check definition
This is to fix the backwards-incompatible change made in 1.4.1 by
changing these fields to time.Duration.
2019-01-25 14:47:35 -08:00
R.B. Boyer e9a2eab316
speed up TestHTTPAPI_MethodNotAllowed_OSS from 11s -> 0.5s (#5268) 2019-01-25 10:01:21 -06:00
Hans Hasselberg 552e150536 correct name 2019-01-25 11:00:56 +01:00
Hans Hasselberg aebb50d47d simpler fix 2019-01-24 17:12:08 +01:00
Hans Hasselberg 5db185a7e4 do not export that type 2019-01-24 17:05:57 +01:00
Hans Hasselberg 7f44100101 fix marshalling 2019-01-24 17:03:26 +01:00
Hans Hasselberg d4790b2827 demo nomad problem 2019-01-24 16:45:54 +01:00
banks 65d2c9b51d
Release v1.4.1 2019-01-23 20:53:20 +00:00
Matt Keeler d5a3ba6cda
Disregard rules when set on a management token (#5261)
* Disregard rules when set on a management token

* Add unit test for legacy mgmt token with rules
2019-01-23 15:48:38 -05:00
Kyle Havlovitz 88c044759f
connect: Forward intention RPCs if this isn't the primary 2019-01-22 11:29:21 -08:00
Kyle Havlovitz 6b28434f8a
Merge pull request #5249 from hashicorp/ca-fixes-oss
Minor CA fixes
2019-01-22 11:25:09 -08:00
Kyle Havlovitz 5bdf130767
Merge pull request #4869 from hashicorp/txn-checks
Add node/service/check operations to transaction api
2019-01-22 11:16:09 -08:00
Kyle Havlovitz a28ba4687d
connect/ca: return a better error message if the CA isn't fully initialized when signing 2019-01-22 11:15:09 -08:00
Matt Keeler 579a8b32ed
Fix several ACL token/policy resolution issues. (#5246)
* Fix 2 remote ACL policy resolution issues

1. Use the right method to fire async not-found errors when the ACL.PolicyResolve RPC returns that error. This was previously accidentally firing a token result instead of a policy result, which would have effectively done nothing (unless there happened to be a token with a secret ID == the policy ID being resolved).

2. When concurrent policy resolution is being done we singleflight the requests. The bug before was that a policy resolution that was going to piggyback on another's RPC results wasn't waiting long enough for the results to come back, due to looping with the wrong variable.

* Fix a handful of other edge case ACL scenarios

The main issue was that token specific issues (not able to access a particular policy or the token being deleted after initial fetching) were poisoning the policy cache.

A second issue was that for concurrent token resolutions, the first resolution to get started would go fetch all the policies. If before the policies were retrieved a second resolution request came in, the new request would register watchers for those policies but then never block waiting for them to complete. This resulted in using the default policy when it shouldn't have.
2019-01-22 13:14:43 -05:00
Paul Banks ef9f27cbc8
connect: tame thundering herd of CSRs on CA rotation (#5228)
* Support rate limiting and concurrency limiting CSR requests on servers; handle CA rotations gracefully with jitter and backoff-on-rate-limit in client

* Add CSR rate limiting docs

* Fix config naming and add tests for new CA configs
2019-01-22 17:19:36 +00:00
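A rough sketch of server-side CSR rate limiting using golang.org/x/time/rate; the limiter values and ErrRateLimited are illustrative assumptions, not the PR's configuration or error types:

```
package main

import (
	"errors"
	"fmt"

	"golang.org/x/time/rate"
)

// ErrRateLimited mirrors the idea of telling clients to back off and retry
// with jitter rather than queueing unbounded CSR work during a CA rotation.
var ErrRateLimited = errors.New("connect CSR rate limited, retry later")

type csrSigner struct {
	limiter *rate.Limiter
}

// Sign rejects requests over the configured rate instead of signing them.
func (s *csrSigner) Sign(csr string) (string, error) {
	if !s.limiter.Allow() {
		return "", ErrRateLimited
	}
	return "signed:" + csr, nil // stand-in for the real signing work
}

func main() {
	s := &csrSigner{limiter: rate.NewLimiter(rate.Limit(2), 2)} // ~2 CSRs/sec, burst 2
	for i := 0; i < 5; i++ {
		_, err := s.Sign(fmt.Sprintf("csr-%d", i))
		fmt.Println(i, err)
	}
}
```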
R.B. Boyer d3eb781384 Check ACLs more often for xDS endpoints.
For established xDS gRPC streams recheck ACLs for each DiscoveryRequest
or DiscoveryResponse. If more than 5 minutes has elapsed since the last
ACL check, recheck even without an incoming DiscoveryRequest or
DiscoveryResponse. ACL failures will terminate the stream.
2019-01-22 11:12:40 -06:00
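A schematic of the recheck loop described above, with `checkACL` as a hypothetical stand-in for real token resolution:

```
package main

import (
	"errors"
	"fmt"
	"time"
)

const aclRecheckInterval = 5 * time.Minute

// checkACL is a hypothetical stand-in for resolving the stream's token and
// verifying it still grants access to the watched resources.
func checkACL(token string) error {
	if token == "" {
		return errors.New("permission denied")
	}
	return nil
}

// streamLoop rechecks ACLs on every DiscoveryRequest/DiscoveryResponse event
// and, via the timer, even when the stream is idle; an ACL failure terminates
// the loop (i.e. the gRPC stream).
func streamLoop(events <-chan string, token string) error {
	timer := time.NewTimer(aclRecheckInterval)
	defer timer.Stop()
	for {
		select {
		case _, ok := <-events:
			if !ok {
				return nil // stream closed normally
			}
		case <-timer.C:
		}
		if err := checkACL(token); err != nil {
			return err
		}
		timer.Reset(aclRecheckInterval)
	}
}

func main() {
	events := make(chan string, 2)
	events <- "DiscoveryRequest"
	events <- "DiscoveryResponse"
	close(events)
	fmt.Println(streamLoop(events, "my-token"))
}
```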
Kyle Havlovitz ddc4a8d848
oss: add the enterprise server stub for intention replication check 2019-01-18 17:32:10 -08:00
R.B. Boyer 2dea3e2bd7 Fix some test typos. 2019-01-18 16:12:43 -06:00
Matt Keeler 7e6b3e6a0c
Implement prepared query upstreams watching for envoy (#5224)
Fixes #4969 

This implements non-blocking request polling at the cache layer which is currently only used for prepared queries. Additionally this enables the proxycfg manager to poll prepared queries for use in envoy proxy upstreams.
2019-01-18 12:44:04 -05:00
Kyle Havlovitz 21380021af txn: update existing txn api docs with new operations 2019-01-15 16:54:07 -08:00
Matt Keeler 1ec5f2a27f
Store leaf cert indexes in raft and use for the ModifyIndex on the returned certs (#5211)
* Store leaf cert indexes in raft and use for the ModifyIndex on the returned certs

This ensures that future certificate signings will have a strictly greater ModifyIndex than any previous certs signed.
2019-01-11 16:04:57 -05:00
Aestek 4afbe792df Improve blocking queries on services that do not exist (#4810)
## Background

When making a blocking query on a missing service (was never registered, or is not registered anymore) the query returns as soon as any service is updated.
On clusters with frequent updates (5~10 updates/s in our DCs) these queries virtually do not block, and clients with no protections against this waste resources on the agent and server side. Clients that do protect against this get updates later than they should because of the backoff time they implement between requests.

## Implementation

While reducing the number of unnecessary updates we still want:
* Clients to be notified as soon as the last instance of a service disappears.
* Clients to be notified whenever there is an update for the service.
* Clients to be notified as soon as the first instance of the requested service is added.

To reduce the number of unnecessary updates we need to block when a request to a missing service is made. However, in the following case:

1. Client `client1` makes a query for service `foo`, gets back a node and X-Consul-Index 42
2. `foo` is unregistered 
3. `client1`  makes a query for `foo` with `index=42` -> `foo` does not exist, the query blocks and `client1` is not notified of the change on `foo` 

We could store the last raft index when each service was last alive to know whether we should block on the incoming query or not, but that list could grow indefinitely.
We instead store the last raft index when a service was unregistered and use it when a query targets a service that does not exist.
When a service `srv` is unregistered this "missing service index" is always greater than any X-Consul-Index held by the clients while `srv` was up, allowing us to immediately notify them.

1. Client `client1` makes a query for service `foo`, gets back a node and `X-Consul-Index: 42`
2. `foo` is unregistered, we set the "missing service index" to 43 
3. `client1` makes a blocking query for `foo` with `index=42` -> `foo` does not exist, we check against the "missing service index" and return immediately with `X-Consul-Index: 43`
4. `client1` makes a blocking query for `foo` with `index=43` -> we block
5. Other changes happen in the cluster, but foo still doesn't exist and "missing service index" hasn't changed, the query is still blocked
6. `foo` is registered again on index 62 -> `foo` exists and its index is greater than 43, we unblock the query
2019-01-11 09:26:14 -05:00
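A compact sketch of the "missing service index" decision described above, using simplified bookkeeping rather than the real state store:

```
package main

import "fmt"

// indexes is a simplified stand-in for the state store's tracked raft indexes.
type indexes struct {
	serviceIndex   map[string]uint64 // last modify index per existing service
	missingService uint64            // index recorded when any service was deregistered
}

// queryIndex returns the X-Consul-Index to report for a service query. When
// the service does not exist, the "missing service index" is used, which is
// guaranteed to be greater than any index a client saw while the service was
// still registered, so stale blocking queries return immediately.
func (ix *indexes) queryIndex(service string) uint64 {
	if idx, ok := ix.serviceIndex[service]; ok {
		return idx
	}
	return ix.missingService
}

func main() {
	ix := &indexes{serviceIndex: map[string]uint64{"bar": 40}, missingService: 43}
	fmt.Println(ix.queryIndex("foo")) // 43: unblocks clients still holding index 42
	fmt.Println(ix.queryIndex("bar")) // 40
}
```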
Matt Keeler baa8946ea6
cache: Pass through wait query param to the cache.Get (#5203)
This adds a MaxQueryTime field to the connect ca leaf cache request type and populates it via the wait query param. The cache will then do the right thing and timeout the operation as expected if no new leaf cert is available within that time.

Fixes #4462 

The reproduction scenario in the original issue now times out appropriately.
2019-01-10 11:23:37 -05:00
Aestek c043de5381 [Security] Allow blocking Write endpoints on Agent using Network Addresses (#4719)
* Add -write-allowed-nets option

* Add documentation for the new write_allowed_nets option
2019-01-10 09:27:26 -05:00
Matt Keeler 1048f3d5e7
acl: Prevent tokens from deleting themselves (#5210)
Fixes #4897 

Also apparently token deletion could segfault in secondary DCs when attempting to delete non-existent tokens. For that reason both checks are wrapped within the non-nil check.
2019-01-10 09:22:51 -05:00
Paul Banks 0638e09b6e
connect: agent leaf cert caching improvements (#5091)
* Add State storage and LastResult argument into Cache so that cache.Types can safely store additional data that is eventually expired.

* New Leaf cache type working and basic tests passing. TODO: more extensive testing for the Root change jitter across blocking requests, test concurrent fetches for different leaves interact nicely with rootsWatcher.

* Add multi-client and delayed rotation tests.

* Typos and cleanup error handling in roots watch

* Add comment about how the FetchResult can be used and change ca leaf state to use a non-pointer state.

* Plumb test override of root CA jitter through TestAgent so that tests are deterministic again!

* Fix failing config test
2019-01-10 12:46:11 +00:00
Kyle Havlovitz c07c5446a8 txn: clean up some state store/acl code 2019-01-09 11:59:23 -08:00
Hans Hasselberg 067027230b
connect: add tls config for vault connect ca provider (#5125)
* add tlsconfig for vault connect ca provider.
* add options to the docs
* add tests for new configuration
2019-01-08 17:09:22 +01:00
Alejandro Guirao Rodríguez 9f33353c14 agent/config: Fix typo in comment (#5202) 2019-01-08 16:27:22 +01:00
Paul Banks bb7145f27d
agent: add default weights to service in local state to prevent AE churn (#5126)
* Add default weights when adding a service with no weights to local state to prevent constant AE re-sync.

This fix was contributed by @42wim in https://github.com/hashicorp/consul/pull/5096 but was merged against the wrong base. This adds it to master and adds a test to cover the behaviour.

* Fix tests that broke due to comparing internal state which now has default weights
2019-01-08 10:13:49 +00:00
Paul Banks 0589525ae9
agent: Don't leave old errors around in cache (#5094)
* Fixes #4480. Don't leave old errors around in cache that can be hit in specific circumstances.

* Move error reset to cover extreme edge case of nil Value, nil err Fetch
2019-01-08 10:06:38 +00:00
Pierre Souchay ae7f88f995 Avoid to have infinite recursion in DNS lookups when resolving CNAMEs (#4918)
* Avoid to have infinite recursion in DNS lookups when resolving CNAMEs

This avoids killing Consul when a Service.Address uses a CNAME
pointing to a Consul CNAME that creates an infinite recursion.

This will fix https://github.com/hashicorp/consul/issues/4907

* Use maxRecursionLevel = 3 to allow several recursions
2019-01-07 16:53:54 -05:00
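An illustrative depth-limited resolver showing why a recursion cap breaks CNAME cycles; the map-based lookup is a toy stand-in for real DNS resolution:

```
package main

import "fmt"

const maxRecursionLevel = 3 // mirrors the limit mentioned above

// resolve follows CNAMEs through an in-memory map; the depth guard is what
// prevents a cycle (a -> b -> a) from recursing forever.
func resolve(name string, cnames map[string]string, depth int) (string, bool) {
	if depth > maxRecursionLevel {
		return "", false // give up instead of recursing indefinitely
	}
	target, ok := cnames[name]
	if !ok {
		return name, true // terminal name; pretend it resolves to an address
	}
	return resolve(target, cnames, depth+1)
}

func main() {
	cycle := map[string]string{
		"a.service.consul": "b.example.com",
		"b.example.com":    "a.service.consul",
	}
	fmt.Println(resolve("a.service.consul", cycle, 0))
}
```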
Paul Banks b29bc906ee
bugfix: use ServiceTags to generate cache key hash (#4987)
* bugfix: use ServiceTags to generate cache key hash

* update unit test

* update

* remove print log

* Update .gitignore

* Completely deprecate ServiceTag field internally for clarity

* Add explicit test for CacheInfo cases
2019-01-07 21:30:47 +00:00
Aestek 8709213d6e Prevent status flap when re-registering a check (#4904)
Fixes point `#2` of: https://github.com/hashicorp/consul/issues/4903

When registering a service each healthcheck status is saved and restored (https://github.com/hashicorp/consul/blob/master/agent/agent.go#L1914) to avoid unnecessary flaps in health state.
This change extends this feature to single check registration by moving this protection into `AddCheck()` so that both `PUT /v1/agent/service/register` and `PUT /v1/agent/check/register` behave in the same idempotent way.

#### Steps to reproduce
1. Register a check :
```
curl -X PUT \
  http://127.0.0.1:8500/v1/agent/check/register \
  -H 'Content-Type: application/json' \
  -d '{
  "Name": "my_check",
  "ServiceID": "srv",
  "Interval": "10s",
  "Args": ["true"]
}'
```
2. The check will initialize and change to `passing`
3. Run the same request again
4. The check status will quickly go from `critical` to `passing` (the delay for this transition is determined by https://github.com/hashicorp/consul/blob/master/agent/checks/check.go#L95)
2019-01-07 13:53:03 -05:00
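A minimal sketch of the idempotent re-registration behaviour (simplified types; the real AddCheck does considerably more):

```
package main

import "fmt"

type check struct {
	ID     string
	Status string
}

// addCheck sketches the protection described above: if a check with the same
// ID already exists, its current status is carried over instead of resetting
// to critical, so re-running the same register request does not flap.
func addCheck(existing map[string]*check, incoming *check) {
	if prev, ok := existing[incoming.ID]; ok && incoming.Status == "" {
		incoming.Status = prev.Status // keep the previously observed status
	}
	if incoming.Status == "" {
		incoming.Status = "critical" // new checks start critical until they run
	}
	existing[incoming.ID] = incoming
}

func main() {
	state := map[string]*check{"my_check": {ID: "my_check", Status: "passing"}}
	addCheck(state, &check{ID: "my_check"})
	fmt.Println(state["my_check"].Status) // passing: no critical -> passing flap
}
```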
Mitchell Hashimoto f76022fa63 CA Provider Plugins (#4751)
This adds the `agent/connect/ca/plugin` library for consuming/serving Connect CA providers as [go-plugin](https://github.com/hashicorp/go-plugin) plugins. This **does not** wire this up in any way to Consul itself, so this will not enable using these plugins yet. 

## Why?

We want to enable CA providers to be pluggable without modifying Consul so that any CA or PKI system can potentially back the Connect certificates. This CA system may also be used in the future for easier bootstrapping and internal cluster security.

### go-plugin

The benefit of `go-plugin` is that for the plugin consumer, the fact that the interface implementation is communicating over multi-process RPC is invisible. Internals of Consul will continue to just use `ca.Provider` interface implementations as if they're local. For plugin _authors_, they simply have to implement the interface. The network/transport/process management issues are handled by go-plugin itself.

The CA provider plugins support both `net/rpc` and gRPC transports. This enables easy authoring in any language. go-plugin handles the actual protocol handshake and connection. This is just a feature of go-plugin. 

`go-plugin` is already in production use for years by Packer, Terraform, Nomad, Vault, and Sentinel. We've shown stability for both desktop and server-side software. It is very mature.

## Implementation Details

### `map[string]interface{}`

The `Configure` method passes a `map[string]interface{}`. This map contains only Go primitives and containers of primitives (no funcs, chans, etc.). For `net/rpc` we encode as-is using Gob. For gRPC we marshal to JSON and transmit as a `bytes` type. This is the same approach we take with Vault and other software.

Note that this is just the transport protocol, the end software views it fully decoded.

### `x509.Certificate` and `CertificateRequest`

We transmit the raw ASN.1  bytes and decode on the other side. Unit tests are verifying we get the same cert/csrs across the wire.

### Testing

`go-plugin` exposes test helpers that enable testing the full plugin RPC over real loopback network connections. We test all endpoints for success and error for both `net/rpc` and gRPC.

### Vendoring

This PR doesn't introduce vendoring for two reasons:

  1. @banks's `f-envoy` branch introduces a lot of these and I didn't want conflict.
  2. The library isn't actually used yet so it doesn't introduce compile-time errors (it does introduce test errors).

## Next Steps

With this in place, we need to figure out the proper way to actually hook these up to Consul, load them, etc. This discussion can happen elsewhere, since regardless of approach this plugin library implementation is the exact same.
2019-01-07 12:48:44 -05:00
Grégoire Seux 4f62a3b528 Implement /v1/agent/health/service/<service name> endpoint (#3551)
This endpoint aggregates all checks related to <service id> on the agent
and returns an appropriate HTTP code plus a string describing the worst
check.

This allows cleanly exposing the service status to other components,
hiding the complexity of multiple checks.
This is especially useful when using Consul to feed a load balancer that
delegates health checking to the Consul agent.

Exposing this endpoint on the agent is necessary to avoid hitting the
Consul servers and to avoid decreasing resiliency (this endpoint will work
even if there is no Consul leader in the cluster).
2019-01-07 09:39:23 -05:00
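The aggregation the endpoint performs, sketched over plain status strings; real checks have additional states (e.g. maintenance), so this is only illustrative:

```
package main

import "fmt"

// worstStatus collapses all check statuses for a service into the single
// worst one, the same idea the endpoint above exposes as an HTTP status code.
func worstStatus(statuses []string) string {
	rank := map[string]int{"passing": 0, "warning": 1, "critical": 2}
	worst := "passing"
	for _, s := range statuses {
		if rank[s] > rank[worst] {
			worst = s
		}
	}
	return worst
}

func main() {
	fmt.Println(worstStatus([]string{"passing", "warning", "passing"})) // warning
}
```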
Aestek 5960974db1 [Fix] Services sometimes not being synced with acl_enforce_version_8 = false (#4771)
Fixes: https://github.com/hashicorp/consul/issues/3676

This fixes a bug where registering an agent with a non-existent ACL token can prevent other 
services registered with a good token from being synced to the server when using 
`acl_enforce_version_8 = false`.

## Background

When `acl_enforce_version_8` is off the agent does not check the ACL token validity before 
storing the service in its state.
When syncing a service registered with a missing ACL token we fall into the default error 
handling case (https://github.com/hashicorp/consul/blob/master/agent/local/state.go#L1255)
and stop the sync (https://github.com/hashicorp/consul/blob/master/agent/local/state.go#L1082)
without setting its Synced property to true like in the permission denied case.
This means that the sync will always stop at the faulty service(s).
The order in which the services are synced is random since we iterate on a map. So eventually
all services with good ACL tokens will be synced; this can however take some time and is influenced
by the cluster size: the bigger the cluster, the slower the sync, because retries are less frequent.
Having a service in this state also prevents all further syncing of checks, as they are done after
the services.

## Changes 

This change modifies the sync process to continue even if there is an error.
This fixes the issue described above as well as making the sync more error tolerant: the server may repeatedly refuse
a service (the ACL token could have been deleted by the time the service is synced, the servers
may have been upgraded to a newer version that has stricter checks on the service definition...).
Then all services and checks that can be synced will be, and those that can't will be marked as errors in
the logs instead of blocking the whole process.
2019-01-04 10:01:50 -05:00
Hans Hasselberg 0b4a879203
ui: serve /robots.txt when UI is enabled. (#5089)
* serve /robots.txt
* robots.txt: disallow everything
2018-12-17 19:35:03 +01:00
Kyle Havlovitz 995e728ea0 txn: fix an issue with querying nodes by name instead of ID 2018-12-12 12:46:33 -08:00
Pierre Souchay f4dc8b42e0 [Travis][UnstableTests] Fixed unstable tests in travis (#5013)
* [Travis][UnstableTests] Fixed unstable tests in travis as seen in https://travis-ci.org/hashicorp/consul/jobs/460824602

* Fixed unstable tests in https://travis-ci.org/hashicorp/consul/jobs/460857687
2018-12-12 12:09:42 -08:00
Kyle Havlovitz 67bac7a815 api: add support for new txn operations 2018-12-12 10:54:09 -08:00
Kyle Havlovitz de4dbf583e txn: add tests for RPC endpoint 2018-12-12 10:04:10 -08:00
Kyle Havlovitz 6a512e5c0f txn: add ACL enforcement/validation to new txn ops 2018-12-12 10:04:10 -08:00
Kyle Havlovitz 9467067432 state: add tests for new txn ops 2018-12-12 10:04:10 -08:00
Kyle Havlovitz 7759e9ea8b txn: add service operations 2018-12-12 10:04:10 -08:00