This fixes #7020.
There are two problems this PR solves:
* if the node info changes, service and check registrations are highly likely to get permission errors unless those service tokens also have node:write, which tokens used to register services ideally should not have.
* the timer for a full sync gets reset on every partial sync, which means that many partial syncs can prevent a full sync from ever happening.
Instead of syncing node info last, after services and checks (which could possibly save one RPC because it is included in every service sync), I am now syncing node info first. It is only ever going to be a single RPC that we only do when the node info has changed. This way we are guaranteed to sync node info even when something goes wrong with services or checks, which is more likely simply because there are many more syncs happening for them.
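A minimal sketch of the revised ordering (all names here are hypothetical, not the actual agent code):

```go
package sketch

// syncState is a stand-in for the agent's local state; every name here is
// hypothetical and only illustrates the ordering described above.
type syncState struct {
	nodeInfoChanged bool
	changedServices []string
	changedChecks   []string

	syncNodeInfo func() error
	syncService  func(id string) error
	syncCheck    func(id string) error
}

// syncChanges pushes node info first (a single RPC, only when it changed),
// so failures in the far more numerous service and check syncs can no
// longer starve it.
func (s *syncState) syncChanges() error {
	if s.nodeInfoChanged {
		if err := s.syncNodeInfo(); err != nil {
			return err
		}
	}
	for _, id := range s.changedServices {
		if err := s.syncService(id); err != nil {
			return err
		}
	}
	for _, id := range s.changedChecks {
		if err := s.syncCheck(id); err != nil {
			return err
		}
	}
	return nil
}
```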
Previously this only validated the chains in the default namespace. Now it will validate all chains in all namespaces when the global proxy-defaults entry is changed.
* Cleanup the discovery chain compilation route handling
Nothing should be functionally different here. The real difference is that when creating new targets or handling route destinations we use the router config entry's name and namespace instead of those of the top-level request. Today they SHOULD always be the same, but that may not always be the case. This hopefully also makes it easier to understand how the router entries are handled.
* Refactor a small bit of the service manager tests in oss
We used to use the stringHash function to compute part of the filename where things get persisted. The core code has been changed to call the StringHash method on the ServiceID type instead. It just so happens that the new method outputs the same value for anything in the default namespace (by design, actually). However, logically the filename computation in this test should do the same thing as the core code itself, so I updated it here.
Also of note is that newer enterprise-only tests for the service manager cannot use the old stringHash function at all because it will produce incorrect results for non-default namespaces.
The previous value was too conservative and users with many instances
were having problems because of it. This change increases the limit to
8192, which reportedly fixed most of those issues.
Related: #4984, #4986, #5050.
This adds ACL enforcement to the two endpoints that were missing it.
Note that in the case of getting a service's health by its ID, we still
must first look up the service, so we still "leak" the fact that a
service with that ID exists. There isn't really a way around it, though,
as ACLs are meant to check service names.
* Updates to the Txn API for namespaces
* Update agent/consul/txn_endpoint.go
Co-Authored-By: R.B. Boyer <rb@hashicorp.com>
Co-authored-by: R.B. Boyer <public@richardboyer.net>
The backing RPC already existed, but the endpoint will be useful for other service-syncing processes such as consul-k8s, as it can return all services registered with a node regardless of namespace.
* Fix segfault when removing both a service and associated check
updateSyncState creates entries in the services and checks maps for
remote services/checks that are not found locally, so that we can then
make sure to delete them in our reconciliation process. However, the
values added to the map are missing key fields that the rest of the code
expects to not be nil.
* Add comment stating Check field can be nil
Something similar already happens inside of the server
(agent/consul/server.go) but by doing it in the general config parsing
for the agent we can have agent-level code rely on the PrimaryDatacenter
field, too.
* Increase raft notify buffer.
Fixes https://github.com/hashicorp/consul/issues/6852.
Increasing the buffer helps recover from leader flapping. It lowers
the chances of the flapping leader getting into a deadlock situation like
the one described in #6852.
- Explicitly wait to start the test until the initial AE sync of the node.
- Run the blocking query in the main goroutine to cut down on possible
poor goroutine scheduling issues being to blame for delays.
- If the blocking query is woken up with no index change, rerun the
query. This may happen if the CI server is loaded and time dilation is
happening.
Currently when using the built-in CA provider for Connect, root certificates are valid for 10 years, however secondary DCs get intermediates that are valid for only 1 year. There is no mechanism currently short of rotating the root in the primary that will cause the secondary DCs to renew their intermediates.
This PR adds a check that renews the cert if it is halfway through its validity period.
In order to be able to test these changes, a new configuration option was added: IntermediateCertTTL, which is set extremely low in the tests.
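A minimal sketch of the renewal rule using the standard library x509 types; this is illustrative, not the actual leader code:

```go
package sketch

import (
	"crypto/x509"
	"time"
)

// shouldRenewIntermediate reports whether the intermediate cert is more than
// halfway through its validity period and should therefore be renewed.
func shouldRenewIntermediate(cert *x509.Certificate, now time.Time) bool {
	lifetime := cert.NotAfter.Sub(cert.NotBefore)
	return now.After(cert.NotBefore.Add(lifetime / 2))
}
```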
* Add CreateCSRWithSAN
* Use CreateCSRWithSAN in auto_encrypt and cache
* Copy DNSNames and IPAddresses to cert
* Verify auto_encrypt.sign returns cert with SAN
* provide configuration options for auto_encrypt dnssan and ipsan
* rename CreateCSRWithSAN to CreateCSR
* Use consts for well known tagged address keys
* Add ipv4 and ipv6 tagged addresses for node lan and wan
* Add ipv4 and ipv6 tagged addresses for service lan and wan
* Use IPv4 and IPv6 address in DNS
Deregistering a service from the catalog automatically deregisters its
checks, however the agent still performs a deregister call for each of the
service's checks even after the service has been deregistered.
With ACLs enabled this results in logs like:
"message:consul: "Catalog.Deregister" RPC failed to server
server_ip:8300: rpc error making call: rpc error making call: Unknown
check 'check_id'"
This change removes associated checks from the agent state when
deregistering a service, which results in less calls to the servers and
suppresses the error logs.
* Renamed structs.IntentionWildcard to structs.WildcardSpecifier
* Refactor ACL Config
Get rid of remnants of enterprise only renaming.
Add a WildcardName field for specifying what string should be used to indicate a wildcard.
* Add wildcard support in the ACL package
For read operations they can call anyAllowed to determine if any read access to the given resource would be granted.
For write operations they can call allAllowed to ensure that write access is granted to everything.
* Make v1/agent/connect/authorize namespace aware
* Update intention ACL enforcement
This also changes how intention:read is granted. Before, the Intention.List RPC would allow viewing an intention if the token had intention:read on the destination. However, Intention.Match allowed viewing if access was allowed for either the source or dest side. Now Intention.List and Intention.Get fall in line with Intention.Match's previous behavior.
Due to this being done a few different places ACL enforcement for a singular intention is now done with the CanRead and CanWrite methods on the intention itself.
* Refactor Intention.Apply to make things easier to follow.
Sometimes we have lots of errors in cross calls between DCs (several hundred per second).
Enrich the log in order to help diagnose the root cause of the issue.
Before, we were issuing one watch for every service in the services listing, which would have caused the agent to process many more identical events simultaneously.
Restore a few more service-kind index updates so blocking in ServiceDump works in more cases
Namely one omission was that check updates for dumped services were not
unblocking.
Also adds a ServiceDump state store test and fixes a watch bug with the
normal dump.
Follow-on from #6916
• Renamed EnterpriseACLConfig to just Config
• Removed chained_authorizer_oss.go as it was empty
• Renamed acl.go to errors.go to more closely describe its contents
* Add updated github.com/miekg/dns to go modules
* Add updated github.com/miekg/dns to vendor
* Fix github.com/miekg/dns api breakage
* Decrease size when trimming UDP packets
Need more room for the header(?); if we don't decrease the size we get an
"overflow unpacking uint32" error from the dns library
* Fix dns truncate tests with api changes
* Make the Windows build work again. Upgrade x/sys and x/crypto and vendor
This upgrade is needed because of API breakage in x/sys introduced
by the minimal x/sys dependency of miekg/dns.
This API breakage has been fixed in commit
855e68c859
* relax requirements for auto_encrypt on server
* better error message when auto_encrypt and verify_incoming on
* docs: explain verify_incoming on Consul clients.
* Increase number to test ignore. Consul Enterprise has more flags and since we are trying to reduce the differences between both code bases, we are increasing the number in oss. The semantics don't change, it is just a cosmetic thing.
* Introduce agent.initEnterprise for enterprise related hooks.
* Sync test with ent version.
* Fix import order.
* revert error wording.
Ensure we close the Sentinel Evaluator so as not to leak goroutines
Fix a bunch of test logging so that various warnings when starting a test agent go to the test logger and not straight to stdout.
Various canned ent meta types always return a valid pointer (no more nils). This allows us to blindly deref + assign in various places.
Update ACL index tracking to ensure oss -> ent upgrades will work as expected.
Update ent meta parsing to include function to disallow wildcarding.
Also update the Docs and fixup the HTTP API to return proper errors when someone attempts to use Namespaces with an OSS agent.
Add Namespace HTTP API docs
Make all API endpoints disallow unknown fields
* Implement endpoint to query whether the given token is authorized for a set of operations
* Updates to allow for remote ACL authorization via RPC
This is only used when making an authorization request to a different datacenter.
* Adds 'limits' field to the upstream configuration of a connect proxy
This allows a user to configure the envoy connect proxy with
'max_connections', 'max_queued_requests', and 'max_concurrent_requests'. These
values are defined in the local proxy on a per-service-instance basis
and should thus NOT be thought of as global-level or even service-level values.
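A sketch of where such values might sit, using the three keys named above; the struct shape is illustrative only, not the exact Consul configuration types:

```go
package sketch

// UpstreamLimits groups the three settings named above. Everything beyond
// those three key names is illustrative.
type UpstreamLimits struct {
	MaxConnections        int `json:"max_connections"`
	MaxQueuedRequests     int `json:"max_queued_requests"`
	MaxConcurrentRequests int `json:"max_concurrent_requests"`
}

// Example values for a single upstream of one proxy instance; these are
// per-service-instance settings, not global or service-wide ones.
var exampleLimits = UpstreamLimits{
	MaxConnections:        512,
	MaxQueuedRequests:     256,
	MaxConcurrentRequests: 128,
}
```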
* Update AWS SDK to use PCA features.
* Add AWS PCA provider
* Add plumbing for config, config validation tests, add test for inheriting existing CA resources created by user
* Unparallel the tests so we don't exhaust PCA limits
* Merge updates
* More aggressive polling; rate limit pass through on sign; Timeout on Sign and CA create
* Add AWS PCA docs
* Fix Vault doc typo too
* Doc typo
* Apply suggestions from code review
Co-Authored-By: R.B. Boyer <rb@hashicorp.com>
Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>
* Doc fixes; tests for erroring if State is modified via API
* More review cleanup
* Uncomment tests!
* Minor suggested clean ups
Replaces WaitForLeader with WaitForTestAgent. This waits to make sure
that the node itself is correctly registered in the catalog before
attempting additional registrations.
* Change CA Configure struct to pass Datacenter through
* Remove connect/ca/plugin as we don't have immediate plans to use it.
We still intend to one day, but there are likely to be several changes to the CA provider interface before we do, so it's better to rebuild it from history when we do that work properly.
* Rename PrimaryDC; fix endpoint in secondary DCs
Refactor DNS to have the same behavior between A and SRV records.
The current implementation returns the node name instead of the service
address.
With this fix, when querying for an SRV record the service address is returned in
the SRV record.
And when performing a simple DNS lookup it returns a CNAME to the
service address.
* Support Connect CAs that can't cross sign
* revert spurious mod changes from make tools
* Add log warning when forcing CA rotation
* Fixup SupportsCrossSigning to report errors and work with Plugin interface (fixes tests)
* Fix failing snake_case test
* Remove misleading comment
* Revert "Remove misleading comment"
This reverts commit bc4db9cabed8ad5d0e39b30e1fe79196d248349c.
* Remove misleading comment
* Regen proto files messed up by rebase
* pass logger through to provider
* test for proper operation of NeedsLogger
* remove public testServer function
* Oops, actually set the logger in all the places we need it - CA config set wasn't and was causing a segfault
* Fix all the other places in tests where we set the logger
* Allow CA Providers to persist some state
* Update CA provider plugin interface
* Fix plugin stubs to match provider changes
* Update agent/connect/ca/provider.go
Co-Authored-By: R.B. Boyer <rb@hashicorp.com>
* Cleanup review comments
* add NeedsLogger to Provider interface
* implements NeedsLogger in default provider
* pass logger through to provider
* test for proper operation of NeedsLogger
* remove public testServer function
* Switch test to actually assert on logging output rather than reflection.
* Oops, actually set the logger in all the places we need it - CA config set wasn't and was causing a segfault
* Fix all the other places in tests where we set the logger
* Add TODO comment
* Allow RSA CA certs for consul and vault providers to correctly sign EC leaf certs.
* Ensure key type and bits are populated from CA cert and clean up tests
* Add integration test and fix error when initializing secondary CA with RSA key.
* Add more tests, fix review feedback
* Update docs with key type config and output
* Apply suggestions from code review
Co-Authored-By: R.B. Boyer <rb@hashicorp.com>
Main Changes:
• method signature updates everywhere to account for passing around enterprise meta.
• populate the EnterpriseAuthorizerContext for all ACL related authorizations.
• ACL resource listings now operate like the catalog or kv listings in that the returned entries are filtered down to what the token is allowed to see. With Namespaces it's no longer all or nothing.
• Modified the acl.Policy parsing to abstract away basic decoding so that enterprise can do it slightly differently. Also updated method signatures so that when parsing a policy it can take extra ent metadata to use during rules validation and policy creation.
Secondary Changes:
• Moved protobuf encoding functions out of the agentpb package to eliminate circular dependencies.
• Added custom JSON unmarshalers for a few ACL resource types (to support snake case and to get rid of mapstructure)
• AuthMethod validator cache is now an interface as these will be cached per-namespace for Consul Enterprise.
• Added checks for policy/role link existence at the RPC API so we don’t push the request through raft to have it fail internally.
• Forward ACL token delete request to the primary datacenter when the secondary DC doesn’t have the token.
• Added a bunch of ACL test helpers for inserting ACL resource test data.
Previously the logic for configuring RDS during LDS for L7 upstreams was
overapplied to TCP proxies resulting in a cluster name of <emptystring>
being used incorrectly.
Fixes #6621
If ACLs have not yet replicated to the secondary then authz requests
will be remotely resolved by the primary. Now these tests explicitly
wait until replication has caught up first.
* ACL Authorizer overhaul
To account for upcoming features every Authorization function can now take an extra *acl.EnterpriseAuthorizerContext. These are unused in OSS and will always be nil.
Additionally the acl package has received some thorough refactoring to enable all of the extra Consul Enterprise specific authorizations including moving sentinel enforcement into the stubbed structs. The Authorizer funcs now return an acl.EnforcementDecision instead of a boolean. This improves the overall interface as it makes multiple Authorizers easily chainable as they now indicate whether they had an authoritative decision or should use some other defaults. A ChainedAuthorizer was added to handle this Authorizer enforcement chain and will never itself return a non-authoritative decision.
* Include stub for extra enterprise rules in the global management policy
* Allow for an upgrade of the global-management policy
A check may be configured to become passing/critical only if a specified number of successive
check runs return passing/critical in a row. The status stays the same as before until
the threshold is reached.
This feature is available for HTTP, TCP, gRPC, Docker & Monitor checks.
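A minimal sketch of the dampening logic, using a single threshold for simplicity and hypothetical names:

```go
package sketch

// statusCounter only lets the reported status change once the same raw
// result has been observed `threshold` times in a row.
type statusCounter struct {
	threshold int
	current   string // status currently reported
	candidate string // most recent raw result that differs from current
	streak    int    // consecutive observations of candidate
}

// observe records one raw check result and returns the status to report.
func (c *statusCounter) observe(raw string) string {
	if raw == c.current {
		c.candidate, c.streak = raw, 0
		return c.current
	}
	if raw != c.candidate {
		c.candidate, c.streak = raw, 1
	} else {
		c.streak++
	}
	if c.streak >= c.threshold {
		c.current, c.streak = c.candidate, 0
	}
	return c.current
}
```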
* Implement leader routine manager
Switch over the following to use it for go routine management:
• Config entry Replication
• ACL replication - tokens, policies, roles and legacy tokens
• ACL legacy token upgrade
• ACL token reaping
• Intention Replication
• Secondary CA Roots Watching
• CA Root Pruning
Also added the StopAll call into the Server Shutdown method to ensure all leader routines get killed off when shutting down.
This should be mostly unnecessary as `revokeLeadership` should manually stop each one but just in case we really want these to go away (eventually).
This only works so long as we use simplistic protobuf types. Constructs such as oneof or Any types that require type annotations for decoding properly will fail hard but that is by design. If/when we want to use any of that we will probably need to consider a v2 API.
* Add JSON and Binary Marshaler Generators for Protobuf Types
* Generate files with the correct version of gogo/protobuf
I have pinned the version in the makefile so when you run make tools you get the right version. This pulls the version out of go.mod so it should remain up to date.
The version we are using at the time of this commit is v1.2.1
* Fixup some shell output
* Update how we determine the version of gogo
This just greps the go.mod file instead of expecting the go mod cache to already be present
* Fixup vendoring and remove no longer needed json encoder functions
## HTTPAdapter (#5637)
## Ember upgrade 2.18 > 3.12 (#6448)
### Proxies can no longer get away with not calling _super
This means that we can't use create anymore to define dynamic methods.
Therefore we dynamically make two extended Proxies on demand, and then
create from those, so that we can call _super in the init method of
the extended Proxies.
### We aren't allowed to reset a service anymore
We never actually need to now anyway; this is a remnant of the refactor
from browser based confirmations. We fix it as simply as possible here
but will revisit and remove the old browser confirm functionality at a
later date
### Revert classes to use ES5 style to workaround babel transp. probs
Using a mixture of ES6 classes (and hence super) and arrow functions
means that when babel transpiles the arrow functions down to ES5, a
reference to this is moved before the call to super, hence causing a js
error.
Furthermore, the testing environment no longer lets us use
apply/call on the constructor.
These errors only manifest during testing (only in the testing
environment); the application itself runs fine with no problems without
this change.
Using ES5-style class definitions gives us the freedom to do all of the above
without causing any errors, so we reverted these classes back to ES5
class definitions
### Skip test that seems to have changed due to a change in RSVP timing
This test tests a use case/area of the API that will probably never ever
be used; it was more about testing out the API. We've skipped the test for now
as this doesn't affect the application itself, but left a note to come
back here later to investigate further
### Remove enumerableContentDidChange
Initial testing suggests that we don't need to call this function anymore;
the function no longer exists
### Rework Changeset.isSaving to take into account new ember APIs
Setting/hanging a computedProperty of an instantiated object no longer
works. Move to setting it on the prototype/class definition instead
### Change how we detect whether something requires listening
New ember APIs have changed how you can detect whether something is a
computedProperty or not. It's not immediately clear if it's even possible
now. Therefore we change how we detect whether something should be
listened to or not by just looking for presence of `addEventListener`
### Potentially temporary change of ci test scripts to ensure deps exist
All our tooling scripts run through a Makefile (for people familiar with
only using those), which then call yarn scripts which can be called
independently (for people familiar with only using yarn).
The Makefile targets always check to make sure all the dependencies are
installed before running anything that requires them (building, testing
etc).
The CI scripts/targets didn't follow this same route and called the yarn
scripts directly (usually CI builds a cache of the dependencies first).
For some reason this cache isn't doing what it usually does, and it
looks as though, in CI, ember isn't installed.
This commit makes the CI scripts consistently use the same method as all
of the other tooling scripts (Makefile target > Install Deps if
required > call yarn script). This should install the dependencies if
for some reason the CI cache building doesn't complete/isn't successful.
Potentially this commit may be reverted if the root of the problem is
elsewhere, although consistency is always good, so it might be a good
idea to leave this commit as is even if we need to debug and fix things
elsewhere.
### Make test-parallel consistent with the rest of the tooling scripts
As we are here making changes for CI purposes (making test-ci
consistent), we spotted that test-parallel is also inconsistent and also
the README manual instructions won't work without `ember` installed
globally.
This commit makes everything consistent and changes the manual
instructions to use the local ember instance that gets installed via
yarn
### Re-wrangle catchable to fit with new ember 3.12 APIs
In the upgrade from ember 3.8 > 3.12 the public interfaces for
ComputedProperties have changed slightly. `meta` is no longer a public
property of ComputedProperty but of a ComputedDecoratorImpl mixin
instead.
7e4ba1096e/packages/%40ember/-internals/metal/lib/computed.ts (L725)
There seems to be no way, by just using publicly available
methods, to replicate this behaviour so that we can create our own
`ComputedProperty` factory via injecting the ComputedProperty class as
we did previously.
3f333bada1/ui-v2/app/utils/computed/factory.js (L1-L18)
Instead we dynamically hang our `Catchable` `catch` method off the
instantiated ComputedProperty. In doing it like this, `ComputedProperty`
already has its `meta` method mixed in so we don't have to manually
mix it in ourselves (which doesn't seem possible)
This functionality is only used during our work in trying to ensure
our EventSource/BlockingQuery work was as 'ember-like' as possible (i.e.
using the traditional Route.model hooks and ember-like Controller
properties). Our ongoing/upcoming work on a componentized approach to
data a.k.a `<DataSource />` means we will be able to remove the majority
of the code involved here now that it seems to be under an amount of
flux in ember.
### Build bindata_assetfs.go with new UI changes
* Add support for parameterizing the ACL config used with a TestAgent
Using tokens that are UUIDs will get rid of some warnings
* Refactor to allow setting all tokens and change the template to ignore unset values.
This fixes an issue where leaf certificates issued in secondary
datacenters would be reissued very frequently (every ~20 seconds)
because the logic meant to detect root rotation was erroneously triggering:
a hash of the ultimate root (in the primary) was being compared
against a hash of the local intermediate root (in the secondary) and
always failing.
Fixes #6521
Ensure that initial failures to fetch an agent cache entry using the
notify API where the underlying RPC returns a synthetic index of 1
correctly recovers when those RPCs resume working.
The bug was that Cache.notifyBlockingQuery used to incorrectly "fix" the
index for the next query from 0 to 1 for all queries, when it should
not have done so for queries that errored.
Also fixed some things that made debugging difficult:
- config entry read/list endpoints send back QueryMeta headers
- xds event loops don't swallow the cache notification errors
In a previous PR I made it so that we had interfaces that would work enough to allow blockingQueries to work. However to complete this we need all fields to be settable and gettable.
Notes:
• If Go ever gets contracts/generics then we could get rid of all the Getters/Setters
• protoc / protoc-gen-gogo are going to generate all the getters for us.
• I copied all the getters/setters from the protobuf funcs into agent/structs/protobuf_compat.go
• Also added JSON marshaling funcs that use jsonpb for protobuf types.
Fixes: #5396
This PR adds a proxy configuration stanza called expose. These flags register
listeners in Connect sidecar proxies to allow requests to specific HTTP paths from outside of the node. This allows services to protect themselves by only
listening on the loopback interface, while still accepting traffic from
non-Connect-enabled services.
Under expose there is a boolean checks flag that would automatically expose all
registered HTTP and gRPC check paths.
This stanza also accepts a paths list to expose individual paths. The primary
use case for this functionality would be to expose paths for third parties like
Prometheus or the kubelet.
Listeners for requests to exposed paths are configured dynamically at run
time. Any time a proxy or check can be registered, a listener can also be
created.
In this initial implementation requests to these paths are not
authenticated/encrypted.
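An illustrative sketch of the shape of the stanza; only the checks flag and the paths list come from the description above, the rest is hypothetical:

```go
package sketch

// ExposeConfig illustrates the expose stanza described above.
type ExposeConfig struct {
	Checks bool         // expose every registered HTTP and gRPC check path
	Paths  []ExposePath // individually exposed HTTP paths
}

// ExposePath is a single exposed path; its fields here are hypothetical.
type ExposePath struct {
	Path string // e.g. "/metrics" for a Prometheus scrape target
}
```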
Also:
* Finished threading replaceExistingChecks setting (from GH-4905)
through service manager.
* Respected the original configSource value that was used to register a
service or a check when restoring persisted data.
* Run several existing tests with and without central config enabled
(not exhaustive yet).
* Switch to ioutil.ReadFile for all types of agent persistence.
The fields in the certs are meant to hold the original binary
representation of this data, not some ASCII-encoded version.
The only time we should be colon-hex-encoding fields is for display
purposes or marshaling through non-TLS mediums (like RPC).
This only affects vault versions >=1.1.1 because the prior code
accidentally relied upon a bug that was fixed in
https://github.com/hashicorp/vault/pull/6505
The existing tests should have caught this, but they were using a
vendored copy of vault version 0.10.3. This fixes the tests by running
an actual copy of vault instead of an in-process copy. This has the
added benefit of changing the dependency on vault to just vault/api.
Also update VaultProvider to use similar SetIntermediate validation code
as the ConsulProvider implementation.
* Add build system support for protobuf generation
This is done generically so that we don’t have to keep updating the makefile to add another proto generation.
Note: anything not in the vendor directory and with a .proto extension will be run through protoc if the corresponding namespace.pb.go file is not up to date.
If you want to rebuild just a single proto file you can do so with: make proto-rebuild PROTOFILES=<list of proto files to rebuild>
Providing the PROTOFILES var will override the default behavior of finding all the .proto files.
* Start adding types to the agent/proto package
These will be needed for some other work and are by no means comprehensive.
* Add ability to resolve/fixup the agentpb.ACLLinks structure in the state store.
* Use protobuf marshalling of raft requests instead of msgpack for protoc generated types.
This does not change any encoding of existing types.
* Removed structs package automatically encoding with protobuf marshalling
Instead the caller of raftApply that wants to opt-in to protobuf encoding will have to call `raftApplyProtobuf`
* Run update-vendor to fixup modules.txt
Nothing changed as far as dependencies go but the ordering of modules in that file depends on the time they are first seen and it's not alphabetical.
* Rename some things and implement the structs.RPCInfo interface bits
agentpb.QueryOptions and agentpb.WriteRequest implement 3 of the 4 RPCInfo funcs and the new TargetDatacenter message type implements the fourth.
* Use the right encoding function.
* Renamed agent/proto package to agent/agentpb to prevent package name conflicts
* Update modules.txt to fix ordering
* Change blockingQuery to take in interfaces for the query options and meta
* Add %T to error output.
* Add/Update some comments
This should cut down on test flakiness.
Problems handled:
- If you had enough parallel test cases running, the former circular
approach to handling the port block could hand out the same port to
multiple cases before they each had a chance to bind them, leading to
one of the two tests failing.
- The freeport library would allocate out of the ephemeral port range.
This has been corrected for Linux (which should cover CI).
- The library now waits until a formerly-in-use port is verified to be
free before putting it back into circulation.
In normal operations there is a read/write race related to request
QueryOptions fields. An example race:
WARNING: DATA RACE
Read at 0x00c000836950 by goroutine 30:
github.com/hashicorp/consul/agent/structs.(*ServiceConfigRequest).CacheInfo()
/go/src/github.com/hashicorp/consul/agent/structs/config_entry.go:506 +0x109
github.com/hashicorp/consul/agent/cache.(*Cache).getWithIndex()
/go/src/github.com/hashicorp/consul/agent/cache/cache.go:262 +0x5c
github.com/hashicorp/consul/agent/cache.(*Cache).notifyBlockingQuery()
/go/src/github.com/hashicorp/consul/agent/cache/watch.go:89 +0xd7
Previous write at 0x00c000836950 by goroutine 147:
github.com/hashicorp/consul/agent/cache-types.(*ResolvedServiceConfig).Fetch()
/go/src/github.com/hashicorp/consul/agent/cache-types/resolved_service_config.go:31 +0x219
github.com/hashicorp/consul/agent/cache.(*Cache).fetch.func1()
/go/src/github.com/hashicorp/consul/agent/cache/cache.go:495 +0x112
This patch does a lightweight copy of the request struct so that the
embedded QueryOptions fields that are mutated during Fetch() are scoped
to just that one RPC.
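A self-contained sketch of the fix, with simplified stand-in types rather than the real request structs:

```go
package sketch

// QueryOptions and ServiceConfigRequest are simplified stand-ins; the point
// is the copy, not the exact fields.
type QueryOptions struct {
	MinQueryIndex uint64
	AllowStale    bool
}

type ServiceConfigRequest struct {
	Name string
	QueryOptions
}

// fetch mutates only its private copy of the request, so concurrent readers
// of the original request (such as CacheInfo) never race with these writes.
func fetch(req *ServiceConfigRequest, minIndex uint64) ServiceConfigRequest {
	reqCopy := *req // lightweight copy scoped to this single RPC
	reqCopy.MinQueryIndex = minIndex
	reqCopy.AllowStale = true
	return reqCopy
}
```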
* Store primary's root in secondary after intermediate signature
This ensures that the intermediate exists within the CA root stored in raft and not just in the CA provider state. This has the very nice benefit of actually outputting the intermediate cert within the ca roots HTTP/RPC endpoints.
This change means that if signing the intermediate fails it will not set the root within raft. So far I have not come up with a reason why that is bad. The secondary CA roots watch will pull the root again and go through all the motions. So as soon as getting an intermediate CA works the root will get set.
* Make TestAgentAntiEntropy_Check_DeferSync less flaky
I am not sure this is the full fix but it seems to help for me.
When this test flakes sometimes this happens:
--- FAIL: TestCoordinate_Node (1.69s)
panic: interface conversion: interface {} is nil, not structs.Coordinates [recovered]
FAIL github.com/hashicorp/consul/agent 19.999s
Exit code: 1
panic: interface conversion: interface {} is nil, not structs.Coordinates [recovered]
panic: interface conversion: interface {} is nil, not structs.Coordinates
There is definitely a bug lurking, but the code seems to imply this can
only return nil on 404. The tests previously were not checking the
status code.
The underlying cause of the flake is unknown, but this should turn the
failure into a more normal test failure.
When there is a node name conflict, messages like this are displayed within Consul:
`consul.fsm: EnsureRegistration failed: failed inserting node: Error while renaming Node ID: "e1d456bc-f72d-98e5-ebb3-26ae80d785cf": Node name node001 is reserved by node 05f10209-1b9c-b90c-e3e2-059e64556d4a with name node001`
While it is easy to find the node that has reserved the name, it is hard to find
the node trying to acquire the name since it is not registered, because it
is not part of the `consul members` output.
This PR will display the IP of the offender, making those issues far easier to resolve.
The embedded `Server` field on a `DNSServer` is only set inside of the
`ListenAndServe` method. If that method fails for reasons like the
address being in use and is not bindable, then the `Server` field will
not be set and the overall `Agent.Start()` will fail.
This will trigger the inner loop of `TestAgent.Start()` to invoke
`ShutdownEndpoints` which will attempt to pretty print the DNS servers
using fields on that inner `Server` field. Because it was never set,
this causes a nil pointer dereference and crashes the test.
Previously `verify_incoming` was required when turning on `auto_encrypt.allow_tls`, but that doesn't work together with HTTPS UI in some scenarios. Adding `verify_incoming_rpc` to the allowed configurations.
AutoEncrypt needs the server-port because it wants to talk via RPC. Information from gossip might not be available at that point and that's why the server-port is being used.
- Bootstrap escape hatches are OK.
- Public listener/cluster escape hatches are OK.
- Upstream listener/cluster escape hatches are not supported.
If an unsupported escape hatch is configured and the discovery chain is
activated, log a warning and act as if it was not configured.
Fixes #6160
Compiling this will set an optional SNI field on each DiscoveryTarget.
When set this value should be used for TLS connections to the instances
of the target. If not set the default should be used.
Setting ExternalSNI will disable mesh gateway use for that target. It also
disables several service-resolver features that do not make sense for an
external service.
If the entry is updated for reasons other than protocol it is surprising
that the value is explicitly persisted as 'tcp' rather than leaving it
empty and letting it fall back dynamically on the proxy-defaults value.
Since generated envoy clusters are all named using (mostly) SNI syntax
we can have envoy read the various fields out of that structure and emit
them as stats labels to the various telemetry backends.
I changed the delimiter for the 'customization hash' from ':' to '~'
because ':' is always re-encoded by envoy as '_' when generating metrics
keys.
Add parameter local-only to operator keyring list requests to force queries to only hit local servers (no WAN traffic).
HTTP API: GET /operator/keyring?local-only=true
CLI: consul keyring -list --local-only
Sending the local-only flag with any non-GET/list request will result in an error.
Failover is pushed entirely down to the data plane by creating envoy
clusters and putting each successive destination in a different load
assignment priority band. For example this shows that normally requests
go to 1.2.3.4:8080 but when that fails they go to 6.7.8.9:8080:
- name: foo
  load_assignment:
    cluster_name: foo
    policy:
      overprovisioning_factor: 100000
    endpoints:
    - priority: 0
      lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 1.2.3.4
              port_value: 8080
    - priority: 1
      lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 6.7.8.9
              port_value: 8080
Mesh gateways route requests based solely on the SNI header tacked onto
the TLS layer. Envoy currently only lets you configure the outbound SNI
header at the cluster layer.
If you try to failover through a mesh gateway you ideally would
configure the SNI value per endpoint, but that's not possible in envoy
today.
This PR introduces a simpler way around the problem for now:
1. We identify any target of failover that will use mesh gateway mode local or
remote and then further isolate any resolver node in the compiled discovery
chain that has a failover destination set to one of those targets.
2. For each of these resolvers we will perform a small measurement of
comparative health of the endpoints that come back from the health API for the
set of primary target and serial failover targets. We walk the list of targets
in order and if any endpoint is healthy we return that target, otherwise we
move on to the next target (see the sketch after this list).
3. The CDS and EDS endpoints both perform the measurements in (2) for the
affected resolver nodes.
4. For CDS this measurement selects which TLS SNI field to use for the cluster
(note the cluster is always going to be named for the primary target)
5. For EDS this measurement selects which set of endpoints will populate the
cluster. Priority tiered failover is ignored.
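A sketch of the target walk from step 2, with assumed types rather than the real xDS code:

```go
package sketch

// pickTarget walks the primary target followed by its serial failover
// targets and returns the first one that has at least one healthy endpoint.
// healthyCounts maps a target name to the number of healthy endpoints the
// health API returned for it.
func pickTarget(targets []string, healthyCounts map[string]int) string {
	for _, t := range targets {
		if healthyCounts[t] > 0 {
			return t
		}
	}
	// Nothing healthy anywhere: fall back to the primary target.
	return targets[0]
}
```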
One of the big downsides to this approach to failover is that the failover
detection and correction is going to be controlled by consul rather than
deferring that entirely to the data plane as with the prior version. This also
means that we are bound to only failover using official health signals and
cannot make use of data plane signals like outlier detection to affect
failover.
In this specific scenario the lack of data plane signals is ok because the
effectiveness is already muted by the fact that the ultimate destination
endpoints will have their data plane signals scrambled when they pass through
the mesh gateway wrapper anyway so we're not losing much.
Another related fix is that we now use the endpoint health from the
underlying service, not the health of the gateway (regardless of
failover mode).
In addition to exposing compilation over the API, this cleans up the structures that are exchanged so that they are easier to support and understand.
Also removed the ability to configure the envoy OverprovisioningFactor.
This should make them better for sending over RPC or the API.
Instead of a chain implemented explicitly like a linked list (nodes
holding pointers to other nodes), switch to a flat map of named
nodes, with nodes linking to other nodes by name. The shipped
structure is just a map and a string to indicate which key to start
from.
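An illustrative sketch of that flattened shape (hypothetical field names, not the exact structs):

```go
package sketch

// CompiledChain is just a map of named nodes plus the name of the node to
// start from, which is easy to ship over RPC or the API.
type CompiledChain struct {
	StartNode string
	Nodes     map[string]*ChainNode
}

// ChainNode links to other nodes by name rather than by pointer.
type ChainNode struct {
	Name string
	Next string // name of the next node; empty if this node is terminal
}
```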
Other changes:
* inline the compiler option InferDefaults as true
* introduce compiled target config to avoid needing to send back
additional maps of Resolvers; future target-specific compiled state
can go here
* move compiled MeshGateway out of the Resolver and into the
TargetConfig where it makes more sense.
* connect: reconcile how upstream configuration works with discovery chains
The following upstream config fields for connect sidecars sanely
integrate into discovery chain resolution:
- Destination Namespace/Datacenter: Compilation occurs locally but using
different default values for namespaces and datacenters. The xDS
clusters that are created are named as they normally would be.
- Mesh Gateway Mode (single upstream): If set this value overrides any
value computed for any resolver for the entire discovery chain. The xDS
clusters that are created may be named differently (see below).
- Mesh Gateway Mode (whole sidecar): If set this value overrides any
value computed for any resolver for the entire discovery chain. If this
is specifically overridden for a single upstream this value is ignored
in that case. The xDS clusters that are created may be named differently
(see below).
- Protocol (in opaque config): If set this value overrides the value
computed when evaluating the entire discovery chain. If the normal chain
would be TCP or if this override is set to TCP then the result is that
we explicitly disable L7 Routing and Splitting. The xDS clusters that
are created may be named differently (see below).
- Connect Timeout (in opaque config): If set this value overrides the
value for any resolver in the entire discovery chain. The xDS clusters
that are created may be named differently (see below).
If any of the above overrides affect the actual result of compiling the
discovery chain (i.e. "tcp" becomes "grpc" instead of being a no-op
override to "tcp") then the relevant parameters are hashed and provided
to the xDS layer as a prefix for use in naming the Clusters. This is to
ensure that if one Upstream discovery chain has no overrides and
tangentially needs a cluster named "api.default.XXX", and another
Upstream does have overrides for "api.default.XXX" that they won't
cross-pollinate against the operator's wishes.
Fixes #6159
* connect: validate upstreams and prevent duplicates
* Actually run Upstream.Validate() instead of ignoring it as dead code.
* Prevent two upstreams from declaring the same bind address and port.
It wouldn't work anyway.
* Prevent two upstreams from being declared that use the same
type+name+namespace+datacenter. Due to how the Upstream.Identity()
function worked this ended up mostly being enforced in xDS at use-time,
but it should be enforced more clearly at register-time.
Secondary CA initialization steps are:
• Wait until the primary will be capable of signing intermediate certs. We use serf metadata to check the versions of servers in the primary, which avoids needing a token like the previous implementation that used RPCs. We require at least one alive server in the primary and that all alive servers meet the version requirement.
• Initialize the secondary CA by getting the primary to sign an intermediate.
When a primary dc is configured, if no existing CA is initialized and for whatever reason we cannot initialize a secondary CA, the secondary DC will remain without a CA. As soon as it can, it will initialize the secondary CA by pulling the primary's roots and getting the primary to sign an intermediate.
This also fixes a segfault that can happen during leadership revocation. There was a spot in the secondaryCARootsWatch that was getting the CA Provider and executing methods on it without nil checking. Under normal circumstances it won't be nil, but during leadership revocation it gets nil'ed out. Therefore there is a period of time between closing the stop chan and when the goroutine is actually stopped where it could read a nil provider and cause a segfault.
Auto-encrypt was meant to fall back to the default port when one wasn't provided, but it hadn't been doing so because of an issue with the error handling. We were checking against an incomplete error value:
"missing port in address" vs "address $HOST: missing port in address"
Additionally, all RPCs to AutoEncrypt.Sign were using a.config.ServerPort, so those were updated to use ports resolved by resolveAddrs, if they are available.
* Allow setting the mesh gateway mode for an upstream in config files
* Add envoy integration test for mesh gateways
This necessitated many supporting changes in most of the other test cases.
Add remote mode mesh gateways integration test
The main change is that we no longer filter service instances by health,
preferring instead to render all results down into EDS endpoints in
envoy and merely label the endpoints as HEALTHY or UNHEALTHY.
When OnlyPassing is set to true we will force consul checks in a
'warning' state to render as UNHEALTHY in envoy.
Fixes #6171
* Display Networks (CIDR) nicely in the runtime configuration
The CIDR mask is displayed in binary in the configuration. This adds
support for displaying CIDRs nicely in the runtime configuration.
Currently, if a configuration contains the following lines:
"http_config": {
"allow_write_http_from": [
"127.0.0.0/8",
"::1/128"
]
}
A call to `/v1/agent/self?pretty` would display
"AllowWriteHTTPFrom": [
{
"IP": "127.0.0.0",
"Mask": "/wAAAA=="
},
{
"IP": "::1",
"Mask": "/////////////////////w=="
}
]
This PR fixes it and it will now display:
"AllowWriteHTTPFrom": [ "127.0.0.0/8", "::1/128" ]
* Added test for cidr nice rendering in `TestSanitize()`.
* Update go-bexpr to v0.1.1
This brings in:
• `in`/`not in` operators to do substring matching
• `matches` / `not matches` operators to perform regex string matching.
* Add the capability to auto-generate the filtering selector ops tables for our docs
This fixes pathological cases where the write throughput and snapshot size are both so large that more than 10k log entries are written in the time it takes to restore the snapshot from disk. In this case followers that restart can never catch up with leader replication again and enter a loop of constantly downloading a full snapshot and restoring it, only to find that the snapshot is already out of date and the leader has truncated its logs, so a new snapshot is sent, and so on.
In general if you need to adjust this, you are probably abusing Consul for purposes outside its design envelope and should reconsider your usage to reduce data size and/or write volume.
All these changes should have no side-effects or change behavior:
- Use bytes.Buffer's String() instead of a conversion
- Use time.Since and time.Until where fitting
- Drop unnecessary returns and assignment
I can only assume we want to check for the retrieved `updatedToken` to not be
nil, before accessing it below.
`token` can't possibly be nil at this point, as we accessed `token.AccessorID`
just before.
* Ensure the mesh gateway configuration comes back in the api within each upstream
* Add a test for the MeshGatewayConfig in the ToAPI functions
* Ensure we don’t use gateways for dc local connections
* Update the svc kind index for deletions
* Replace the proxycfg.state cache with an interface for testing
Also start implementing proxycfg state testing.
* Update the state tests to verify some gateway watches for upstream-targets of a discovery chain.
Also:
- add back an internal http endpoint to dump a compiled discovery chain for debugging purposes
Before the CompiledDiscoveryChain.IsDefault() method would test:
- is this chain just one resolver step?
- is that resolver step just the default?
But what I forgot to test:
- is that resolver step for the same service that the chain represents?
This last point is important because if you configured just one config
entry:
kind = "service-resolver"
name = "web"
redirect {
service = "other"
}
and requested the chain for "web" you'd get back a **default** resolver
for "other". In the xDS code the IsDefault() method is used to
determine if this chain is "empty". If it is then we use the
pre-discovery-chain logic that just uses data embedded in the Upstream
object (and still lets the escape hatches function).
In the example above that means certain parts of the xDS code were going
to try referencing a cluster named "web..." despite the other parts of
the xDS code maintaining clusters named "other...".
Both 'consul config write' and server bootstrap config entries take a
decoding detour through mapstructure on the way from HCL to an actual
struct. They both may take in snake_case or CamelCase (for consistency)
so need very similar handling.
Unfortunately since they are operating on mirror universes of structs
(api.* vs structs.*) the code cannot be identical, so try to share the
kind-configuration and duplicate the rest for now.
* Ensure we MapWalk the proxy config in the NodeService and ServiceNode structs
This gets rid of some json encoder errors in the catalog endpoints
* Allow passing explicit bind addresses to envoy
* Move map walking to the ConnectProxyConfig struct
Any place where this struct gets JSON encoded will benefit as opposed to having to implement it everywhere.
* Fail when a non-empty address is provided and not bindable
* camel case
* Update command/connect/envoy/envoy.go
Co-Authored-By: Paul Banks <banks@banksco.de>
* Retry the creation of the test server three times.
* Reduce the retry timeout for the API wait to 2 seconds, opting to fail faster and start over.
* Remove wait for leader from server creation. This wait can be added on a test by test basis now that the function is being exported.
* Remove wait for anti-entropy sync. This is built into the existing WaitForSerfCheck func, so that can be used if the anti-entropy wait is needed
Previously a sequence of events like:
Start
Stop
Start
Stop
would segfault on the second stop because the original ctx and cancel func were only initialized during the constructor and not during Start.
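A sketch of the corrected lifecycle with a hypothetical Manager type: the context and cancel func are (re)created in Start rather than only once in a constructor:

```go
package sketch

import "context"

type Manager struct {
	ctx    context.Context
	cancel context.CancelFunc
}

func (m *Manager) run(ctx context.Context) { <-ctx.Done() }

// Start recreates the context every time, so a Start/Stop/Start/Stop
// sequence never cancels a stale or nil context.
func (m *Manager) Start() {
	m.ctx, m.cancel = context.WithCancel(context.Background())
	go m.run(m.ctx)
}

func (m *Manager) Stop() {
	if m.cancel != nil {
		m.cancel()
	}
}
```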
* Make cluster names SNI always
* Update some tests
* Ensure we check for prepared query types
* Use sni for route cluster names
* Proper mesh gateway mode defaulting when the discovery chain is used
* Ignore service splits from PatchSliceOfMaps
* Update some xds golden files for proper test output
* Allow for grpc/http listeners/cluster configs with the disco chain
* Update stats expectation
* connect: allow overriding envoy listener bind_address
* Update agent/xds/config.go
Co-Authored-By: Kyle Havlovitz <kylehav@gmail.com>
* connect: allow overriding envoy listener bind_port
* envoy: support unix sockets for grpc in bootstrap
Add AgentSocket BootstrapTplArgs which if set overrides the AgentAddress
and AgentPort to generate a bootstrap which points Envoy to a unix
socket file instead of an ip:port.
* Add a test for passing the consul addr as a unix socket
* Fix config formatting for envoy bootstrap tests
* Fix listeners test cases for bind addr/port
* Update website/source/docs/connect/proxies/envoy.md
maxIndexWatchTxn was only watching the IndexEntry of the max index of all the entries. It needed to watch all of them regardless of which was the max.
Also plumbed the query source through in the proxy config to help better track requests.
The clusters/endpoints tests were still relying on deterministic ordering of clusters/endpoints, which cannot be relied upon due to golang purposefully not providing any guarantee about consistent iteration ordering of maps.
Also fixed a small bug in the connect proxy cluster generation that was causing the clusters slice to be double the size it needed to with the first half being all nil pointers.
* Fix some tests that I broke when refactoring the ConfigSnapshot
* Make sure the MeshGateway config is added to all the right api structs
* Fix some more tests
The general problem was that retrieving the CA config, which contained the trust domain, was happening outside of the blocking mechanism, so if the client started the blocking query before the primary DC's roots had been set then a stale trust domain was being pushed down.
This was fixed here, but in the future we should probably fix up the CA initialization code to not initialize the CA config twice when it doesn't need to.
* Prune Servers from WAN and LAN
* cleaned up and fixed LAN to WAN
* moving things around
* force-leave remove from serfWAN, create pruneSerfWAN
* removed serfWAN remove, reduced complexity, fixed comments
* add another place to remove from serfWAN
* add nil check
* Update agent/consul/server.go
Co-Authored-By: Paul Banks <banks@banksco.de>
With this you should be able to fetch all of the relevant discovery
chain config entries from the state store in one query and then feed
them into the compiler outside of a transaction.
There are a lot of TODOs scattered through here, but they're mostly
around handling fun edge cases and can be deferred until more of the
plumbing works completely.
With ACLs enabled if an agent is wiped and restarted without a leave
it can no longer deregister the services it had previously registered
because it no longer has the tokens the services were registered with.
To remedy that we allow service deregistration from tokens with node
write permission.
* Add ui-content-path flag
* tests complete, regex validator on string, index.html updated
* cleaning up debugging stuff
* ui: Enable ember environment configuration to be set via the go binary at runtime (#5934)
* ui: Only inject {{.ContentPath}} if we are making a prod build...
...otherwise we just use the current rootURL
This gets injected into a <base /> node which solves the assets path
problem but not the ember problem
* ui: Pull out the <base href=""> value and inject it into ember env
See previous commit:
The <base href=""> value is 'sometimes' injected from go at index
serve time. We pass this value down to ember by overwriting the ember
config that is injected via a <meta> tag. This has to be done before
ember bootup.
Sometimes (during testing and development, basically not production)
this is injected with the already existing value, in which case this
essentially changes nothing.
The code here is slightly abstracted away from our specific usage to
make it easier for anyone else to use, and also make sure we can cope
with using this same method to pass variables down from the CLI through
to ember in the future.
* ui: We can't use <base /> move everything to javascript (#5941)
Unfortunately we can't seem to be able to use <base> and rootURL
together as URL paths will get doubled up (`ui/ui/`).
This moves all the things that we need to interpolate with .ContentPath
to the `startup` javascript so we can conditionally print out
`{{.ContentPath}}` in lots of places (now we can't use base)
* fixed when we serve index.html
* ui: For writing a ContentPath, we also need to cope with testing... (#5945)
...and potentially more environments
Testing has more additional things in a separate index.html in `tests/`
This makes the entire thing a little saner and uses just javascript
template literals instead of a pseudo handlebars syntax for our
templating of these files.
Instead of just templating the entire file this way, we still only
template `{{content-for 'head'}}` and `{{content-for 'body'}}`
in this way to ensure we support other plugins/addons
* build: Loosen up the regex for retrieving the CONSUL_VERSION (#5946)
* build: Loosen up the regex for retrieving the CONSUL_VERSION
1. Previously the `sed` replacement was searching for the CONSUL_VERSION
comment at the start of a line; it no longer does this, to allow for
indentation.
2. Both `grep` and `sed` were looking for the comment at the end of the
line. We've removed this restriction here. We don't need to remove it
right now, but if we ever put the comment followed by something here the
searching would break.
3. Added `xargs` for trimming the resulting version string. We aren't
using this already in the rest of the scripts, but we are pretty sure
this is available on most systems.
* ui: Fix erroneous variable, and also force an ember cache clean on build
1. We referenced a variable incorrectly here, this fixes that.
2. We also made sure that every `make` target clears ember's `tmp` cache
to ensure that it's not using any caches that have since been edited
every time we call a `make` target.
* added docs, fixed encoding
* fixed go fmt
* Update agent/config/config.go
Co-Authored-By: R.B. Boyer <public@richardboyer.net>
* Completed Suggestions
* run gofmt on http.go
* fix testsanitize
* fix fullconfig/hcl by setting correct 'want'
* ran gofmt on agent/config/runtime_test.go
* Update website/source/docs/agent/options.html.md
Co-Authored-By: Hans Hasselberg <me@hans.io>
* Update website/source/docs/agent/options.html.md
Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>
* remove contentpath from redirectFS struct
* Support for maximum size for Output of checks
This PR allows users to limit the size of output produced by checks at the agent
and check level.
When set at the agent level, it will limit the output for all checks monitored
by the agent.
When set at the check level, it can override the agent max for a specific check but
only if it is lower than the agent max.
Default value is 4k, and input must be at least 1.
If a KVSet is performed but does not update the entry, do not trigger
watches for this key.
This avoids releasing blocking queries for KV values that did not
actually change.
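A minimal sketch of the no-op detection, with a simplified stand-in for the stored entry:

```go
package sketch

import "bytes"

// KVEntry is a simplified stand-in for a stored KV entry.
type KVEntry struct {
	Flags uint64
	Value []byte
}

// kvUnchanged reports whether applying the update would leave the stored
// entry exactly as it is; when true, the index is not bumped and watches on
// the key are not triggered.
func kvUnchanged(existing, update *KVEntry) bool {
	return existing != nil &&
		existing.Flags == update.Flags &&
		bytes.Equal(existing.Value, update.Value)
}
```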
Read requests performed during anti-entropy full sync currently target
the leader only. This generates a non-negligible load on the leader when
the DC is large enough and can be offloaded to the followers following
the "eventually consistent" policy for the agent state.
We switch the AE read calls to use stale requests with a small (2s)
MaxStaleDuration value and make sure we do not read too fast after a
write.
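A sketch of the options used for those reads, with a simplified stand-in for the RPC query options:

```go
package sketch

import "time"

// QueryOptions is a simplified stand-in for the RPC query options; the two
// fields shown mirror the settings described above.
type QueryOptions struct {
	AllowStale       bool
	MaxStaleDuration time.Duration
}

// aeQueryOptions allows stale reads so followers can serve anti-entropy
// syncs, but bounds the staleness at 2s so the agent never acts on data
// that is too far behind the leader.
func aeQueryOptions() QueryOptions {
	return QueryOptions{
		AllowStale:       true,
		MaxStaleDuration: 2 * time.Second,
	}
}
```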
This allows addresses to be tagged at the service level similar to what we allow for nodes already. The address translation that can be enabled with the `translate_wan_addrs` config was updated to take these new addresses into account as well.
* Upgrade xDS (go-control-plane) API to support Envoy 1.10.
This includes backwards compatibility shim to work around the ext_authz package rename in 1.10.
It also adds integration test support in CI for 1.10.0.
* Fix go vet complaints
* go mod vendor
* Update Envoy version info in docs
* Update website/source/docs/connect/proxies/envoy.md
The observed bug was that a full restart of a consul datacenter (servers
and clients) in conjunction with a restart of a connect-flavored
application with bring-your-own-service-registration logic would very
frequently cause the envoy sidecar service check to never reflect the
aliased service.
Over the course of investigation several bugs and unfortunate
interactions were corrected:
(1)
local.CheckState objects were only shallow copied, but the key piece of
data that gets read and updated is one of the things not copied (the
underlying Check with a Status field). When the stock code was run with
the race detector enabled this highly-relevant-to-the-test-scenario field
was found to be racy.
Changes:
a) update the existing Clone method to include the Check field
b) copy-on-write when those fields need to change rather than
incrementally updating them in place.
This made the observed behavior occur slightly less often.
(2)
If anything about how the runLocal method for node-local alias check
logic was ever flawed, there was no fallback option. Those checks are
purely edge-triggered and failure to properly notice a single edge
transition would leave the alias check incorrect until the next flap of
the aliased check.
The change was to introduce a fallback timer to act as a control loop to
double check the alias check matches the aliased check every minute
(borrowing the duration from the non-local alias check logic body).
This made the observed behavior eventually go away when it did occur.
(3)
Originally I thought there were two main actions involved in the data race:
A. The act of adding the original check (from disk recovery) and its
first health evaluation.
B. The act of the HTTP API requests coming in and resetting the local
state when re-registering the same services and checks.
It took a while for me to realize that there's a third action at work:
C. The goroutines associated with the original check and the later
checks.
The actual sequence of actions that was causing the bad behavior was
that the API actions resulted in the original check being removed and
re-added _without waiting for the original goroutine to terminate_. This
means for brief windows of time during check definition edits there are
two goroutines that can be sending updates for the alias check status.
In extremely unlikely scenarios the original goroutine sees the aliased
check start up in `critical` before being removed but does not get the
notification about the nearly immediate update of that check to
`passing`.
This is interlaced with the new goroutine coming up, initializing its
base case to `passing` from the current state and then listening for new
notifications of edge triggers.
If the original goroutine "finishes" its update, it then commits one
more write into the local state of `critical` and exits leaving the
alias check no longer reflecting the underlying check.
The correction here is to enforce that the old goroutines must terminate
before spawning the new one for alias checks.
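A sketch of that shape with hypothetical names (the real code lives in the local state/check machinery): stop the old watcher and wait for it to exit before starting its replacement.

```go
type aliasWatcher struct {
	stopCh chan struct{}
	doneCh chan struct{}
}

func (w *aliasWatcher) stop() {
	close(w.stopCh)
	<-w.doneCh // block until the old goroutine has fully terminated
}

func restartAliasWatcher(old *aliasWatcher, run func(stop <-chan struct{})) *aliasWatcher {
	if old != nil {
		// Never allow two goroutines to publish status for the same alias check.
		old.stop()
	}
	next := &aliasWatcher{
		stopCh: make(chan struct{}),
		doneCh: make(chan struct{}),
	}
	go func() {
		defer close(next.doneCh)
		run(next.stopCh)
	}()
	return next
}
```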
* Improve startup message to avoid confusing users when no error occurs
Users who are not very familiar with Consul are sometimes confused by
this message at startup:
`[INFO] agent: (LAN) joined: 1 Err: <nil>`
Having `Err: <nil>` looks odd to many users, so I propose the following
instead (see the sketch below):
* Success: `[INFO] agent: (LAN) joined: 1`
* Error: `[WARN] agent: (LAN) couldn't join: %d Err: ERROR`
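Roughly the intended logging shape (illustrative, not the exact agent code):

```go
func logJoinResult(logger *log.Logger, n int, err error) {
	if err != nil {
		logger.Printf("[WARN] agent: (LAN) couldn't join: %d Err: %v", n, err)
		return
	}
	logger.Printf("[INFO] agent: (LAN) joined: %d", n)
}
```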
Logic updated to use the boolean returned by the type assertion.
This lets us check whether the assertion succeeded and be clearer
with errors.
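For illustration, the comma-ok form reports a failed assertion instead of panicking, so the caller can return a clear error (the concrete type here is only an example):

```go
func asServiceNode(raw interface{}) (*structs.ServiceNode, error) {
	sn, ok := raw.(*structs.ServiceNode)
	if !ok {
		return nil, fmt.Errorf("unexpected type %T, wanted *structs.ServiceNode", raw)
	}
	return sn, nil
}
```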
* Fix race condition during a cache get
Check that the entry we pulled out of the cache while holding the lock had Fetching set.
If it did, then we should use the existing Waiter instead of calling fetch. The reason
this is better than just calling fetch is that fetch re-gets the entry out of the
entries map and the previous fetch may have finished. Therefore this prevents
erroneously starting a new fetch because we just missed the last update (see the
sketch after these notes).
* Fix race condition fully
The first commit still allowed for the following scenario:
• No entry existing when checked in getWithIndex while holding the read lock
• Then by the time we had reached fetch it had been created and finished.
* always use ok when returning
* comment mentioning the reading from entries.
* use cacheHit consistently
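A simplified sketch of the idea with hypothetical types (not the real agent/cache structures): if the entry we already hold is mid-fetch, wait on its existing Waiter rather than starting another fetch.

```go
type cacheEntry struct {
	Fetching bool
	Waiter   chan struct{} // closed when the in-flight fetch completes
	Value    interface{}
}

type Cache struct {
	lock    sync.RWMutex
	entries map[string]*cacheEntry
}

func (c *Cache) get(key string) interface{} {
	c.lock.RLock()
	entry, ok := c.entries[key]
	c.lock.RUnlock()

	if ok && entry.Fetching {
		// Decide based on the entry pulled out under the lock, so a fetch that
		// has just finished can't trick us into starting a redundant new one.
		<-entry.Waiter
		return c.get(key) // re-read the now-updated entry
	}
	if ok {
		return entry.Value
	}
	return c.fetch(key)
}

func (c *Cache) fetch(key string) interface{} { return nil } // elided
```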
* Add integration test for central config; fix central config WIP
* Add integration test for central config; fix central config WIP
* Set proxy protocol correctly and begin adding upstream support
* Add upstreams to service config cache key and start new notify watcher if they change.
This doesn't update the tests to pass though.
* Fix some merging logic to get things working manually with a hack (TODO fix properly)
* Simplification to not allow enabling sidecars centrally - it makes no sense without upstreams anyway
* Test compile again and obvious ones pass. Lots of failures locally not debugged yet but may be flakes. Pushing up to see what CI does
* Fix up service manager and API test failures
* Remove the enable command since it no longer makes much sense without being able to turn on sidecar proxies centrally
* Remove version.go hack - will make integration test fail until release
* Remove unused code from commands and upstream merge
* Re-bump version to 1.5.0
* Add HTTP endpoints for config entry management
* Finish implementing decoding in the HTTP Config entry apply endpoint
* Add CAS operation to the config entry apply endpoint
Also use this for the bootstrapping and move the config entry decoding function into the structs package.
* First pass at the API client for the config entries
* Fixup some of the ConfigEntry APIs
Return a singular response object instead of a list for the ConfigEntry.Get RPC. This gets plumbed through the HTTP API as well.
Don't return QueryMeta in the JSON response for the config entry listing HTTP API. Instead, just return a list of config entries.
* Minor API client fixes
* Attempt at some ConfigEntry api client tests
These don’t currently work due to weak typing in JSON
* Get some of the api client tests passing
* Implement reflectwalk magic to correct JSON encoding a ProxyConfigEntry
Also added a test for the HTTP endpoint that exposes the problem. However, since the test doesn't actually do the JSON encode/decode, it's still failing.
* Move MapWalk magic into a binary marshaller instead of JSON.
* Add a MapWalk test
* Get rid of unused func
* Get rid of unused imports
* Fixup some tests now that the decoding from msgpack coerces things into json compat types
* Stub out most of the central config cli
Fully implement the config read command.
* Basic config delete command implementation
* Implement config write command
* Implement config list subcommand
Not entirely sure about the output here. It's basically the read output, indented, with a line specifying the kind/name of each type, which is also duplicated in the indented output.
* Update command usage
* Update some help usage formatting
* Add the connect enable helper cli command
* Update list command output
* Rename the config entry API client methods.
* Use renamed apis
* Implement config write tests
Stub the others with the noTabs tests.
* Change list output format
Now just simply output 1 line per named config
* Add config read tests
* Add invalid args write test.
* Add config delete tests
* Add config list tests
* Add connect enable tests
* Update some CLI commands to use CAS ops
This also modifies the HTTP API for a write op to return a boolean indicating whether the value was written or not.
* Fix up the HTTP API CAS tests as I realized they weren’t testing what they should.
* Update config entry rpc tests to properly test CAS
* Fix up a few more tests
* Fix some tests that use ConfigEntries.Apply
* Update config_write_test.go
* Get rid of unused import
Also update some snapshot agent docs
* Enforce correct permissions when registering a check
Previously we had attempted to enforce service:write for a check associated with a service instead of node:write on the agent, but due to how we decoded the health check from the request it was never done properly. This commit fixes that.
* Update website/source/docs/commands/snapshot/agent.html.markdown.erb
Co-Authored-By: mkeeler <mkeeler@users.noreply.github.com>
* Add support for HTTP proxy listeners
* Add customizable bootstrap configuration options
* Debug logging for xDS AuthZ
* Add Envoy Integration test suite with basic test coverage
* Add envoy command tests to cover new cases
* Add tracing integration test
* Add gRPC support WIP
* Merged changes from master. Get CI integration to work with the same Dockerfile now
* Make docker build optional for integration
* Enable integration tests again!
* http2 and grpc integration tests and fixes
* Fix up command config tests
* Store all container logs as artifacts in circle on fail
* Add retries to outer part of stats measurements as we keep missing them in CI
* Only dump logs on failing cases
* Fix typos from code review
* Review tidying and make tests pass again
* Add debug logs to exec test.
* Fix legit test failure caused by upstream rename in envoy config
* Attempt to reduce cases of bad TLS handshake in CI integration tests
* bring up the right service
* Add prometheus integration test
* Add test for denied AuthZ both HTTP and TCP
* Try ANSI term for Circle
When receiving a serf failed message for a node which is not in the
catalog, do not perform a register request to set its serf health to
critical, as it could overwrite the node information and services if it
was renamed.
Fixes: #5518
Roles are named and can express the same bundle of permissions that can
currently be assigned to a Token (lists of Policies and Service
Identities). The difference with a Role is that it is not itself a bearer
token, but just another entity that can be tied to a Token.
This lets an operator potentially curate a set of smaller reusable
Policies and compose them together into reusable Roles, rather than
always exploding that same list of Policies on any Token that needs
similar permissions.
This also refactors the acl replication code to be semi-generic to avoid
3x copypasta.
This is mainly to avoid having the API return "0001-01-01T00:00:00Z" as
a value for the ExpirationTime field when it is not set. Unfortunately
time.Time doesn't respect the json marshalling "omitempty" directive.
These act like a special cased version of a Policy Template for granting
a token the privileges necessary to register a service and its connect
proxy, and read upstreams from the catalog.
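A hedged example of composing Policies and a Service Identity into a Role via the `api` package; the method and field names (`RoleCreate`, `ACLRolePolicyLink`, `ACLServiceIdentity`) are as I recall them for 1.5+, and the policy names are made up:

```go
func createFrontendRole(client *api.Client) (*api.ACLRole, error) {
	role, _, err := client.ACL().RoleCreate(&api.ACLRole{
		Name:        "frontend",
		Description: "reusable bundle of permissions for frontend instances",
		Policies: []*api.ACLRolePolicyLink{
			{Name: "service-registration"}, // hypothetical policy names
			{Name: "kv-frontend-read"},
		},
		ServiceIdentities: []*api.ACLServiceIdentity{
			{ServiceName: "frontend"},
		},
	}, nil)
	return role, err
}
```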
* Move the watch package into the api module
It was already just a thin wrapper around the API anyway. The biggest change was to the testing. Instead of using a test agent directly from the agent package it now uses the binary on the PATH just like the other API tests.
The other big change was to fix up the connect based watch tests so that we didn't need to pull in the connect package (and therefore all of Consul).
The DNS config parameters `recursors` and `dns_config.*` are now hot
reloaded on SIGHUP or `consul reload` and do not need an agent restart
to be modified.
Config is stored in an atomic.Value and loaded at the beginning of each
request. Reloading only affects requests that start _after_ the
reload. Ongoing requests are not affected. To match the current
behavior the recursor handler is loaded and unloaded as needed on config
reload.
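The pattern looks roughly like this (illustrative types, not the actual agent/dns code):

```go
type dnsRuntimeConfig struct {
	Recursors   []string
	CacheMaxAge time.Duration
}

type DNSServer struct {
	config atomic.Value // holds *dnsRuntimeConfig
}

func (d *DNSServer) ReloadConfig(c *dnsRuntimeConfig) {
	d.config.Store(c) // only requests that start after this see the new config
}

func (d *DNSServer) handleQuery() {
	cfg := d.config.Load().(*dnsRuntimeConfig) // one snapshot per request
	_ = cfg
	// ... resolve using cfg; in-flight requests keep their own snapshot ...
}
```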
Due to an unintended order-of-operations issue with integer division,
TestSessionTTLRenew was sleeping for 0s every time.
Also add explicit failures for when the various session renewal returns
nil unexpectedly.
Fixes: #4222
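The general shape of the pitfall, not the exact test code:

```go
func renewSleep(ttlSec int) {
	// Integer division runs first: for ttlSec == 1 this computes 1/2 == 0,
	// so the sleep is 0s.
	time.Sleep(time.Duration(ttlSec/2) * time.Second)

	// Converting to a time.Duration before dividing keeps the fractional part
	// (500ms for ttlSec == 1).
	time.Sleep(time.Duration(ttlSec) * time.Second / 2)
}
```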
# Data Filtering
This PR will implement filtering for the following endpoints:
## Supported HTTP Endpoints
- `/agent/checks`
- `/agent/services`
- `/catalog/nodes`
- `/catalog/service/:service`
- `/catalog/connect/:service`
- `/catalog/node/:node`
- `/health/node/:node`
- `/health/checks/:service`
- `/health/service/:service`
- `/health/connect/:service`
- `/health/state/:state`
- `/internal/ui/nodes`
- `/internal/ui/services`
More can be added going forward and any endpoint which is used to list some data is a good candidate.
## Usage
When using the HTTP API a `filter` query parameter can be used to pass a filter expression to Consul. Filter Expressions take the general form of:
```
<selector> == <value>
<selector> != <value>
<value> in <selector>
<value> not in <selector>
<selector> contains <value>
<selector> not contains <value>
<selector> is empty
<selector> is not empty
not <other expression>
<expression 1> and <expression 2>
<expression 1> or <expression 2>
```
Normal boolean logic and precedence are supported. All of the actual filtering and evaluation logic comes from the [go-bexpr](https://github.com/hashicorp/go-bexpr) library
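From Go, the same expressions can be passed through the `api` package's `QueryOptions.Filter` field (the selectors below are only examples of the syntax):

```go
func filteredHealth(client *api.Client) ([]*api.ServiceEntry, error) {
	// Sent to the server as the ?filter= query parameter.
	entries, _, err := client.Health().Service("web", "", false, &api.QueryOptions{
		Filter: `Service.Tags contains "primary" and Service.Meta.version == "v2"`,
	})
	return entries, err
}
```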
## Other changes
Adding the `Internal.ServiceDump` RPC endpoint. This will allow the UI to filter services better.
* First conversion
* Use serf 0.8.2 tag and associated updated deps
* Move freeport and testutil into internal/
* Make internal/ its own module
* Update imports
* Add replace statements so API and normal Consul code are
self-referencing for ease of development
* Adapt to newer goe/values
* Bump to new cleanhttp
* Fix ban nonprintable chars test
* Update lock bad args test
The error message when the duration cannot be parsed changed in Go 1.12
(ae0c435877d3aacb9af5e706c40f9dddde5d3e67). This updates that test.
* Update another test as well
* Bump travis
* Bump circleci
* Bump go-discover and godo to get rid of launchpad dep
* Bump dockerfile go version
* fix tar command
* Bump go-cleanhttp
* Connect: Fix Envoy getting stuck during load
Also in this PR:
- Enabled outlier detection on upstreams which will mark instances unhealthy after 5 failures (using Envoy's defaults)
- Enable weighted load balancing where DNS weights are configured
* Fix empty load assignments in the right place
* Fix import names from review
* Move millisecond parse to a helper function
* Make Connect health queries unblock correctly in all cases and use optimal number of watch chans. Fixes #5506.
* Node check test cases and clearer bug test doc
* Comment update
Refs #4984.
Watching chans for every node we touch in a health query is wasteful. #4984 shows that if there are more than 682 service instances we always fall back to watching all services, which kills performance.
We already have a record in MemDB that is reliably updated whenever the service health result should change, thanks to per-service watch indexes.
So in general, provided there is at least one service instance and we actually have a service index for it (we always do now), we only ever need to watch a single channel.
This saves us from ever falling back to the general index and causing the performance cliff in #4984, and it also means fewer goroutines and less work done for every blocking health query.
It also saves some allocations made during the query because we no longer have to populate a WatchSet with 3 chans per service instance, which saves the internal map allocation.
This passes all state store tests except the one that explicitly checked for the fallback behaviour we've now optimized away and in general seems safe.
acl: reduce complexity of token resolution process with alternative singleflighting
Switches acl resolution to use golang.org/x/sync/singleflight. For the
identity/legacy lookups this is a drop-in replacement with the same
overall approach to request coalescing.
For policies this is technically a change in behavior, but when
considered holistically is approximately performance neutral (with the
benefit of less code).
There are two goals with this blob of code (speaking specifically of
policy resolution here):
1) Minimize cross-DC requests.
2) Minimize client-to-server LAN requests.
The previous iteration of this code was optimizing for the case of many
possibly different tokens being resolved concurrently that have a
significant overlap in linked policies such that deduplication would be
worth the complexity. While this is laudable there are some things to
consider that can help to adjust expectations:
1) For v1.4+ policies are always replicated, and once a single policy
shows up in a secondary DC the replicated data is considered
authoritative for requests made in that DC. This means that our
earlier concerns about minimizing cross-DC requests are irrelevant
because there will be no cross-DC policy reads that occur.
2) For Server nodes the in-memory ACL policy cache is capped at zero,
meaning it has no caching. Only Client nodes run with a cache. This
means that instead of having an entire DC's worth of tokens (what a
Server might see) whose policy resolutions can be coalesced, these
nodes will only ever see node-local token resolutions. In a
reasonable worst-case scenario where a scheduler like Kubernetes has
"filled" a node with Connect services, even that will only schedule
~100 connect services per node. If every service has a unique token
there will only be 100 tokens to coalesce and even then those requests
have to occur concurrently AND be hitting an empty consul cache.
Instead of seeing a great coalescing opportunity for cutting down on
redundant Policy resolutions, in practice it's far more likely given
node densities that you'd see requests for the same token concurrently
than you would for two tokens sharing a policy concurrently (to a degree
that would warrant the overhead of the current variation of
singleflighting).
Given that, this patch switches the Policy resolution process to only
singleflight by requesting token (but keeps the cache as by-policy).
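The resulting shape is roughly the following (hypothetical resolver types; the point is only the keying by token secret):

```go
import "golang.org/x/sync/singleflight"

type ResolvedAuthorizer struct{ /* ... */ }

type tokenResolver struct {
	group singleflight.Group
}

func (r *tokenResolver) resolve(token string) (*ResolvedAuthorizer, error) {
	// All concurrent calls for the same token share one in-flight resolution;
	// the by-policy cache is left alone.
	v, err, _ := r.group.Do(token, func() (interface{}, error) {
		return r.resolveUncached(token)
	})
	if err != nil {
		return nil, err
	}
	return v.(*ResolvedAuthorizer), nil
}

func (r *tokenResolver) resolveUncached(token string) (*ResolvedAuthorizer, error) {
	// ... identity lookup plus policy resolution ...
	return &ResolvedAuthorizer{}, nil
}
```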
This PR introduces reloading of the TLS configuration. Consul will now be able to reload the TLS configuration, which previously required a restart. It is not yet possible to turn TLS on or off with these changes; the configuration can only be reloaded when TLS is already turned on. Most importantly, this covers the certificates and CAs.
Node updates were not updating the service indexes, which are used for
service related queries. This caused the X-Consul-Index to stay the same
after a node update as seen from a service query even though the node
data is returned in health queries. If that happened in between queries
the client would miss this change.
We now update the indexes of the services on the node when it is
updated.
Fixes: #5450
In TestServer_LANReap autopilot is running, so the alternate flow
through the serf reaping function is possible. In that situation the
ReconnectTimeout is not relevant, so for parity also override the
TombstoneTimeout value.
For additional parity, update the TestServer_WANReap and
TestClient_LANReap versions of this test in the same way, even though
autopilot is irrelevant there.
Fix error in detecting raft replication errors.
Detect redacted token secrets and prevent attempting to insert.
Add a Redacted field to the TokenBatchRead and TokenRead RPC endpoints
This will indicate whether token secrets have been redacted.
Ensure any token with a redacted secret in secondary datacenters is removed.
Test that redacted tokens cannot be replicated.
Prevent race between register and deregister requests by saving them
together in the local state on registration.
Also adds more cleanup in case of failure when registering services
/ checks.
Previously we were fixing up the token links directly on the *ACLToken returned by memdb. This invalidated the assumption that a snapshot is immutable and could potentially cause a crash.
The fix here is to give the policy link fixing function copy on write semantics. When no fixes are necessary we can return the memdb object directly, otherwise we copy it and create a new list of links.
Eventually we might find a better way to keep those policy links in sync but for now this fixes the issue.
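The shape of the copy-on-write idea, with a hypothetical `resolveName` standing in for whatever lookup fixes a link:

```go
func fixupPolicyLinks(token *structs.ACLToken, resolveName func(id string) string) *structs.ACLToken {
	var fixed []structs.ACLTokenPolicyLink // allocated only if something changes
	for i, link := range token.Policies {
		name := resolveName(link.ID)
		if name == link.Name {
			if fixed != nil {
				fixed = append(fixed, link)
			}
			continue
		}
		if fixed == nil {
			fixed = append(fixed, token.Policies[:i]...) // first fix: copy what came before
		}
		link.Name = name
		fixed = append(fixed, link)
	}
	if fixed == nil {
		return token // nothing to fix: hand back the memdb object untouched
	}
	clone := *token // shallow copy is enough; only the link slice is replaced
	clone.Policies = fixed
	return &clone
}
```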
* Fix race condition in DNS when using cache
The healthy node filtering was modifying the result from the cache, which
caused a crash when multiple queries were made to the same service
simultaneously.
We now copy the node slice before filtering to ensure we do not modify
the data stored in the cache (see the sketch after these notes).
* Fix wording in dns cache config doc
s/dns_max_age/cache_max_age/
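As mentioned above, the fix is to filter a copy rather than the slice shared through the cache; roughly (illustrative signature, with the health predicate left abstract):

```go
func filterHealthy(cached []structs.CheckServiceNode, healthy func(structs.CheckServiceNode) bool) []structs.CheckServiceNode {
	// Copy first so the slice stored in the agent cache is never mutated.
	nodes := make([]structs.CheckServiceNode, len(cached))
	copy(nodes, cached)

	out := nodes[:0] // filter in place on the copy's backing array
	for _, n := range nodes {
		if healthy(n) {
			out = append(out, n)
		}
	}
	return out
}
```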
This PR adds two features which will be useful for operators when ACLs are in use.
1. Tokens set in configuration files are now reloadable.
2. If `acl.enable_token_persistence` is set to `true` in the configuration, tokens set via the `v1/agent/token` endpoint are now persisted to disk and loaded when the agent starts (or during configuration reload)
Note that token persistence is opt-in so our users who do not want tokens on the local disk will see no change.
Some other secondary changes:
* Refactored a bunch of places where the replication token is retrieved from the token store. This token isn't just for replicating ACLs and now it is named accordingly.
* Allowed better paths in the `v1/agent/token/` API. Instead of paths like `v1/agent/token/acl_replication_token` the path can now be just `v1/agent/token/replication`. The old paths remain valid (see the sketch after this list).
* Added a couple of new API functions to set tokens via the new paths. Deprecated the old ones and pointed to the new names. The names are also generally better and don't imply that what you are setting is for ACLs, but rather that you are setting ACL tokens. There is a minor semantic difference there, especially for the replication token since, again, it's no longer used only for ACL token/policy replication. The new functions will detect 404s and fall back to using the older token paths when talking to pre-1.4.3 agents.
* Docs updated to reflect the API additions and to show using the new endpoints.
* Updated the ACL CLI set-agent-tokens command to use the non-deprecated APIs.
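As referenced above, a hedged example of using the newer path with a plain HTTP request (the `api` package gained equivalent helpers as well):

```go
func setReplicationToken(consulAddr, mgmtToken, replToken string) error {
	body := bytes.NewBufferString(fmt.Sprintf(`{"Token": %q}`, replToken))
	req, err := http.NewRequest(http.MethodPut, consulAddr+"/v1/agent/token/replication", body)
	if err != nil {
		return err
	}
	req.Header.Set("X-Consul-Token", mgmtToken)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return nil
}
```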
This PR is based on #5366 and continues to centralise the tls configuration in order to be reloadable eventually!
This PR is another refactoring. No tests are changed, beyond calling other functions or cosmetic stuff. I added a bunch of tests, even though they might be redundant.
In order to be able to reload the TLS configuration, we need one way to generate the different configurations.
This PR introduces a `tlsutil.Configurator` which holds a `tlsutil.Config`. Afterwards it is responsible for rendering every `tls.Config`. In this particular PR I moved `IncomingHTTPSConfig`, `IncomingTLSConfig`, and `OutgoingTLSWrapper` into `tlsutil.Configurator`.
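A very rough sketch of that shape (simplified types, not the real `tlsutil` API): one object owns the settings and is the single place that renders `*tls.Config`.

```go
type Config struct {
	CertFile string
	KeyFile  string
	// ... CAs, verification flags, ciphers, etc.
}

type Configurator struct {
	mu   sync.RWMutex
	base Config
}

// Update swaps the settings; later work builds hot reloading on this single entry point.
func (c *Configurator) Update(newBase Config) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.base = newBase
}

// IncomingHTTPSConfig renders a *tls.Config for the HTTPS listener from the current settings.
func (c *Configurator) IncomingHTTPSConfig() (*tls.Config, error) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	cert, err := tls.LoadX509KeyPair(c.base.CertFile, c.base.KeyFile)
	if err != nil {
		return nil, err
	}
	return &tls.Config{Certificates: []tls.Certificate{cert}}, nil
}
```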
This PR is a pure refactoring - not a single feature added. And not a single test added. I only slightly modified existing tests as necessary.
Adds two new configuration parameters "dns_config.use_cache" and
"dns_config.cache_max_age" controlling how DNS requests use the agent
cache when querying servers.
* Start adding tests for cluster override
* Refactor tests for clusters
* Passing tests for custom upstream cluster override
* Added capability to customise local app cluster
* Rename config for local cluster override
Also in acl_endpoint_test.go:
* convert logical blocks in some token tests to subtests
* remove use of require.New
This removes a lot of noise in a later PR.
This way we can avoid unnecessary panics which cause other tests not to run.
This doesn't remove all the possibilities for panics causing other tests not to run; it just fixes the TestAgent.
Currently the gRPC server assumes that if you have configured TLS
certs on the agent (for RPC) that you want gRPC to be encrypted.
If gRPC is bound to localhost this can be overkill. For the API we
let the user choose to offer HTTP or HTTPS API endpoints
independently of the TLS cert configuration for a similar reason.
This setting will let someone encrypt RPC traffic with TLS but avoid
encrypting local gRPC traffic if that is what they want to do by only
enabling TLS on gRPC if the HTTPS API port is enabled.
There was an errant early-return in PolicyDelete() that bypassed the
rest of the function. This was ok because the only caller of this
function ignores the results.
This removes the early-return making it structurally behave like
TokenDelete() and for both PolicyDelete and TokenDelete clarify the lone
callers to indicate that the return values are ignored.
We may wish to avoid the entire return value as well, but this patch
doesn't go that far.
`establishLeadership` invoked during leadership monitoring may use autopilot to do promotions etc. There was a race between doing that and having autopilot initialized; this fixes it.