* dns v2 - both empty string and default should be allowed for namespace and partition in CE
* add changelog
* use default partition constant
* use constants in validation.
---------
Co-authored-by: Michael Zalimeni <michael.zalimeni@hashicorp.com>
* NET-5879 - move the filter for non-passing to occur in the health RPC layer rather than the callers of the RPC
* fix import of slices
* NET-5879 - expose sameness group param on service health endpoint and move sameness group health fallback logic into HealthService RPC layer
* fixing deepcopy
* fix license headers
This refactors and relocates the following packages to live under internal/gossip instead of either in the toplevel lib or agent/consul:
- librtt : related to serf coordinates
- libserf : miscellaneous serf helpers
* Define file-system-certificate config entry
* Collect file-system-certificate(s) referenced by api-gateway onto snapshot
* Add file-system-certificate to config entry kind allow lists
* Remove inapplicable validation
This validation makes sense for inline certificates since the Consul server holds the certificate; however, for file system certificates, the Consul server never actually sees the certificate (a sketch of the new entry's shape follows after this commit list).
* Support file-system-certificate as source for listener TLS certificate
* Add more required mappings for the new config entry type
* Construct proper TLS context based on certificate kind
* Add support for SDS in xdscommon
* Remove unused param
* Adds back verification of certs for inline-certificates
* Undo tangential changes to TLS config consumption
* Remove stray curly braces
* Undo some more tangential changes
* Improve function name for generating API gateway secrets
* Add changelog entry
* Update .changelog/20873.txt
Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com>
* Add some nil-checking, remove outdated TODO
* Update test assertions to include file-system-certificate
* Add documentation for file-system-certificate config entry
Add new doc to nav
* Fix grammar mistake
* Rename watchmaps, remove outdated TODO
---------
Co-authored-by: Melisa Griffin <melisa.griffin@hashicorp.com>
Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com>
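For orientation, a hedged Go sketch of roughly what the new config entry carries; the field names are an assumption based on the description above (paths to certificate material on the proxy's file system rather than inline PEM data):

```go
package structs

// FileSystemCertificateConfigEntry is a sketch of the new config entry kind.
// Unlike inline-certificate, it holds paths on the gateway proxy's local file
// system, so the Consul server never sees the certificate material itself;
// Envoy resolves the files via SDS.
type FileSystemCertificateConfigEntry struct {
	Kind string // "file-system-certificate"
	Name string

	// Paths to the certificate and private key on disk.
	Certificate string
	PrivateKey  string

	Meta map[string]string
}
```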
This operation would previously fail due to unconsumed bytes in the
decoder buffer when reading the Ent snapshot (the first byte of the
record would be misinterpreted as a type indicator, and the remaining
bytes would fail to be deserialized or read as invalid data).
Ensure restore succeeds by decoding the ignored record as an
interface{}, which consumes the record bytes without requiring a
concrete target struct, and then moving on to the next record.
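A minimal, self-contained sketch of the technique, assuming a go-msgpack style decoder wrapped around the same reader as the snapshot; the record kinds and record struct here are illustrative, not the actual Ent record types:

```go
package fsm

import (
	"bufio"
	"io"

	"github.com/hashicorp/go-msgpack/codec"
)

// Illustrative record kinds; the real snapshot uses structs.MessageType values.
const (
	recordKindKV      byte = 0
	recordKindIgnored byte = 1
)

type kvRecord struct {
	Key   string
	Value []byte
}

// restoreRecords walks records of the form <1 type byte><msgpack body>.
// dec must be constructed over the same reader r so that decoding consumes
// bytes from the shared buffer.
func restoreRecords(r *bufio.Reader, dec *codec.Decoder) error {
	for {
		kind, err := r.ReadByte()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}

		switch kind {
		case recordKindKV:
			var rec kvRecord
			if err := dec.Decode(&rec); err != nil {
				return err
			}
			// ... apply rec to the state store ...
		default:
			// Ignored record: decode into an interface{} to drain its bytes.
			// Without this, the next iteration would misread the leftover
			// body bytes as a type indicator and fail.
			var skipped interface{}
			if err := dec.Decode(&skipped); err != nil {
				return err
			}
		}
	}
}
```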
* update go-control-plane envoy dependency to 0.12.0
* add changelog
* go mod tidy
* fix linting issues
* add agent/grpc-internal to the list of SA1019 ignores
* Include SNI + root PEMs from peered cluster on terminating gw filter chain
This allows an external service registered on a terminating gateway to be exported to and reachable from a peered cluster
* Abstract existing logic into re-usable function
* Regenerate golden files w/ new listener logic
* Add changelog entry
* Use peering bundles that are stable across test runs
* put conditionals around hcp initialization for consul server
* put more things behind configuration flags
* add changelog
* TestServer_hcpManager
* fix TestAgent_scadaProvider
Currently, when a client starts a blocking query and an ACL token expires within
that time, Consul returns an "ACL not found" error with a 403 status code. However,
if an ACL token is invalidated at the same time as the query's deadline is reached,
Consul sometimes returns an empty response with a 200 status code instead.
This is because of the following sequence of events:
1. Client issues a blocking query request with timeout `t`.
2. ACL is deleted.
3. Server detects a change in ACLs and force closes the gRPC stream.
4. Client resubscribes with the same token and resets its state (view).
5. Client sees "ACL not found" error.
If the ACL is deleted before step 4, the client is unaware that the stream was closed due to
an ACL error and will return an empty view (from the reset state) with a 200 status code.
To fix this problem, we introduce another state on the subscription to indicate when a change
to ACLs has occurred. If the server sees that there was an error due to an ACL change, it will
re-authenticate the request and return an error if the token is no longer valid.
Fixes #20790
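A hedged sketch of the re-authentication idea described above; the type and function names (`subscription`, `resolveToken`) are illustrative, not the real stream-backend API:

```go
package streambackend

// subscription is a sketch, not the real stream-backend type.
type subscription struct {
	token      string
	aclChanged bool // set when the server force-closes the stream due to an ACL update
}

// result re-authenticates before returning the (possibly just-reset) view so
// that a deleted or expired token surfaces "ACL not found" instead of an
// empty 200 response. resolveToken stands in for the real ACL resolver.
func (s *subscription) result(view any, resolveToken func(token string) error) (any, error) {
	if s.aclChanged {
		if err := resolveToken(s.token); err != nil {
			return nil, err
		}
		s.aclChanged = false
	}
	return view, nil
}
```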
* Fix streaming RPCs for agentless.
This PR fixes an issue where cross-dc RPCs were unable to utilize
the streaming backend due to having the node name set. The result
of this was that the agent-cache was used instead, causing high
CPU utilization and memory consumption because it keeps queries
alive for 72 hours before purging inactive entries.
This resource consumption is compounded by the fact that each pod
in consul-k8s gets a unique token. Since the agent-cache uses the
token as a component of the key, the same query is duplicated for
each pod that is deployed.
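To make the duplication concrete, a hypothetical sketch of a token-keyed cache key (this is not the agent-cache's real key format): the same health query issued from two pods with distinct tokens produces two cache entries, each refreshed independently.

```go
package main

import "fmt"

// cacheKey is a hypothetical stand-in for the agent-cache key. Because the
// token is part of the key, identical queries from pods with different tokens
// are cached (and kept alive for up to 72 hours) once per pod.
func cacheKey(token, dc, service string) string {
	return fmt.Sprintf("%s/%s/%s", token, dc, service)
}

func main() {
	fmt.Println(cacheKey("token-pod-a", "dc2", "web")) // token-pod-a/dc2/web
	fmt.Println(cacheKey("token-pod-b", "dc2", "web")) // token-pod-b/dc2/web
}
```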
* Add changelog.
* Fix xDS deadlock due to syncLoop termination.
This fixes an issue where agentless xDS streams can deadlock permanently until
a server is restarted. When this issue occurs, no new proxies are able to
successfully connect to the server.
Effectively, the trigger for this deadlock stems from the following return
statement:
https://github.com/hashicorp/consul/blob/v1.18.0/agent/proxycfg-sources/catalog/config_source.go#L199-L202
When this happens, the entire `syncLoop()` terminates and stops consuming from
the following channel:
https://github.com/hashicorp/consul/blob/v1.18.0/agent/proxycfg-sources/catalog/config_source.go#L182-L192
Which results in the `ConfigSource.cleanup()` function never receiving a
response and holding a mutex indefinitely:
https://github.com/hashicorp/consul/blob/v1.18.0/agent/proxycfg-sources/catalog/config_source.go#L241-L247
Because this mutex is shared, it effectively deadlocks the server's ability to
process new xDS streams.
----
The fix to this issue involves removing the `chan chan struct{}` used like an
RPC-over-channels pattern and replacing it with two distinct channels:
+ `stopSyncLoopCh` - indicates that the `syncLoop()` should terminate soon.
+ `syncLoopDoneCh` - indicates that the `syncLoop()` has terminated.
Splitting these two concepts out and deferring a `close(syncLoopDoneCh)` in the
`syncLoop()` function ensures that the deadlock above should no longer occur.
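A minimal sketch of the two-channel shape described above, with simplified types (the real `ConfigSource` carries more state):

```go
package catalog

import "sync"

// configSource is a simplified stand-in for ConfigSource.
type configSource struct {
	mu             sync.Mutex
	stopSyncLoopCh chan struct{} // tells syncLoop to terminate soon
	syncLoopDoneCh chan struct{} // closed once syncLoop has terminated
}

func (c *configSource) syncLoop() {
	// Deferring the close guarantees that anyone waiting in cleanup() is
	// released even when syncLoop returns early on an irrecoverable error.
	defer close(c.syncLoopDoneCh)
	for {
		select {
		case <-c.stopSyncLoopCh:
			return
			// ... other cases: watch updates, session termination, errors ...
		}
	}
}

func (c *configSource) cleanup() {
	c.mu.Lock()
	defer c.mu.Unlock()
	close(c.stopSyncLoopCh)
	<-c.syncLoopDoneCh // cannot hang: the channel is closed whenever syncLoop exits
}
```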
We also now evict xDS connections of all proxies for the corresponding
`syncLoop()` whenever it encounters an irrecoverable error. This is done by
hoisting the new `syncLoopDoneCh` upwards so that it's visible to the xDS delta
processing. Prior to this fix, the behavior was to simply orphan them so they
would never receive catalog-registration or service-defaults updates.
* Add changelog.
* Shuffle the list of servers returned by `pbserverdiscovery.WatchServers`.
This randomizes the list of servers to help reduce the chance of clients
all connecting to the same server simultaneously. Consul-dataplane is one
such client that does not randomize its own list of servers.
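A sketch of the randomization, under the assumption that the list is shuffled in place just before being returned to watchers; the `server` type is a stand-in for the real protobuf message:

```go
package discovery

import "math/rand"

// server is a stand-in for the pbserverdiscovery server message.
type server struct {
	ID      string
	Address string
}

// shuffleServers randomizes the list in place before it is sent to watchers,
// so clients that always dial the first entry (e.g. consul-dataplane) spread
// their connections across the available servers.
func shuffleServers(servers []*server) {
	rand.Shuffle(len(servers), func(i, j int) {
		servers[i], servers[j] = servers[j], servers[i]
	})
}
```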
* Fix potential goroutine leak in xDS recv loop.
This commit ensures that the goroutine which receives xDS messages from
proxies will not block forever if the stream's context is cancelled but
the `processDelta()` function never consumes the message (due to being
terminated).
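A hedged, generic sketch of the pattern: the receiving goroutine also selects on the stream's context, so it can exit even though `processDelta()` is no longer draining the channel (function and channel names are illustrative):

```go
package xds

import "context"

// recvLoop forwards messages from recv (e.g. a gRPC stream's Recv method) to
// reqCh, but also selects on the stream's context so the goroutine can exit
// once the stream is cancelled, even if processDelta has already terminated
// and will never drain reqCh again.
func recvLoop[T any](ctx context.Context, recv func() (T, error), reqCh chan<- T, errCh chan<- error) {
	for {
		msg, err := recv()
		if err != nil {
			select {
			case errCh <- err:
			case <-ctx.Done():
			}
			return
		}
		select {
		case reqCh <- msg:
		case <-ctx.Done():
			return // no consumer left; drop the message instead of blocking forever
		}
	}
}
```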
* Add changelog.
* disable terminating gateway auto host rewrite
* add changelog
* clean up unneeded additional snapshot fields
* add new field to docs
* squash
* fix test
* Update agent/dns.go
Co-authored-by: Michael Zalimeni <michael.zalimeni@hashicorp.com>
* PR feedback
* split tests out into multiple files.
* Extract responsibilities from router into discoveryResultsFetcher, messageSerializer, responseGenerator.
* adding recordmaker tests
* add response generator test coverage.
* changing test case names based on PR feedback
---------
Co-authored-by: Michael Zalimeni <michael.zalimeni@hashicorp.com>
* NET-7813 - DNS : SERVFAIL when resolving PTR records
* Update agent/dns.go
Co-authored-by: Michael Zalimeni <michael.zalimeni@hashicorp.com>
* PR feedback
---------
Co-authored-by: Michael Zalimeni <michael.zalimeni@hashicorp.com>
Ensure all topics are refreshed on FSM restore and add supervisor loop to v1 controller subscriptions
This PR fixes two issues:
1. Not all streams were force closed whenever a snapshot restore happened. This meant that anything consuming data from the stream (controllers, queries, etc.) was unaware that the data it had was potentially stale or invalid. The first part of this change ensures that all topics are purged.
2. The v1 controllers did not properly handle stream errors (which are likely to appear much more often due to 1 above), so this change introduces a supervisor loop that restarts the watches when these errors occur.
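A simplified sketch of such a supervisor loop (the real controller and subscription wiring differ):

```go
package controller

import (
	"context"
	"time"
)

// superviseWatch keeps a subscription-backed watch alive. When the stream is
// force-closed (for example because all topics were purged after an FSM
// restore), runWatch returns an error and the watch is restarted rather than
// left consuming stale data.
func superviseWatch(ctx context.Context, runWatch func(context.Context) error) {
	for {
		err := runWatch(ctx)
		if ctx.Err() != nil {
			return // shutting down
		}
		if err != nil {
			// small pause so a persistently failing watch does not spin hot
			time.Sleep(time.Second)
		}
	}
}
```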
Fix so that link API values are used over env vars
When a link is created via the API, those values should take precedence over
the values set by environment variables. This change loads all the env vars
initially as part of the config builder rather than on demand.
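A hypothetical sketch of the resulting precedence, with illustrative field and environment variable names: env vars are read once while building the config, and non-empty values supplied through the link API overwrite them.

```go
package hcpconfig

import "os"

// CloudConfig is a simplified stand-in for the HCP cloud configuration.
type CloudConfig struct {
	ResourceID   string
	ClientID     string
	ClientSecret string
}

// fromEnv is evaluated once, up front, as part of the config builder.
func fromEnv() CloudConfig {
	return CloudConfig{
		ResourceID:   os.Getenv("HCP_RESOURCE_ID"),
		ClientID:     os.Getenv("HCP_CLIENT_ID"),
		ClientSecret: os.Getenv("HCP_CLIENT_SECRET"),
	}
}

// merge overlays non-empty values from the link API onto the env-derived
// defaults, so API-provided values take precedence.
func merge(env, fromLink CloudConfig) CloudConfig {
	out := env
	if fromLink.ResourceID != "" {
		out.ResourceID = fromLink.ResourceID
	}
	if fromLink.ClientID != "" {
		out.ClientID = fromLink.ClientID
	}
	if fromLink.ClientSecret != "" {
		out.ClientSecret = fromLink.ClientSecret
	}
	return out
}
```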
test(v2dns): Add Catalog v2 integration test
Add a basic integration test covering major functionality tested against
Catalog v2 resources. This complements existing tests that ensure
compatibility between v1 and v2 DNS when testing against Catalog v1
resources.
* Add function to get update channel for watching HCP Link
* Add MonitorHCPLink function
This function can be called in a goroutine to manage the lifecycle
of the HCP manager.
* Update HCP Manager config in link monitor before starting
This changes MonitorHCPLink so that it updates the HCP manager
with an HCP client and management token when a Link is upserted.
* Let MonitorHCPManager handle lifecycle instead of link controller
* Remove cleanup from Link controller and move it to MonitorHCPLink
Previously, the Link Controller was responsible for cleaning up the
HCP-related files on the file system. This change makes it so
MonitorHCPLink handles this cleanup. As a result, we are able to remove
the PlacementEachServer placement strategy for the Link controller
because it no longer needs to do this per-node cleanup.
* Remove HCP Manager dependency from Link Controller
The Link controller does not need to have HCP Manager
as a dependency anymore, so this removes that dependency
in order to simplify the design.
* Add Linked prefix to Linked status variables
This is in preparation for adding a new status type to the
Link resource.
* Add new "validated" status type to link resource
The link resource controller will now set a "validated" status
in addition to the "linked" status. This is needed so that other
components (e.g. the HCP manager) know when the Link is ready to link
with HCP.
* Fix tests
* Handle new 'EndOfSnapshot' WatchList event
* Fix watch test
* Remove unnecessary config from TestAgent_scadaProvider
Since the Scada provider is now started on agent startup
regardless of whether a cloud config is provided, this removes
the cloud config override from the relevant test.
This change is not strictly related to the rest of this PR, but is a
small, tangentially related cleanup noticed while working on it.
* Simplify link watch test and remove sleep from link watch
This updates the link watch test so that it uses more mocks
and does not require setting up the infrastructure for the HCP Link
controller.
This also removes the time.Sleep delay in the link watcher loop in favor
of an error counter. When we receive 10 consecutive errors, we shut down
the link watcher loop.
* Add better logging for link validation. Remove EndOfSnapshot test.
* Refactor link monitor test into a table test
* Add some clarifying comments to link monitor
* Simplify link watch test
* Test a bunch more error cases in link monitor test
* Use exponential backoff instead of errorCounter in LinkWatch
* Move link watch and link monitor into a single goroutine called from server.go
* Refactor HCP link watcher to use single go-routine.
Previously, if the WatchClient errored, we would never have recovered
because we never retried creating the stream. With this change,
we have a single goroutine that runs for the life of the server agent
and if the WatchClient stream ever errors, we retry the creation
of the stream with an exponential backoff.
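A simplified sketch of that goroutine; the backoff bounds and function names are illustrative, not the actual implementation:

```go
package hcp

import (
	"context"
	"time"
)

// runHCPLinkWatcher runs for the life of the server agent. watch blocks while
// the WatchClient stream is healthy; if creating or consuming the stream
// fails, the loop retries with exponential backoff instead of giving up.
func runHCPLinkWatcher(ctx context.Context, watch func(context.Context) error) {
	const (
		minBackoff = time.Second
		maxBackoff = time.Minute
	)
	backoff := minBackoff
	for {
		err := watch(ctx)
		if ctx.Err() != nil {
			return // server is shutting down
		}
		if err == nil {
			backoff = minBackoff // stream ended cleanly; reset the delay
		}
		select {
		case <-ctx.Done():
			return
		case <-time.After(backoff):
		}
		if err != nil && backoff < maxBackoff {
			backoff *= 2
		}
	}
}
```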
Decouple xds capacity controller and autopilot
This prevents a potential bug where autopilot deadlocks while attempting
to execute `AutopilotDelegate.NotifyState()` on an xdscapacity controller
that stopped consuming messages.
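One common way to break this class of deadlock is to make the notification non-blocking; a minimal sketch under that assumption (not necessarily the exact approach taken in this change):

```go
package autopilotevents

// serverState is a stand-in for whatever autopilot reports to xdscapacity.
type serverState struct {
	HealthyServers int
}

// notifyState publishes the latest state without ever blocking autopilot,
// even if the xdscapacity controller has stopped consuming: any unread stale
// value on the size-1 buffered channel is dropped in favor of the newest one.
func notifyState(ch chan serverState, state serverState) {
	select {
	case <-ch: // discard an unconsumed stale value, if any
	default:
	}
	select {
	case ch <- state:
	default: // should not happen with a single producer; drop rather than block
	}
}
```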