* Include SNI + root PEMs from the peered cluster on the terminating gateway filter chain
This allows an external service registered on a terminating gateway to be exported to, and reachable from, a peered cluster.
* Abstract existing logic into re-usable function
* Regenerate golden files w/ new listener logic
* Add changelog entry
* Use peering bundles that are stable across test runs
Update python SDKs
The original python-consul is unmaintained with no activity for 6 years.
The python-consul2 fork has had no activity for 3 years, whether commits or responses to PRs and issues.
* put conditionals around hcp initialization for consul server
* put more things behind configuration flags
* add changelog
* TestServer_hcpManager
* fix TestAgent_scadaProvider
* Add docs to the upgrade-specific page covering the removal of the deprecated API Gateway stanza for 1.19
* Apply suggestions from code review
Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com>
* Remove legacy api-gateway from helm docs
* change .Values.apiGateway to .apiGateway
---------
Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com>
Currently, when a client starts a blocking query and an ACL token expires within
that window, Consul returns an "ACL not found" error with a 403 status code. However,
if an ACL token is invalidated at the same time the query's deadline is reached,
Consul sometimes instead returns an empty response with a 200 status code.
This stems from the following sequence of events:
1. Client issues a blocking query request with timeout `t`.
2. ACL is deleted.
3. Server detects a change in ACLs and force closes the gRPC stream.
4. Client resubscribes with the same token and resets its state (view).
5. Client sees "ACL not found" error.
If the ACL is deleted before step 4, the client is unaware that the stream was closed due to
an ACL error and will return an empty view (from the reset state) with a 200 status code.
To fix this problem, we introduce another state on the subscription to indicate when a change
to ACLs has occurred. If the server sees that there was an error due to an ACL change, it will
re-authenticate the request and return an error if the token is no longer valid.
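A minimal sketch of the idea, with hypothetical names (the real subscription type lives in Consul's streaming internals): the subscription remembers that it was force-closed by an ACL change, so the handler re-authenticates instead of serving the reset (empty) view:

```go
// Illustrative only: names and structure are simplified stand-ins for
// Consul's actual subscription code.
package stream

import "errors"

// ErrACLChanged signals that the stream was force-closed because the
// token's ACLs changed (hypothetical sentinel).
var ErrACLChanged = errors.New("subscription closed: ACL change")

type Subscription struct {
	closedForACLChange bool
}

// ForceCloseACLChange records why the server closed the stream, rather
// than closing it anonymously.
func (s *Subscription) ForceCloseACLChange() {
	s.closedForACLChange = true
}

// Err is checked by the query handler before it returns a view. On
// ErrACLChanged the handler re-authenticates the token and returns 403
// if it is no longer valid, instead of an empty view with a 200.
func (s *Subscription) Err() error {
	if s.closedForACLChange {
		return ErrACLChanged
	}
	return nil
}
```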
Fixes #20790
* Fix streaming RPCs for agentless.
This PR fixes an issue where cross-dc RPCs were unable to utilize
the streaming backend because they had a node name set. As a result,
the agent-cache was used instead, causing high CPU utilization and
memory consumption, since the cache keeps queries alive for 72 hours
before purging inactive entries.
This resource consumption is compounded by the fact that each pod
in consul-k8s gets a unique token. Since the agent-cache uses the
token as a component of the key, the same query is duplicated for
each pod that is deployed.
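To illustrate the shape of the problem (hypothetical types and field names, not Consul's actual code): a non-empty node name disqualified the query from the streaming backend, and the agent-cache key includes the token, so per-pod tokens multiply otherwise identical entries:

```go
package main

import "fmt"

// QueryOptions is a pared-down stand-in for the fields that drive
// backend selection.
type QueryOptions struct {
	UseStreaming bool
	Node         string // set on cross-dc RPCs in agentless deployments
	Token        string // unique per pod in consul-k8s
}

// agentCacheKey shows why per-pod tokens duplicate work: the token is
// part of the key, so the same query from N pods yields N long-lived
// cache entries.
func agentCacheKey(q QueryOptions) string {
	return fmt.Sprintf("node=%s/token=%s", q.Node, q.Token)
}

// useStreamingBackend sketches the selection logic. Before the fix a
// non-empty Node forced the agent-cache path; after the fix the node
// name alone no longer disqualifies streaming.
func useStreamingBackend(q QueryOptions) bool {
	return q.UseStreaming // previously: q.UseStreaming && q.Node == ""
}

func main() {
	q := QueryOptions{UseStreaming: true, Node: "ip-10-0-0-1", Token: "per-pod-token"}
	fmt.Println(useStreamingBackend(q), agentCacheKey(q))
}
```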
* Add changelog.
* Fix xDS deadlock due to syncLoop termination.
This fixes an issue where agentless xDS streams can deadlock permanently until
a server is restarted. When this issue occurs, no new proxies are able to
successfully connect to the server.
Effectively, the trigger for this deadlock stems from the following return
statement:
https://github.com/hashicorp/consul/blob/v1.18.0/agent/proxycfg-sources/catalog/config_source.go#L199-L202
When this happens, the entire `syncLoop()` terminates and stops consuming from
the following channel:
https://github.com/hashicorp/consul/blob/v1.18.0/agent/proxycfg-sources/catalog/config_source.go#L182-L192
Which results in the `ConfigSource.cleanup()` function never receiving a
response and holding a mutex indefinitely:
https://github.com/hashicorp/consul/blob/v1.18.0/agent/proxycfg-sources/catalog/config_source.go#L241-L247
Because this mutex is shared, it effectively deadlocks the server's ability to
process new xDS streams.
----
The fix to this issue involves removing the `chan chan struct{}` used like an
RPC-over-channels pattern and replacing it with two distinct channels:
+ `stopSyncLoopCh` - indicates that the `syncLoop()` should terminate soon.
+ `syncLoopDoneCh` - indicates that the `syncLoop()` has terminated.
Splitting these two concepts out and deferring a `close(syncLoopDoneCh)` in the
`syncLoop()` function ensures that the deadlock above should no longer occur.
We also now evict xDS connections of all proxies for the corresponding
`syncLoop()` whenever it encounters an irrecoverable error. This is done by
hoisting the new `syncLoopDoneCh` upwards so that it's visible to the xDS delta
processing. Prior to this fix, the behavior was to simply orphan them so they
would never receive catalog-registration or service-defaults updates.
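A minimal sketch of the two-channel pattern (the channel names match the description above; the surrounding types are simplified assumptions, not the real `ConfigSource`):

```go
package catalog

import "sync"

type watcher struct {
	mu             sync.Mutex    // shared with the xDS stream setup path
	updateCh       chan string   // stand-in for catalog watch events
	stopSyncLoopCh chan struct{} // signals syncLoop() to terminate soon
	syncLoopDoneCh chan struct{} // closed once syncLoop() has terminated
}

func (w *watcher) syncLoop() {
	// Deferring the close guarantees cleanup() is always unblocked, even
	// when syncLoop() returns early on an irrecoverable error.
	defer close(w.syncLoopDoneCh)
	for {
		select {
		case <-w.stopSyncLoopCh:
			return
		case ev, ok := <-w.updateCh:
			if !ok {
				return // irrecoverable error path: deferred close still fires
			}
			_ = ev // apply the update
		}
	}
}

// cleanup no longer waits on a reply that a dead syncLoop() can never
// send, so the shared mutex is never held indefinitely.
func (w *watcher) cleanup() {
	w.mu.Lock()
	defer w.mu.Unlock()
	close(w.stopSyncLoopCh)
	<-w.syncLoopDoneCh
}
```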
* Add changelog.
* Shuffle the list of servers returned by `pbserverdiscovery.WatchServers`.
This randomizes the list of servers to help reduce the chance of clients
all connecting to the same server simultaneously. Consul-dataplane is one
such client that does not randomize its own list of servers.
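The change amounts to a copy-and-shuffle before returning the list; roughly (simplified, with a plain string slice standing in for the discovery response):

```go
package main

import (
	"fmt"
	"math/rand"
)

// shuffledServers returns a randomized copy so no two watchers are
// handed the servers in the same order.
func shuffledServers(servers []string) []string {
	out := make([]string, len(servers))
	copy(out, servers)
	rand.Shuffle(len(out), func(i, j int) { out[i], out[j] = out[j], out[i] })
	return out
}

func main() {
	fmt.Println(shuffledServers([]string{"10.0.0.1:8502", "10.0.0.2:8502", "10.0.0.3:8502"}))
}
```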
* Fix potential goroutine leak in xDS recv loop.
This commit ensures that the goroutine which receives xDS messages from
proxies will not block forever if the stream's context is cancelled but
the `processDelta()` function never consumes the message (due to being
terminated).
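The standard fix for this kind of leak is to select on the stream context alongside the send, so the receiver can exit once the consumer is gone; a simplified sketch (types are stand-ins, not the xDS server's real ones):

```go
package xds

import "context"

type deltaRequest struct{} // stand-in for the xDS delta message type

// recvLoop forwards messages from the stream to processDelta(). The
// ctx.Done() case is the fix: without it, a send to reqCh blocks
// forever once processDelta() has terminated and stopped reading.
func recvLoop(ctx context.Context, recv func() (*deltaRequest, error), reqCh chan<- *deltaRequest) {
	for {
		req, err := recv()
		if err != nil {
			return // stream closed or errored
		}
		select {
		case reqCh <- req:
		case <-ctx.Done():
			return
		}
	}
}
```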
* Add changelog.
* Revert "feat: add alert to link to hcp modal to ask a user refresh a page; up… (#20682)"
This reverts commit dd833d9a36.
* Revert "chor: change cluster name param to have datacenter.name as default value (#20644)"
This reverts commit 8425cd0f90.
* Revert "chor: adds informative error message when acls disabled and read-only… (#20600)"
This reverts commit 9d712ccfc7.
* Revert "Cc 7147 link to hcp modal (#20474)"
This reverts commit 8c05e57ac1.
* Revert "Add nav bar item to show HCP link status and encourage folks to link (#20370)"
This reverts commit 22e6ce0df1.
* Revert "Cc 7145 hcp link status api (#20330)"
This reverts commit 049ca102c4.
* Revert "💜 Cc 7187/purple banner for linking existing clusters (#20275)"
This reverts commit 5119667cd1.
* disable terminating gateway auto host rewrite
* add changelog
* clean up unneeded additional snapshot fields
* add new field to docs
* squash
* fix test
* Update agent/dns.go
Co-authored-by: Michael Zalimeni <michael.zalimeni@hashicorp.com>
* PR feedback
* split tests out into multiple files.
* Extract responsibilities from router into discoveryResultsFetcher, messageSerializer, responseGenerator (interfaces sketched below).
* adding recordmaker tests
* add response generator test coverage.
* change test case names based on PR feedback
---------
Co-authored-by: Michael Zalimeni <michael.zalimeni@hashicorp.com>
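For orientation, hypothetical Go interfaces for the three responsibilities named above (the real signatures in agent/dns differ):

```go
package dnsrouter

import "github.com/miekg/dns"

type result struct { // stand-in for a catalog discovery result
	address string
	port    uint16
}

// discoveryResultsFetcher resolves a DNS question against the catalog.
type discoveryResultsFetcher interface {
	fetch(question dns.Question) ([]result, error)
}

// messageSerializer turns discovery results into DNS resource records.
type messageSerializer interface {
	serialize(results []result) ([]dns.RR, error)
}

// responseGenerator assembles the final response from the request and records.
type responseGenerator interface {
	generate(req *dns.Msg, answers []dns.RR) *dns.Msg
}
```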