mirror of https://github.com/status-im/consul.git
9a485cdb49
Receiving an "acl not found" error from an RPC in the agent cache and the streaming/event components will cause any request loops to cease, under the assumption that they will never work again if the token was destroyed. This prevents log spam (#14144, #9738).

Unfortunately, due to things like:

- authz requests going to stale servers that may not have witnessed the token creation yet
- authz requests in a secondary datacenter happening before the tokens get replicated to that datacenter
- authz requests from a primary TO a secondary datacenter happening before the tokens get replicated to that datacenter

the caller can get an "acl not found" *before* the token exists, rather than just after. The machinery added in the linked PRs will kick in and prevent the request loop from looping around again once the tokens actually exist.

For `consul-dataplane` usages, where xDS is served by the Consul servers rather than the client agents, this is ultimately not a problem: in that scenario the `agent/proxycfg` machinery is on-demand, launched by a new xDS stream needing data for a specific service in the catalog. If the watching goroutines are terminated, that ripples down and terminates the xDS stream, which CDP will eventually re-establish, restarting everything.

For Consul client usages, the `agent/proxycfg` machinery is launched ahead of time at service registration (called "local" in some of the proxycfg machinery), so when the xDS stream comes in the data is already ready to go. If the watching goroutines terminate, that should terminate the xDS stream, but there is no mechanism to re-spawn the watching goroutines. If the xDS stream reconnects it will see no `ConfigSnapshot`, and will not get one again until the client agent is restarted or the service is re-registered with something changed in it.

This PR fixes a few things in the machinery:

- There was an inadvertent deadlock in xDS's fetching of snapshots from the proxycfg machinery, such that when the watching goroutine terminated the snapshots would never be fetched. This caused some of the xDS machinery to get indefinitely paused and not finish the teardown properly.
- Every 30s we now attempt to re-insert all locally registered services into the proxycfg machinery.
- When services are re-inserted into the proxycfg machinery, we special-case "dead" ones and replace them unilaterally rather than conditionally (see the sketch below).
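The periodic re-insert and the "dead" replacement behavior can be pictured with a minimal sketch. Everything below (`manager`, `proxyState`, `register`, `syncLoop`, the `localServices` callback) is hypothetical and simplified; it does not reflect the actual `agent/proxycfg` API, only the idea of a periodic resync loop that unconditionally replaces entries whose watch goroutines have exited:

```go
// Illustrative sketch only: the names below are hypothetical and do not
// mirror Consul's agent/proxycfg API. It shows a resync loop that
// re-inserts locally registered services and replaces "dead" entries.
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// proxyState stands in for one service's watch machinery.
type proxyState struct {
	cancel context.CancelFunc
	done   chan struct{} // closed when the watch goroutine exits
}

// dead reports whether the watch goroutine has terminated (for example,
// after a premature "acl not found" stopped its request loop).
func (p *proxyState) dead() bool {
	select {
	case <-p.done:
		return true
	default:
		return false
	}
}

type manager struct {
	mu      sync.Mutex
	proxies map[string]*proxyState
}

// register inserts a service. An existing entry is only replaced when it
// is dead; live entries keep their in-flight watches undisturbed.
func (m *manager) register(ctx context.Context, service string) {
	m.mu.Lock()
	defer m.mu.Unlock()

	if existing, ok := m.proxies[service]; ok && !existing.dead() {
		return // still healthy; nothing to do
	}

	watchCtx, cancel := context.WithCancel(ctx)
	st := &proxyState{cancel: cancel, done: make(chan struct{})}
	m.proxies[service] = st

	go func() {
		defer close(st.done)
		// Real code would run blocking queries here and exit on
		// terminal errors such as "acl not found".
		<-watchCtx.Done()
	}()
	fmt.Printf("(re)started watches for %q\n", service)
}

// syncLoop periodically re-inserts every locally registered service so
// that dead entries get rebuilt instead of staying broken until an agent
// restart or re-registration.
func (m *manager) syncLoop(ctx context.Context, localServices func() []string, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			for _, svc := range localServices() {
				m.register(ctx, svc)
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	m := &manager{proxies: map[string]*proxyState{}}
	local := func() []string { return []string{"web", "api"} }

	go m.syncLoop(ctx, local, 30*time.Second)
	for _, svc := range local() {
		m.register(ctx, svc)
	}
	time.Sleep(100 * time.Millisecond)
}
```

Keying the replacement decision on whether the watch goroutine has already exited means healthy entries are left alone, while entries killed by an early "acl not found" are rebuilt automatically on the next sync pass.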
internal/watch
api_gateway.go
connect_proxy.go
data_sources.go
data_sources_oss.go
deep-copy.sh
ingress_gateway.go
manager.go
manager_test.go
mesh_gateway.go
mesh_gateway_oss.go
naming.go
naming_oss.go
naming_test.go
proxycfg.deepcopy.go
proxycfg.go
snapshot.go
snapshot_test.go
state.go
state_oss_test.go
state_test.go
terminating_gateway.go
testing.go
testing_api_gateway.go
testing_connect_proxy.go
testing_ingress_gateway.go
testing_mesh_gateway.go
testing_oss.go
testing_peering.go
testing_terminating_gateway.go
testing_tproxy.go
testing_upstreams.go
upstreams.go