mirror of https://github.com/status-im/consul.git
Commit 3c1e17cbd5
Occasionally we had seen TestWatchServers_ACLToken_PermissionDenied flagged as flaky in CircleCI. This change should fix that. Why it fixes it is complicated.

The test was failing with a panic when a mocked ACL resolver was called more times than expected. I struggled for a while to determine how that could be: this test should call authorize once and only once, and the error returned should cause the stream to be terminated and the error returned to the gRPC client. Another oddity was that no amount of running this test locally seemed to reproduce the issue. I ran the test hundreds of thousands of times and it always passed.

It turns out that there is nothing wrong with the test. It just so happens that the panic from an unexpected invocation of a mocked call occurred during this test but was caused by a previous one (specifically the TestWatchServers_StreamLifecycle test). The stream from the previous test remained open after all the test Cleanup functions were run, and when the EventPublisher eventually picked up that the context had been cancelled during cleanup, it force-closed all subscriptions. That caused some loops to be re-entered and the streams to be reauthorized, and it is that looping in response to forced subscription closures that eventually makes the mock panic. All the components (publisher, server, client) operate based on contexts. We cancel all of those contexts, but there is no synchronous way to know when the components have stopped. We could have implemented a synchronous stop, but in the context of an actual running Consul, context cancellation plus asynchronous stopping is perfectly fine.

What we (Dan and I) eventually concluded was that the behavior of gRPC streams like this one when a server is shutting down wasn't very helpful. What we would want is for a client to be able to distinguish between a subscription closed because something may have changed (requiring re-authentication) and a subscription closed because the server is shutting down. That way we can send back an appropriate error indicating that the server is shutting down, instead of confusing users into thinking they may need to resubscribe.

So that's what this PR does. We have introduced a shutting-down state for our event subscriptions, and the various streaming gRPC services that rely on the event publisher now behave correctly: they actually stop the stream (rather than attempting transparent reauthorization) when this particular error is the one returned from the stream. Additionally, the error transmitted back through gRPC when this occurs indicates to the consumer that the server is going away. That is more helpful, because the client can then attempt to reconnect to another server.
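To make the distinction concrete, here is a minimal Go sketch of the decision this PR describes, written against assumed stand-ins rather than Consul's actual identifiers: the sentinel errors (errShuttingDown, errSubForceClosed), the subscription type, and the reauthorize/resubscribe callbacks are all hypothetical. The point it illustrates is that a forced closure loops back through reauthorization and resubscription, while a shutdown closure terminates the stream with a gRPC Unavailable error so the client knows to reconnect to another server.

```go
package main

import (
	"context"
	"errors"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Sentinel errors; the names are illustrative, not Consul's actual identifiers.
var (
	errShuttingDown   = errors.New("subscription closed: server shutting down")
	errSubForceClosed = errors.New("subscription closed: resubscribe required")
)

type event struct{ payload string }

// subscription is a stand-in for an event-publisher subscription.
type subscription struct {
	events <-chan event
	errs   <-chan error
}

// Next blocks until an event arrives, the subscription errors, or ctx ends.
func (s *subscription) Next(ctx context.Context) (event, error) {
	select {
	case <-ctx.Done():
		return event{}, ctx.Err()
	case err := <-s.errs:
		return event{}, err
	case ev := <-s.events:
		return ev, nil
	}
}

// serveStream shows the two distinct closure paths: a forced closure means
// "reauthorize and resubscribe"; a shutdown closure means "stop the stream
// and tell the client the server is going away".
func serveStream(ctx context.Context, sub *subscription,
	reauthorize func(context.Context) error,
	resubscribe func(context.Context) (*subscription, error),
	send func(event) error) error {

	for {
		ev, err := sub.Next(ctx)
		switch {
		case errors.Is(err, errShuttingDown):
			// Server is going away: end the stream with an error that tells
			// the client to reconnect elsewhere, instead of looping back
			// through reauthorization.
			return status.Error(codes.Unavailable, "server is shutting down")
		case errors.Is(err, errSubForceClosed):
			// Something may have changed (e.g. ACLs): reauthorize, then
			// resubscribe and keep streaming.
			if err := reauthorize(ctx); err != nil {
				return status.Error(codes.PermissionDenied, err.Error())
			}
			if sub, err = resubscribe(ctx); err != nil {
				return err
			}
		case err != nil:
			return err
		default:
			if err := send(ev); err != nil {
				return err
			}
		}
	}
}
```

The design point the sketch captures is that the two closure reasons surface as distinct errors, so a stream handler never loops back through authorization for a server that is shutting down, and the client receives an error it can act on by reconnecting.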