Due to timing, a transparent proxy could end up with two directly-dialed
upstreams that share the same address.
For example:
- The orders service can dial upstreams shipping and payments directly.
- An instance of shipping at address 10.0.0.1 is deregistered.
- Payments is scaled up and scheduled to have address 10.0.0.1.
- The orders service receives the event for the new payments instance
before seeing the deregistration for the shipping instance. At this
point two upstreams have the same passthrough address and Envoy will
reject the listener configuration.
To disambiguate, this commit considers the Raft index when storing
passthrough addresses. In the example above, 10.0.0.1 would only be
associated with the newer payments service instance.
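To make the rule concrete, here is a minimal Go sketch, not Consul's actual proxycfg code, of keeping a passthrough address associated only with the service instance that carries the higher Raft index; the names passthroughOwner and addPassthrough are hypothetical.

```go
// Sketch: resolve passthrough address ownership by Raft index.
package main

import "fmt"

type passthroughOwner struct {
	upstream  string // upstream service the address currently belongs to
	raftIndex uint64 // index of the service instance that produced it
}

// addPassthrough records addr for upstream, but only if the new entry is
// newer (higher Raft index) than whatever currently owns the address.
func addPassthrough(owners map[string]passthroughOwner, addr, upstream string, idx uint64) {
	if cur, ok := owners[addr]; ok && cur.raftIndex >= idx {
		return // keep the existing, newer-or-equal association
	}
	owners[addr] = passthroughOwner{upstream: upstream, raftIndex: idx}
}

func main() {
	owners := map[string]passthroughOwner{}
	addPassthrough(owners, "10.0.0.1", "shipping", 100) // stale shipping instance
	addPassthrough(owners, "10.0.0.1", "payments", 120) // newer payments instance wins
	fmt.Println(owners["10.0.0.1"].upstream) // payments
}
```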
Transparent proxies can set up filter chains that allow direct
connections to upstream service instances. Services that can be dialed
directly are stored in the PassthroughUpstreams map of the proxycfg
snapshot.
Previously these addresses were not being cleaned up based on new
service health data. The list of addresses associated with an upstream
service would only ever grow.
As services scale up and down, eventually they will have instances
assigned to an IP that was previously assigned to a different service.
When IP addresses are duplicated across filter chain match rules the
listener config will be rejected by Envoy.
This commit updates the proxycfg snapshot management so that passthrough
addresses can get cleaned up when no longer associated with a given
upstream.
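A rough sketch of the cleanup idea, assuming a map shaped like the PassthroughUpstreams map (upstream to a set of addresses); rebuilding the set from the latest health results is shown only for illustration and is not Consul's exact implementation.

```go
// Sketch: drop passthrough addresses that no longer belong to an upstream.
package main

import "fmt"

type CheckServiceNode struct {
	Address string
}

// updatePassthroughs replaces the address set for one upstream with the
// addresses from the latest health results, so stale addresses are dropped
// instead of accumulating forever.
func updatePassthroughs(passthrough map[string]map[string]struct{}, upstream string, nodes []CheckServiceNode) {
	fresh := make(map[string]struct{}, len(nodes))
	for _, n := range nodes {
		fresh[n.Address] = struct{}{}
	}
	passthrough[upstream] = fresh // entries absent from fresh are gone
}

func main() {
	passthrough := map[string]map[string]struct{}{
		"shipping": {"10.0.0.1": {}},
	}
	// New health data: shipping no longer has an instance at 10.0.0.1.
	updatePassthroughs(passthrough, "shipping", []CheckServiceNode{{Address: "10.0.0.5"}})
	fmt.Println(passthrough["shipping"]) // map[10.0.0.5:{}]
}
```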
There is still the possibility of a race condition here where, due to
timing, an address is shared between multiple passthrough upstreams.
That concern is mitigated by #12195, but will be further addressed
in a follow-up.
We noticed that the Service Instance listing on both the Node and Service views was not taking proxy instance health into account. This fixes that so the small health check information in each Service Instance row includes the proxy instance's health checks when displaying Service Instance health (after all, if the proxy instance is unhealthy then so is the service instance it should be proxying).
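For illustration, here is a small Go sketch (the UI itself is Ember/JavaScript) of the aggregation rule described above: the displayed Service Instance health is the worst status across the instance's own checks and its proxy instance's checks. All names here are illustrative, not the UI's actual code.

```go
// Sketch: an instance's displayed health is the worst of its own and its proxy's checks.
package main

import "fmt"

// worst returns "critical" if any check is critical, else "warning" if any
// check is warning, else "passing".
func worst(statuses []string) string {
	out := "passing"
	for _, s := range statuses {
		switch s {
		case "critical":
			return "critical"
		case "warning":
			out = "warning"
		}
	}
	return out
}

func main() {
	serviceChecks := []string{"passing"}
	proxyChecks := []string{"critical"} // unhealthy sidecar proxy
	combined := append(append([]string{}, serviceChecks...), proxyChecks...)
	fmt.Println(worst(combined)) // critical: the instance is shown as unhealthy
}
```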
* Refactor Consul::InstanceChecks with docs
* Add to-hash helper, which will return an object keyed by a prop
* Stop using/relying on ember-data type things, just use a hash lookup
* For the moment add an equivalent "just give me proxies" model prop
* Start stitching things together; this one requires an extra HTTP request
  (previously we weren't even requesting proxy instances here)
* Finish up the stitching
* Document Consul::ServiceInstance::List while I'm here
* Fix up navigation mocks Name > Service
Fixes #11876
This enforces that multiple xDS mutations are not issued on the same ADS connection at once, so that we can 100% control the order in which they are applied. The original code made incorrect assumptions about how multiple in-flight mutations were applied on the Envoy side.
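As a rough illustration of that serialization, here is a Go sketch using simplified stand-in types (not Consul's or go-control-plane's): only one xDS update is in flight per ADS stream, and the next update is held back until Envoy responds to the previous nonce.

```go
// Sketch: serialize config pushes on a single ADS stream, one in flight at a time.
package main

import "fmt"

type update struct {
	typeURL string
	nonce   string
}

type adsStream struct {
	pendingNonce string   // nonce of the in-flight update, "" if the stream is idle
	queue        []update // updates waiting until the stream is idle again
}

// send pushes an update immediately if nothing is in flight, otherwise queues it.
func (s *adsStream) send(u update) {
	if s.pendingNonce != "" {
		s.queue = append(s.queue, u)
		return
	}
	s.pendingNonce = u.nonce
	fmt.Println("sent", u.typeURL, "nonce", u.nonce)
}

// onResponse handles Envoy's ACK/NACK for the in-flight nonce and then
// sends the next queued update, preserving the original order.
func (s *adsStream) onResponse(nonce string) {
	if nonce != s.pendingNonce {
		return // stale response for an older nonce
	}
	s.pendingNonce = ""
	if len(s.queue) > 0 {
		next := s.queue[0]
		s.queue = s.queue[1:]
		s.send(next)
	}
}

func main() {
	s := &adsStream{}
	s.send(update{typeURL: "type.googleapis.com/envoy.config.cluster.v3.Cluster", nonce: "1"})
	s.send(update{typeURL: "type.googleapis.com/envoy.config.listener.v3.Listener", nonce: "2"}) // queued
	s.onResponse("1") // ACK for nonce 1 releases the queued listener update
}
```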