more readable in CI.
```
Running primary verification step for case-ingress-gateway-multiple-services...
\e[34;1mverify.bats
\e[0m\e[1G ingress proxy admin is up on :20000\e[K\e[75G 1/12\e[2G\e[1G ✓ ingress proxy admin is up on :20000\e[K
\e[0m\e[1G s1 proxy admin is up on :19000\e[K\e[75G 2/12\e[2G\e[1G ✓ s1 proxy admin is up on :19000\e[K
\e[0m\e[1G s2 proxy admin is up on :19001\e[K\e[75G 3/12\e[2G\e[1G ✓ s2 proxy admin is up on :19001\e[K
\e[0m\e[1G s1 proxy listener should be up and have right cert\e[K\e[75G 4/12\e[2G\e[1G ✓ s1 proxy listener should be up and have right cert\e[K
\e[0m\e[1G s2 proxy listener should be up and have right cert\e[K\e[75G 5/12\e[2G\e[1G ✓ s2 proxy listener should be up and have right cert\e[K
\e[0m\e[1G ingress-gateway should have healthy endpoints for s1\e[K\e[75G 6/12\e[2G\e[31;1m\e[1G ✗ ingress-gateway should have healthy endpoints for s1\e[K
\e[0m\e[31;22m (from function `assert_upstream_has_endpoints_in_status' in file /workdir/primary/bats/helpers.bash, line 385,
```
versus
```
Running primary verification step for case-ingress-gateway-multiple-services...
1..12
ok 1 ingress proxy admin is up on :20000
ok 2 s1 proxy admin is up on :19000
ok 3 s2 proxy admin is up on :19001
ok 4 s1 proxy listener should be up and have right cert
ok 5 s2 proxy listener should be up and have right cert
not ok 6 ingress-gateway should have healthy endpoints for s1
not ok 7 s1 proxy should have been configured with max_connections in services
ok 8 ingress-gateway should have healthy endpoints for s2
```
* feat(ingress gateway): support configuring limits in the ingress-gateway config entry
- a new Defaults field with max_connections, max_pending_connections, max_requests
is added to the ingress gateway config entry
- new fields max_connections, max_pending_connections, max_requests on
individual services to override the values in Defaults (see the sketch below)
- added unit and integration tests
- updated docs
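As a rough sketch of the shape this takes (field names follow the description above; the exact keys and casing in the released config entry may differ), a gateway-wide default plus a per-service override might look like:
```
# Hypothetical config entry for the limits described above; field names are
# taken from the commit description and may not match the final schema exactly.
cat > ingress-gateway.hcl <<'EOF'
kind = "ingress-gateway"
name = "ingress-gateway"

defaults {
  max_connections         = 4096
  max_pending_connections = 512
  max_requests            = 1024
}

listeners = [
  {
    port     = 8080
    protocol = "http"
    services = [
      {
        name            = "s1"
        max_connections = 64   # overrides the gateway-wide default for s1 only
      }
    ]
  }
]
EOF
consul config write ingress-gateway.hcl
```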
Co-authored-by: Chris S. Kim <ckim@hashicorp.com>
Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
Co-authored-by: Dan Stough <dan.stough@hashicorp.com>
Without this change, you'd see this error:
```
./run-tests.sh: line 49: LAMBDA_TESTS_ENABLED: unbound variable
./run-tests.sh: line 49: LAMBDA_TESTS_ENABLED: unbound variable
```
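The error comes from `set -u` tripping over an unset variable; a minimal sketch of the kind of guard that avoids it (the actual change in run-tests.sh may look different, and the AWS variable names here are just illustrative):
```
# Give the variable a default so `set -u` doesn't abort when it is unset.
LAMBDA_TESTS_ENABLED=${LAMBDA_TESTS_ENABLED:-false}

if [ "$LAMBDA_TESTS_ENABLED" = "true" ]; then
  # Only require credentials when the lambda cases will actually run.
  : "${AWS_ACCESS_KEY_ID:?required for the lambda tests}"
  : "${AWS_SECRET_ACCESS_KEY:?required for the lambda tests}"
fi
```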
Locally, always run integration tests using amd64, even if running
on an arm mac. This ensures the architecture locally always matches
the CI/CD environment.
In addition:
* Use consul:local for envoy integration and upgrade tests. Previously,
consul:local was used for upgrade tests and consul-dev for integration
tests. I didn't see a reason to use separate images as it's more
confusing.
* By default, disable the requirement that aws credentials are set.
These are only needed for the lambda tests, but requiring them meant
you couldn't run any tests locally, even if you weren't running the
lambda tests. Now the lambda tests only run if the LAMBDA_TESTS_ENABLED
env var is set.
* Split out the building of the Docker image for integration
tests into its own target from `dev-docker`. This allows us to always
use an amd64 image without messing up the `dev-docker` target.
* Add support for passing GO_TEST_FLAGS to the `test-envoy-integ` target.
* Add a wait_for_leader function because tests were failing locally
without it.
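For illustration, a wait_for_leader-style helper can be as simple as polling the status endpoint until it reports a leader; this is a sketch, not necessarily the helper that was added:
```
wait_for_leader() {
  local addr="${1:-localhost:8500}"
  for _ in $(seq 1 30); do
    # /v1/status/leader returns a non-empty "host:port" once a leader is elected.
    if [ -n "$(curl -sf "http://${addr}/v1/status/leader" | tr -d '"')" ]; then
      return 0
    fi
    sleep 1
  done
  echo "timed out waiting for a leader" >&2
  return 1
}
```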
* defaulting to false because peering will be released as beta
* Ignore peering disabled error in bundles cachetype
Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>
Co-authored-by: freddygv <freddy@hashicorp.com>
Co-authored-by: Matt Keeler <mjkeeler7@gmail.com>
Now that peered upstreams can generate envoy resources (#13758), we need a way to disambiguate local from peered resources in our metrics. The key difference is that datacenter and partition will be replaced with peer, since in the context of peered resources partition is ambiguous (could refer to the partition in a remote cluster or one that exists locally). The partition and datacenter of the proxy will always be that of the source service.
Regexes were updated to make emitting datacenter and partition labels mutually exclusive with peer labels.
Listener filter names were updated to better match the existing regex.
Cluster names assigned to peered upstreams were updated to be synthesized from local peer name (it previously used the externally provided primary SNI, which contained the peer name from the other side of the peering). Integration tests were updated to assert for the new peer labels.
Ensure that the peer stream replication rpc can successfully be used with TLS activated.
Also:
- If key material is configured for the gRPC port but HTTPS is not
enabled, TLS will now still be activated for the gRPC port.
- The peerstream replication stream opened by the establishing side will
now ignore grpc.WithBlock so that TLS errors bubble up instead of
being awkwardly delayed or suppressed.
Peer replication is intended to be between separate Consul installs and
effectively should be considered "external". This PR moves the peer
stream replication bidirectional RPC endpoint to the external gRPC
server and ensures that things continue to function.
Require use of mesh gateways in order for service mesh data plane
traffic to flow between peers.
This also adds plumbing for envoy integration tests involving peers, and
one starter peering test.
* tidy code and add some doc strings
* add doc strings to tests
* add partitions tests, need to adapt to run in both oss and ent
* split oss and enterprise versions
* remove parallel tests
* add error
* fix queryBackend in test
* revert unneeded change
* fix failing tests
* add a sample
* Consul cluster test
* add build dockerfile
* add tests to cover mixed versions tests
* use flag to pass docker image name
* remove default config and rely on flags to inject the right image to test
* add cluster abstraction
* fix imports and remove old files
* fix imports and remove old files
* fix dockerIgnore
* make a `Node interface` and encapsulate ConsulContainer
* fix a test bug where we only check the leader against a single node.
* add upgrade tests to CI
* fix yaml alignment
* fix alignment take 2
* fix flag naming
* fix image to build
* fix test run and go mod tidy
* add a debug command
* run without RYUK
* fix parallel run
* add skip reaper code
* make tempdir in local dir
* chmod the temp dir to 0777
* chmod the right dir name
* change executor to use machine instead of docker
* add docker layer caching
* remove setup docker
* add gotestsum
* install go version
* use variable for GO installed version
* add environment
* add environment in the right place
* do not disable RYUK in CI
* add service check to tests
* assertions outside routines
* add queryBackend to the api query meta.
* check if we are using the right backend for those tests (streaming)
* change the tested endpoint to use one that has streaming.
* refactor to test multiple scenarios for streaming
* Fix dockerfile
Co-authored-by: FFMMM <FFMMM@users.noreply.github.com>
* rename Clients to clients
Co-authored-by: FFMMM <FFMMM@users.noreply.github.com>
* check if cluster has 0 nodes
* tidy code and add some doc strings
* use uuid instead of random string
* add doc strings to tests
* add queryBackend to the api query meta.
* add a changelog
* fix for api backend query
* add missing require
* fix q.QueryBackend
* Revert "fix q.QueryBackend"
This reverts commit cd0e5f7b1a1730e191673d624f8e89b591871c05.
* fix circle ci config
* tidy go mod after merging main
* rename package and fix test scenario
* update go download url
* address review comments
* rename flag in CI
* add readme to the upgrade tests
* fix golang download url
* fix golang arch downloaded
* fix AddNodes to handle an empty cluster case
* use `parseBool`
* rename circle job and add comment
* update testcontainer to 0.13
* fix circle ci config
* remove build docker file and use `make dev-docker` instead
* Apply suggestions from code review
Co-authored-by: Dan Upton <daniel@floppy.co>
* fix a typo
Co-authored-by: FFMMM <FFMMM@users.noreply.github.com>
Co-authored-by: Dan Upton <daniel@floppy.co>
* Add partition fields to targets like service route destinations
* Update validation to prevent cross-DC + cross-partition references
* Handle partitions when reading config entries for disco chain
* Encode partition in compiled targets
The secondary DC now takes longer to populate the MGW snapshot because
it needs to wait for the secondary CA to be initialized before it can
receive roots and generate xDS config.
Previously MGWs could receive empty roots before the CA was
initialized. This wasn't necessarily a problem since the cluster ID in
the trust domain isn't verified.
We launch one container as part of the test with --pid=host but
apparently within that container it launches a copy of "tini" as a
process supervisor that prefers to be PID 1.
Because it's not PID 1, it logs a warning message about this to the envoy
integration test logs, which can make it look like a test failure is
related to it when in fact it's completely unrelated.
Adding this environment variable avoids the warning.
The only thing that needed fixing up pertained to this section of the 1.18.x release notes:
> grpc_stats: the default value for stats_for_all_methods is switched from true to false, in order to avoid possible memory exhaustion due to an untrusted downstream sending a large number of unique method names. The previous default value was deprecated in version 1.14.0. This only changes the behavior when the value is not set. The previous behavior can be used by setting the value to true. This behavior change can be overridden by setting runtime feature envoy.deprecated_features.grpc_stats_filter_enable_stats_for_all_methods_by_default.
For now, to maintain the status quo, I'm explicitly setting `stats_for_all_methods=true` in all versions to avoid relying upon the default.
Additionally the naming of the emitted metrics for these gRPC requests changed slightly so the integration test assertions for `case-grpc` needed adjusting.
This adds support for the Incremental xDS protocol when using xDS v3. This is best reviewed commit-by-commit and will not be squashed when merged.
Union of all commit messages follows to give an overarching summary:
xds: exclusively support incremental xDS when using xDS v3
Attempts to use SoTW via v3 will fail, much like attempts to use incremental via v2 will fail.
Work around a strange older envoy behavior involving empty CDS responses over incremental xDS.
xds: various cleanups and refactors that don't strictly concern the addition of incremental xDS support
Dissolve the connectionInfo struct in favor of per-connection ResourceGenerators instead.
Do a better job of ensuring the xds code uses a well configured logger that accurately describes the connected client.
xds: pull out checkStreamACLs method in advance of a later commit
xds: rewrite SoTW xDS protocol tests to use protobufs rather than hand-rolled json strings
In the test we very lightly reuse some of the more boring protobuf construction helper code that is also technically under test. The important job of the protocol tests is testing the protocol. The actual inputs and outputs are largely already handled by the xds golden output tests now so these protocol tests don't have to do double-duty.
This also updates the SoTW protocol test to exclusively use xDS v2 which is the only variant of SoTW that will be supported in Consul 1.10.
xds: default xds.Server.AuthCheckFrequency at use-time instead of construction-time
Note that this does NOT upgrade to xDS v3. That will come in a future PR.
Additionally:
- Ignored staticcheck warnings about how github.com/golang/protobuf is deprecated.
- Shuffled some agent/xds imports in advance of a later xDS v3 upgrade.
- Remove support for envoy 1.13.x but don't add in 1.17.x yet. We have to wait until the xDS v3 support is added in a follow-up PR.
Fixes #8425
To fix failing integration tests. The latest version (`1.7.4.0-r0`)
appears to not be catting all the bytes, so the expected metrics are
missing in the output.
This PR updates the tags that we generate for Envoy stats.
Several of these come with breaking changes, since we can't keep two stats prefixes for a filter.
This has the biggest impact on enterprise test cases that use namespaced
registrations, which prior to this change sometimes failed the initial
registration because the namespace was not yet created.
Added a new option `ui_config.metrics_proxy.path_allowlist`. This defaults to `["/api/v1/query", "/api/v1/query_range"]` when the metrics provider is set to `prometheus`.
Requests that do not use one of the allow-listed paths (via exact match) get a 403 Forbidden response instead.
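For reference, the relevant agent configuration looks roughly like this (the allowlist shown is the default applied for the prometheus provider; the base_url and file name are illustrative):
```
cat > ui.hcl <<'EOF'
ui_config {
  enabled          = true
  metrics_provider = "prometheus"
  metrics_proxy {
    base_url       = "http://prometheus.service.consul:9090"
    # Exact-match allowlist; any other proxied path gets a 403 Forbidden.
    path_allowlist = ["/api/v1/query", "/api/v1/query_range"]
  }
}
EOF
```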
The key thing here is to use `curl --no-keepalive` so that envoy
pre-1.15 tests will reliably use the latest listener every time.
Extra:
- Switched away from editing line-item intentions the legacy way.
- Removed some teardown scripts, as we don't share anything between cases anyway
- Removed unnecessary use of `run` in some places.
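Concretely, the assertions now make requests along these lines (URL illustrative), so each request opens a fresh connection and can't be pinned to a stale listener:
```
# --no-keepalive forces a new connection per request, so pre-1.15 envoy can't
# keep serving requests from a listener that has since been replaced.
curl --no-keepalive -s -f http://localhost:5000/
```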
There is a delay between an intentions change being made, and it being
reflected in the Envoy runtime configuration. Now that the enforcement
happens inside of Envoy instead of over in the agent, our tests need to
explicitly wait until the xDS reconfiguration is complete before
attempting to assert intentions worked.
Also remove a few double retry loops.
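One way to do that kind of wait (a sketch, not the exact helper in the suite) is to poll the envoy admin API until another listener update has been accepted before asserting on intentions:
```
wait_for_lds_update() {
  local admin="${1:-localhost:19000}"
  local before="$2"   # listener_manager.lds.update_success value captured before the change
  for _ in $(seq 1 30); do
    local now
    now=$(curl -s "http://${admin}/stats" \
      | awk -F': ' '/^listener_manager\.lds\.update_success:/ {print $2}')
    if [ -n "$now" ] && [ "$now" -gt "$before" ]; then
      return 0
    fi
    sleep 1
  done
  return 1
}
```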
This speeds up individual envoy integration test runs from ~23m to ~14m.
It's also a pre-req for possibly switching to doing the tests entirely within Go (no shell-outs).
Extend Consul’s intentions model to allow for request-based access control enforcement for HTTP-like protocols in addition to the existing connection-based enforcement for unspecified protocols (e.g. tcp).
Related changes:
- hard-fail the xDS connection attempt if the envoy version is known to be too old to be supported
- remove the RouterMatchSafeRegex proxy feature since all supported envoy versions have it
- stop using --max-obj-name-len (due to: envoyproxy/envoy#11740)
A port can be sent in the Host header as defined in the HTTP RFC, so we
take any hosts that we want to match traffic to and also add another
host with the listener port added.
Also fix an issue with envoy integration tests not running the
case-ingress-gateway-tls test.
Previously, we did not require the 'service-name.*' host header value
when only a single http service was exposed. However, this allows a user
to get into a situation where, if they add another service to the
listener, suddenly the previous service's traffic might not be routed
correctly. Thus, we always require the Host header, even if there is
only 1 service.
Also, we make the default domain matching more restrictive by
matching "service-name.ingress.*" by default. This lines up better with
the namespace case and more accurately matches the Consul DNS value we
expect people to use in this case.
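In practice that means requests through the gateway need a Host header that matches the service's domain, roughly like this (service name, domain, and port are illustrative):
```
# Routed: the Host header matches the "s1.ingress.*" pattern for the s1 service.
curl -s -H "Host: s1.ingress.consul" "http://localhost:8080/"

# Host headers may also carry the port, which is why a second domain with the
# listener port appended is registered for each service.
curl -s -H "Host: s1.ingress.consul:8080" "http://localhost:8080/"
```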
The DNS resolution will be handled by Envoy and defaults to LOGICAL_DNS. This discovery type can be overridden on a per-gateway basis with the envoy_dns_discovery_type Gateway Option.
If a service contains an instance with a hostname as an address we set the Envoy cluster to use DNS as the discovery type rather than EDS. Since both mesh gateways and terminating gateways route to clusters using SNI, whenever there is a mix of hostnames and IP addresses associated with a service we use the hostname + CDS rather than the IPs + EDS.
Note that we detect hostnames by attempting to parse the service instance's address as an IP. If it is not a valid IP we assume it is a hostname.
The previous change, which moved test running to Go, appears to have
broken log capturing. I am not entirely sure why, but the run_tests
function seems to exit on the first error.
This change moves test teardown and log capturing out of run_test, and
has the go test runner call them when necessary.
* test/integration: only run against 1 envoy version
These tests are slow enough that it seems unlikely that anyone is
running multiple versions locally. If someone wants to, a for loop
outside of run_test.sh should do the right thing.
Remove unused vars.
* Remove logic to iterate over test cases, run a single case
* Add a golang runner for integration tests
* Use build tags for envoy integration tests
And add junit-xml report
We require any non-wildcard services to match the protocol defined in
the listener on write, so that we can maintain a consistent experience
through ingress gateways. This also helps guard against accidental
misconfiguration by a user.
- Update tests that require an updated protocol for ingress gateways
- Validate that this cannot be set on a 'tcp' listener nor on a wildcard
service.
- Add Hosts field to api and test in consul config write CLI
- xds: Configure envoy with user-provided hosts from ingress gateways
This commit adds the necessary changes to allow an ingress gateway to
route traffic from a single defined port to multiple different upstream
services in the Consul mesh.
To do this, we now require all HTTP requests coming into the ingress
gateway to specify a Host header that matches "<service-name>.*" in
order to correctly route traffic to the correct service.
- Differentiate multiple listener's route names by port
- Adds a case in xds for allowing default discovery chains to create a
route configuration when on an ingress gateway. This allows default
services to easily use host header routing
- ingress-gateways have a single route config for each listener
that utilizes domain matching to route to different services.
* Implements a simple, tcp ingress gateway workflow
This adds a new type of gateway for allowing Ingress traffic into Connect from external services.
Co-authored-by: Chris Piraino <cpiraino@hashicorp.com>
This fixes this bats warning:
duplicate test name(s) in /workdir/primary/bats/verify.bats: test_s1_upstream_made_1_connection
Test was already defined at line 42, rename it to avoid test name duplication
This is like a Möbius strip of code due to the fact that low-level components (serf/memberlist) are connected to high-level components (the catalog and mesh-gateways) in a twisty maze of references which make it hard to dive into. With that in mind here's a high level summary of what you'll find in the patch:
There are several distinct chunks of code that are affected:
* new flags and config options for the server
* retry join WAN is slightly different
* retry join code is shared to discover primary mesh gateways from secondary datacenters
* because retry join logic runs in the *agent* and the results of that
operation for primary mesh gateways are needed in the *server* there are
some methods like `RefreshPrimaryGatewayFallbackAddresses` that must occur
at multiple layers of abstraction just to pass the data down to the right
layer.
* new cache type `FederationStateListMeshGatewaysName` for use in `proxycfg/xds` layers
* the function signature for RPC dialing picked up a new required field (the
node name of the destination)
* several new RPCs for manipulating a FederationState object:
`FederationState:{Apply,Get,List,ListMeshGateways}`
* 3 read-only internal APIs for debugging use to invoke those RPCs from curl
* raft and fsm changes to persist these FederationStates
* replication for FederationStates as they are canonically stored in the
Primary and replicated to the Secondaries.
* a special derivative of anti-entropy that runs in secondaries to snapshot
their local mesh gateway `CheckServiceNodes` and sync them into their upstream
FederationState in the primary (this works in conjunction with the
replication to distribute addresses for all mesh gateways in all DCs to all
other DCs)
* a "gateway locator" convenience object to make use of this data to choose
the addresses of gateways to use for any given RPC or gossip operation to a
remote DC. This gets data from the "retry join" logic in the agent and also
directly calls into the FSM.
* RPC (`:8300`) on the server sniffs the first byte of a new connection to
determine if it's actually doing native TLS. If so it checks the ALPN header
for protocol determination (just like how the existing system uses the
type-byte marker).
* 2 new kinds of protocols are exclusively decoded via this native TLS
mechanism: one for ferrying "packet" operations (udp-like) from the gossip
layer and one for "stream" operations (tcp-like). The packet operations
re-use sockets (using length-prefixing) to cut down on TLS re-negotiation
overhead.
* the server instances specially wrap the `memberlist.NetTransport` when running
with gateway federation enabled (in a `wanfed.Transport`). The general gist is
that if it tries to dial a node in the SAME datacenter (deduced by looking
at the suffix of the node name) there is no change. If dialing a DIFFERENT
datacenter it is wrapped up in a TLS+ALPN blob and sent through some mesh
gateways to eventually end up in a server's :8300 port.
* a new flag when launching a mesh gateway via `consul connect envoy` to
indicate that the servers are to be exposed. This sets a special service
meta when registering the gateway into the catalog.
* `proxycfg/xds` notice this metadata blob to activate additional watches for
the FederationState objects as well as the location of all of the consul
servers in that datacenter.
* `xds:` if the extra metadata is in place additional clusters are defined in a
DC to bulk sink all traffic to another DC's gateways. For the current
datacenter we listen on a wildcard name (`server.<dc>.consul`) that load
balances all servers as well as one mini-cluster per node
(`<node>.server.<dc>.consul`)
* the `consul tls cert create` command got a new flag (`-node`) to help create
an additional SAN in certs that can be used with this flavor of federation.
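For the two new CLI surfaces mentioned above, usage looks roughly like this (datacenter, node, and flag placement are illustrative, not exact invocations from the change):
```
# Create a server certificate with the additional node-specific SAN.
consul tls cert create -server -dc dc2 -node server1

# Launch a mesh gateway that also exposes the local servers through itself.
consul connect envoy -gateway mesh -register -expose-servers
```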
* add 1.12.2
* add envoy 1.13.0
* Introduce -envoy-version to get 1.10.0 passing.
* update old version and fix consul-exec case
* add envoy_version and fix check
* Update Envoy CLI tests to account for the 1.13 compatibility changes.
Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>
* Expose Envoy /stats for statsd agents; Add testcases
* Remove merge conflict leftover
* Add support for prefix instead of path; Fix docstring to mirror these changes
* Add new config field to docs; Add testcases to check that /stats/prometheus is exposed as well
* Parametrize matchType (prefix or path) and value
* Update website/source/docs/connect/proxies/envoy.md
Co-Authored-By: Paul Banks <banks@banksco.de>
Co-authored-by: Paul Banks <banks@banksco.de>
* Adds 'limits' field to the upstream configuration of a connect proxy
This allows a user to configure the envoy connect proxy with
'max_connections', 'max_queued_requests', and 'max_concurrent_requests'. These
values are defined in the local proxy on a per-service instance basis
and should thus NOT be thought of as a global-level or even service-level value.
* Allow RSA CA certs for consul and vault providers to correctly sign EC leaf certs.
* Ensure key type and bits are populated from CA cert and clean up tests
* Add integration test and fix error when initializing secondary CA with RSA key.
* Add more tests, fix review feedback
* Update docs with key type config and output
* Apply suggestions from code review
Co-Authored-By: R.B. Boyer <rb@hashicorp.com>
Previously the logic for configuring RDS during LDS for L7 upstreams was
overapplied to TCP proxies resulting in a cluster name of <emptystring>
being used incorrectly.
Fixes #6621
Fixes: #5396
This PR adds a proxy configuration stanza called expose. These flags register
listeners in Connect sidecar proxies to allow requests to specific HTTP paths from outside of the node. This allows services to protect themselves by only
listening on the loopback interface, while still accepting traffic from
non-Connect-enabled services.
Under expose there is a boolean checks flag that would automatically expose all
registered HTTP and gRPC check paths.
This stanza also accepts a paths list to expose individual paths. The primary
use case for this functionality would be to expose paths for third parties like
Prometheus or the kubelet.
Listeners for requests to exposed paths are configured dynamically at run
time. Any time a proxy or check can be registered, a listener can also be
created.
In this initial implementation requests to these paths are not
authenticated/encrypted.
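A sketch of what the expose stanza can look like in a service registration (the path, ports, and file name are illustrative):
```
cat > web.hcl <<'EOF'
service {
  name = "web"
  port = 8080

  connect {
    sidecar_service {
      proxy {
        expose {
          checks = true                # expose registered HTTP/gRPC check paths
          paths = [
            {
              path            = "/metrics"
              local_path_port = 9102   # port the app serves the path on
              listener_port   = 21600  # port the proxy listens on for it
              protocol        = "http"
            }
          ]
        }
      }
    }
  }
}
EOF
```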
Since FUNCNAME is not defined when running outside a function,
trap does not work and displays the wrong error message.
Example from https://circleci.com/gh/hashicorp/consul/69506 :
```
⨯ FAIL
/home/circleci/project/test/integration/connect/envoy/run-tests.sh: line 1: FUNCNAME[0]: unbound variable
make: *** [GNUmakefile:363: test-envoy-integ] Error 1
```
This fix will avoid this error message and display the real cause.
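The failure mode is an ERR trap dereferencing FUNCNAME[0] under `set -u` at the top level; a minimal sketch of a guard that avoids it (the actual fix in run-tests.sh may differ):
```
set -euo pipefail

on_error() {
  # FUNCNAME[1] is unset when the error happens outside any function,
  # so give it a default instead of letting `set -u` abort the trap.
  local where="${FUNCNAME[1]:-main}"
  echo "error in ${where} at line ${BASH_LINENO[0]}" >&2
}
trap on_error ERR
```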
Since generated envoy clusters all are named using (mostly) SNI syntax
we can have envoy read the various fields out of that structure and emit
it as stats labels to the various telemetry backends.
I changed the delimiter for the 'customization hash' from ':' to '~'
because ':' is always reencoded by envoy as '_' when generating metrics
keys.
Failover is pushed entirely down to the data plane by creating envoy
clusters and putting each successive destination in a different load
assignment priority band. For example this shows that normally requests
go to 1.2.3.4:8080 but when that fails they go to 6.7.8.9:8080:
- name: foo
  load_assignment:
    cluster_name: foo
    policy:
      overprovisioning_factor: 100000
    endpoints:
    - priority: 0
      lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 1.2.3.4
              port_value: 8080
    - priority: 1
      lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 6.7.8.9
              port_value: 8080
Mesh gateways route requests based solely on the SNI header tacked onto
the TLS layer. Envoy currently only lets you configure the outbound SNI
header at the cluster layer.
If you try to failover through a mesh gateway you ideally would
configure the SNI value per endpoint, but that's not possible in envoy
today.
This PR introduces a simpler way around the problem for now:
1. We identify any target of failover that will use mesh gateway mode local or
remote and then further isolate any resolver node in the compiled discovery
chain that has a failover destination set to one of those targets.
2. For each of these resolvers we will perform a small measurement of
comparative healths of the endpoints that come back from the health API for the
set of primary target and serial failover targets. We walk the list of targets
in order and if any endpoint is healthy we return that target, otherwise we
move on to the next target.
3. The CDS and EDS endpoints both perform the measurements in (2) for the
affected resolver nodes.
4. For CDS this measurement selects which TLS SNI field to use for the cluster
(note the cluster is always going to be named for the primary target)
5. For EDS this measurement selects which set of endpoints will populate the
cluster. Priority tiered failover is ignored.
One of the big downsides to this approach to failover is that the failover
detection and correction is going to be controlled by consul rather than
deferring that entirely to the data plane as with the prior version. This also
means that we are bound to only failover using official health signals and
cannot make use of data plane signals like outlier detection to affect
failover.
In this specific scenario the lack of data plane signals is ok because the
effectiveness is already muted by the fact that the ultimate destination
endpoints will have their data plane signals scrambled when they pass through
the mesh gateway wrapper anyway so we're not losing much.
Another related fix is that we now use the endpoint health from the
underlying service, not the health of the gateway (regardless of
failover mode).
* connect: reconcile how upstream configuration works with discovery chains
The following upstream config fields for connect sidecars sanely
integrate into discovery chain resolution:
- Destination Namespace/Datacenter: Compilation occurs locally but using
different default values for namespaces and datacenters. The xDS
clusters that are created are named as they normally would be.
- Mesh Gateway Mode (single upstream): If set this value overrides any
value computed for any resolver for the entire discovery chain. The xDS
clusters that are created may be named differently (see below).
- Mesh Gateway Mode (whole sidecar): If set this value overrides any
value computed for any resolver for the entire discovery chain. If this
is specifically overridden for a single upstream this value is ignored
in that case. The xDS clusters that are created may be named differently
(see below).
- Protocol (in opaque config): If set this value overrides the value
computed when evaluating the entire discovery chain. If the normal chain
would be TCP or if this override is set to TCP then the result is that
we explicitly disable L7 Routing and Splitting. The xDS clusters that
are created may be named differently (see below).
- Connect Timeout (in opaque config): If set this value overrides the
value for any resolver in the entire discovery chain. The xDS clusters
that are created may be named differently (see below).
If any of the above overrides affect the actual result of compiling the
discovery chain (i.e. "tcp" becomes "grpc" instead of being a no-op
override to "tcp") then the relevant parameters are hashed and provided
to the xDS layer as a prefix for use in naming the Clusters. This is to
ensure that if one Upstream discovery chain has no overrides and
tangentially needs a cluster named "api.default.XXX", and another
Upstream does have overrides for "api.default.XXX" that they won't
cross-pollinate against the operator's wishes.
Fixes #6159
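For reference, these per-upstream overrides live on the sidecar registration, along these lines (service names, ports, and values are illustrative):
```
cat > web.hcl <<'EOF'
service {
  name = "web"
  port = 8080

  connect {
    sidecar_service {
      proxy {
        upstreams = [
          {
            destination_name = "api"
            datacenter       = "dc2"      # compiles the chain with a different default DC
            local_bind_port  = 9191
            mesh_gateway {
              mode = "local"              # overrides the mode for the whole chain
            }
            config {
              protocol           = "grpc" # overrides the chain's computed protocol
              connect_timeout_ms = 5000   # overrides resolver connect timeouts
            }
          }
        ]
      }
    }
  }
}
EOF
```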
* Allow setting the mesh gateway mode for an upstream in config files
* Add envoy integration test for mesh gateways
This necessitated many supporting changes in most of the other test cases.
Add remote mode mesh gateways integration test
The main change is that we no longer filter service instances by health,
preferring instead to render all results down into EDS endpoints in
envoy and merely label the endpoints as HEALTHY or UNHEALTHY.
When OnlyPassing is set to true we will force consul checks in a
'warning' state to render as UNHEALTHY in envoy.
Fixes#6171
Additionally:
- wait for bootstrap config entries to be applied
- run the verify container in the host's PID namespace so we can kill
envoys without mounting the docker socket
* assert that we actually send HEALTHY and UNHEALTHY endpoints down in EDS during failover
Also:
- add back an internal http endpoint to dump a compiled discovery chain for debugging purposes
Before the CompiledDiscoveryChain.IsDefault() method would test:
- is this chain just one resolver step?
- is that resolver step just the default?
But what I forgot to test:
- is that resolver step for the same service that the chain represents?
This last point is important because if you configured just one config
entry:
kind = "service-resolver"
name = "web"
redirect {
  service = "other"
}
and requested the chain for "web" you'd get back a **default** resolver
for "other". In the xDS code the IsDefault() method is used to
determine if this chain is "empty". If it is then we use the
pre-discovery-chain logic that just uses data embedded in the Upstream
object (and still lets the escape hatches function).
In the example above that means certain parts of the xDS code were going
to try referencing a cluster named "web..." despite the other parts of
the xDS code maintaining clusters named "other...".
When the envoy healthy panic threshold was explicitly disabled as part
of L7 traffic management it changed how envoy decided to load balance to
endpoints in a cluster. This only matters when envoy is in "panic mode"
aka "when you have a bunch of unhealthy endpoints". Panic mode sends
traffic to unhealthy instances in certain circumstances.
Note: Prior to explicitly disabling the healthy panic threshold, the
default value is 50%.
What was happening is that the test harness was bringing up consul, the
sidecars, and the service instances all at once, and sometimes the
proxies wouldn't have time to be checked by consul to be labeled as
'passing' in the catalog before a round of EDS happened.
The xDS server in consul effectively queries /v1/health/connect/s2 and
gets 1 result, but that one result has a 'critical' check so the xDS
server sends back that endpoint labeled as UNHEALTHY.
Envoy sees that 100% of the endpoints in the cluster are unhealthy and
would enter panic mode and still send traffic to s2. This is why the
test suites PRIOR to disabling the healthy panic threshold worked. They
were _incorrectly_ passing.
When the healthy panic threshold is disabled, envoy never enters panic
mode in this situation and thus the cluster has zero healthy endpoints
so load balancing goes nowhere and the tests fail.
Why does this only affect the test suites for envoy 1.8.0? My guess is
that https://github.com/envoyproxy/envoy/pull/4442 was merged into the
1.9.x series and somehow that plays a role.
This PR modifies the bats scripts to explicitly wait until the upstream
sidecar is healthy as measured by /v1/health/connect/s2?passing BEFORE
trying to interrogate envoy which should make the tests less racy.
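Roughly, that wait looks like this (helper name, retry budget, and the jq dependency are illustrative, not the exact bats code):
```
wait_until_passing() {
  local svc="$1"
  for _ in $(seq 1 60); do
    # Only proceed once at least one passing instance is visible via the health API.
    if [ "$(curl -s "http://localhost:8500/v1/health/connect/${svc}?passing" | jq 'length')" -gt 0 ]; then
      return 0
    fi
    sleep 1
  done
  return 1
}

wait_until_passing s2
```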
* Make cluster names SNI always
* Update some tests
* Ensure we check for prepared query types
* Use sni for route cluster names
* Proper mesh gateway mode defaulting when the discovery chain is used
* Ignore service splits from PatchSliceOfMaps
* Update some xds golden files for proper test output
* Allow for grpc/http listeners/cluster configs with the disco chain
* Update stats expectation
* Make exec test assert Envoy version - it was not rebuilding before and so often ran against the wrong version. This makes 1.10 fail consistently.
* Switch Envoy exec to use a named pipe rather than FD magic since Envoy 1.10 doesn't support that.
* Refactor to use an internal shim command for piping the bootstrap through.
* Fmt. So sad that vscode golang fails so often these days.
* go mod tidy
* revert go mod tidy changes
* Revert "ignore consul-exec tests until fixed (#5986)"
This reverts commit 683262a686.
* Review cleanups
* Upgrade xDS (go-control-plane) API to support Envoy 1.10.
This includes backwards compatibility shim to work around the ext_authz package rename in 1.10.
It also adds integration test support in CI for 1.10.0.
* Fix go vet complaints
* go mod vendor
* Update Envoy version info in docs
* Update website/source/docs/connect/proxies/envoy.md
* Make central conf test work when run in a suite.
This switches integration tests to hard restart Consul each time which causes less surprise when some tests need to set configs that don't work on consul reload. This also increases the isolation and repeatability of the tests by dropping Consul's state entirely for each case run.
* Remove aborted attempt to make restart optional.
* Add integration test for central config; fix central config WIP
* Add integration test for central config; fix central config WIP
* Set proxy protocol correctly and begin adding upstream support
* Add upstreams to service config cache key and start new notify watcher if they change.
This doesn't update the tests to pass though.
* Fix some merging logic get things working manually with a hack (TODO fix properly)
* Simplification to not allow enabling sidecars centrally - it makes no sense without upstreams anyway
* Test compile again and obvious ones pass. Lots of failures locally not debugged yet but may be flakes. Pushing up to see what CI does
* Fix up service manager and API test failures
* Remove the enable command since it no longer makes much sense without being able to turn on sidecar proxies centrally
* Remove version.go hack - will make integration test fail until release
* Remove unused code from commands and upstream merge
* Re-bump version to 1.5.0
* Add support for HTTP proxy listeners
* Add customizable bootstrap configuration options
* Debug logging for xDS AuthZ
* Add Envoy Integration test suite with basic test coverage
* Add envoy command tests to cover new cases
* Add tracing integration test
* Add gRPC support WIP
* Merged changes from master Docker; get CI integration to work with the same Dockerfile now
* Make docker build optional for integration
* Enable integration tests again!
* http2 and grpc integration tests and fixes
* Fix up command config tests
* Store all container logs as artifacts in circle on fail
* Add retries to outer part of stats measurements as we keep missing them in CI
* Only dump logs on failing cases
* Fix typos from code review
* Review tidying and make tests pass again
* Add debug logs to exec test.
* Fix legit test failure caused by upstream rename in envoy config
* Attempt to reduce cases of bad TLS handshake in CI integration tests
* bring up the right service
* Add prometheus integration test
* Add test for denied AuthZ both HTTP and TCP
* Try ANSI term for Circle