* [API Gateway] Add integration test for conflicted TCP listeners
* [API Gateway] Update simple test to leverage intentions and multiple listeners
* Fix broken unit test
* PR suggestions
Prior to this commit, secondary datacenters could not be initialized
as peering acceptors if ACLs were enabled, because internal
server-to-server API calls would fail: the management token had not been
generated. This PR makes both primary and secondary datacenters generate
their own management token whenever a leader is elected in their
respective clusters.
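A minimal, self-contained sketch of that behavior (made-up types, with a random UUID standing in for however the real management token is created; this is not the actual Consul internals):
```
package main

import (
	"fmt"

	"github.com/hashicorp/go-uuid"
)

type server struct {
	datacenter      string
	aclsEnabled     bool
	managementToken string
}

// establishLeadership runs whenever this server becomes leader of its cluster.
func (s *server) establishLeadership() error {
	if !s.aclsEnabled || s.managementToken != "" {
		return nil
	}
	tok, err := uuid.GenerateUUID()
	if err != nil {
		return fmt.Errorf("generating management token: %w", err)
	}
	// With its own token, internal server-to-server API calls in a secondary
	// datacenter no longer fail when it acts as a peering acceptor.
	s.managementToken = tok
	return nil
}

func main() {
	for _, dc := range []string{"primary", "secondary"} {
		s := &server{datacenter: dc, aclsEnabled: true}
		if err := s.establishLeadership(); err != nil {
			panic(err)
		}
		fmt.Printf("dc=%s has management token: %v\n", s.datacenter, s.managementToken != "")
	}
}
```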
1. Upgraded agent can inherit the persisted token and join the cluster
2. Agent token prior to upgrade is still valid after upgrade
3. Enable ACL in the agent configuration
* rate limit test
* Have tests for the 3 modes (see the sketch after this commit list)
* added assertions for logs and metrics
* add comments to test sections
* add check for rate limit exceeded text in log assertion section.
* fix linting error
* updating test to use KV get and put; move log assertion to last.
* Adding logging for blocking messages in enforcing mode. Refactoring tests.
* modified test description
* formatting
* Apply suggestions from code review
Co-authored-by: Dan Upton <daniel@floppy.co>
* Update test/integration/consul-container/test/ratelimit/ratelimit_test.go
Co-authored-by: Dhia Ayachi <dhia@hashicorp.com>
* expand log checking so that it ensures both logs are there when they are supposed to be and absent when they are not expected to be.
* add retry on test
* Warn once when the rate limit is exceeded, regardless of enforcing vs permissive mode.
* Update test/integration/consul-container/test/ratelimit/ratelimit_test.go
Co-authored-by: Dan Upton <daniel@floppy.co>
Co-authored-by: Dan Upton <daniel@floppy.co>
Co-authored-by: Dhia Ayachi <dhia@hashicorp.com>
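For reference, roughly the shape of the rate-limit test described above, expressed with the plain Go API client; the real test uses the consul-container harness, and the "rate limit exceeded" error text and the low configured limits are assumptions:
```
package ratelimit_test

import (
	"strings"
	"testing"

	"github.com/hashicorp/consul/api"
)

// assertKVWrites hammers the KV endpoint and checks whether writes are
// rejected, which should only happen in enforcing mode (permissive mode logs
// and emits metrics but still lets the call through). It assumes the server
// under test was started with a deliberately low write rate limit.
func assertKVWrites(t *testing.T, client *api.Client, mode string) {
	t.Helper()
	sawRejection := false
	for i := 0; i < 100; i++ {
		_, err := client.KV().Put(&api.KVPair{Key: "ratelimit-test", Value: []byte("x")}, nil)
		if err != nil && strings.Contains(err.Error(), "rate limit exceeded") {
			sawRejection = true
		}
	}
	switch {
	case mode == "enforcing" && !sawRejection:
		t.Fatal("expected some writes to be rejected in enforcing mode")
	case mode != "enforcing" && sawRejection:
		t.Fatalf("did not expect rejected writes in %s mode", mode)
	}
}
```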
- remove dep on consul main module
- use 'consul tls' subcommands instead of tlsutil
- use direct JSON config construction instead of agent/config structs (sketched below)
- merge libcluster and libagent packages together
- more widely use BuildContext
- get the OSS/ENT runner stuff working properly
- reduce some flakiness
- fix some correctness issues related to the http/https API
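A rough sketch of the direct JSON config construction mentioned above: build the agent configuration as a plain map instead of importing the agent/config structs from the consul main module. The keys are standard agent config options; the helper itself is illustrative.
```
package main

import (
	"encoding/json"
	"fmt"
)

// buildAgentConfig assembles agent configuration as a plain map and renders
// it to JSON, with no dependency on the consul main module.
func buildAgentConfig(datacenter string, server bool) (string, error) {
	cfg := map[string]any{
		"datacenter": datacenter,
		"server":     server,
		"log_level":  "DEBUG",
		"ports": map[string]any{
			"grpc": 8502,
		},
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	out, err := buildAgentConfig("dc1", true)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```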
* Protobuf Modernization
Remove direct usage of golang/protobuf in favor of google.golang.org/protobuf
Marshallers (protobuf and json) needed some changes to account for different APIs.
Moved to using the google.golang.org/protobuf/types/known/* packages for the well-known types, including replacing some custom Struct manipulation with what's available in the structpb well-known type package (example below).
This also updates our devtools script to install protoc-gen-go from the right location so that files it generates conform to the correct interfaces.
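For example, the structpb well-known type and the protojson marshaller replace hand-rolled Struct manipulation and the old jsonpb marshaller:
```
package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protojson"
	"google.golang.org/protobuf/types/known/structpb"
)

func main() {
	// Build a google.protobuf.Struct directly from a Go map instead of
	// assembling Value/Struct fields by hand.
	s, err := structpb.NewStruct(map[string]interface{}{
		"service": "web",
		"port":    8080,
		"tags":    []interface{}{"v1", "lan"},
	})
	if err != nil {
		panic(err)
	}

	// protojson replaces the old jsonpb marshaller from golang/protobuf.
	out, err := protojson.Marshal(s)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```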
* Fix go-mod-tidy make target to work on all modules
* Refactoring the peering integ test to accommodate upcoming changes for other upgrade scenarios.
- Add a utils package under test that contains methods to set up various test scenarios.
- Deduplication: have a single CreatingPeeringClusterAndSetup replace
CreatingAcceptingClusterAndSetup and CreateDialingClusterAndSetup.
- Separate peering cluster creation and server registration.
* Apply suggestions from code review
Co-authored-by: Dan Stough <dan.stough@hashicorp.com>
Adds automation for generating the map of `gRPC Method Name → Rate Limit Type`
used by the middleware introduced in #15550, and ensures we don't forget
to add new endpoints to the map.
Engineers must annotate their RPCs in the proto file like so:
```
rpc Foo(FooRequest) returns (FooResponse) {
  option (consul.internal.ratelimit.spec) = {
    operation_type: READ,
  };
}
```
When they run `make proto`, a protoc plugin (`protoc-gen-consul-rate-limit`) will
be installed that writes rate-limit specs as a JSON array to a file called
`.ratelimit.tmp` (one per protobuf package/directory).
After running Buf, `make proto` will execute a post-process script that will
ingest all of the `.ratelimit.tmp` files and generate a Go file containing the
mappings in the `agent/grpc-middleware` package. In the enterprise repository,
it will write an additional file with the enterprise-only endpoints.
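Illustrative shape of the generated Go file; the identifiers and method names below are assumptions for the sake of the example, not the exact generated code:
```
package middleware

// OperationType mirrors the rate-limit operation declared in the proto
// annotation.
type OperationType int

const (
	OperationTypeExempt OperationType = iota
	OperationTypeRead
	OperationTypeWrite
)

// rpcRateLimitSpecs is the kind of mapping the post-process script generates
// from the .ratelimit.tmp files: fully-qualified gRPC method name to the
// operation type from its rate-limit spec.
var rpcRateLimitSpecs = map[string]OperationType{
	"/hashicorp.consul.internal.peering.PeeringService/PeeringList":  OperationTypeRead,
	"/hashicorp.consul.internal.peering.PeeringService/PeeringWrite": OperationTypeWrite,
}
```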
If an engineer forgets to add the annotation to a new RPC, the plugin will
return an error like so:
```
RPC Foo is missing rate-limit specification, fix it with:

import "proto-public/annotations/ratelimit/ratelimit.proto";

service Bar {
  rpc Foo(...) returns (...) {
    option (hashicorp.consul.internal.ratelimit.spec) = {
      operation_type: OPERATION_READ | OPERATION_WRITE | OPERATION_EXEMPT,
    };
  }
}
```
In the future, this annotation can be extended to support rate-limit
category (e.g. KV vs Catalog) and to determine the retry policy.
* feat(ingress-gateway): support outlier detection of upstream services for ingress gateways
* changelog
Co-authored-by: Eric Haberkorn <erichaberkorn@gmail.com>
* integ-test: fix flaky test - case-cfg-splitter-peering-ingress-gateways
* add retry peering to all peering cases
Co-authored-by: Dan Stough <dan.stough@hashicorp.com>
* integ-test: test consul upgrade from the snapshot of a running cluster
* use Target version as default
Co-authored-by: Dan Stough <dan.stough@hashicorp.com>
* update go version to 1.18 for api and sdk, go mod tidy
* removes ioutil usage everywhere, since ioutil was deprecated in go1.16 in favour of the io and os packages. Also introduces a lint rule which forbids use of ioutil going forward (typical replacements are sketched below).
Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com>
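The replacements themselves are mechanical; the deprecated ioutil helpers map one-to-one onto functions in the io and os packages, for example:
```
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir, err := os.MkdirTemp("", "example") // was ioutil.TempDir
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	path := filepath.Join(dir, "config.json")
	if err := os.WriteFile(path, []byte(`{}`), 0o644); err != nil { // was ioutil.WriteFile
		panic(err)
	}

	data, err := os.ReadFile(path) // was ioutil.ReadFile
	if err != nil {
		panic(err)
	}

	body, err := io.ReadAll(strings.NewReader("stream")) // was ioutil.ReadAll
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data), string(body))
}
```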
This replaces the current behavior, where we feed the config entries in using the config_entries.bootstrap configuration, which oddly races against other setup code in some circumstances.
I converted ALL tests to explicitly create config entries.
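For illustration, a config entry written explicitly after the agent is up, here via the Go API client (the actual tests may use the CLI or test-harness helpers instead):
```
package main

import "github.com/hashicorp/consul/api"

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		panic(err)
	}

	// Write the entry once the agent is up, instead of seeding it through
	// config_entries.bootstrap and racing the rest of the setup.
	entry := &api.ServiceConfigEntry{
		Kind:     api.ServiceDefaults,
		Name:     "s1",
		Protocol: "http",
	}
	if _, _, err := client.ConfigEntries().Set(entry, nil); err != nil {
		panic(err)
	}
}
```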
The integration test TestEnvoy/case-ingress-gateway-multiple-services is flaky,
and this possibly reduces the flakiness by explicitly waiting for services to
show up as healthy in the catalog before waiting for them to show up as healthy
in envoy, which gives it just a bit more time to sync.
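The extra wait amounts to polling the catalog health endpoint until the service has passing instances before asserting on envoy; sketched here with the Go API client, although the actual test uses the bats helpers:
```
package main

import (
	"fmt"
	"time"

	"github.com/hashicorp/consul/api"
)

// waitForHealthy blocks until the service has at least one passing instance
// in the catalog, or the timeout elapses.
func waitForHealthy(client *api.Client, service string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// passingOnly=true returns only instances whose checks are passing.
		entries, _, err := client.Health().Service(service, "", true, nil)
		if err == nil && len(entries) > 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("service %q did not become healthy within %s", service, timeout)
}

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		panic(err)
	}
	if err := waitForHealthy(client, "s1", 30*time.Second); err != nil {
		panic(err)
	}
}
```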
* updating to serf v0.10.1 and memberlist v0.5.0 to get memberlist size metrics and memberlist broadcast queue depth metric
* update changelog
* update changelog
* correcting changelog
* adding "QueueCheckInterval" for memberlist to test
* updating integration test containers to grab latest api
Make the bats test output more readable in CI.
```
Running primary verification step for case-ingress-gateway-multiple-services...
�[34;1mverify.bats
�[0m�[1G ingress proxy admin is up on :20000�[K�[75G 1/12�[2G�[1G ✓ ingress proxy admin is up on :20000�[K
�[0m�[1G s1 proxy admin is up on :19000�[K�[75G 2/12�[2G�[1G ✓ s1 proxy admin is up on :19000�[K
�[0m�[1G s2 proxy admin is up on :19001�[K�[75G 3/12�[2G�[1G ✓ s2 proxy admin is up on :19001�[K
�[0m�[1G s1 proxy listener should be up and have right cert�[K�[75G 4/12�[2G�[1G ✓ s1 proxy listener should be up and have right cert�[K
�[0m�[1G s2 proxy listener should be up and have right cert�[K�[75G 5/12�[2G�[1G ✓ s2 proxy listener should be up and have right cert�[K
�[0m�[1G ingress-gateway should have healthy endpoints for s1�[K�[75G 6/12�[2G�[31;1m�[1G ✗ ingress-gateway should have healthy endpoints for s1�[K
�[0m�[31;22m (from function `assert_upstream_has_endpoints_in_status' in file /workdir/primary/bats/helpers.bash, line 385,
```
versus
```
Running primary verification step for case-ingress-gateway-multiple-services...
1..12
ok 1 ingress proxy admin is up on :20000
ok 2 s1 proxy admin is up on :19000
ok 3 s2 proxy admin is up on :19001
ok 4 s1 proxy listener should be up and have right cert
ok 5 s2 proxy listener should be up and have right cert
not ok 6 ingress-gateway should have healthy endpoints for s1
not ok 7 s1 proxy should have been configured with max_connections in services
ok 8 ingress-gateway should have healthy endpoints for s2
```
* feat(ingress-gateway): support configuring limits in ingress-gateway config entry
- a new Defaults field with max_connections, max_pending_connections, and max_requests
is added to the ingress gateway config entry
- new fields max_connections, max_pending_connections, and max_requests on
individual services override the values in Defaults (override resolution sketched below)
- added unit test and integration test
- updated doc
Co-authored-by: Chris S. Kim <ckim@hashicorp.com>
Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
Co-authored-by: Dan Stough <dan.stough@hashicorp.com>
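A toy sketch of the Defaults/override semantics above, using the field names from this commit (these are illustrative types, not the actual Consul config entry structs):
```
package main

import "fmt"

// Limits mirrors the fields named in this change.
type Limits struct {
	MaxConnections        *int
	MaxPendingConnections *int
	MaxRequests           *int
}

// resolve prefers the per-service value and falls back to the gateway-wide
// default; nil means "no limit configured at either level".
func resolve(def, svc *int) *int {
	if svc != nil {
		return svc
	}
	return def
}

func main() {
	ten, hundred := 10, 100
	defaults := Limits{MaxConnections: &hundred}
	service := Limits{MaxConnections: &ten} // per-service override wins

	fmt.Println(*resolve(defaults.MaxConnections, service.MaxConnections))                     // 10
	fmt.Println(resolve(defaults.MaxPendingConnections, service.MaxPendingConnections) == nil) // true: unset at both levels
}
```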
Without this change, you'd see this error:
```
./run-tests.sh: line 49: LAMBDA_TESTS_ENABLED: unbound variable
./run-tests.sh: line 49: LAMBDA_TESTS_ENABLED: unbound variable
```
Locally, always run integration tests using amd64, even if running
on an arm mac. This ensures the architecture locally always matches
the CI/CD environment.
In addition:
* Use consul:local for envoy integration and upgrade tests. Previously,
consul:local was used for upgrade tests and consul-dev for integration
tests. I didn't see a reason to use separate images; having two is just
more confusing.
* By default, disable the requirement that AWS credentials are set.
They are only needed for the lambda tests, but requiring them meant
you couldn't run any tests locally, even if you weren't running the
lambda tests. Now the lambda tests only run if the LAMBDA_TESTS_ENABLED
env var is set.
* Split out the building of the Docker image for integration
tests into its own target from `dev-docker`. This allows us to always
use an amd64 image without messing up the `dev-docker` target.
* Add support for passing GO_TEST_FLAGs to `test-envoy-integ` target.
* Add a wait_for_leader function because tests were failing locally
without it.
* defaulting to false because peering will be released as beta
* Ignore peering disabled error in bundles cachetype
Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>
Co-authored-by: freddygv <freddy@hashicorp.com>
Co-authored-by: Matt Keeler <mjkeeler7@gmail.com>
Now that peered upstreams can generate envoy resources (#13758), we need a way to disambiguate local from peered resources in our metrics. The key difference is that datacenter and partition will be replaced with peer, since in the context of peered resources partition is ambiguous (could refer to the partition in a remote cluster or one that exists locally). The partition and datacenter of the proxy will always be that of the source service.
Regexes were updated to make emitting datacenter and partition labels mutually exclusive with peer labels.
Listener filter names were updated to better match the existing regex.
Cluster names assigned to peered upstreams were updated to be synthesized from local peer name (it previously used the externally provided primary SNI, which contained the peer name from the other side of the peering). Integration tests were updated to assert for the new peer labels.
Ensure that the peer stream replication rpc can successfully be used with TLS activated.
Also:
- If key material is configured for the gRPC port but HTTPS is not
enabled, TLS will now still be activated for the gRPC port.
- the peerstream replication stream opened by the establishing side now
ignores grpc.WithBlock so that TLS errors bubble up instead of being
awkwardly delayed or suppressed (see the sketch below)
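To illustrate the grpc.WithBlock point: dialing without it returns immediately, so a TLS handshake failure surfaces on the first RPC instead of being hidden behind a blocked Dial that eventually times out. A minimal sketch, with placeholder address and TLS config:
```
package main

import (
	"crypto/tls"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func dial(addr string, tlsCfg *tls.Config) (*grpc.ClientConn, error) {
	// No grpc.WithBlock: Dial returns immediately, and any TLS handshake
	// failure is reported by the first RPC rather than being delayed or
	// suppressed behind a blocking Dial.
	return grpc.Dial(addr, grpc.WithTransportCredentials(credentials.NewTLS(tlsCfg)))
}

func main() {
	conn, err := dial("127.0.0.1:8502", &tls.Config{})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("connection created; transport errors will surface on the first RPC")
}
```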
Peer replication is intended to be between separate Consul installs and
effectively should be considered "external". This PR moves the peer
stream replication bidirectional RPC endpoint to the external gRPC
server and ensures that things continue to function.
- fix sg: need remote access to test server
- Give the load generator a name
- Update loadtest hcl filename in readme
- Add terraform init
- Disable access to the server machine by default
Require use of mesh gateways in order for service mesh data plane
traffic to flow between peers.
This also adds plumbing for envoy integration tests involving peers, and
one starter peering test.