* Revert "refactor the resource client (#20343)"
This reverts commit 3c5cb04b0f.
* Revert "clean up http client (#20342)"
This reverts commit 2b89025eab.
* remove deprecated peer
* fix the typo
* remove forwarding test since it exercises gRPC forwarding; it should be added back later
The Docker image used in CI/CD was referencing `registry.k8s.io/pause:3.3`,
which appears to no longer function correctly. This commit swaps over to a
HashiCorp-mirrored image that shouldn't be subject to rate limits or disappearing
images.
Add missing import
Add explicit enum case for deny action
Remove extra comments
Add build tags to ent and ce tests
Add copyright headers for the ce files
Fix case statements for ce validator
Remove ce tests with Deny traffic permissions
Fix more integration tests
Split more ce and ent tests, add back ent deny tests for traffic permissions controller
temp rename before rebase
Re-add ent deny tests for traffic permissions controller
Add case insensitive param on service route match
This commit adds a new feature that allows service routers to specify that
paths and path prefixes should ignore upper/lower casing when matching URLs.
Co-authored-by: Derek Menteer <105233703+hashi-derek@users.noreply.github.com>
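As a rough illustration, writing such a service-router config entry through the Go `api` package might look like the sketch below. The exact field name `CaseInsensitive` is an assumption based on the description above, not a confirmed signature.

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Route /API, /api, /Api, etc. to the same destination by ignoring case
	// when matching the path prefix. The CaseInsensitive field name is an
	// assumption; check the released api package for the actual name.
	entry := &api.ServiceRouterConfigEntry{
		Kind: api.ServiceRouter,
		Name: "web",
		Routes: []api.ServiceRoute{{
			Match: &api.ServiceRouteMatch{
				HTTP: &api.ServiceRouteHTTPMatch{
					PathPrefix:      "/api",
					CaseInsensitive: true,
				},
			},
			Destination: &api.ServiceRouteDestination{
				Service: "api",
			},
		}},
	}

	if _, _, err := client.ConfigEntries().Set(entry, nil); err != nil {
		log.Fatal(err)
	}
}
```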
* Implement In-Process gRPC for use by controller caching/indexing
This replaces the pipe-based listener implementation we were previously using. The new style can avoid cloning resources, which our controller caching/indexing takes advantage of to avoid duplicating resource objects in memory.
To maintain safety for controllers, and to let them modify the data they get back from the cache and the resource service, the client presented to them in their runtime is wrapped with an autogenerated client that clones request and response messages as they pass through.
Another sizable change in this PR is to consolidate how server-specific gRPC services get registered and managed. Previously this was spread across a bunch of different methods and it was difficult to track down how gRPC services were registered; now it's all in one place.
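The cloning-wrapper idea boils down to `proto.Clone` on the way in and out. A minimal, self-contained sketch (the interface and message types here are stand-ins, not the actual generated resource service client):

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

// reader is a stand-in for one method of the resource service client.
type reader interface {
	Read(ctx context.Context, req *wrapperspb.StringValue) (*wrapperspb.StringValue, error)
}

// memReader simulates the cache/in-process service: it hands back a shared,
// long-lived message that must not be mutated by callers.
type memReader struct{ shared *wrapperspb.StringValue }

func (m memReader) Read(context.Context, *wrapperspb.StringValue) (*wrapperspb.StringValue, error) {
	return m.shared, nil
}

// cloningReader wraps another reader and clones messages on the way in and
// out so that controllers can freely modify what they receive.
type cloningReader struct{ inner reader }

func (c cloningReader) Read(ctx context.Context, req *wrapperspb.StringValue) (*wrapperspb.StringValue, error) {
	resp, err := c.inner.Read(ctx, proto.Clone(req).(*wrapperspb.StringValue))
	if err != nil {
		return nil, err
	}
	return proto.Clone(resp).(*wrapperspb.StringValue), nil
}

func main() {
	shared := wrapperspb.String("original")
	client := cloningReader{inner: memReader{shared: shared}}

	got, _ := client.Read(context.Background(), wrapperspb.String("req"))
	got.Value = "mutated by controller" // safe: only the clone changes

	fmt.Println(shared.Value) // still "original"
}
```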
* Fix race in tests
* Ensure the resource service is registered to the multiplexed handler for forwarding from client agents
* Expose peer streaming on the internal handler
* updating usage of http2_protocol_options and access_log_path
* add changelog
* update template for AdminAccessLogConfig
* remove mucking with AdminAccessLogConfig
* Add a make target to run lint-consul-retry on all the modules
* Cleanup sdk/testutil/retry
* Fix a bunch of retry.Run* usages to not use the outer testing.T (see the sketch after this group of changes)
* Fix some more recent retry lint issues and pin to v1.4.0 of lint-consul-retry
* Fix codegen copywrite lint issues
* Don’t perform cleanup after each retry attempt by default.
* Use the common testutil.TestingTB interface in test-integ/tenancy
* Fix retry tests
* Update otel access logging extension test to perform requests within the retry block
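For reference, the pattern these fixes move toward looks roughly like this: fail through the inner `*retry.R` from `sdk/testutil/retry` rather than the outer `testing.T` (failing the outer `t` inside the retry block defeats the retry and is what lint-consul-retry flags). The helper below is hypothetical.

```go
package example

import (
	"testing"

	"github.com/hashicorp/consul/sdk/testutil/retry"
)

func TestEventuallyHealthy(t *testing.T) {
	retry.Run(t, func(r *retry.R) {
		healthy, err := checkHealth() // checkHealth is a hypothetical helper
		if err != nil {
			// Fail via r, not t, so the attempt can be retried.
			r.Fatalf("health check failed: %v", err)
		}
		if !healthy {
			r.Fatal("not healthy yet")
		}
	})
}

// checkHealth is a stand-in for whatever condition the test is polling.
func checkHealth() (bool, error) { return true, nil }
```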
* add build tags/import k8s specific proto packages
* fix generated import paths
* fix gomod linting issue
* mod tidy every go mod file
* revert protobuf version, to be taken care of in a different PR
* cleaned up newlines
* added newline to end of file
As the V2 architecture hinges on eventual consistency and controllers reconciling the existing state in response to writes, there are potential issues we could run into regarding ordering and timing of operations. We want to be able to guarantee that given a set of resources the system will always eventually get to the desired correct state. The order of resource writes and delays in performing those writes should not alter the final outcome of reaching the desired state.
To that end, this commit introduces arbitrary randomized delays before performing resource writes into the `resourcetest.Client`. Its `PublishResources` method was already randomizing the order of resource writes. By default, no delay is added to normal writes and deletes, but tests can opt in either by passing hard-coded options when creating the `resourcetest.Client` or by using the `resourcetest.ConfigureTestCLIFlags` function to allow processing of CLI parameters.
In addition to allowing configurability of the request delay min and max, the client also has a configurable random number generator seed. When using the CLI parameter helpers, a test log will be written noting the currently used settings. If the test fails, you can reproduce the same delays and order randomizations by providing the seed that was logged during the failure.
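The core mechanism is just a seeded random sleep in front of each write. A stripped-down, self-contained sketch of that idea (names here are illustrative, not the actual `resourcetest` API):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// delayedWriter injects a random delay before each write so tests exercise
// different interleavings. Reusing the same seed reproduces the same delays.
type delayedWriter struct {
	rng      *rand.Rand
	min, max time.Duration
	write    func(name string)
}

func (w *delayedWriter) Write(name string) {
	if w.max > w.min {
		delay := w.min + time.Duration(w.rng.Int63n(int64(w.max-w.min)))
		time.Sleep(delay)
	}
	w.write(name)
}

func main() {
	seed := time.Now().UnixNano()
	fmt.Println("using seed:", seed) // log the seed so a failure can be replayed

	w := &delayedWriter{
		rng:   rand.New(rand.NewSource(seed)),
		min:   0,
		max:   50 * time.Millisecond,
		write: func(name string) { fmt.Println("wrote", name) },
	}
	for _, name := range []string{"a", "b", "c"} {
		w.Write(name)
	}
}
```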
This should bring CE ef35525 and ENT 7f95226dbe40151c8f17dd4464784b60cf358dc1 into alignment in:
- testing/integration/consul-container
- test-integ
- testing/deployer
This updates the testing/deployer (aka "topology test") framework to allow for a
v2-oriented topology to opt services into enabling TransparentProxy. The restrictions
are similar to those of #19046.
The multiport Ports map that was added in #19046 was changed to allow the protocol to be
specified, but for now the only supported protocol is TCP since only L4 currently
functions on main.
As part of making transparent proxy work, the DNS server needed a new zonefile
for responding to virtual.consul requests, since there is no Kubernetes DNS and
the Consul DNS work for v2 has not happened yet. Once Consul DNS supports v2 we should switch over. For now the format of queries is:
<service>--<namespace>--<partition>.virtual.consul
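For example, a workload named `api` in the `default` namespace and `default` partition would be looked up roughly as below (illustrative names; resolution only works against the deployer's DNS server inside the topology):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Compose the name per the format above: <service>--<namespace>--<partition>.virtual.consul
	name := fmt.Sprintf("%s--%s--%s.virtual.consul", "api", "default", "default")

	// This only resolves when the deployer's DNS server is serving the
	// virtual.consul zone; outside the topology it will simply fail.
	addrs, err := net.LookupHost(name)
	fmt.Println(name, addrs, err)
}
```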
Additionally:
- All transparent-proxy-enabled services are assigned a virtual IP in the 10.244.0.0/24
range. This is something Consul will do in v2 at a later date, likely during 1.18.
- All services with exposed ports (non-mesh) are assigned a virtual port number for use
with tproxy
- The consul-dataplane image has been made non-distroless and given the necessary
tools to execute `consul connect redirect-traffic` before running the dataplane, thus simulating
a Kubernetes init container in plain Docker (see the sketch after this list).
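The init-container-style ordering amounts to something like the following sketch: redirect traffic first, then start the dataplane. The `redirect-traffic` flags shown are illustrative, not the exact set the deployer uses.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command, wiring it to the container's stdio.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Step 1: set up iptables redirection, like a Kubernetes init container would.
	// Flags are illustrative; see `consul connect redirect-traffic -h` for the real set.
	if err := run("consul", "connect", "redirect-traffic",
		"-proxy-uid", "100",
		"-proxy-inbound-port", "15001"); err != nil {
		log.Fatalf("redirect-traffic failed: %v", err)
	}

	// Step 2: start the dataplane as usual.
	if err := run("consul-dataplane"); err != nil {
		log.Fatalf("consul-dataplane exited: %v", err)
	}
}
```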
This updates the testing/deployer (aka "topology test") framework to conditionally
configure and launch catalog constructs using v2 resources. This is controlled via a
Version field on the Node construct in a topology.Config. This only functions for a
dataplane type and has other restrictions that match the rest of v2 (no peering, no
wanfed, no mesh gateways).
Like config entries, you can statically provide a set of initial resources to be synced
when bringing up the cluster (beyond those that are generated for you such as
workloads, services, etc).
If you want to author a test that can be freely converted between v1 and v2 then that
is possible. If you switch to the multi-port definition on a topology.Service (aka
"workload/instance") then that makes v1 ineligible.
This also adds a starter set of "on every PR" integration tests for single-port and multi-port
under test-integ/catalogv2.